Design and implementation of a flexible hand gesture command interface for games based on computer vision
João L. Bernardes¹   Ricardo Nakamura²   Romero Tori¹
¹,² Escola Politécnica da USP, PCS, Brazil   ¹ Centro Universitário SENAC, Brazil
Abstract
This paper describes a command interface for games based on hand gestures defined by postures, movement and location. The large variety of gestures thus possible increases usability by allowing a better match between gesture and action. The system uses computer vision requiring no sensors or markers on the user or background. The analysis of requirements for games, the architecture and implementation are discussed, as well as the results of several tests to evaluate how well each requirement is met.
Keywords: computer vision, gesture recognition, human-computer interaction, electronic games
Authors’ contact:
\{joao.bernardes, ricardo.nakamura\}@poli.usp.br, tori@acm.org
1. Introduction
The possibility of relaying commands to a computer system using one’s own hands and gestures has interested researchers and users for a long time and was one of the first topics in user interface research, partly because it uses well-developed, everyday skills [Bowman 2005]. With the computational capacity available today and widespread use of image capture devices, even in domestic systems it is possible to implement this sort of interaction using computer vision. This brings the benefit of leaving the user's hands free of any gloves, cables or sensors. Gestures2Go, the system described here, provides this functionality and its implementation (in C++, illustrated in figure 1) is focused on electronic games.
Games are an ideal platform to test and popularize new user interface systems, for several reasons, such as an increased user willingness to explore in this medium [Starner et al. 2004]. There are many examples of academic research developing and studying new interfaces with games, particularly incorporating Augmented Reality [Bernardes et al. 2008]. The game industry has also introduced new interfaces and devices (or ones previously restricted to niche use) to the public. From the joystick to increasingly complex gamepads and controllers shaped as musical instruments, from datagloves to "pistols" that function as pointing devices and even haptic devices [Novint 2009], the examples are many, to the point that, today, some professionals are encouraged to play games to improve job-related skills [Dobnik 2004].
On the other hand, both the industry and academia acknowledge that new, more natural (and hopefully fun) interfaces are one way to attract new consumers to this economically important but still restricted market [Kane 2005]. And in the past few years, the search for these interfaces has been more widespread, continuous, well-publicized and commercially successful. After a popular gaming platform introduced motion and tilt detection in a simpler controller as its most innovative feature [AiLive 2007], motion detection was quickly added to other platforms and games and continues to be researched and improved upon. Several portable gaming systems, in particular, are taking advantage of motion and tilt sensing, touchscreens and even microphones in their interfaces. More recently still, a project was unveiled to add interaction based on recognition of full-body motion, speech and faces to a popular platform [Snider 2009].
Despite this ebullience in game interfaces, the use of hand gestures, especially leaving the user's hands free, has seen little academic or commercial research in this area and is usually limited to analyzing only hand movement or a small number of hand postures. One of Gestures2Go's objectives is greater flexibility, to allow the use of a greater variety of gestures (currently defined by hand postures, movement or location and using both hands). Another important goal is that it must be easy to use for both players and developers. Gestures2Go should also be usable with existing games (designed for traditional interfaces) and allow multimodal interaction. These and other requirements arose, during system design, from an analysis focusing specifically on gestures and on game applications. Many of the same requirements exist in other applications as well, such as education or virtual and augmented reality, and the authors believe this system may be well suited for these applications, but will leave this discussion outside the scope of this paper.
2. Related Work
A few works have been proposed recently to use free hand gestures in games using computer vision. A multimodal multiplayer gaming system [Tse et al. 2007] combines a small number of postures, their location on a table-based interaction system and speech commands to interact with games and discusses results of using this platform to interact with two popular games. Interpreting movements or postures of the arms or the whole body is also usual. A body-driven multiplayer game system [Laakso & Laakso 2006] uses 8 postures of the two arms viewed from above, plus player location, to design and test the interaction in several games. Going further, tests with both functional prototypes and Wizard of Oz prototypes indicate that body movement patterns (such as running, swimming or flying), rather than specific gestures or trajectories, may be used to trigger similar actions on game characters [Hoyniemi et al. 2005].
Other tools facilitate the use of gesture recognition for applications in general, not only games. ICondensation [Isard & Blake 1998] is a probabilistic framework that allows the combination of different observation models, such as color and contours. HandVu [Kolsch et al. 2004] also uses condensation but provides a simpler interface to track hands in six predefined postures using skin color and a "flock" of Haar-like features. GART [Lyons et al. 2007] provides a high level interface to machine learning via Hidden Markov Models used to train and recognize gestures that consist only of movements (detected by sensors such as a camera, mouse or accelerometers). It is interesting to note that HandVu and GART can be combined to allow robust hand tracking and a larger number of gestures (combining postures and movement, like Gestures2Go) than either one isolated. Finally, EyesWeb [Camurri et al. 2003] is a framework with a graphical programming interface that presents several tools and metrics for segmentation and analysis of full body movements.
The literature regarding gesture recognition in general is vast and a complete review is beyond the scope of this paper, especially since established and comprehensive reviews [Pavlovic et al. 1997] as well as more recent but still comprehensive discussions [Imai et al. 2004] are available. Other works, when relevant to this implementation or to future developments, are discussed in the corresponding sections.
3. HCI and Game-Specific Requirements
Both the use of gestures and having games as an application bring specific requirements to an interface, and analyzing these requirements was one of the most important steps in designing Gestures2Go. For gesture-based interfaces, current research [Bowman et al. 2005, Shneiderman et al. 1998] points out the following:
Gestures are most often used to relay singular commands or actions to the system, instead of tasks that may require continuous control, such as navigation. Therefore, it is recommended that gestures be part of a multimodal interface [Bowman et al. 2005]. This also brings other advantages, such as decoupling different tasks in different interaction modalities, which may reduce the user's cognitive load. So, while gestures have been used for other interaction tasks in the past, including navigation [Mapes & Moshel 1995], Gestures2Go's primary requisite is to allow their use to issue commands. Issuing commands is a very important task in most games, usually accomplished by pressing buttons or keys. Often, games feature a limited number of commands, not even requiring all the buttons in a modern gamepad. Since other tasks, especially navigation, are very common as well, another requirement that naturally arises is that the system must allow multimodal interaction. Massively Multiplayer Online games (MMOs), in particular, often have much of their actual gameplay consisting of navigation plus the issuing of several commands in sequence [Fritsch et al. 2005].
Gesture-based interfaces are almost always "invisible" to the user, i.e. they contain no visual indicators of which commands may be issued at any particular time or context. To reduce short term memory load, therefore, the number of possible gestures in any given context, but not necessarily for the entire application, must be limited (typically to 7±2 [Miller 1956], or approximately 5 to 10 gestures). The gestures must also be highly learnable, chosen from the application domain so the gesture matches the intended command. Changing gears in a racing game, for instance, could be represented by pulling a fist towards or away from the user with the hand relatively low, as if driving a stick shift car, and pausing the game could be associated with an open palm extended forward, a well-known gesture meaning "stop". This means that while the system is not required to deal with a large number of different gestures at any one time (which simplifies the implementation), being flexible by having a large number of possible gestures to choose from, so the interface designer may pick the most appropriate one to associate with each user action, is indeed a requirement. Systems that violate either of these two requirements, requiring the memorization of a large number of gestures or limiting the space of possible gestures to only a few postures or movements, make the interface harder to learn and later to remember, reducing its usability.
The examples above (changing gears and stop) also show that the choice of each gesture for the interface depends not only on the application, context and command, but is also heavily culture-dependent, because the cognitive meaning of gestures may vary. In the case of gesture-based games, therefore, and with games being such a global market, localization could also entail changing which gesture is associated with each action [Bernal-Merino 2007]. All this leads to the requirement that the vocabulary of gestures in each context of the interface, while small, must be as simply and quickly modifiable as possible. Systems that require retraining for each set of possible gestures, for instance, could prove problematic in this case, unless such training could be easily automated.
The interface should also accept small variations for each gesture. Demanding that postures and movements be precise, while possibly making the recognition task easier, makes the interaction considerably harder to use and learn, demanding not only that the user remember the gestures and their meanings but also train how to do them precisely, greatly reducing usability.
It could be argued that, for particular games, reducing usability could actually be part of the challenge presented to the player (the challenge could be remembering a large number of gestures, or learning how to execute them precisely, for instance). While the discussion of whether that is good game design practice or not is beyond the scope of this paper, Gestures2Go opts for the more general goal of increasing usability as much as possible. This agrees with the principle that, for home and entertainment applications, ease of learning, reducing user errors, satisfaction and low cost are among the most important design goals [Shneiderman et al. 1998].
The system should also allow playing at home with minimal setup time required. Players prefer games where they can be introduced to the action as soon as possible, even while still learning the game and the interface [Hong 2008]. Therefore, the system should not require specific background or lighting conditions, complex calibration or repeated training. Allowing the use of the gesture-based interface with conventional games is also advantageous to the user, providing new options to enjoy a larger number of games. From the developer point of view, the system should be as easy as possible to integrate within a game, without requiring specific knowledge of areas such as computer vision or machine learning.
Finally, processing and response times are important requirements. Despite the growing availability of multi-core gaming platforms, it is still desirable that gesture recognition processing time be as low as possible, freeing processing power for other tasks such as artificial intelligence and physical simulation. It is limited by the acceptable response time, which, in turn, depends on the game. Performing a gesture, for instance, will almost always be slower than pressing a button or key, so this sort of interface is probably not a good choice for reflex-based games such as first person shooters. A genre that has already been mentioned as a good match for this sort of interface is MMOs. Not only does much of their gameplay consist of navigation and issuing commands, but MMOs also use several strategies to deal with network latency [Fritsch et al. 2005] that have the side effect of not penalizing the slower input from gestures when compared, for instance, with button pressing. Such strategies include reducing the number of commands necessary in a fixed amount of time (for instance, it is common to "enter or exit attack mode", instead of repeating a command for each attack) and accepting the queuing of only one new command while the action triggered by the last one has not finished (and actions are set up to take some time, usually spent with animations or special graphical effects). In the game Everquest 2, for instance, Fritsch et al. report that the use of these strategies, with actions usually taking 1000 ms, makes the game playable with latencies of up to 1250 ms. A more practical bound, however, identified after the analysis of several related works, is around 250 ms for interactive games [Henderson & Bhatti 2003]. In a setup such as the one described above, that would leave one second to be divided between gesture performance and system response time, and this is the parameter that will be used for Gestures2Go. This applies, of course, even to games designed for regular interfaces. When designing a game specifically to explore gestures, similar game design strategies or even new ones could be adopted to compensate for the time the user spends performing the gesture.
4. Gestures2Go
Because one of the requirements for this system was ease of use, both for the player and the developer, it was named Gestures2Go to imply that the gesture recognition is ready to go, to take home, with little extra work. It consists of an abstract framework that divides the system into modules and defines the interfaces between these modules, and, currently, of a single, simple implementation of this framework. It is important to note that the requirements discussed in section 3 apply to the current implementation, which is focused on games, and not to the abstract framework.
The computational task of identifying a gesture from a known vocabulary of possibilities is often divided into gesture modeling, analysis and recognition [Pavlovic et al. 1997].
Gesture modeling consists in how a gesture is defined by the system, from a computational point of view (since definitions of gesture abound in other areas). Gestures2Go's abstract framework defines a gesture as an initial hand posture, an optional movement of the entire hand through an arbitrary path and a final posture, which is optional if the movement is omitted but mandatory otherwise. The starting location of the hand, relative to the user's head (left or right, above or below, or roughly aligned with the head), is also an optional parameter of this definition, since it often changes the meaning of a gesture. This means that a gesture may consist of a single posture, of an initial and a final posture, or of an initial posture, a movement and a final posture, each of these optionally depending on the initial hand position. It also means that changes of posture during the movement are not taken into consideration, since these changes rarely have semantic meaning [Quek 1994]. While the abstract framework also includes variable parameters in the gesture definition (such as speed or pointing direction), the simple implementation described here does not deal with parametric gestures. Finally, the abstract framework does not specify how each part of the gesture definition is actually modeled (each is identified by a string or numerical ID), so this can vary in each implementation. The hand posture could, for instance, be modeled as a collection of values for the degrees of freedom of a particular hand model, or it could consist of a list of 2D or 3D points of the hand's contour.
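To make the model concrete, the sketch below shows one possible way to represent such a gesture definition in C++. The type and field names are illustrative assumptions; the actual G2gGesture interface is not detailed in the text.

```cpp
// Illustrative sketch of the gesture model described above; names and types
// are assumptions, not the real G2gGesture declaration.
#include <string>

enum class RelativeLocation { Any, Left, Right, Above, Below, Aligned };

struct GestureDefinition {
    std::string initialPosture;   // mandatory posture ID
    std::string movement;         // optional path ID; empty if there is no movement
    std::string finalPosture;     // optional only when movement is empty
    RelativeLocation startLocation = RelativeLocation::Any;  // relative to the head
};

// Examples: a single posture ("open palm"), or a full
// posture-movement-posture gesture ("fist", "forward", "fist").
```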
During the analysis phase, the gesture’s spatial and temporal parameters (which depend on each model) are obtained from sensor data (in this case, from an image or a set of images) and this data is used during the recognition phase to identify the gesture within the vocabulary of possibilities. Analysis and recognition are often, but not necessarily, tightly inter-related.
4.1 The Abstract Framework
Figure 2 shows a UML Activity Diagram representing Gestures2Go's object flow model.
**G2gGesture** is responsible for the gesture model, while **G2gAnalysis** and **G2gRecognition** define the interfaces for the classes that will implement gesture analysis and recognition. To these activities are added image capture and segmentation. **G2gCapture** provides an interface for capturing 2D images from one or multiple cameras or pre-recorded video streams (mostly for testing). The images must have the same size, but not necessarily the same color depth. A device could provide, for instance, one or more color images and a grayscale image to represent a dense depth map. **G2gSegmentation** should usually find in the original image(s) one or both hands and possibly the head (to determine relative hand position).
Figure 2 shows that the usual flow of information in Gestures2Go in each time step is as follows: one or more images serve as input to the image capture module, which makes these images available as an OpenCV `IplImage` object [OpenCV 2009]. The segmentation uses this image and provides a segmented image as an object of the same class (with the same image size, but not necessarily the same color depth). Based on the segmented image, the analysis provides a collection of features as a `G2gFeatureCol` object, which is in turn used by the recognition to output a gesture.
**G2gFeatureCol** is a collection of **G2gFeature** objects. **G2gFeature** contains an identifier string describing the feature and either a scalar or an array of values (the most common cases) or an image (useful, for instance, for features in the frequency domain). **G2gFeature** already defines several identifiers, for those features most often found in the gesture recognition literature, to facilitate the interface between analysis and recognition, but user-created identifiers may also be used.
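A rough sketch of what these module interfaces and the feature collection could look like is given below. Only the class roles, the use of `IplImage` and the idea of a feature collection come from the text; the method names and signatures are assumptions made for illustration.

```cpp
// Illustrative sketch of the module interfaces implied by figure 2; the real
// classes may differ. Header location for IplImage varies by OpenCV version.
#include <string>
#include <vector>
#include <opencv2/core/core_c.h>   // IplImage

struct G2gFeatureSketch {
    std::string id;               // predefined or user-created feature identifier
    double scalar = 0.0;          // scalar value, when applicable
    std::vector<double> values;   // array of values (the most common case)
    IplImage* image = nullptr;    // image-valued feature (e.g. frequency domain)
};
using G2gFeatureColSketch = std::vector<G2gFeatureSketch>;

struct G2gCaptureSketch      { virtual IplImage* capture() = 0;                                 virtual ~G2gCaptureSketch() = default; };
struct G2gSegmentationSketch { virtual IplImage* segment(IplImage* in) = 0;                     virtual ~G2gSegmentationSketch() = default; };
struct G2gAnalysisSketch     { virtual G2gFeatureColSketch analyze(IplImage* segmented) = 0;    virtual ~G2gAnalysisSketch() = default; };
struct G2gRecognitionSketch  { virtual std::string recognize(const G2gFeatureColSketch& f) = 0; virtual ~G2gRecognitionSketch() = default; };
```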
**Desc2Input** is an optional module that accompanies but is actually separate from Gestures2Go. It is responsible for facilitating, in a very simple way, both multimodal input and integration with games or engines not necessarily aware of Gestures2Go. It simply translates its input, which is a description (a numerical or string ID or an XML description, for instance) that may be supplied either by Gestures2Go or by any other system (and here lies the possibility of multimodal interaction), into another type of input, such as a system input (like a key down event) or input data to a particular game engine. In one of the tests, for instance, gestures are used for commands and a dancing mat is used for navigation.
Because this architecture consists mostly of interfaces, it is possible to create a single class that, through multiple inheritance, implements the entire system functionality. This is usually considered bad practice in object orientation and is actually one of the reasons why aggregation is often preferred to inheritance [Eckel 2003]. There are design patterns that could have been used to force the use of aggregation and avoid multiple inheritance, but Gestures2Go opts for allowing it for a reason. Gesture recognition may be a very costly task in terms of processing, and must be done in real time for the purpose of interaction. Many algorithms may be better optimized for speed when performing more than one task (such as segmentation and analysis) together. Furthermore, analysis and recognition are very tightly coupled in some algorithms and forcing their separation could be difficult. So, while it is usually recommended to avoid multiple inheritance and to implement each task in a different class, making it much easier to exchange one module for another or to develop modules in parallel and in teams, the option to do otherwise exists, and for good reason.
Finally, all Gestures2Go classes must implement `init()` and `cleanup()` methods, which are preferred to using the new and delete operators (the system is implemented in C++) to avoid problems with multiple inheritance and with synchronization.
4.2 Implementation
The requirement analysis indicated that an implementation of the abstract framework described above specifically for games should have the following characteristics: minimum need for setup, low processing demand (even though the acceptable response time may be relatively high), a high number of possible gestures but with only a small and easily modifiable vocabulary in any one context, tolerance to variations in the execution of gestures, support for multimodal interaction, and making the development of games using gestures as easy as possible. With these requirements in mind, and assuming that a single player in the scene will interact with the system through gestures, segmentation was implemented based on skin color (in the G2gSimpleSkinSeg2 class), classifying pixels as skin or background by thresholding their hue and saturation.
At first, fixed average values and tolerances were adopted for the skin's hue and saturation. Testing in different lighting conditions, environments and with different cameras, however, showed large variations of these values in the captured images, either due to the different lighting conditions or to differences in the white balance [Viggiano 2004] performed automatically by the cameras (and, in most cases, with no "off" option). G2gSimpleSkinSeg2 was then extended with methods to accumulate and calculate averages and standard deviations for the hue and saturation of several arbitrary rectangular skin-colored regions. This allows an application to add a quick calibration step so the segmentation may use adequate skin hue and saturation values for the threshold operation.
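The sketch below illustrates this kind of calibrated skin-color segmentation. It is not the actual G2gSimpleSkinSeg2 code (which uses OpenCV's C API and `IplImage`); it uses OpenCV's current C++ API, and the tolerance factor and the final erosion of the mask are assumptions.

```cpp
// Rough sketch of skin-color segmentation with a calibration step, in the
// spirit of G2gSimpleSkinSeg2. Names, the factor k and the erosion step are
// assumptions for illustration.
#include <algorithm>
#include <opencv2/imgproc.hpp>

struct SkinModel { double hMean, hStd, sMean, sStd; };

// Calibration: hue/saturation statistics inside a user-covered rectangle.
SkinModel calibrate(const cv::Mat& bgrFrame, const cv::Rect& skinRegion) {
    cv::Mat hsv, channels[3];
    cv::cvtColor(bgrFrame(skinRegion), hsv, cv::COLOR_BGR2HSV);
    cv::split(hsv, channels);
    cv::Scalar hMean, hStd, sMean, sStd;
    cv::meanStdDev(channels[0], hMean, hStd);
    cv::meanStdDev(channels[1], sMean, sStd);
    return {hMean[0], hStd[0], sMean[0], sStd[0]};
}

// Segmentation: threshold hue and saturation around the calibrated averages,
// then apply a small erosion to reduce noise (the paper applies a 3x3 erosion
// after its optional background removal step).
cv::Mat segmentSkin(const cv::Mat& bgrFrame, const SkinModel& m, double k = 2.5) {
    cv::Mat hsv, mask;
    cv::cvtColor(bgrFrame, hsv, cv::COLOR_BGR2HSV);
    cv::inRange(hsv,
                cv::Scalar(std::max(0.0, m.hMean - k * m.hStd),
                           std::max(0.0, m.sMean - k * m.sStd), 0),
                cv::Scalar(m.hMean + k * m.hStd, m.sMean + k * m.sStd, 255),
                mask);
    cv::erode(mask, mask, cv::getStructuringElement(cv::MORPH_RECT, {3, 3}));
    return mask;
}
```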
Finally, after tests in an environment where the background actually has a hue very similar to the skin's, a fixed background removal operation was added as an option. Figure 1 shows a sample result of this operation. Even with a color tolerance of 50 in a 256x256x256 RGB space, about half of the background pixels do not match the recorded background (and thus do not show as black), even when this background is far enough away that its actual appearance is unlikely to change due to the presence of the hand. This problem is minimized by applying a 3x3 erosion operation after the background removal, also illustrated in figure 1, but due to local corrections imposed by the camera a region around the foreground elements still shows, looking like an "aura" around the color-segmented hand images in figure 1.
The system currently does not segment the arm from the hand, which imposes the limitation that users must wear long sleeves. This is considered a serious limitation. Even without any information about hand posture, for most postures the arm could be segmented by finding the direction of its major axis, finding the point of minimum width or of abrupt change in direction along this axis (the wrist) and segmenting there [Yoon et al. 2006]. This does not work well if only a small length of arm is showing, however, or for certain postures (such as preparing a "karate chop").
Other segmentation strategies that do not require knowledge of the hand's posture were attempted, such as using color histograms and probabilities instead of the simple average and deviation, as well as the use of contour information, but so far they have shown little improvement at a higher computational cost.
The first step of the analysis activity, implemented in the G2gSCMAnalysis class, is to find the connected components in the segmented image. The system does not assume that the background is fixed or that there are no other skin colored regions in the image, but it does presume that the player using gestures is the closest person to the camera, so it can assume that the three largest connected components correspond to the user's hands and face. There is also a minimum number of pixels for a connected component to be accepted as a region of interest. If only 2 components above this minimum size are found, the system assumes that the missing component corresponds to the user's non-dominant hand, and if only one is present, it is assumed to be the head (the head was cropped from figures 1 and 3). To further simplify the identification of the hands and head, this implementation assumes that the left hand is the leftmost region, with the head in the middle and the right hand to the right. While this certainly limits user movements and the number of possible gestures, it was considered a valid limitation in this case and, during informal testing, was accepted with no complaint from the users, who obeyed it most of the time even when not informed of it. This first step also reduces noise left after the segmentation and eliminates from the analysis other people who might wander in the background.
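A minimal sketch of this "three largest components" heuristic follows, with OpenCV contours standing in for connected components. The minimum-area threshold and the handling of missing regions are assumptions.

```cpp
// Sketch of the region-assignment heuristic described above: keep the three
// largest skin-colored components and order them left hand / head / right hand.
#include <algorithm>
#include <optional>
#include <vector>
#include <opencv2/imgproc.hpp>

struct Regions { std::optional<std::vector<cv::Point>> leftHand, head, rightHand; };

Regions assignRegions(const cv::Mat& skinMask, double minArea = 500.0) {
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(skinMask.clone(), contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_NONE);

    // Keep at most the three largest components above the minimum size.
    std::sort(contours.begin(), contours.end(),
              [](const auto& a, const auto& b) { return cv::contourArea(a) > cv::contourArea(b); });
    std::vector<std::vector<cv::Point>> kept;
    for (const auto& c : contours)
        if (cv::contourArea(c) >= minArea && kept.size() < 3) kept.push_back(c);

    // Order left-to-right by centroid x: left hand, head, right hand.
    auto cx = [](const std::vector<cv::Point>& c) { cv::Moments m = cv::moments(c); return m.m10 / m.m00; };
    std::sort(kept.begin(), kept.end(), [&](const auto& a, const auto& b) { return cx(a) < cx(b); });

    Regions r;
    if (kept.size() == 3)      { r.leftHand = kept[0]; r.head = kept[1]; r.rightHand = kept[2]; }
    else if (kept.size() == 2) { r.head = kept[0]; r.rightHand = kept[1]; }  // non-dominant hand assumed missing
    else if (kept.size() == 1) { r.head = kept[0]; }
    return r;
}
```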
Analysis and recognition of the gestures themselves adopt a divide and conquer strategy [Wu & Huang 1999], separating the recognition of hand postures and hand movements. Postures are recognized through estimation by synthesis (ES), i.e. the real hand's image is compared with images synthesized from a 3D hand model, so that 3D posture information (the parameters used to model the hand's posture) is obtained by comparing only 2D images, instead of trying to match a 3D model to the real image, which can be accurate but is computationally expensive and complicated by the presence of postures with self-occlusion [Imai et al. 2004]. Unlike most applications of ES methods, however, it is not necessary to determine hand posture continuously and differentiate between postures with only small differences. Because tolerance of variation in postures is one of the system's requirements, it is not only acceptable but necessary that small differences in posture be disregarded. This implementation, therefore, may sidestep one of the most serious complications of ES methods. It only needs to compare the real hand image with a small number of possible postures, instead of thousands of possibilities. When no acceptable match is found, the system simply assumes the user is not performing a command gesture.
As in other ES methods [Shimada et al. 2001, Imai et al. 2004], the features G2gSCMAnalysis provides are based on the hand's 2D contour. The most important feature is a vector of the distances between the hand's centroid and a fixed number of points on the contour. These points are shown in figure 1. This vector is normalized in the analysis, so the maximum distance always corresponds to the same value and the features are scale-invariant, reducing the influence of the distance between the hand and the camera. All features for the vocabulary of possible, modeled postures are pre-calculated, so only those for the real hand need to be determined in each execution step. Currently the number of points sampled from the contour in the feature vectors is, somewhat arbitrarily, set at 128. This number has proven small enough to allow fast computation and large enough that it is not necessary to worry about choosing points near salient contour features (usually local maxima and minima corresponding to the tips and bases of fingers).
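A sketch of this feature computation is shown below; the exact way the 128 points are sampled along the contour is an assumption, since the text does not specify it.

```cpp
// Sketch of the centroid-to-contour distance feature described above,
// normalized so the maximum distance becomes 1 (N = 128 in the paper).
#include <algorithm>
#include <cmath>
#include <vector>
#include <opencv2/imgproc.hpp>

std::vector<double> contourDistanceFeature(const std::vector<cv::Point>& contour, int N = 128) {
    cv::Moments m = cv::moments(contour);
    cv::Point2d centroid(m.m10 / m.m00, m.m01 / m.m00);

    std::vector<double> feature(N);
    double maxDist = 0.0;
    for (int i = 0; i < N; ++i) {
        // Sample N points spread along the contour (sampling scheme is an assumption).
        const cv::Point& p = contour[i * contour.size() / N];
        feature[i] = std::hypot(p.x - centroid.x, p.y - centroid.y);
        maxDist = std::max(maxDist, feature[i]);
    }
    if (maxDist > 0.0)
        for (double& d : feature) d /= maxDist;   // scale-invariant: max distance -> 1
    return feature;
}
```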
G2gSCMRecognition implements both posture and movement recognition. Posture recognition consists simply of comparing the feature vector obtained from the real hand's captured image with each vector for all the possible postures and finding the posture that minimizes the mean squared error between these two vectors. If the minimum error is still larger than a tolerance value, no posture is recognized (recognition returns a "not found" constant).
Unlike other ES implementations, however, the observed vector is not made rotation-invariant during recognition (by rotating it during each comparison so extremal points coincide with the model). While some tolerance in posture recognition is desired, rotation-invariance is not. Should this operation prove necessary, to avoid incorrect results due to the accumulation of many small errors caused by a small rotation, it could still be implemented while leaving the algorithm sensitive to rotation because recognition uses yet another feature: the angle between the highest point in the contour and the centroid. This feature, also provided by G2gSCMAnalysis, is currently used to speed up recognition by discarding, before the calculation of the mean squared error, any posture with an angle that differs by more than a certain tolerance from the one in the observed image. The highest point (usually a fingertip) is easy to determine because the contour-finding algorithm is implemented in a way to always find this point first. This angle could also be used to account for hand rotation if the vector of distances was made rotation-invariant, but tests so far have not shown the need for this operation.
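Putting the last two paragraphs together, posture recognition can be sketched as follows. The error tolerance of 1 and the "not found" result come from the text; the angle tolerance value is an assumption.

```cpp
// Sketch of posture recognition: prune candidates whose highest-point angle
// differs too much from the observed one, then pick the posture with the
// minimum mean squared error between distance vectors.
#include <cmath>
#include <limits>
#include <string>
#include <vector>

struct PostureModel {
    std::string id;
    std::vector<double> distances;   // pre-computed, normalized (see previous sketch)
    double topPointAngle;            // angle between highest contour point and centroid
};

std::string recognizePosture(const std::vector<double>& observed, double observedAngle,
                             const std::vector<PostureModel>& vocabulary,
                             double maxError = 1.0, double maxAngleDiff = 0.5) {
    std::string best = "not found";
    double bestError = std::numeric_limits<double>::max();
    for (const PostureModel& p : vocabulary) {
        if (std::abs(p.topPointAngle - observedAngle) > maxAngleDiff) continue;  // angle prune
        double mse = 0.0;
        for (size_t i = 0; i < observed.size(); ++i) {
            double d = observed[i] - p.distances[i];
            mse += d * d;
        }
        mse /= observed.size();
        if (mse < bestError) { bestError = mse; best = p.id; }
    }
    return bestError <= maxError ? best : "not found";
}
```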
The analysis also provides the centroid's absolute location in the image and its area (or number of pixels), which are used for movement recognition. Only 12 movements are recognized: left, right, up, down, back, forward, 4 diagonals, clockwise and counter-clockwise approximate rotations. The movement is temporally segmented by the gesture's initial and final postures, so it can be identified as one of these possibilities by a simple set of conditions, similar to a two stage scheme described in the literature [Mammen et al. 2001]. For the back and forward movements, the initial and final posture of the hand must be the same, since this movement is estimated by the variation in area.
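A sketch of this kind of simple-condition movement classification is given below. The displacement and area-ratio thresholds are assumptions, and the two rotation movements (which require looking at the intermediate path, not just the endpoints) are omitted.

```cpp
// Sketch of the simple-condition movement classification described above.
#include <cmath>
#include <string>

std::string classifyMovement(double dx, double dy,          // centroid displacement in pixels
                             double startArea, double endArea,
                             double minShift = 40.0, double minAreaRatio = 1.3) {
    // Back/forward are estimated from the change in the hand's apparent area,
    // which is why they require the same initial and final posture.
    if (endArea > startArea * minAreaRatio) return "forward";
    if (startArea > endArea * minAreaRatio) return "back";

    if (std::hypot(dx, dy) < minShift) return "none";
    if (std::abs(dx) > 2.0 * std::abs(dy)) return dx > 0 ? "right" : "left";
    if (std::abs(dy) > 2.0 * std::abs(dx)) return dy < 0 ? "up" : "down";   // image y grows downward
    return std::string(dy < 0 ? "up-" : "down-") + (dx > 0 ? "right" : "left"); // diagonals
}
```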
In the current implementation, a gesture may be defined by movements and initial relative locations of both hands, but only postures of the dominant one (currently the right hand, but the next version will allow choosing left or right) are identified. There are now 41 postures available. Adding more postures is quite simple and others were considered and could have been added, but they were either meaningless, quite hard to perform or had the same contour in a 2D image. With this number of available postures and movements, and remembering that a gesture might consist of one or two postures, or of a movement bound by two postures that may be different (except when moving back or forward), there are almost 20,000 available gestures for the dominant hand alone, even before considering its location relative to the head or the movement of the other hand.
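One way to arrive at this figure, assuming gestures are counted exactly as just described (a single posture, two postures, or one of the movements bound by two postures, with back and forward requiring equal postures):

\[
\underbrace{41}_{\text{single posture}} + \underbrace{41^2}_{\text{two postures}} + \underbrace{10 \cdot 41^2}_{\text{movement between any two postures}} + \underbrace{2 \cdot 41}_{\text{back/forward, same posture}} = 41 + 1681 + 16810 + 82 = 18\,614
\]

Here the 10 movements are the 12 recognized ones minus back and forward. The exact counting scheme is an assumption, since the paper does not spell it out, but it lands close to the stated "almost 20,000".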
Finally, Desc2Input's implementation in the current version, for MS Windows only, has only two public methods: associate and sendDesc. The associate method receives a description (a string, representing a gesture or any other event, such as stepping on a dancing mat's "button") and the system input (key press, mouse move or click) and parameters (such as key or position) associated to that description. The sendDesc method only receives a description and indicates that Desc2Input must generate the associated input (which is broadcast to all windows). A priority for future versions is making this module easier to use, adding alternatives that require little programming (leaving the association of gestures and commands to an external configuration file, for instance).
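The sketch below illustrates the idea behind Desc2Input on Windows: descriptions are associated with system inputs and synthesized on demand. Only the two method names and the Windows-only behavior come from the text; the signatures and the key-injection mechanism shown here are assumptions (a simple injected key event stands in for the broadcast described above).

```cpp
// Sketch of the Desc2Input idea: map event descriptions to system inputs and
// generate them on demand. Signatures are assumptions, not the real API.
#include <map>
#include <string>
#include <windows.h>

class Desc2InputSketch {
public:
    // e.g. associate("fist_forward_fist", 'W') or associate("mat_up", VK_UP)
    void associate(const std::string& description, WORD virtualKey) {
        keyFor_[description] = virtualKey;
    }
    // Generate the associated input as a key press followed by a release.
    void sendDesc(const std::string& description) {
        auto it = keyFor_.find(description);
        if (it == keyFor_.end()) return;
        keybd_event(static_cast<BYTE>(it->second), 0, 0, 0);                 // key down
        keybd_event(static_cast<BYTE>(it->second), 0, KEYEVENTF_KEYUP, 0);   // key up
    }
private:
    std::map<std::string, WORD> keyFor_;
};
```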
5. Tests and Results
Four prototype applications were created to test the system in different conditions. The first priority was to verify the posture analysis and recognition strategy, independently of segmentation. To accomplish that, 120 already segmented pictures of hands in different postures were stored and run through the analysis and recognition modules. These images were segmented using the same algorithm described before, but were chosen manually at moments when it worked adequately (as in the examples shown in figure 3).
To allow the comparison of every posture with every other one, the angle difference between the highest point in each posture was discarded and the mean square error between the distance vectors was recorded. Table 1 shows the results, truncated to the nearest decimal, of one such test, comparing 15 postures. More postures are not shown due to the limited space. This particular test was chosen specifically because it contains similar postures that show problematic results.
In all cases the correct posture was identified (i.e. had the minimum error), as shown by the values with a gray background in table 1. In 8 cases, however, incorrect postures showed a low error as well (shown in bold on white). The system considers error values below 1 as possible matches. So, if "pinky" had not been one of the possible postures, for instance, an image of the "pinky" posture would have been accepted by the system as "pinkyR". Figure 3 shows these problematic postures. Two of these cases (pinky and pinkyR, point and pointL) are postures where a single finger is raised and that differ from each other only by this finger's angle. Using the angle of the highest point as a feature eliminates these incorrect matches. The other mismatch that might have occurred is between the postures with the pinky up and the thumb up posture, but as seen in figure 3, these postures are actually quite similar. In all these static tests, all postures were recognized correctly but a few similar ones showed possible mismatches. In the test illustrated by table 1, for instance, only 8 comparisons in 225 were possible mismatches, approximately 3.5%.
[Table 1: mean squared errors between the distance vectors of the 15 compared postures. The correct posture has the minimum error in each case (gray background) and possible mismatches below the tolerance of 1 are shown in bold. Full matrix not reproduced.]
Figure 3: Sample segmented postures used in static tests
A second test application shows identified postures in real time and allows the verification of the effects of the segmentation. It requires a few seconds for setup, showing a region on the screen that the user must "cover" with a region of skin so that initial averages and deviations for skin color can be determined. While the application allows this to be done several times (to capture, for instance, the colors of the palm and back of the hand as well as face regions), showing either many regions or any single region of skin has always given similar results during tests. The application also includes the options of recording and removing a known background, and can show either a color image of the foreground or a monochrome image of the segmented skin regions. While showing the monochrome image, if a posture is identified the application also displays its description at the bottom of the screen. This application also identifies and displays the 8 possible movements. Actually, a gesture was defined for each movement, all 8 having a closed fist (which is very accurately identified by the system) as both initial and final posture. The images labeled as "Erosion" and "Posture" in figure 1 are actually regions from screenshots of this application.
During the tests with this application, analysis and recognition continued to perform well when the images were well segmented. Often, however, a finger, usually the thumb or pinky, would disappear from the segmented image, or only parts of the fingers would show, leading to postures not being recognized or to mismatches (such as an open palm identified as a posture mimicking a claw). This was mostly due to problems with illumination and image capture, such as a bloom showing between the fingers if the open hand was in front of a light source, or bright light sources reflecting specularly from large regions of skin. Both make large skin regions show as white. Even in these environments with uncontrolled (and problematic) illumination, the system identified the right posture most of the time. Another problem occurred during these tests when the long sleeves worn by the subjects slid down the wrist, showing a portion of the forearm. Only 2 or 3 centimeters needed to show to cause a dramatic drop in recognition quality. During these tests, the movements were always recognized correctly.
While Gestures2Go should be primarily used to issue commands with gestures, a third application was built to evaluate its use to select objects, replacing the use of the mouse. A posture was associated with moving the mouse, and relative changes in hand position while that posture was recognized were mapped to relative movements of the mouse pointer using Desc2Input. Two other postures were associated with left and right clicks. The hand moved only in a small region of a 640x480 image while the mouse should move over a 1024x768 region, so the linear mapping scaled the hand's vertical and horizontal movements by different constants before applying them to the mouse pointer. The system was still relatively easy to use, even to click on smaller objects on the screen.
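The mapping from hand displacement to pointer displacement might look like the sketch below; the gain values are placeholders, since the text only says that different constants were used for the horizontal and vertical axes.

```cpp
// Sketch of a relative hand-to-mouse mapping on Windows. Gains are assumed
// values, not the ones used in the test application.
#include <windows.h>

void moveMouseByHandDelta(double dxHand, double dyHand,
                          double gainX = 1024.0 / 200.0,   // hand moves only in a small
                          double gainY = 768.0 / 150.0) {  // sub-region of the 640x480 image
    INPUT in = {};
    in.type = INPUT_MOUSE;
    in.mi.dx = static_cast<LONG>(dxHand * gainX);
    in.mi.dy = static_cast<LONG>(dyHand * gainY);
    in.mi.dwFlags = MOUSEEVENTF_MOVE;   // relative pointer movement
    SendInput(1, &in, sizeof(INPUT));
}
```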
Finally, postures, movements, using the hand to move the mouse pointer and click and the use of a dancing mat for navigation were put together in a fourth test application which was used to control a popular MMO. Using the hand to move the mouse pointer and clicking was only necessary to manipulate some objects in the scenery. A gesture was associated with the command to select the next possible target and several gestures were associated with different actions to be performed on this target. This interface was especially adequate to this particular MMO because most actions are accompanied by easily identifiable hand motions of the player's avatar, so the mapping between gesture and game action was natural, very visible and enjoyable. To navigate in the game world using the dancing mat, it was connected to the computer's parallel port and a class was created to read its inputs and send them to Desc2Input to be translated as the arrow keys and commands for actions such as jumping. Because in systems derived from Windows NT only applications running in kernel mode can access the parallel port, it was necessary to either write a device driver or use an existing one. Using Inpout32 [logix4u 2009] was the chosen solution. It is a DLL with an embedded driver and functions for reading and writing to the parallel port (inp32 and out32). Up to the time of this writing, unfortunately, permission to use this MMO's name and images had not yet been granted by the publisher.
The performance of each module was also tested, using a 3GHz Intel Core 2 Duo CPU and 2GB of RAM (the test process ran in only one core, however). Table 2 shows approximate average times measured for each task in 375 tests (5 tests of 5s at 15 frames per second).
Table 2: Approximate average processing times per activity.

| Activity | Task | Time (ms) |
|---|---|---|
| Segmentation | | 13.600 |
| Analysis | Connected components | 0.650 |
| Analysis | Moments | 0.013 |
| Analysis | Features | 0.003 |
| Recognition | 10 postures | 0.002 |
| Recognition | Movement | <0.001 |
Table 2 shows that segmentation is by far the most costly activity. During analysis, finding the connected components is also the most time-consuming task, but it still takes less than a millisecond. Finding the image moments for one hand's connected component takes approximately 13 µs only because OpenCV's function calculates up to third order moments, while the system only requires moments of orders 0 and 1, so this operation could easily be sped up, but it is clearly not a priority. Calculating all the features needed for recognition and the recognition itself were extremely fast during these tests, at less than 5 µs. That is assuming there are 10 possible postures (recognition time increases linearly with the number of possible postures) and a worst case scenario where the angle difference is never above the tolerance, so the mean squared error between distance vectors is calculated for every possibility. Movement recognition consists of only a few conditions and happened too fast to get accurate measurements. With these results, the system satisfies the requirement of low processing demand, and should it be necessary to make it faster, it is trivial to parallelize the segmentation, either to run in more cores or to be done on the GPU. These processing times, however, indicate that finding a more robust segmentation strategy is much more important than increasing its performance.
6. Conclusion
This current implementation of Gestures2Go, focused specifically on games and other similar applications, satisfies most of the requirements for gesture-based interfaces and games which were studied during the system's design phase.
While some setup is needed, to record the background and calculate the player's skin color parameters, this setup only takes a few seconds. Each execution step takes less than 15 ms on a single 3GHz core, satisfying the requirement for low processing demand, especially considering that in most contexts the system must only differentiate between 5 and 10 gestures. However, combining 41 (or more) postures of one hand with 12 movements and initial hand locations (relative to the head) for both hands creates a vocabulary of thousands of possible gestures, greatly increasing the chance that the interface designer can find an appropriate gesture to associate with each action. Desc2Input facilitates multimodal interaction, and the system as a whole is quite tolerant to variations in gesture execution, both for postures and movements.
One requirement cannot be considered satisfied yet, however: simplifying the development of games with gestures. Desc2Input should be responsible for this requirement, but currently its interface only allows the association of descriptions and inputs by hard-coding them using the associate function. Furthermore, its current version is provided as source code that must be included within the same project as the gesture recognition system and the systems for interpreting other modes of interaction (such as the dancing mat used in one of the tests, or speech recognition). This makes the system's use by programmers much more complex than desired. It is a priority for future work, therefore, to develop a better interface for Desc2Input. The system's next version will allow the association of descriptions and inputs through an external XML configuration file, and Desc2Input will be available not only as source code but also as a DLL to include in projects, as well as a standalone executable that receives descriptions via sockets from the different modules responsible for complementary interaction modes. Gestures2Go will also include a standalone application that generates regular system inputs from command gestures, so that this sort of interface may be used with any other interactive application simply by customizing a configuration file associating gestures to inputs, without requiring a single line of programming.
Another standalone application is in development to facilitate this configuration: instead of editing the configuration file directly, the user simply shows the initial and final postures to the system and selects, in a graphical interface, the movements, locations and which input that gesture must generate. A final improvement in this area is the integration of Gestures2Go with a game engine, but this depends on the engine's architecture and is beyond this paper's scope.
Another priority for future work is improving the segmentation. One of the system's requirements is that it must not demand controlled or special lighting or unusual or expensive equipment and, under those severe limitations, the segmentation actually works considerably well. But it is still the least robust part of the system and causes frequent and noticeable errors under some lighting conditions. Several robust probabilistic solutions exist to track hands and their contours, such as using variations of the condensation algorithm [Isard & Blake 1998]. Most of these solutions require knowledge either of one fixed hand posture, or of a small number of postures and a transition model between them [Liu & Jia 2004], which complicates the addition of new postures and gestures. Even these methods often use depth data to aid in segmentation. Other methods do not require a known model for the hand but only track its position, not the contour, which is necessary for Gestures2Go. One promising approach that will be tested as soon as possible within this system is tracking the hand and its contour with no hand model information by using Kalman filters to estimate both the hand's movement and the positions of control points of curves that define the hand shape [de Bem & Costa 2006]. This strategy will be adopted if tests show that its performance and accuracy are adequate while tracking enough control points to model a rapidly changing hand contour.
Using depth data [Nakamura & Tori 2008] is another planned improvement to the system, both to the segmentation and to allow a greater number of postures, such as pointing postures. Lastly, formal usability tests must be conducted to determine whether the interaction techniques using Gestures2Go in a MMO are effective in the context of games.
References
BOWMAN, D. A., KRUIJFF, E., LAVIOLA, J. J. AND POUPYREV, I., 2005. 3D User Interfaces: Theory and Practice. Addison-Wesley.
MILLER, G., 1956. The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information. In: The Psychological Review 63, p. 81-97.
HOTP: An HMAC-Based One-Time Password Algorithm
Status of This Memo
This memo provides information for the Internet community. It does not specify an Internet standard of any kind. Distribution of this memo is unlimited.
Copyright Notice
Copyright (C) The Internet Society (2005).
Abstract
This document describes an algorithm to generate one-time password values, based on Hashed Message Authentication Code (HMAC). A security analysis of the algorithm is presented, and important parameters related to the secure deployment of the algorithm are discussed. The proposed algorithm can be used across a wide range of network applications ranging from remote Virtual Private Network (VPN) access, Wi-Fi network logon to transaction-oriented Web applications.
This work is a joint effort by the OATH (Open AuTHentication) membership to specify an algorithm that can be freely distributed to the technical community. The authors believe that a common and shared algorithm will facilitate adoption of two-factor authentication on the Internet by enabling interoperability across commercial and open-source implementations.
Table of Contents
1. Overview
2. Introduction
3. Requirements Terminology
4. Algorithm Requirements
5. HOTP Algorithm
   5.1. Notation and Symbols
   5.2. Description
   5.3. Generating an HOTP Value
   5.4. Example of HOTP Computation for Digit = 6
6. Security Considerations
7. Security Requirements
   7.1. Authentication Protocol Requirements
   7.2. Validation of HOTP Values
   7.3. Throttling at the Server
   7.4. Resynchronization of the Counter
   7.5. Management of Shared Secrets
8. Bi-Directional Authentication
9. Conclusion
10. Acknowledgements
11. Contributors
12. References
    12.1. Normative References
    12.2. Informative References
Appendix A - HOTP Algorithm Security: Detailed Analysis
   A.1. Definitions and Notations
   A.2. The Idealized Algorithm: HOTP-IDEAL
   A.3. Model of Security
   A.4. Security of the Ideal Authentication Algorithm
      A.4.1. From Bits to Digits
      A.4.2. Brute Force Attacks
      A.4.3. Brute Force Attacks Are the Best Possible Attacks
   A.5. Security Analysis of HOTP
Appendix B - SHA-1 Attacks
   B.1. SHA-1 Status
   B.2. HMAC-SHA-1 Status
   B.3. HOTP Status
Appendix C - HOTP Algorithm: Reference Implementation
Appendix D - HOTP Algorithm: Test Values
Appendix E - Extensions
   E.1. Number of Digits
   E.2. Alphanumeric Values
   E.3. Sequence of HOTP Values
   E.4. A Counter-Based Resynchronization Method
   E.5. Data Field
1. Overview
The document introduces first the context around an algorithm that generates one-time password values based on HMAC [BCK1] and, thus, is named the HMAC-Based One-Time Password (HOTP) algorithm. In Section 4, the algorithm requirements are listed and in Section 5, the HOTP algorithm is described. Sections 6 and 7 focus on the algorithm security. Section 8 proposes some extensions and improvements, and Section 10 concludes this document. In Appendix A, the interested reader will find a detailed, full-fledged analysis of the algorithm security: an idealized version of the algorithm is evaluated, and then the HOTP algorithm security is analyzed.
2. Introduction
Today, deployment of two-factor authentication remains extremely limited in scope and scale. Despite increasingly higher levels of threats and attacks, most Internet applications still rely on weak authentication schemes for policing user access. The lack of interoperability among hardware and software technology vendors has been a limiting factor in the adoption of two-factor authentication technology. In particular, the absence of open specifications has led to solutions where hardware and software components are tightly coupled through proprietary technology, resulting in high-cost solutions, poor adoption, and limited innovation.
In the last two years, the rapid rise of network threats has exposed the inadequacies of static passwords as the primary mean of authentication on the Internet. At the same time, the current approach that requires an end user to carry an expensive, single-function device that is only used to authenticate to the network is clearly not the right answer. For two-factor authentication to propagate on the Internet, it will have to be embedded in more flexible devices that can work across a wide range of applications.
The ability to embed this base technology while ensuring broad interoperability requires that it be made freely available to the broad technical community of hardware and software developers. Only an open-system approach will ensure that basic two-factor authentication primitives can be built into the next generation of consumer devices such as USB mass storage devices, IP phones, and personal digital assistants.
One-Time Password is certainly one of the simplest and most popular forms of two-factor authentication for securing network access. For example, in large enterprises, Virtual Private Network access often requires the use of One-Time Password tokens for remote user authentication. One-Time Passwords are often preferred to stronger forms of authentication such as Public-Key Infrastructure (PKI) or biometrics because an air-gap device does not require the installation of any client desktop software on the user machine, therefore allowing them to roam across multiple machines, including home computers, kiosks, and personal digital assistants.
This document proposes a simple One-Time Password algorithm that can be implemented by any hardware manufacturer or software developer to create interoperable authentication devices and software agents. The algorithm is event-based so that it can be embedded in high-volume devices such as Java smart cards, USB dongles, and GSM SIM cards. The presented algorithm is made freely available to the developer community under the terms and conditions of the IETF Intellectual Property Rights [RFC3979].
The authors of this document are members of the Open AuTHentication initiative [OATH]. The initiative was created in 2004 to facilitate collaboration among strong authentication technology providers.
3. Requirements Terminology
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119].
4. Algorithm Requirements
This section presents the main requirements that drove this algorithm design. A lot of emphasis was placed on end-consumer usability as well as the ability for the algorithm to be implemented by low-cost hardware that may provide minimal user interface capabilities. In particular, the ability to embed the algorithm into high-volume SIM and Java cards was a fundamental prerequisite.
R1 - The algorithm MUST be sequence- or counter-based: one of the goals is to have the HOTP algorithm embedded in high-volume devices such as Java smart cards, USB dongles, and GSM SIM cards.
R2 - The algorithm SHOULD be economical to implement in hardware by minimizing requirements on battery, number of buttons, computational horsepower, and size of LCD display.
R3 - The algorithm MUST work with tokens that do not support any numeric input, but MAY also be used with more sophisticated devices such as secure PIN-pads.
R4 - The value displayed on the token MUST be easily read and entered by the user: This requires the HOTP value to be of reasonable length.
The HOTP value must be at least a 6-digit value. It is also desirable that the HOTP value be ‘numeric only’ so that it can be easily entered on restricted devices such as phones.
R5 - There MUST be user-friendly mechanisms available to resynchronize the counter. Section 7.4 and Appendix E.4 detail the resynchronization mechanism proposed in this document.
R6 - The algorithm MUST use a strong shared secret. The length of the shared secret MUST be at least 128 bits. This document RECOMMENDs a shared secret length of 160 bits.
5. HOTP Algorithm
In this section, we introduce the notation and describe the HOTP algorithm basic blocks -- the base function to compute an HMAC-SHA-1 value and the truncation method to extract an HOTP value.
5.1. Notation and Symbols
A string always means a binary string, meaning a sequence of zeros and ones.
If s is a string, then |s| denotes its length.
If n is a number, then |n| denotes its absolute value.
If s is a string, then s[i] denotes its i-th bit. We start numbering the bits at 0, so s = s[0]s[1]...s[n-1] where n = |s| is the length of s.
Let StToNum (String to Number) denote the function that takes as input a string s and returns the number whose binary representation is s. (For example, StToNum(110) = 6.)
Here is a list of symbols used in this document.
<table>
<thead>
<tr>
<th>Symbol</th>
<th>Represents</th>
</tr>
</thead>
<tbody>
<tr>
<td>C</td>
<td>8-byte counter value, the moving factor. This counter MUST be synchronized between the HOTP generator (client) and the HOTP validator (server).</td>
</tr>
<tr>
<td>K</td>
<td>shared secret between client and server; each HOTP generator has a different and unique secret K.</td>
</tr>
<tr>
<td>T</td>
<td>throttling parameter: the server will refuse connections from a user after T unsuccessful authentication attempts.</td>
</tr>
<tr>
<td>s</td>
<td>resynchronization parameter: the server will attempt to verify a received authenticator across s consecutive counter values.</td>
</tr>
<tr>
<td>Digit</td>
<td>number of digits in an HOTP value; system parameter.</td>
</tr>
</tbody>
</table>
5.2. Description
The HOTP algorithm is based on an increasing counter value and a static symmetric key known only to the token and the validation service. In order to create the HOTP value, we will use the HMAC-SHA-1 algorithm, as defined in RFC 2104 [BCK2].
As the output of the HMAC-SHA-1 calculation is 160 bits, we must truncate this value to something that can be easily entered by a user.
\[
\text{HOTP}(K,C) = \text{Truncate} (\text{HMAC-SHA-1}(K,C))
\]
Where:
- \( \text{Truncate} \) represents the function that converts an HMAC-SHA-1 value into an HOTP value as defined in Section 5.3.
The Key \((K)\), the Counter \((C)\), and Data values are hashed high-order byte first.
The HOTP values generated by the HOTP generator are treated as big endian.
5.3. Generating an HOTP Value
We can describe the operations in 3 distinct steps:
Step 1: Generate an HMAC-SHA-1 value
Let \( HS = \text{HMAC-SHA-1}(K,C) \) // \( HS \) is a 20-byte string
Step 2: Generate a 4-byte string (Dynamic Truncation)
Let \( S\text{bits} = \text{DT}(HS) \) // DT, defined below,
// returns a 31-bit string
Step 3: Compute an HOTP value
Let \( S\text{num} = \text{StToNum}(S\text{bits}) \) // Convert \( S \) to a number in \( 0\ldots2^{31}-1 \)
Return \( D = S\text{num} \mod 10^{\text{Digit}} \) // \( D \) is a number in the range \( 0\ldots10^{\text{Digit}}-1 \)
The Truncate function performs Step 2 and Step 3, i.e., the dynamic truncation and then the reduction modulo 10^Digit. The purpose of the dynamic offset truncation technique is to extract a 4-byte dynamic binary code from a 160-bit (20-byte) HMAC-SHA-1 result.
    DT(String)                      // String = String[0]...String[19]
      Let OffsetBits be the low-order 4 bits of String[19]
      Offset = StToNum(OffsetBits)  // 0 <= Offset <= 15
      Let P = String[Offset]...String[Offset+3]
      Return the last 31 bits of P
The reason for masking the most significant bit of P is to avoid confusion about signed vs. unsigned modulo computations. Different processors perform these operations differently, and masking out the signed bit removes all ambiguity.
Implementations MUST extract a 6-digit code at a minimum and possibly 7 and 8-digit code. Depending on security requirements, Digit = 7 or more SHOULD be considered in order to extract a longer HOTP value.
The following paragraph is an example of using this technique for Digit = 6, i.e., that a 6-digit HOTP value is calculated from the HMAC value.
5.4. Example of HOTP Computation for Digit = 6
The following code example describes the extraction of a dynamic binary code given that hmac_result is a byte array with the HMAC-SHA-1 result:
```c
int offset = hmac_result[19] & 0xf ;
int bin_code = (hmac_result[offset] & 0x7f) << 24
| (hmac_result[offset+1] & 0xff) << 16
| (hmac_result[offset+2] & 0xff) << 8
| (hmac_result[offset+3] & 0xff) ;
```
SHA-1 HMAC Bytes (Example)
The following is an example 20-byte HMAC-SHA-1 result, with byte 0 on the left and byte 19 on the right:
    1f 86 98 69 0e 02 ca 16 61 85 50 ef 7f 19 da 8e 94 5b 55 5a
* The last byte (byte 19) has the hex value 0x5a.
* The value of the lower 4 bits is 0xa (the offset value).
* The offset value is byte 10 (0xa).
* The value of the 4 bytes starting at byte 10 is 0x50ef7f19, which is the dynamic binary code DBC1.
* The MSB of DBC1 is 0x50 so DBC2 = DBC1 = 0x50ef7f19.
* HOTP = DBC2 modulo 10^6 = 872921.
We treat the dynamic binary code as a 31-bit, unsigned, big-endian integer; the first byte is masked with a 0x7f.
We then take this number modulo 1,000,000 (10^6) to generate the 6-digit HOTP value 872921 decimal.
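As a cross-check, the following is a minimal, self-contained Java sketch (not part of the reference implementation in Appendix C) that reproduces the computation above from the example HMAC bytes; it should print 872921.
```java
// Minimal sketch reproducing the Digit = 6 worked example above.
// The 20 bytes are the example HMAC-SHA-1 result from this section.
public class TruncationExample {
    public static void main(String[] args) {
        int[] hmac = {
            0x1f, 0x86, 0x98, 0x69, 0x0e, 0x02, 0xca, 0x16, 0x61, 0x85,
            0x50, 0xef, 0x7f, 0x19, 0xda, 0x8e, 0x94, 0x5b, 0x55, 0x5a
        };
        int offset = hmac[19] & 0xf;                    // low 4 bits of byte 19 -> 10
        int binCode = ((hmac[offset]     & 0x7f) << 24) // mask the sign bit
                    | ((hmac[offset + 1] & 0xff) << 16)
                    | ((hmac[offset + 2] & 0xff) << 8)
                    |  (hmac[offset + 3] & 0xff);       // DBC2 = 0x50ef7f19
        System.out.println(binCode % 1000000);          // prints 872921
    }
}
```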
6. Security Considerations
The conclusion of the security analysis detailed in the Appendix is that, for all practical purposes, the outputs of the Dynamic Truncation (DT) on distinct counter inputs are uniformly and independently distributed 31-bit strings.
The security analysis then details the impact of the conversion from a string to an integer and the final reduction modulo 10^Digit, where Digit is the number of digits in an HOTP value.
The analysis demonstrates that these final steps introduce a negligible bias, which does not impact the security of the HOTP algorithm, in the sense that the best possible attack against the HOTP function is the brute force attack.
Assume an adversary is able to observe numerous protocol exchanges and to collect sequences of successful authentication values. This adversary, trying to build a function F to generate HOTP values based on these observations, will not have a significant advantage over a random guess.
The logical conclusion is simply that the best strategy will once again be to perform a brute force attack to enumerate and try all the possible values.
Considering the security analysis in the Appendix of this document, without loss of generality, we can approximate closely the security of the HOTP algorithm by the following formula:
\[
\text{Sec} = \frac{sv}{10^{\text{Digit}}}
\]
Where:
- Sec is the probability of success of the adversary;
- s is the look-ahead synchronization window size;
- v is the number of verification attempts;
- Digit is the number of digits in HOTP values.
Obviously, we can play with s, T (the Throttling parameter that would limit the number of attempts by an attacker), and Digit until achieving a certain level of security, still preserving the system usability.
7. Security Requirements
Any One-Time Password algorithm is only as secure as the application and the authentication protocols that implement it. Therefore, this section discusses the critical security requirements that our choice of algorithm imposes on the authentication protocol and validation software.
The parameters T and s discussed in this section have a significant impact on the security -- further details in Section 6 elaborate on the relations between these parameters and their impact on the system security.
It is also important to remark that the HOTP algorithm is not a substitute for encryption and does not provide for the privacy of data transmission. Other mechanisms should be used to defeat attacks aimed at breaking confidentiality and privacy of transactions.
7.1. Authentication Protocol Requirements
We introduce in this section some requirements for a protocol P implementing HOTP as the authentication method between a prover and a verifier.
RP1 - P MUST support two-factor authentication, i.e., the communication and verification of something you know (secret code such as a Password, Pass phrase, PIN code, etc.) and something you have (token). The secret code is known only to the user and usually entered with the One-Time Password value for authentication purpose (two-factor authentication).
RP2 - P SHOULD NOT be vulnerable to brute force attacks. This implies that a throttling/lockout scheme is RECOMMENDED on the validation server side.
RP3 - P SHOULD be implemented over a secure channel in order to protect users' privacy and avoid replay attacks.
7.2. Validation of HOTP Values
The HOTP client (hardware or software token) increments its counter and then calculates the next HOTP value $HOTP_{client}$. If the value received by the authentication server matches the value calculated by the client, then the HOTP value is validated. In this case, the server increments the counter value by one.
If the value received by the server does not match the value calculated by the client, the server initiates the resynchronization protocol (look-ahead window) before it requests another pass.
If the resynch fails, the server asks then for another authentication pass of the protocol to take place, until the maximum number of authorized attempts is reached.
If and when the maximum number of authorized attempts is reached, the server SHOULD lock out the account and initiate a procedure to inform the user.
7.3. Throttling at the Server
Truncating the HMAC-SHA-1 value to a shorter value makes a brute force attack possible. Therefore, the authentication server needs to detect and stop brute force attacks.
We RECOMMEND setting a throttling parameter $T$, which defines the maximum number of possible attempts for One-Time Password validation. The validation server manages individual counters per HOTP device in order to take note of any failed attempt. We RECOMMEND $T$ not to be too large, particularly if the resynchronization method used on the server is window-based, and the window size is large. $T$ SHOULD be set as low as possible, while still ensuring that usability is not significantly impacted.
Another option would be to implement a delay scheme to avoid a brute force attack. After each failed attempt $A$, the authentication server would wait for an increased $T \times A$ number of seconds, e.g., with $T = 5$: after the first failed attempt, the server waits for 5 seconds; after the second failed attempt, it waits for $5 \times 2 = 10$ seconds; etc.
The delay or lockout schemes MUST be across login sessions to prevent attacks based on multiple parallel guessing techniques.
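The document does not prescribe an implementation of the delay scheme; the following is a minimal sketch, assuming a per-device failure counter, of the linear $T \times A$ delay described above (all names are hypothetical):
```java
// Hypothetical sketch of the delay scheme: after the A-th failed
// attempt, the server waits T * A seconds before allowing another try.
public class DelayThrottle {
    private final int baseDelaySeconds;   // the parameter T, e.g., 5
    private int failedAttempts = 0;       // per-HOTP-device failure counter

    public DelayThrottle(int baseDelaySeconds) {
        this.baseDelaySeconds = baseDelaySeconds;
    }

    /** Records a failure and returns the delay before the next attempt. */
    public int recordFailureAndGetDelaySeconds() {
        failedAttempts++;
        return baseDelaySeconds * failedAttempts;   // 5, 10, 15, ... for T = 5
    }

    public void recordSuccess() {
        failedAttempts = 0;
    }
}
```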
7.4. Resynchronization of the Counter
Although the server’s counter value is only incremented after a successful HOTP authentication, the counter on the token is incremented every time a new HOTP is requested by the user. Because of this, the counter values on the server and on the token might be out of synchronization.
We RECOMMEND setting a look-ahead parameter $s$ on the server, which defines the size of the look-ahead window. In a nutshell, the server can recalculate the next $s$ HOTP-server values, and check them against the received HOTP client.
Synchronization of counters in this scenario simply requires the server to calculate the next HOTP values and determine if there is a match. Optionally, the system MAY require the user to send a sequence of (say, 2, 3) HOTP values for resynchronization purpose, since forging a sequence of consecutive HOTP values is even more difficult than guessing a single HOTP value.
The upper bound set by the parameter $s$ ensures the server does not go on checking HOTP values forever (causing a denial-of-service attack) and also restricts the space of possible solutions for an attacker trying to manufacture HOTP values. $s$ SHOULD be set as low as possible, while still ensuring that usability is not impacted.
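A minimal sketch of the server-side look-ahead validation just described, reusing the generateOTP routine from Appendix C (the surrounding method is hypothetical):
```java
// Hypothetical sketch: check a received HOTP value against the next s
// counter values. Returns the new server counter on success, or -1.
static long validateWithLookAhead(byte[] secret, long serverCounter,
                                  int s, String received)
        throws java.security.NoSuchAlgorithmException,
               java.security.InvalidKeyException {
    for (long c = serverCounter; c < serverCounter + s; c++) {
        String expected = OneTimePasswordAlgorithm.generateOTP(
                secret, c, 6, false, -1);   // 6 digits, dynamic truncation
        if (expected.equals(received)) {
            return c + 1;                   // resynchronized counter value
        }
    }
    return -1;                              // no match within the window
}
```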
7.5. Management of Shared Secrets
The operations dealing with the shared secrets used to generate and verify OTP values must be performed securely, in order to mitigate risks of any leakage of sensitive information. We describe in this section different modes of operations and techniques to perform these different operations with respect to the state of the art in data security.
We can consider two different avenues for generating and storing (securely) shared secrets in the Validation system:
* Deterministic Generation: secrets are derived from a master seed, both at provisioning and verification stages and generated on-the-fly whenever it is required.
* Random Generation: secrets are generated randomly at provisioning stage and must be stored immediately and kept secure during their life cycle.
Deterministic Generation
------------------------
A possible strategy is to derive the shared secrets from a master secret. The master secret will be stored at the server only. A tamper-resistant device MUST be used to store the master key and derive the shared secrets from the master key and some public information. The main benefit would be to avoid the exposure of the shared secrets at any time and also avoid specific requirements on storage, since the shared secrets could be generated on-demand when needed at provisioning and validation time.
We distinguish two different cases:
- A single master key MK is used to derive the shared secrets; each HOTP device has a different secret, K_i = SHA-1 (MK,i) where i stands for a public piece of information that identifies uniquely the HOTP device such as a serial number, a token ID, etc. Obviously, this is in the context of an application or service -- different application or service providers will have different secrets and settings.
- Several master keys MK_i are used and each HOTP device stores a set of different derived secrets, {K_i,j = SHA-1(MK_i,j)} where j stands for a public piece of information identifying the device. The idea would be to store ONLY the active master key at the validation server, in the Hardware Security Module (HSM), and keep in a safe place, using secret sharing methods such as [Shamir] for instance. In this case, if a master secret MK_i is compromised, then it is possible to switch to another secret without replacing all the devices.
The drawback in the deterministic case is that the exposure of the master secret would obviously enable an attacker to rebuild any shared secret based on correct public information. The revocation of all secrets would be required, or switching to a new set of secrets in the case of multiple master keys.
On the other hand, the device used to store the master key(s) and generate the shared secrets MUST be tamper resistant. Furthermore, the HSM will not be exposed outside the security perimeter of the validation system, therefore reducing the risk of leakage.
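As an illustration of the single-master-key case, here is a minimal sketch that reads K_i = SHA-1(MK, i) as SHA-1 over the concatenation of the master key and the public token identifier; the exact encoding is an assumption, since this document leaves it to the implementer:
```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Sketch only: derive a per-token shared secret from a master key MK
// and a public identifier i, reading SHA-1(MK, i) as SHA-1(MK || i).
public class DeterministicDerivation {
    public static byte[] deriveSecret(byte[] masterKey, String tokenId)
            throws NoSuchAlgorithmException {
        MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
        sha1.update(masterKey);
        sha1.update(tokenId.getBytes(StandardCharsets.US_ASCII));
        return sha1.digest();   // 160-bit shared secret K_i
    }
}
```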
Random Generation
-----------------
The shared secrets are randomly generated. We RECOMMEND following the recommendations in [RFC4086] and selecting a good and secure random source for generating these secrets. A (true) random generator requires a naturally occurring source of randomness. Practically, there are two possible avenues to consider for the generation of the shared secrets:
* Hardware-based generators: they exploit the randomness that occurs in physical phenomena. A nice implementation can be based on oscillators and built in such ways that active attacks are more difficult to perform.
* Software-based generators: designing a good software random generator is not an easy task. A simple, but efficient, implementation should be based on various sources and apply to the sampled sequence a one-way function such as SHA-1.
We RECOMMEND selecting proven products, being hardware or software generators, for the computation of shared secrets.
We also RECOMMEND storing the shared secrets securely, and more specifically encrypting the shared secrets when stored using tamper-resistant hardware encryption and exposing them only when required: for example, the shared secret is decrypted when needed to verify an HOTP value, and re-encrypted immediately to limit exposure in the RAM for a short period of time. The data store holding the shared secrets MUST be in a secure area, to avoid as much as possible direct attack on the validation system and secrets database.
Particularly, access to the shared secrets should be limited to programs and processes required by the validation system only. We will not elaborate on the different security mechanisms to put in place, but obviously, the protection of shared secrets is of the uttermost importance.
8. Composite Shared Secrets
It may be desirable to include additional authentication factors in the shared secret K. These additional factors can consist of any data known at the token but not easily obtained by others. Examples of such data include:
* PIN or Password obtained as user input at the token
* Phone number
* Any unique identifier programmatically available at the token
In this scenario, the composite shared secret K is constructed during the provisioning process from a random seed value combined with one or more additional authentication factors. The server could either build on-demand or store composite secrets -- in any case, depending on implementation choice, the token only stores the seed value. When the token performs the HOTP calculation, it computes K from the seed value and the locally derived or input values of the other authentication factors.
The use of composite shared secrets can strengthen HOTP-based authentication systems through the inclusion of additional authentication factors at the token. To the extent that the token is a trusted device, this approach has the further benefit of not requiring exposure of the authentication factors (such as the user input PIN) to other devices.
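The construction of the composite secret is left open by this document; the following sketch shows one hypothetical combination of the stored seed with a user-entered PIN, followed by an OTP computation with the Appendix C routine:
```java
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.security.MessageDigest;

// Hypothetical sketch: the token stores only the seed and rebuilds the
// composite secret K from the seed plus the user-entered PIN. The
// concatenate-then-hash construction is illustrative, not mandated.
public class CompositeSecretExample {
    static byte[] buildK(byte[] seed, String pin) throws GeneralSecurityException {
        MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
        sha1.update(seed);
        sha1.update(pin.getBytes(StandardCharsets.US_ASCII));
        return sha1.digest();
    }

    static String otpFromSeedAndPin(byte[] seed, String pin, long counter)
            throws GeneralSecurityException {
        return OneTimePasswordAlgorithm.generateOTP(buildK(seed, pin),
                counter, 6, false, -1);
    }
}
```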
9. Bi-Directional Authentication
Interestingly enough, the HOTP client could also be used to authenticate the validation server, claiming that it is a genuine entity knowing the shared secret.
Since the HOTP client and the server are synchronized and share the same secret (or a method to recompute it), a simple 3-pass protocol could be put in place:
1- The end user enters the TokenID and a first OTP value OTP1;
2- The server checks OTP1 and, if correct, sends back OTP2;
3- The end user checks OTP2 using his HOTP device and, if correct, uses the web site.
Obviously, as indicated previously, all the OTP communications have to take place over a secure channel, e.g., SSL/TLS, IPsec connections.
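A compact sketch of the server side of this 3-pass exchange, computing OTP2 at the next counter value (one natural choice; the document does not fix it) with the Appendix C routine:
```java
// Hypothetical server side of the 3-pass mutual authentication above.
// On a valid OTP1 for counter c, reply with OTP2 for c + 1, which the
// user's token can recompute and compare.
static String checkOtp1AndProduceOtp2(byte[] k, long c, String otp1)
        throws java.security.GeneralSecurityException {
    String expected = OneTimePasswordAlgorithm.generateOTP(k, c, 6, false, -1);
    if (!expected.equals(otp1)) {
        return null;   // authentication failed; no OTP2 is sent
    }
    return OneTimePasswordAlgorithm.generateOTP(k, c + 1, 6, false, -1);
}
```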
10. Conclusion
This document describes HOTP, an HMAC-based One-Time Password algorithm. It also recommends the preferred implementation and related modes of operations for deploying the algorithm.
The document also exhibits elements of security and demonstrates that the HOTP algorithm is practical and sound, the best possible attack being a brute force attack that can be prevented by careful implementation of countermeasures in the validation server.
Finally, several enhancements have been proposed, in order to improve security if needed for specific applications.
11. Acknowledgements
The authors would like to thank Siddharth Bajaj, Alex Deacon, Loren Hart, and Nico Popp for their help during the conception and redaction of this document.
12. Contributors
The authors of this document would like to emphasize the role of three persons who have made a key contribution to this document:
- Laszlo Elteto is system architect with SafeNet, Inc.
- Ernesto Frutos is director of Engineering with Authenex, Inc.
- Fred McClain is Founder and CTO with Boojum Mobile, Inc.
Without their advice and valuable inputs, this document would not be the same.
13. References
13.1. Normative References

[BCK2]    Krawczyk, H., Bellare, M., and R. Canetti, "HMAC: Keyed-Hashing for Message Authentication", RFC 2104, February 1997.

[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997.

[RFC4086] Eastlake, D., Schiller, J., and S. Crocker, "Randomness Requirements for Security", BCP 106, RFC 4086, June 2005.

13.2. Informative References

[BCK1]    Bellare, M., Canetti, R., and H. Krawczyk, "Keyed Hash Functions and Message Authentication", Proceedings of Crypto'96, LNCS Vol. 1109, pp. 1-15.

[Crack]   "Crack in SHA-1 code 'stuns' security gurus", EE Times, http://www.eetimes.com/showArticle.jhtml?articleID=60402150

[OATH]    Initiative for Open AuTHentication, http://www.openauthentication.org

[RFC3979] Bradner, S., Ed., "Intellectual Property Rights in IETF Technology", BCP 79, RFC 3979, March 2005.

[Shamir]  Shamir, A., "How to Share a Secret", Communications of the ACM, Vol. 22, No. 11, pp. 612-613, 1979.
Appendix A - HOTP Algorithm Security: Detailed Analysis
The security analysis of the HOTP algorithm is summarized in this section. We first detail the best attack strategies, and then elaborate on the security under various assumptions and the impact of the truncation and make some recommendations regarding the number of digits.
We focus this analysis on the case where Digit = 6, i.e., an HOTP function that produces 6-digit values, which is the bare minimum recommended in this document.
A.1. Definitions and Notations
We denote by \(\{0,1\}^l\) the set of all binary strings of length \(l\).
For an integer \(m \geq 1\), let \(\mathbb{Z}_m = \{0, \ldots, m-1\}\).
Let \(\text{IntDiv}(a,b)\) denote the integer division algorithm that takes input integers \(a\) and \(b\) where \(a \geq b \geq 1\) and returns integers \((q,r)\)
the quotient and remainder, respectively, of the division of \(a\) by \(b\).
(Thus, \(a = bq + r\) and \(0 \leq r < b\).)
Let \(H: \{0,1\}^k \times \{0,1\}^c \rightarrow \{0,1\}^n\) be the base function that takes a \(k\)-bit key \(K\) and \(c\)-bit counter \(C\) and returns an \(n\)-bit output \(H(K,C)\).
(In the case of HOTP, \(H\) is HMAC-SHA-1; we use this formal definition for generalizing our proof of security.)
A.2. The Idealized Algorithm: HOTP-IDEAL
We now define an idealized counterpart of the HOTP algorithm. In this algorithm, the role of \(H\) is played by a random function that forms the key.
To be more precise, let Maps\((c,n)\) denote the set of all functions mapping from \(\{0,1\}^c\) to \(\{0,1\}^n\). The idealized algorithm has key space Maps\((c,n)\), so that a "key" for such an algorithm is a function \(h\) from \(\{0,1\}^c\) to \(\{0,1\}^n\). We imagine this key (function) to be drawn at random. It is not feasible to implement this idealized algorithm, since the key, being a function from \(\{0,1\}^c\) to \(\{0,1\}^n\), is far too large even to store. So why consider it?
Our security analysis will show that as long as \(H\) satisfies a certain well-accepted assumption, the security of the actual and idealized algorithms is for all practical purposes the same. The task that really faces us, then, is to assess the security of the idealized algorithm.
In analyzing the idealized algorithm, we are concentrating on assessing the quality of the design of the algorithm itself, independently of HMAC-SHA-1. This is in fact the important issue.
A.3. Model of Security
The model exhibits the type of threats or attacks that are being considered and enables one to assess the security of HOTP and HOTP-IDEAL. We denote ALG as either HOTP or HOTP-IDEAL for the purpose of this security analysis.
The scenario we are considering is that a user and server share a key K for ALG. Both maintain a counter C, initially zero, and the user authenticates itself by sending ALG(K,C) to the server. The latter accepts if this value is correct.
In order to protect against accidental increment of the user counter, the server, upon receiving a value z, will accept as long as z equals ALG(K,i) for some i in the range C,...,C + s-1, where s is the resynchronization parameter and C is the server counter. If it accepts with some value of i, it then increments its counter to i+1. If it does not accept, it does not change its counter value.
The model we specify captures what an adversary can do and what it needs to achieve in order to "win". First, the adversary is assumed to be able to eavesdrop, meaning, to see the authenticator transmitted by the user. Second, the adversary wins if it can get the server to accept an authenticator relative to a counter value for which the user has never transmitted an authenticator.
The formal adversary, which we denote by B, starts out knowing which algorithm ALG is being used, knowing the system design, and knowing all system parameters. The one and only thing it is not given a priori is the key K shared between the user and the server.
The model gives B full control of the scheduling of events. It has access to an authenticator oracle representing the user. By calling this oracle, the adversary can ask the user to authenticate itself and get back the authenticator in return. It can call this oracle as often as it wants and when it wants, using the authenticators it accumulates to perhaps "learn" how to make authenticators itself. At any time, it may also call a verification oracle, supplying the latter with a candidate authenticator of its choice. It wins if the server accepts this authenticator.
Consider the following game involving an adversary B that is attempting to compromise the security of an authentication algorithm ALG: K x \(\{0,1\}^c\) \(\rightarrow\) R.
Initializations - A key K is selected at random from K, a counter C is initialized to 0, and the Boolean value win is set to false.
Game execution - Adversary B is provided with the two following oracles:
Oracle AuthO()
--------------
A = ALG(K, C)
C = C + 1
Return A to B

Oracle VerO(A)
--------------
i = C
While (i <= C + s - 1 and Win == FALSE) do
    If A == ALG(K, i) then Win = TRUE; C = i + 1
    Else i = i + 1
Return Win to B
AuthO() is the authenticator oracle and VerO(A) is the verification oracle.
Upon execution, B queries the two oracles at will. Let Adv(B) be the probability that win gets set to true in the above game. This is the probability that the adversary successfully impersonates the user.
Our goal is to assess how large this value can be as a function of the number v of verification queries made by B, the number a of authenticator oracle queries made by B, and the running time t of B. This will tell us how to set the throttle, which effectively upper bounds v.
A.4. Security of the Ideal Authentication Algorithm
This section summarizes the security analysis of HOTP-IDEAL, starting with the impact of the conversion modulo 10^Digit and then focusing on the different possible attacks.
A.4.1. From Bits to Digits
The dynamic offset truncation of a random n-bit string yields a random 31-bit string. What happens to the distribution when it is taken modulo m = 10^Digit, as done in HOTP?
The following lemma estimates the biases in the outputs in this case.
Lemma 1
-------
Let \( N \geq m \geq 1 \) be integers, and let \((q, r) = \text{IntDiv}(N, m)\). For \( z \) in \( \mathbb{Z}_m \) let:
\[
P_{N, m}(z) = \Pr\{\, x \bmod m = z : x \text{ picked at random in } \mathbb{Z}_N \,\}
\]
Then for any \( z \) in \( \mathbb{Z}_m \)
\[
P_{N, m}(z) = \begin{cases}
(q + 1) / N & \text{if } 0 \leq z < r \\
q / N & \text{if } r \leq z < m
\end{cases}
\]
Proof of Lemma 1
----------------
Let the random variable \( X \) be uniformly distributed over \( \mathbb{Z}_N \). Then:
\[
P_{N, m}(z) = \Pr\{X \bmod m = z\}
\]
\[
= \Pr\{X < mq\} \cdot \Pr\{X \bmod m = z \mid X < mq\}
+ \Pr\{mq \leq X < N\} \cdot \Pr\{X \bmod m = z \mid mq \leq X < N\}
\]
\[
= \frac{mq}{N} \cdot \frac{1}{m}
+ \frac{N - mq}{N} \cdot
\begin{cases}
1/(N - mq) & \text{if } 0 \leq z < N - mq \\
0 & \text{if } N - mq \leq z < m
\end{cases}
\]
Since \(r = N - mq\), this is
\[
= \frac{q}{N} +
\begin{cases}
(r/N) \cdot (1/r) & \text{if } 0 \leq z < r \\
0 & \text{if } r \leq z < m
\end{cases}
\]
Simplifying yields the claimed equation.
Let \( N = 2^{31}, d = 6, \) and \( m = 10^d \). If \( x \) is chosen at random from \( \mathbb{Z}_N \) (meaning, is a random 31-bit string), then reducing it to a 6-digit number by taking \( x \mod m \) does not yield a random 6-digit number.
Rather, \( x \mod m \) is distributed as shown in the following table:
<table>
<thead>
<tr>
<th>Values</th>
<th>Probability that each appears as output</th>
</tr>
</thead>
<tbody>
<tr>
<td>0,1,...,483647</td>
<td>2148/2^{31} roughly equals to 1.00024045/10^6</td>
</tr>
<tr>
<td>483648,...,999999</td>
<td>2147/2^{31} roughly equals to 0.99977478/10^6</td>
</tr>
</tbody>
</table>
If \( X \) is uniformly distributed over \( \mathbb{Z}_N \) (meaning, is a random 31-bit string), then the above shows the probabilities for different outputs of \( X \mod 10^6 \). The first set of values appears with
probability slightly greater than $10^{-6}$, the rest with probability slightly less, meaning that the distribution is slightly non-uniform.
However, as the table above indicates, the bias is small, and as we will see later, negligible: the probabilities are very close to $10^{-6}$.
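The bias is easy to verify numerically; a small sketch computing \(q\), \(r\), and the two probabilities for \(N = 2^{31}\) and \(m = 10^6\):
```java
// Numeric check of Lemma 1 for N = 2^31 and m = 10^6.
public class BiasCheck {
    public static void main(String[] args) {
        long n = 1L << 31;     // N = 2147483648
        long m = 1_000_000L;   // 10^6
        long q = n / m;        // 2147
        long r = n % m;        // 483648
        System.out.println("0.." + (r - 1) + ": " + (double) (q + 1) / n);
        // prints ~1.00024045E-6
        System.out.println(r + ".." + (m - 1) + ": " + (double) q / n);
        // prints ~9.9977478E-7
    }
}
```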
A.4.2. Brute Force Attacks
If the authenticator consisted of $d$ random digits, then a brute force attack using $v$ verification attempts would succeed with probability $sv/10^d$.
However, an adversary can exploit the bias in the outputs of HOTP-IDEAL, predicted by Lemma 1, to mount a slightly better attack.
Namely, it makes authentication attempts with authenticators that are the most likely values, meaning the ones in the range $0,\ldots,r - 1$, where $(q,r) = \text{IntDiv}(2^{31},10^d)$.
The following specifies an adversary in our model of security that mounts the attack. It estimates the success probability as a function of the number of verification queries.
For simplicity, we assume that the number of verification queries is at most $r$. With $N = 2^{31}$ and $m = 10^6$, we have $r = 483,648$, and the throttle value is certainly less than this, so this assumption is not much of a restriction.
Proposition 1
-------------
Suppose $m = 10^d < 2^{31}$, and let $(q,r) = \text{IntDiv}(2^{31},m)$. Assume $s \leq m$. The brute-force-attack adversary $B-bf$ attacks HOTP using $v \leq r$ verification oracle queries. This adversary makes no authenticator oracle queries, and succeeds with probability
$$\text{Adv}(B-bf) = 1 - (1 - v(q+1)/2^{31})^s$$
which is roughly equal to
$$sv * (q+1)/2^{31}$$
With $m = 10^6$ we get $q = 2,147$. In that case, the brute force attack using $v$ verification attempts succeeds with probability
$$\text{Adv}(B-bf) \text{ roughly} = sv * 2148/2^{31} = sv * 1.00024045/10^6$$
As this equation shows, the resynchronization parameter \( s \) has a significant impact in that the adversary’s success probability is proportional to \( s \). This means that \( s \) cannot be made too large without compromising security.
A.4.3. Brute Force Attacks Are the Best Possible Attacks
A central question is whether there are attacks any better than the brute force one. In particular, the brute force attack did not attempt to collect authenticators sent by the user and try to cryptanalyze them in an attempt to learn how to better construct authenticators. Would doing this help? Is there some way to "learn" how to build authenticators that result in a higher success rate than given by the brute-force attack?
The following says the answer to these questions is no. No matter what strategy the adversary uses, and even if it sees, and tries to exploit, the authenticators from authentication attempts of the user, its success probability will not be above that of the brute force attack -- this is true as long as the number of authentications it observes is not incredibly large. This is valuable information regarding the security of the scheme.
Proposition 2
-------------
Suppose \( m = 10^{\text{Digit}} < 2^{31} \), and let \((q,r) = \text{IntDiv}(2^{31},m)\). Let \( B \) be any adversary attacking HOTP-IDEAL using \( v \) verification oracle queries and \( a \leq 2^c - s \) authenticator oracle queries. Then
\[
\text{Adv}(B) \leq sv \frac{(q+1)}{2^{31}}
\]
Note: This result is conditional on the adversary not seeing more than \( 2^c - s \) authentications performed by the user, which is hardly restrictive as long as \( c \) is large enough.
With \( m = 10^6 \), we get \( q = 2,147 \). In that case, Proposition 2 says that any adversary \( B \) attacking HOTP-IDEAL and making \( v \) verification attempts succeeds with probability at most
Equation 1
\[
sv \cdot \frac{2148}{2^{31}} \approx sv \cdot \frac{1.00024045}{10^6}
\]
Meaning, \( B \)'s success rate is not more than that achieved by the brute force attack.
A.5. Security Analysis of HOTP
We have analyzed, in the previous sections, the security of HOTP-IDEAL, the idealized counterpart of the actual authentication algorithm HOTP. We now show that, under an appropriate and well-believed assumption on H, the security of the actual algorithm is essentially the same as that of its idealized counterpart.
The assumption in question is that H is a secure pseudorandom function, or PRF, meaning that its input-output values are indistinguishable from those of a random function in practice.
Consider an adversary A that is given an oracle for a function f: \{0,1\}^c \rightarrow \{0, 1\}^n and eventually outputs a bit. We denote Adv(A) as the prf-advantage of A, which represents how well the adversary does at distinguishing the case where its oracle is H(K,.) from the case where its oracle is a random function of \{0,1\}^c to \{0,1\}^n.
One possible attack is based on exhaustive search for the key K. If A runs for t steps and T denotes the time to perform one computation of H, its prf-advantage from this attack turns out to be \((t/T)2^{-k}\). Another possible attack is a birthday one [Pr00], whereby A can attain advantage \(p^2/2^n\) in \(p\) oracle queries and running time about \(pT\).
Our assumption is that these are the best possible attacks. This translates into the following.
Assumption 1
-------------
Let \(T\) denote the time to perform one computation of H. Then if A is any adversary with running time at most \(t\) and making at most \(p\) oracle queries,
\[
\text{Adv}(A) \leq (t/T)/2^k + p^2/2^n
\]
In practice, this assumption means that H is very secure as PRF. For example, given that \(k = n = 160\), an attacker with running time \(2^{60}\) and making \(2^{40}\) oracle queries has advantage at most (about) \(2^{-80}\).
Theorem 1
---------
Suppose \(m = 10^{\text{Digit}} < 2^{31}\), and let \((q,r) = \text{IntDiv}(2^{31},m)\). Let B be any adversary attacking HOTP using \(v\) verification oracle queries, \(a \leq 2^c - s\) authenticator oracle queries, and running time \(t\). Let \(T\) denote the time to perform one computation of H. If Assumption 1 is true, then
\[
\text{Adv}(B) \leq \frac{sv(q + 1)}{2^{31}} + \frac{t/T}{2^k} + \frac{(sv + a)^2}{2^n}
\]
In practice, the \((t/T)/2^k + (sv + a)^2/2^n\) term is much smaller than the \(sv(q + 1)/2^{31}\) term, so that the above says that for all practical purposes the success rate of an adversary attacking HOTP is \(sv(q + 1)/2^{31}\), just as for HOTP-IDEAL, meaning the HOTP algorithm is in practice essentially as good as its idealized counterpart.
In the case \( m = 10^6 \) of a 6-digit output, this means that an
adversary making \( v \) authentication attempts will have a success rate
that is at most that of Equation 1.
For example, consider an adversary with running time at most \( 2^{60} \)
that sees at most \( 2^{40} \) authentication attempts of the user. Both
these choices are very generous to the adversary, who will typically
not have these resources, but we are saying that even such a powerful
adversary will not have more success than indicated by Equation 1.
We can safely assume \( sv \leq 2^{40} \) due to the throttling and the bounds on \(s\). So:
\[
\frac{t/T}{2^k} + \frac{(sv + a)^2}{2^n} \leq \frac{2^{60}}{2^{160}} + \frac{(2^{41})^2}{2^{160}} \approx 2^{-78}
\]
which is much smaller than the success probability of Equation 1 and negligible compared to it.
Appendix B - SHA-1 Attacks
This section addresses the impact of the recent attacks on SHA-1 on the security of the HMAC-SHA-1-based HOTP. We begin with some discussion of the situation of SHA-1 and then discuss the relevance to HMAC-SHA-1 and HOTP. Cited references are in Section 13.
B.1. SHA-1 Status
A collision for a hash function \( h \) means a pair \( x,y \) of different inputs such that \( h(x)=h(y) \). Since SHA-1 outputs 160 bits, a birthday attack finds a collision in \( 2^{80} \) trials. (A trial means one computation of the function.) This was thought to be the best possible until Wang, Yin, and Yu announced on February 15, 2005, that they had an attack finding collisions in \( 2^{69} \) trials.
Is SHA-1 broken? For most practical purposes, we would say probably not, since the resources needed to mount the attack are huge. Here is one way to get a sense of it: we can estimate it is about the same as the time we would need to factor a 760-bit RSA modulus, and this is currently considered out of reach.
Burr of NIST is quoted in [Crack] as saying "Large national intelligence agencies could do this in a reasonable amount of time with a few million dollars in computer time". However, the computation may be out of reach of all but such well-funded agencies.
One should also ask what impact finding SHA-1 collisions actually has on security of real applications such as signatures. To exploit a collision \( x,y \) to forge signatures, you need to somehow obtain a signature of \( x \) and then you can forge a signature of \( y \). How damaging this is depends on the content of \( y \): the \( y \) created by the attack may not be meaningful in the application context. Also, one needs a chosen-message attack to get the signature of \( x \). This seems possible in some contexts, but not others. Overall, it is not clear that the impact on the security of signatures is significant.
Indeed, one can read in the press that SHA-1 is "broken" [Sha1] and that encryption and SSL are "broken" [Res]. The media have a tendency to magnify events: it would hardly be interesting to announce in the news that a team of cryptanalysts did very interesting theoretical work in attacking SHA-1.
Cryptographers are excited too. But mainly because this is an important theoretical breakthrough. Attacks can only get better with time: it is therefore important to monitor any progress in hash functions cryptanalysis and be prepared for any really practical break with a sound migration plan for the future.
B.2. HMAC-SHA-1 Status
The new attacks on SHA-1 have no impact on the security of HMAC-SHA-1. The best attack on the latter remains one needing a sender to authenticate $2^{80}$ messages before an adversary can create a forgery. Why?
HMAC is not a hash function. It is a message authentication code (MAC) that uses a hash function internally. A MAC depends on a secret key, while hash functions don’t. What one needs to worry about with a MAC is forgery, not collisions. HMAC was designed so that collisions in the hash function (here SHA-1) do not yield forgeries for HMAC.
Recall that HMAC-SHA-1($K, x$) = SHA-1($K_o, SHA-1(K_i, x)$) where the keys $K_o, K_i$ are derived from $K$. Suppose the attacker finds a pair $x, y$ such that SHA-1($K_i, x$) = SHA-1($K_i, y$). (Call this a hidden-key collision.) Then if it can obtain the MAC of $x$ (itself a tall order), it can forge the MAC of $y$. (These values are the same.) But finding hidden-key collisions is harder than finding collisions, because the attacker does not know the hidden key $K_i$. All it may have is some outputs of HMAC-SHA-1 with key $K$. To date, there are no claims or evidence that the recent attacks on SHA-1 extend to find hidden-key collisions.
Historically, the HMAC design has already proven itself in this regard. MD5 is considered broken in that collisions in this hash function can be found relatively easily. But there is still no attack on HMAC-MD5 better than the trivial $2^{64}$ time birthday one. (MD5 outputs 128 bits, not 160.) We are seeing this strength of HMAC coming into play again in the SHA-1 context.
B.3. HOTP Status
Since no new weakness has surfaced in HMAC-SHA-1, there is no impact on HOTP. The best attacks on HOTP remain those described in the document, namely, to try to guess output values.
The security proof of HOTP requires that HMAC-SHA-1 behave like a pseudorandom function. The quality of HMAC-SHA-1 as a pseudorandom function is not impacted by the new attacks on SHA-1, and so neither is this proven guarantee.
Appendix C - HOTP Algorithm: Reference Implementation
package org.openauthentication.otp;
import java.io.IOException;
import java.io.File;
import java.io.DataInputStream;
import java.io.FileInputStream;
import java.lang.reflect.UndeclaredThrowableException;
import java.security.GeneralSecurityException;
import java.security.NoSuchAlgorithmException;
import java.security.InvalidKeyException;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
public class OneTimePasswordAlgorithm {
// These are used to calculate the check-sum digits.
private static final int[] doubleDigits =
{ 0, 2, 4, 6, 8, 1, 3, 5, 7, 9 };
/**
* Calculates the checksum using the credit card algorithm.
* This algorithm has the advantage that it detects any single
* mistyped digit and any single transposition of
* adjacent digits.
*
* @param num the number to calculate the checksum for
* @param digits number of significant places in the number
*
* @return the checksum of num
*/
public static int calcChecksum(long num, int digits) {
boolean doubleDigit = true;
int total = 0;
while (0 < digits--) {
int digit = (int) (num % 10);
num /= 10;
if (doubleDigit) {
digit = doubleDigits[digit];
}
total += digit;
doubleDigit = !doubleDigit;
}
int result = total % 10;
if (result > 0) {
result = 10 - result;
}
return result;
}
/**
 * This method uses the JCE to provide the HMAC-SHA-1
 * algorithm.
* HMAC computes a Hashed Message Authentication Code and
* in this case SHA1 is the hash algorithm used.
*
* @param keyBytes the bytes to use for the HMAC-SHA-1 key
* @param text the message or text to be authenticated.
*
* @throws NoSuchAlgorithmException if no provider makes
* either HmacSHA1 or HMAC-SHA-1
digest algorithms available.
* @throws InvalidKeyException
* The secret provided was not a valid HMAC-SHA-1 key.
* */
public static byte[] hmac_sha1(byte[] keyBytes, byte[] text)
throws NoSuchAlgorithmException, InvalidKeyException
{
// try {
Mac hmacSha1;
try {
hmacSha1 = Mac.getInstance("HmacSHA1");
} catch (NoSuchAlgorithmException nsae) {
hmacSha1 = Mac.getInstance("HMAC-SHA-1");
}
SecretKeySpec macKey =
new SecretKeySpec(keyBytes, "RAW");
hmacSha1.init(macKey);
return hmacSha1.doFinal(text);
// } catch (GeneralSecurityException gse) {
// throw new UndeclaredThrowableException(gse);
// }
}
private static final int[] DIGITS_POWER
// 0 1 2 3 4 5 6 7 8
= {1,10,100,1000,10000,100000,1000000,10000000,100000000};
/**
* This method generates an OTP value for the given
* set of parameters.
*
* @param secret the shared secret
* @param movingFactor the counter, time, or other value that
* changes on a per use basis.
* @param codeDigits the number of digits in the OTP, not
* including the checksum, if any.
* @param addChecksum a flag that indicates if a checksum digit
* should be appended to the OTP.
* @param truncationOffset the offset into the MAC result to
* begin truncation. If this value is out of
* the range of 0 ... 15, then dynamic
* truncation will be used.
* Dynamic truncation is when the last 4
* bits of the last byte of the MAC are
* used to determine the start offset.
* @throws NoSuchAlgorithmException if no provider makes
* either HmacSHA1 or HMAC-SHA-1
* digest algorithms available.
* @throws InvalidKeyException
* The secret provided was not
* a valid HMAC-SHA-1 key.
*
* @return A numeric String in base 10 that includes
* {@link codeDigits} digits plus the optional checksum
* digit if requested.
*/
static public String generateOTP(byte[] secret,
long movingFactor,
int codeDigits,
boolean addChecksum,
int truncationOffset)
throws NoSuchAlgorithmException, InvalidKeyException
{
// put movingFactor value into text byte array
String result = null;
int digits = addChecksum ? (codeDigits + 1) : codeDigits;
byte[] text = new byte[8];
for (int i = text.length - 1; i >= 0; i--) {
text[i] = (byte) (movingFactor & 0xff);
movingFactor >>= 8;
}
// compute hmac hash
byte[] hash = hmac_sha1(secret, text);
// put selected bytes into result int
int offset = hash[hash.length - 1] & 0xf;
if ( (0<=truncationOffset) &&
(truncationOffset<(hash.length-4)) ) {
offset = truncationOffset;
}
int binary =
    ((hash[offset] & 0x7f) << 24) |
    ((hash[offset + 1] & 0xff) << 16) |
    ((hash[offset + 2] & 0xff) << 8) |
    (hash[offset + 3] & 0xff);
int otp = binary % DIGITS_POWER[codeDigits];
if (addChecksum) {
otp = (otp * 10) + calcChecksum(otp, codeDigits);
}
result = Integer.toString(otp);
while (result.length() < digits) {
result = "0" + result;
}
return result;
}
}
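The following short usage sketch (not part of the original appendix) exercises the implementation against the test values of Appendix D:
```java
// Usage sketch: reproduce the first two Appendix D test values.
public class TestVectorCheck {
    public static void main(String[] args) throws Exception {
        byte[] secret = "12345678901234567890".getBytes("US-ASCII");
        System.out.println(OneTimePasswordAlgorithm.generateOTP(
                secret, 0, 6, false, -1));   // prints 755224
        System.out.println(OneTimePasswordAlgorithm.generateOTP(
                secret, 1, 6, false, -1));   // prints 287082
    }
}
```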
Appendix D - HOTP Algorithm: Test Values
The following test data uses the ASCII string "12345678901234567890" for the secret:
Secret = 0x3132333435363738393031323334353637383930
Table 1 details, for each count, the intermediate HMAC value.
<table>
<thead>
<tr>
<th>Count</th>
<th>Hexadecimal HMAC-SHA-1(secret, count)</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>cc93cf18508d94934c64b65d8ba7667fb7cde4b0</td>
</tr>
<tr>
<td>1</td>
<td>75a48a19d4cbe100644e8ac1397eea747a2d33ab</td>
</tr>
<tr>
<td>2</td>
<td>0bacb7fa082fef30782211938bc1c5e70416ff44</td>
</tr>
<tr>
<td>3</td>
<td>66c28227d03a2d552926ff016a1e66ef76557ece</td>
</tr>
<tr>
<td>4</td>
<td>a904c900a64b35909874b33e61c5938a8e15ed1c</td>
</tr>
<tr>
<td>5</td>
<td>a37e783d7b7233c083d4f62926c7a25f238d0316</td>
</tr>
<tr>
<td>6</td>
<td>bc9cd28561042c83f219324d3c607256c03272ae</td>
</tr>
<tr>
<td>7</td>
<td>a4fb960c0bc0611eabb804e5b397cdc4b45596fa</td>
</tr>
<tr>
<td>8</td>
<td>1b3c89f65e6c9e883012052823443f048b4332db</td>
</tr>
<tr>
<td>9</td>
<td>1637409809a679dc698207310c8c7fc07290d9e5</td>
</tr>
</tbody>
</table>
Table 2 details for each count the truncated values (both in hexadecimal and decimal) and then the HOTP value.
<table>
<thead>
<tr>
<th>Count</th>
<th>Hexadecimal</th>
<th>Decimal</th>
<th>HOTP</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>4c93cf18</td>
<td>1284755224</td>
<td>755224</td>
</tr>
<tr>
<td>1</td>
<td>41397eea</td>
<td>1094287082</td>
<td>287082</td>
</tr>
<tr>
<td>2</td>
<td>082fef30</td>
<td>137359152</td>
<td>359152</td>
</tr>
<tr>
<td>3</td>
<td>66ef7655</td>
<td>1726969429</td>
<td>969429</td>
</tr>
<tr>
<td>4</td>
<td>61c5938a</td>
<td>1640338314</td>
<td>338314</td>
</tr>
<tr>
<td>5</td>
<td>33c083d4</td>
<td>868254676</td>
<td>254676</td>
</tr>
<tr>
<td>6</td>
<td>7256c032</td>
<td>1918287922</td>
<td>287922</td>
</tr>
<tr>
<td>7</td>
<td>04e5b397</td>
<td>82162583</td>
<td>162583</td>
</tr>
<tr>
<td>8</td>
<td>2823443f</td>
<td>673399871</td>
<td>399871</td>
</tr>
<tr>
<td>9</td>
<td>2679dc69</td>
<td>645520489</td>
<td>520489</td>
</tr>
</tbody>
</table>
Appendix E - Extensions
We introduce in this section several enhancements to the HOTP algorithm. These are not recommended extensions or part of the standard algorithm, but merely variations that could be used for customized implementations.
E.1. Number of Digits
A simple enhancement in terms of security would be to extract more digits from the HMAC-SHA-1 value.
For instance, calculating the HOTP value modulo $10^8$ to build an 8-digit HOTP value would reduce the probability of success of the adversary from $sv/10^6$ to $sv/10^8$.
This could give the opportunity to improve usability, e.g., by increasing $T$ and/or $s$, while still achieving a better security overall. For instance, with $s = 10$, $10v/10^8 = v/10^7 < v/10^6$, which is the theoretical optimum for a 6-digit code when $s = 1$.
E.2. Alphanumeric Values
Another option is to use A-Z and 0-9 values; or rather a subset of 32 symbols taken from the alphanumerical alphabet in order to avoid any confusion between characters: 0, O, and Q as well as l, 1, and I are very similar, and can look the same on a small display.
The immediate consequence is that the security is now in the order of $sv/32^6$ for a 6-digit HOTP value and $sv/32^8$ for an 8-digit HOTP value.
$32^6 > 10^9$, so the security of a 6-alphanumeric HOTP code is slightly better than that of a 9-digit HOTP value, which is the maximum length of an HOTP code supported by the proposed algorithm.
$32^8 > 10^{12}$, so the security of an 8-alphanumeric HOTP code is significantly better than that of a 9-digit HOTP value.
Depending on the application and token/interface used for displaying and entering the HOTP value, the choice of alphanumerical values could be a simple and efficient way to improve security at a reduced cost and impact on users.
E.3. Sequence of HOTP Values
As we suggested for the resynchronization to enter a short sequence (say, 2 or 3) of HOTP values, we could generalize the concept to the protocol, and add a parameter \( L \) that would define the length of the HOTP sequence to enter.
Per default, the value \( L \) SHOULD be set to 1, but if security needs to be increased, users might be asked (possibly for a short period of time, or a specific operation) to enter \( L \) HOTP values.
This is another way, without increasing the HOTP length or using alphanumeric values to tighten security.
Note: The system MAY also be programmed to request synchronization on a regular basis (e.g., every night, twice a week, etc.) and to achieve this purpose, ask for a sequence of \( L \) HOTP values.
E.4. A Counter-Based Resynchronization Method
In this case, we assume that the client can access and send not only the HOTP value but also other information, more specifically, the counter value.
A more efficient and secure method for resynchronization is possible in this case. The client application will not send the HOTP-client value only, but the HOTP-client and the related \( C \)-client counter value, the HOTP value acting as a message authentication code of the counter.
Resynchronization Counter-based Protocol (RCP)
The server accepts if the following are all true, where \( C \)-server is its own current counter value:
1) \( C\text{-client} \geq C\text{-server} \)
2) \( C\text{-client} - C\text{-server} \leq s \)
3) HOTP-client is the valid HOTP value for the received counter, i.e., HOTP-client = HOTP\((K, C\text{-client})\)
4) If all three checks pass, the server sets \( C\text{-server} \) to \( C\text{-client} + 1 \) and the client is authenticated
In this case, there is no need for managing a look-ahead window anymore. The probability of success of the adversary is only \( v/10^6 \) or roughly \( v \) in one million. A side benefit is obviously to be able to increase \( s \) "infinitely" and therefore improve the system usability without impacting the security.
This resynchronization protocol SHOULD be used whenever the related impact on the client and server applications is deemed acceptable.
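A sketch of the server-side acceptance check for this counter-based protocol, with hypothetical naming and the Appendix C generateOTP routine:
```java
// Hypothetical sketch of the RCP acceptance check described above. The
// client sends (cClient, hotpClient); the HOTP value authenticates the
// counter. On success, the caller sets the server counter to cClient + 1.
static boolean rcpAccept(byte[] k, long cServer, long s,
                         long cClient, String hotpClient)
        throws java.security.GeneralSecurityException {
    if (cClient < cServer || cClient - cServer > s) {
        return false;                           // conditions 1) and 2)
    }
    String expected = OneTimePasswordAlgorithm.generateOTP(
            k, cClient, 6, false, -1);
    return expected.equals(hotpClient);         // condition 3)
}
```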
E.5. Data Field
Another interesting option is the introduction of a Data field, which would be used for generating the One-Time Password values: HOTP (K, C, [Data]) where Data is an optional field that can be the concatenation of various pieces of identity-related information, e.g., Data = Address | PIN.
We could also use a Timer, either as the only moving factor or in combination with the Counter -- in this case, e.g., Data = Timer, where Timer could be the UNIX-time (GMT seconds since 1/1/1970) divided by some factor (8, 16, 32, etc.) in order to give a specific time step. The time window for the One-Time Password is then equal to the time step multiplied by the resynchronization parameter as defined before. For example, if we take 64 seconds as the time step and 7 for the resynchronization parameter, we obtain an acceptance window of +/- 3 minutes.
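As a sketch of this timer variant, the snippet below derives the moving factor from UNIX time and a configurable time step; the function name and default are illustrative only.

```cpp
#include <cstdint>
#include <ctime>

// Hypothetical timer-based moving factor: UNIX time divided by a
// time step, as suggested above. With a 64-second step and s = 7,
// the acceptance window is on the order of +/- 3 minutes.
uint64_t timerMovingFactor(uint64_t timeStepSeconds = 64) {
    const uint64_t now = static_cast<uint64_t>(std::time(nullptr));
    return now / timeStepSeconds;
}
```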
Using a Data field opens for more flexibility in the algorithm implementation, provided that the Data field is clearly specified.
Authors’ Addresses
David M’Raihi (primary contact for sending comments and questions)
VeriSign, Inc.
685 E. Middlefield Road
Mountain View, CA 94043 USA
Phone: 1-650-426-3832
EMail: dmraihi@verisign.com
Mihir Bellare
Dept of Computer Science and Engineering, Mail Code 0114
University of California at San Diego
9500 Gilman Drive
La Jolla, CA 92093, USA
EMail: mihir@cs.ucsd.edu
Frank Hoornaert
VASCO Data Security, Inc.
Koningin Astridlaan 164
1780 Wemmel, Belgium
EMail: frh@vasco.com
David Naccache
Gemplus Innovation
34 rue Guynemer, 92447,
Issy les Moulineaux, France
and
Information Security Group,
Royal Holloway,
University of London, Egham,
Surrey TW20 0EX, UK
EMail: david.naccache@gemplus.com, david.naccache@rhul.ac.uk
Ohad Ranen
Aladdin Knowledge Systems Ltd.
15 Beit Oved Street
Tel Aviv, Israel 61110
EMail: Ohad.Ranen@ealaddin.com
Copyright (C) The Internet Society (2005).
This document is subject to the rights, licenses and restrictions contained in BCP 78, and except as set forth therein, the authors retain all their rights.
This document and the information contained herein are provided on an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY AND THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Intellectual Property
The IETF takes no position regarding the validity or scope of any Intellectual Property Rights or other rights that might be claimed to pertain to the implementation or use of the technology described in this document or the extent to which any license under such rights might or might not be available; nor does it represent that it has made any independent effort to identify any such rights. Information on the procedures with respect to rights in RFC documents can be found in BCP 78 and BCP 79.
Copies of IPR disclosures made to the IETF Secretariat and any assurances of licenses to be made available, or the result of an attempt made to obtain a general license or permission for the use of such proprietary rights by implementers or users of this specification can be obtained from the IETF on-line IPR repository at http://www.ietf.org/ipr.
The IETF invites any interested party to bring to its attention any copyrights, patents or patent applications, or other proprietary rights that may cover technology that may be required to implement this standard. Please address the information to the IETF at ietf-ipr@ietf.org.
Acknowledgement
Funding for the RFC Editor function is currently provided by the Internet Society.
SOFTWARE ENGINEERING A MULTI-LAYER AND SCALABLE AUTONOMOUS FORCES "A.I." FOR PROFESSIONAL MILITARY TRAINING
Michael J. Pelosi
Michael Scott Brown
ITEC Software Engineering Department
University of Maryland University College
Adelphi, MD, 20783, USA
ABSTRACT
Described herein is a general-purpose software engineering architecture for autonomous, computer controlled opponent implementation in modern maneuver warfare simulation and training. The implementation has been developed, refined, and tested in the user crucible for several years. The approach represents a hybrid application of various well-known AI techniques, including domain modeling, agent modeling, and object-oriented programming. Inspired by computer chess approaches, the methodology combines this theoretical foundation with a hybrid and scalable portfolio of additional techniques. The result remains simple enough to be maintainable and comprehensible for the code writers as well as the end-users, and robust enough to handle a wide spectrum of possible mission scenarios and circumstances without modification.
1 INTRODUCTION
"There is no substitute for a human opponent." — Vincent "T.J." Taijeron, USMA Warfighting Simulation Center, West Point, NY. When one is lacking, however, we attempt to offer a usable substitute. In this paper we describe an architecture and a methodology for software engineering a Computer Opponent Artificial Intelligence (COAI) for professional military training. Ideally, such an architecture should meet the design goals of being frugal and efficient in code, easily maintainable, and produce an acceptable level of realism and flexibility for military training personnel and administrators. A truly low-overhead and low-impact solution to the vexing "AI" problem for professional military training at echelons below division and corps level is desired. At the current time, a paucity of software exists, either commercial off-the-shelf computer games or DoD produced and acquired software, for low-cost training in this regard. Army simulation training has typically used extremely complex, sophisticated, and costly software that necessitates set-up time and planning, large staffs, large budgets, training, and Herculean scenario design efforts. Recently, there has been a shifting of emphasis to what is called low-overhead/low-impact computerized training that lower-level echelons, which traditionally did not have access to large-scale simulation support, can utilize effectively and efficiently. The offerings in this area are slim, and typically commercial computer wargames are wholly inadequate for many reasons. In particular, realistic and useful computer opponent "AIs" are virtually completely lacking. Tasking organization staff to "play the part" of opposing forces is a plausible solution, but necessarily involves a huge commitment of resources when theoretically the CPU can be doing the same thing at little to no cost. Architecting and implementing a targeted solution to the "AI" problem at the appropriate level of simulation and modeling fidelity has been a persistent issue for more than a decade (Lane et al. 2005, Johnston et al. 2015).
The military simulation community has been working on similar proposed solutions for decades. A whole simulation conference series addressed issues relevant to this paper: the Computer Generated Forces/Human Behavior Representation (CGF/HBR) series, later renamed Behavior Representation in Modeling and Simulation (BRIMS). A NATO Technical Report on Computer Generated Forces Technology (NATO Document Nr: RTO-TR-11 AC/323(SAS)TP/8) described similar solutions as desirable objectives in 1999. A chapter by Bharathy, Yilmaz, & Tolk (2012) on "Agent directed simulation for combat modeling and distributed simulation," in Engineering Principles of Combat Modeling and Distributed Simulation, gives several related examples and points to related research. Previous related Winter Simulation Conference papers include Cioppa et al. (2004), Middleton (2010), Kuramoto & Furuichi (2013), and Løvlid et al. (2013); one of the early papers presented at WSC, revolutionary at the time, is Karr & Franceschini (1994). The present paper is embedded in the rich WSC and SISO history acknowledged here.
Large-scale simulation exercises are frequently conducted at the higher levels of the Army command structure. These include division, corps, and army echelon levels. Lower-level training has largely been restricted to manual map exercises, or expensive field training and wargames (U. S. Army 2003). Software tools at the company, battalion, and brigade training levels have been sparse. One frequently utilized piece of software is the VBS simulation, which is commercially marketed as "Arma". This is a first-person shooter type game that has been utilized for squad and platoon level infantry type training. However, at the next higher echelons software training tools are largely nonexistent. Even the widely utilized VBS platform has a woefully inadequate AI — cooks and mechanics are routinely tasked to drive trucks, fly planes and helicopters, and play civilians, as part of the simulation exercise.
Despite 50 years of advancements in computer technology, computer chess AI still relies on a largely brute-force approach. Using the Min-Max algorithm, transposition tables, and other optimizations, the chess AI scans through the game tree, analyzing millions of potential moves, before producing the highest-scored next move (Shannon 1950, Newborn 2012). Thankfully for chess, there are only 64 squares and a maximum of 32 pieces. In a professional military simulation, a map may consist of millions of individual 100-meter grid squares, thousands of units, and completely unpredictable terrain and missions. In other words, searching through the game tree of all possible actions is many orders of magnitude more difficult. The chess modeling approach does not scale to this setting, and a chess-style AI cannot be made to fit wholesale. Yet we can adopt some of the lessons learned from computer chess: dividing the session into phases (opening, middle-game, and endgame), scoring the value of different pieces (maneuver unit groups), and evaluating final desired states and how to get there have all proven to be extremely useful concepts.
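For readers unfamiliar with the technique, the following generic minimax sketch (names ours, not from any chess engine) illustrates the exhaustive game-tree scan the paragraph refers to; real engines layer alpha-beta pruning, transposition tables, and move ordering on top of it.

```cpp
#include <algorithm>
#include <vector>

// A node in a small, fully expanded game tree.
struct Node {
    double score = 0;            // leaf evaluation
    std::vector<Node> children;  // empty => leaf
};

// Classic minimax: maximize on our turns, minimize on the opponent's.
double minimax(const Node& n, bool maximizing) {
    if (n.children.empty()) return n.score;
    double best = maximizing ? -1e18 : 1e18;
    for (const auto& c : n.children) {
        const double v = minimax(c, !maximizing);
        best = maximizing ? std::max(best, v) : std::min(best, v);
    }
    return best;
}
```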
The approach we have implemented and described in this paper could be considered as a hybrid AI approach. Relying on the chess analogy and metaphor, particularly the game phase concept, we add on an expert system that models and abstracts the actual units in decision-making processes wherever possible. For example, the AI groups subunits into their actual mission structure, including companies and battalions. These are controlled as in the military force structure chain of command. For the creation of plans, it is possible to adopt the actual military staff procedure and adapt this into a software planning sequence. Further, lower-level planning details such as route determination can be supplemented by the ubiquitous A* Algorithm.
2 METHODOLOGY
A realistic AI useful for professional training purposes should both model and mimic the military decision-making process at various echelons. In that light, as a robust foundation, the AI goes straight to the U.S. Army field manuals for guidance. Fortuitously, more than 100 years of modern warfighting experience has distilled Army planning doctrine down to a few formulaic processes in the Military Decision-Making Procedures (MDMP). These include TLP, METT-TC, and OCOKA (described below). These processes can and have been modeled almost directly in code.
U.S. Army Field Manual 101-5, *Staff Organization and Operations*, explains the Army MDMP in detail (U.S. Army 1997): "The MDMP is an adaptation of the Army's analytical approach to problem solving. The MDMP is a tool that assists commanders and staff in developing estimates and plans. The full MDMP is a detailed, deliberate, sequential, and staff-intensive process used when adequate planning time and sufficient staff support are available to thoroughly examine numerous friendly and enemy courses of action (COAs). This staff effort has one objective—to collectively integrate information with sound doctrine and technical competence to assist the commander (in our case the COAI "commander") in decisions, leading ultimately to effective plans. The analytical aspects of the MDMP continue at all levels during operations."
The COAI is presented with a military mission that is contained within scenario and AI option specification files. The files include information on task organization, friendly forces, and a timeline. Objectives are also specified, with points values for various objective types such as occupying a location, clearing an area of enemy forces, moving friendly forces past a certain demarcation zone, or searching for a hidden target.
Where possible, the military decision-making process (MDMP) is then followed both in modeling and in implementation. The COAI is designed to follow a process similar to the recommended doctrine in the Army field manuals. Likewise, parallel modeling and decision-making take place at each of the important unit echelon levels: platoon, company, battalion or task-force, and support. The following further outlines MDMP aspects:
**TLP** (Troop Leading Procedures) consist of the following steps: 1. receive the mission and conduct METT-TC and OCOKA, 2. prepare for the mission and issue preliminary orders, 3. make a tentative plan: identify goals, gather information, generate/analyze/compare possible solutions, and implement the best tentative plan, 4. start movement, 5. conduct reconnaissance, and 6. follow through with execution of the final plan.
**METT-TC** is: Mission analysis, Enemy analysis, Terrain analysis, Troops analysis, Time limit analysis, Civilian impact analysis. **OCOKA** is conducted as part of terrain analysis.
**OCOKA** stands for Observation and fields of fire, Cover and concealment, Obstacles, Key terrain, and Avenues of approach. This constitutes a more detailed terrain analysis. Obstacles can include man-made and urban terrain obstacles, natural terrain obstacles, and water obstacles. Key terrain may involve, for example, high elevation or easily traversed terrain near objectives. Avenues of approach include roads and otherwise clear areas. Trafficability can be evaluated both for vehicle and troop movement in regard to slowing, diverting, or stopping movement.

The Course of Action (COA) produced by the COAI results from the analysis of the above factors and constraints. Reducing all considerations to a quantitative scoring allows a brute-force solution that randomly generates various plans; each plan can then be scored for its suitability and feasibility. The highest-scoring feasible plan is selected as the best COA. Since plans are randomly generated, differing and unique plans are produced during each new instance of the same scenario mission design. This is important for replay value.
Execution requires close supervision and monitoring, as well as continuous analysis, the updating of intelligence, and refinement of the COA plan. In certain cases, the plan must be discarded and regenerated completely.
The COAI has certain unit groups assigned to it in the scenario and mission design. This allows the possibility of multiple instances of the AI, each controlling its own respective force grouping. Likewise, human participants would each be controlling various force groupings.
Scenario design inputs to the COAI for information analysis include: time limit constraints, enemy order-of-battle (OOB), friendly OOB, quantified scenario objectives, along with AI options and settings. Five major phases are accomplished during a preliminary mission analysis:
1. Analysis and calculation of the goal state to satisfy objectives. Often times this will involve the ideal placement of forces by the scenario end time, such as the occupation of objective locations.
2. Analysis and calculation of known enemy dispositions and force allocations. Here, relative points values are calculated using "combat power" summations for known enemy unit types in a catalog database of unit types. Friendly unit group combat power totals are likewise analyzed to create favorable force match-ups. In general, a ≥ 3:1 points total advantage will be necessary to successfully take a locational objective from a defender (U. S. Army 2002).
3. Analysis and calculation of a tentative plan to reach the goal end state. This is accomplished by using a brute force approach similar to the solving of the Traveling Sales Person problem (Russell and Norvig, 2009). Several thousand likely plans are randomly generated and scored. Top scoring plans are then further evaluated and selected.
4. Surplus time and resources are evaluated. If the plan can be accomplished before the scenario mission end time, further refinements and optimizations can be preliminarily executed. This can include actions such as further intelligence gathering and reconnaissance, softening up of target locations through preliminary bombardments and airstrikes, and conducting feign attacks or deep pincer movements.
5. Finally, initial tentative movements and actions for the first game phase are calculated, however these may change when the final COA is adopted.
As part of the mission analysis and execution, map zones are segmented and demarcated. Map zones are segmented based on the center mass coordinates of friendly forces, and known or likely center mass coordinates of enemy forces. With these two points fixed in space, a relative "mapping" of center, left and right flanks, and depth can take place. If portions of these areas are off the given terrain area, they are ignored and become notional. Areas which are untrafficable (for vehicular and/or foot units, respectively) are also ignored for deployment or movement. The point halfway between friendly center mass and enemy center mass becomes the Forward Edge of the Battle Area (FEBA) anchor. Ten discrete zones are then demarcated from this mapping: forward, screen left, left, center, right, screen right, and the corresponding rear areas. Reconnaissance missions will be routed into the far side of the FEBA, and missions will be allocated to specific zones. Combat group deployments take place generally in the left, center, and right zones, with screening units assigned to the screen left and screen right areas. Supporting units, including headquarters, artillery, and logistics, are routed toward the rear areas. Spacing between units is calculated from the frontage span of the respective map zone. Reserves are held in the rear area as well. Thus, any map size from 5 km × 5 km up to 500 km × 500 km can be automatically mapped into a convenient "scenario" mission and planning space, based on the map size, force size, and initial deployments.
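A minimal sketch of the center-of-mass and FEBA-anchor computation described above (types and names hypothetical):

```cpp
#include <vector>

struct Point { double x = 0, y = 0; };

// Arithmetic mean of unit positions; the "center mass" of a force.
Point centerMass(const std::vector<Point>& units) {
    Point c;
    for (const auto& u : units) { c.x += u.x; c.y += u.y; }
    if (!units.empty()) { c.x /= units.size(); c.y /= units.size(); }
    return c;
}

// FEBA anchor: the midpoint between friendly and enemy centers of mass.
Point febaAnchor(const std::vector<Point>& friendly,
                 const std::vector<Point>& enemy) {
    const Point f = centerMass(friendly), e = centerMass(enemy);
    return { (f.x + e.x) / 2.0, (f.y + e.y) / 2.0 };
}
```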
As part of the mission analysis, a detailed terrain analysis is conducted and the results are stored in a map database of grid squares. Each map grid square is assigned a weighted and then normalized score value for specific characteristics. These include the following as shown in Table 1.
**Table 1: Static and Dynamic Map Terrain Analyses.**
<table>
<thead>
<tr>
<th>Analysis</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Objectives</td>
<td>Value and proximity to objective locations.</td>
</tr>
<tr>
<td>Water</td>
<td>Water obstructions to ground movement; note that these may have little effect on the many amphibious vehicles in operation.</td>
</tr>
<tr>
<td>Elevations</td>
<td>In many circumstances higher elevation locations are seen as more valuable to occupy.</td>
</tr>
<tr>
<td>Grades</td>
<td>Steep uphill and downhill grade serve as detriments to mobility.</td>
</tr>
<tr>
<td>LOS</td>
<td>Line-of-sight to nearby grid squares; some locations can observe much more of the surrounding terrain.</td>
</tr>
<tr>
<td>Blocks</td>
<td>Blocks can include highly dense vegetation, urban locations, as well as man-made obstacles.</td>
</tr>
<tr>
<td>Cover</td>
<td>Cover provides shelter from blast effects and observation.</td>
</tr>
<tr>
<td>Avenues</td>
<td>Key avenues for movement; central locations and networks leading to objectives are preferred.</td>
</tr>
<tr>
<td>Concealment</td>
<td>Concealment has low line of sight visibility as well as good cover.</td>
</tr>
<tr>
<td>Defense</td>
<td>Combination of effects from above for defensibility.</td>
</tr>
<tr>
<td>Survivability</td>
<td>Cover, concealment, and defensibility modified for survivability aspects.</td>
</tr>
<tr>
<td>Ambush</td>
<td>Areas with good visibility/survivability, nearby to key avenues of movement.</td>
</tr>
<tr>
<td>Countermobility</td>
<td>Places where the enemy's movement can be stopped efficiently.</td>
</tr>
<tr>
<td>Valuable Areas</td>
<td>Avenues, high elevations, nearby to objectives, etc.</td>
</tr>
<tr>
<td>Friendly Proximity</td>
<td>Weighted/normalized value for staying close to friendly concentrations.</td>
</tr>
<tr>
<td>Enemy Proximity</td>
<td>Weighted/normalized value for known (updated) enemy concentrations.</td>
</tr>
</tbody>
</table>
The COAI user interface produces shaded map graphics depicting weighted and normalized values for each of the terrain analyses based on grid square location. The last three terrain analyses are dynamically updated as the simulation progresses, and reflect new and current information.
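As an illustration of the weighted-then-normalized scoring, the sketch below rescales one raw terrain-analysis layer into [0, 1]; the paper does not specify the actual weighting scheme, so this shows only the normalization step.

```cpp
#include <algorithm>
#include <vector>

// Rescale one terrain-analysis layer (one value per grid square)
// into the normalized range [0, 1].
void normalizeLayer(std::vector<double>& layer) {
    if (layer.empty()) return;
    const auto [mn, mx] = std::minmax_element(layer.begin(), layer.end());
    const double range = *mx - *mn;
    if (range <= 0.0) return;               // uniform layer; leave as-is
    for (double& v : layer) v = (v - *mn) / range;
}
```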
As a consequence of the COAI not considering individual unit entities at the lowest level (platoon and section sized entities), the COAI is only aware of the unit groupings, relative combat power, and the group types. Using these characteristics, OOB analysis is capable of assigning various groups to specific objective missions. For example, an armor company may include only 14 tanks but have triple the combat power of a 90-soldier infantry company. The combat power is based on points totals from the unit catalogs. As part of the COA production, the COAI analyzes the most optimal assignment of groups to objectives using its limited knowledge. With regard to enemy forces, given adequate knowledge, OOB analysis will endeavor to produce adequate match-ups, specifically the greater than 3 to 1 advantage of an attacker over a defender in terms of combat power.
The COAI must keep track of known and likely enemy force locations. Initially, the training scenario designer can choose to reveal as much or as little about enemy force locations as desired. This can be loaded into an initial enemy spot table at scenario start, and be used for COAI planning. After that the COAI is on its own gleaning, updating, and aging information as it comes in. It keeps track of this in a dynamic spot table that contains coordinates, unit type and points value, and most recent spot time. As spots are aged they are reduced in weighting importance for relevance and accuracy. The simulation produces a listing of current force enemy spots and hands it off to the COAI, which collates and posts the information in the spot table.
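A hypothetical sketch of a spot-table entry and an aging rule follows; the exponential decay and its half-life constant are our illustration, as the paper does not specify the exact weighting function.

```cpp
#include <cmath>

// One entry in the dynamic enemy spot table described above.
struct EnemySpot {
    double x = 0, y = 0;   // last known coordinates
    double points = 0;     // combat-power value of the spotted unit
    double spotTime = 0;   // most recent spot time, simulation seconds
};

// Older spots carry less weight in planning: the value halves every
// halfLifeSec seconds (constant is illustrative).
double spotWeight(const EnemySpot& s, double now,
                  double halfLifeSec = 600.0) {
    const double age = now - s.spotTime;
    return s.points * std::exp2(-age / halfLifeSec);
}
```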
As previously mentioned, planning the force allocations of groups to objectives involves a brute-force planning approach. For each of several thousand plan iterations, groups are randomly assigned to objectives. Then, based on the group data, objective data, and enemy locations, a special function calculates the feasibility of each allocation. Time to reach the objective, attack or defense ratio, and other factors are used to cull out infeasible assignments. The remaining feasible plans are scored based on minimum cost in terms of movement time, attractiveness of force match-ups (desired minimum 3:1 on offense), and other factors. A final "minimum time to complete the plan" duration is calculated, and from this any surplus time available for further measures, such as reconnaissance or softening up of targets, is then known. Force allocation intrinsically calculates the endgame phase plan. Since each scenario objective is assigned a relative points value, scoring of plans takes into consideration the satisfaction of more valuable objectives, as well as the distance from each respective group to its assigned objective. For further information on brute-force planning using this approach, traveling salesperson solutions are a good starting point (see Russell and Norvig, 2009).
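The sketch below captures the shape of this generate-cull-score loop; the feasibility and scoring functions are assumed to encapsulate the criteria described above (time to reach the objective, the ≥ 3:1 offensive ratio, movement cost, and match-up quality), and all names are hypothetical.

```cpp
#include <random>
#include <vector>

// One candidate plan: objectiveOfGroup[g] is the objective index
// assigned to group g.
struct Plan { std::vector<int> objectiveOfGroup; double score = -1e18; };

// Assumed to exist elsewhere: feasibility culling and plan scoring.
bool feasible(const Plan& p);
double scorePlan(const Plan& p);

Plan bestPlan(int numGroups, int numObjectives, int iterations = 5000) {
    std::mt19937 rng(std::random_device{}());
    std::uniform_int_distribution<int> pick(0, numObjectives - 1);
    Plan best;
    for (int i = 0; i < iterations; ++i) {
        Plan p;
        p.objectiveOfGroup.resize(numGroups);
        for (int g = 0; g < numGroups; ++g)
            p.objectiveOfGroup[g] = pick(rng);
        if (!feasible(p)) continue;         // cull infeasible allocations
        p.score = scorePlan(p);
        if (p.score > best.score) best = p; // keep the best so far
    }
    return best;  // caller must handle the no-feasible-plan case
}
```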
Once final endgame force allocations have been favorably calculated, if sufficient surplus time and resources exist, a middle-game "playbook" COA can be adopted by the COAI to further shift the favorable odds preliminary to the endgame phase. In the case of a defensive posture this can include securing objectives, static defense, or in-depth defense (U. S. Army 2001). For attack postures, broad front attacks, counterattacks, or deep attack "playbooks" can be adopted. A reserve force can possibly be selected. The playbook selected is not optimal, but suitable for the given situation and circumstances. This is analogous to the football play: a running play may not be any better than a deep pass, but it keeps the other team guessing. Seemingly random intelligent plans and actions in terms of time and execution are an important part of a realistic and engaging COAI with suitable replay value.
Once the COA has been finalized, execution transitions through a series of major game phases, relying on the chess motif: the opening, middle-game, and endgame. Assuming little slack time exists for achieving the mission objectives, the execution phase is shifted immediately to the endgame. The endgame can be considered the all-out effort to achieve the objectives immediately. Otherwise, if slack time exists, an opening and middle-game phase may be adopted as part of the execution.
The execution phases are analogous to the very important military precept of the OODA loop (Boyd 1976). The opening is comparable to the Observing phase. Here, advantages are to be acquired in terms of additional intelligence and other measures, such as occupation of key terrain. Middle-game is analogous to Orienting — the major reorienting of friendly forces to further tip the balance for further movements and attacks. Endgame is the Decide and Act portion, where commitment to a decisive outcome is adopted. Final actions are wagered here. Replanning is necessitated between the opening phase and the middle-game phase, as well as between the middle-game and endgame phase.
Transitions between the major game phases are based on the characteristics of what should generally be taking place in each phase, once again relying on the chess metaphor. The opening takes place between the scenario start and first enemy contact, first weapons fire, and/or first friendly casualties. Transition to the endgame occurs after the middle-game, when the previously calculated time deadline for accomplishing the mission objectives is reached. This deadline includes a built-in time safety factor; in other words, final execution is committed to while there is still enough time to safely accomplish the objectives. That said, in order to preserve verisimilitude, there exists the possibility of a "lightning battle" COA adoption, where the COAI skips the opening and/or middle-game phases and progresses directly to the endgame phase. This is analogous to a surprise execution, which forces the training audience to consider all possibilities. Skipping phases is easily incorporated into the COAI options as probability factors for skipping the middle-game, or skipping both the opening and middle-game. At the juncture of each major game phase, replanning takes place based on the phase goals described further below.
The opening phase is largely characterized by observing the enemy's respective force deployments and dispositions. Goals here include grouping and further deploying friendly forces, exploiting terrain based on mission analysis terrain calculations, execution of reconnaissance and counter reconnaissance missions, and the seizing of any easy objectives closer than the enemy.
The middle-game is characterized by major force movements to orient deployment for final attacks and/or defense. Seizing of key intermediate terrain is conducted. Harassment missions are perhaps selected, these would include randomized probes or artillery fire missions. Allocation of resources to the endgame is recalculated. Further, most COAs will hold a major reserve and/or counterattack force for unforeseen events. Counterattacks can also take place. Attacks use as a basis the "4Fs" for planning and execution: find – fix – flank – finish (U.S. Army 1997).
The endgame embarks on achieving the final scenario objectives. A regrouping of scattered forces may be necessary for the execution of the final plan. Final attacks are enacted, if necessary, and objectives are occupied. The plan is irretrievably executed at this point — for either final success or failure. Ideally, the opening and middle-game phases have set up the COAI for uncontested victory at this point through incremental and methodical gaining of advantage. As mentioned, the endgame is analogous to the Decide and Act portions of the OODA loop, and opening and middle-game phases only take place if surplus time and resources exist for the satisfaction of the mission goals. Otherwise, the COAI would need to embark on an endgame plan immediately at the scenario start.
Figure 3: COAI execution phases and major aspects of each phase.
Analogous to the ISO (International Organization for Standardization, 1989) reference model for computer networking, and its inherent division of responsibilities and functionality, there are at least six layers and levels of modeling used in the COAI architecture. Most of these parallel a corresponding layer in the military decision-making process and unit echelon structure of real-world military forces. This leverages the concept of object modeling for real-world abstractions, and also organizes and simplifies the architecture's software code. At the lowest level are the simulation entities, nominally platoon down to section sized organic units. The simulation in usage models each of these uniquely as a C++ class entity. Typically, these are grouped into company to battalion sized units, each consisting of 4 to 10 subunits. It is these groups that constitute the "unit groups" under direct control of a scripting engine layer. The scripting engine layer is responsible for issuing the group order directives discussed in more detail below. Above the scripting layer is mission control, more directly controlled by the COAI. Once a group has been assigned to a mission, the mission instance is responsible for autonomous control over the group through the scripting engine. Mission sequences are, in turn, controlled by the overall COA class plan being implemented. Finally, the COA class is planned in response to the overall scenario objectives, available resources, game phase, terrain map, and options.
<table>
<thead>
<tr>
<th>Table 2: Modeling Layers of Abstraction, Planning, and Control.</th>
</tr>
</thead>
<tbody>
<tr>
<td>1. Execution Phase: Open, Middle, Endgame -> Determines strategic approach/goals.</td>
</tr>
<tr>
<td>2. Course of Action (COA) -> Self-contained master plan for phase, controls Layer 3.</td>
</tr>
<tr>
<td>3. AI Mission Sequence Collection -> Insertion, deletion, reordering possible.</td>
</tr>
<tr>
<td>4. AI Independent Mission Control Agent -> An autonomous OOP class, with reports.</td>
</tr>
<tr>
<td>5. Group Scripting Engine -> Programmed Sequences of Events/Actions/Responses.</td>
</tr>
<tr>
<td>6. Simulation Entity -> Platoon/Section/Battery/Vehicle. Hi-fidelity modeling.</td>
</tr>
</tbody>
</table>
Section to platoon entities are the lowest level of fidelity in the simulation in usage. The COAI does not control this echelon directly; the group scripting engine issues direct orders and commands to entities at this level. In summary, sections and platoons are characterized by locations, ammunition and fuel levels, strengths and casualty levels, and current orders and status, among other data in a cornucopia of minutiae. Accurately modeling this spectrum of characteristics is an extremely labor-intensive task that requires copious research and data entry. Accounting for hundreds of data items in COAI considerations is architecturally untenable, hence the abstraction to larger unit groupings is necessary to accomplish the goal of a robust and usable AI with a frugal amount of code. Codewise, these entities are modeled as classes.
Scenario design creates company and task force level unit groupings that normally model individual combat companies or battalions, artillery batteries, helicopter flights, and other unit groupings that would normally be controlled by a battalion or brigade level task force organization. Unit groups are modeled as a class and are the owner of combinations of the platoon and lower level entity grouping.
Group orders and tasks are relatively straightforward and are implemented easily by the controlling mission class, which merely instructs the scripting engine to calculate and implement the command order. Group orders contain such simple directives as: move n meters, change facing, set speed, dismount infantry, improve position, camouflage, discharge smoke, or set formation. Set formation automatically orients the group in, among others, line, column, box, diamond, forward wedge, reverse wedge, and echelon formations. Company and battalion sized formations will typically orient themselves in one of the aforementioned formations to advantageously engage likely targets. The scripting engine handles the details, while the COAI concentrates on decision making at the next echelon above. Lower-level units are responsible for handling their own engagement of targets of opportunity. Standard Operating Procedure (SOP) allows independent decision-making for units in regard to firing smoke or vehicle engine exhaust smoke systems for defense, reversing on enemy sightings, or aggressively attacking new contacts.
Group order scripting consists of a collection of sequential orders. As mentioned earlier, movements, formation changes, camouflage orders, orders to "dig in", etc., can be added to a scripting sequence. The scripting engine automatically executes the orders serially until completion. The COAI process communicates the desired scripting sequence and parameters into the simulation through active mission classes. Scripting can be manually controlled as desired, and saved to file.
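A minimal sketch of such a serial scripting queue, with hypothetical order and engine types:

```cpp
#include <deque>
#include <memory>

// One scripted group order; update() returns true when complete.
struct GroupOrder {
    virtual ~GroupOrder() = default;
    virtual bool update(double dt) = 0;
};

// Executes queued orders one after another until the queue is empty.
class ScriptingEngine {
    std::deque<std::unique_ptr<GroupOrder>> queue_;
public:
    void enqueue(std::unique_ptr<GroupOrder> order) {
        queue_.push_back(std::move(order));
    }
    void clear() { queue_.clear(); }    // COAI may cancel pending orders
    void update(double dt) {
        if (queue_.empty()) return;
        if (queue_.front()->update(dt)) // current order finished?
            queue_.pop_front();         // advance to the next order
    }
};
```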
The COAI enters a main control loop after completing the mission analysis and creating a preliminary COA. Each time the loop executes, more information is extracted from the simulation and processed, and the COAI may modify directives or give additional orders to the group scripting engine. Groups with scripting orders pending can have those sequences cleared if necessary. The main loop continues until the scenario end time is reached and the objective condition scores are calculated.
COAs are modeled as a class and store their own data and update themselves inherently. Additionally, given certain circumstances, they are capable of canceling themselves which will necessitate and involve an automatic regeneration of a COA. This can optionally be done at random, or in the case of catastrophic goal failure. COAs contain enough information to be considered a high-level plan, with very little implementation details.
In satisfaction of the current COA, unit groups are assigned sequences of missions that fall into various categories. Each of the missions is defined in a C++ class, and the sequences of missions are collections of missions. The unit group conducts the next mission listed in its respectively assigned mission collection, until each one is completed. If necessary, a new mission can be spawned and inserted at the top of the collection, at which time the unit grouping will embark on the new mission, and resume the second mission once the newly spawned mission has been completed. For example, a grouping on a movement mission toward an objective can be assigned a newly spawned mission to attack a target of opportunity. Once this attack mission has been completed, the movement toward the objective mission will be resumed. Further, since missions are modeled as autonomous agents, they can spawn their own new missions as necessary which may supersede the current mission. The hierarchical breakdown of various COAI mission classes developed totals over 40 at the present time.
As mentioned, missions are modeled as classes and have a decoupled implementation. Each mission has a pair of classes closely related: a planner class and an implementer class. The planner class plans the mission and hands over the implementation details to the implementer class. If the implementer class runs into problems, it will call the planner class to once again reinitialize and replan the mission. Once a mission is spawned, it is initialized with several goal variables and the mission class code itself calculates how to correctly carry out the mission. During each iteration update (periodically calculated based on an AI update time step), the mission updates itself and the commands to the mission unit grouping as necessary. Upon mission completion, the mission class instance is removed from the mission sequence collection and ceases to exist. Some of the major mission taxonomy types, which rely on C++ class inheritance from the mission base class, include movement missions (which activate A* pathfinding), attack, defense, recon, and support missions. Missions are queryable for public properties such as mission start time, estimated completion time, status codes, and other information. As a result, it is straightforward to keep the user interface updated with graphical status for users.
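The planner/implementer split might look like the following sketch (all names hypothetical; the replan-on-trouble callback mirrors the behavior described above):

```cpp
// The planner (re)computes goals, routes, and timing for a mission.
class MissionPlanner {
public:
    virtual ~MissionPlanner() = default;
    virtual void plan() = 0;
};

// The implementer executes the plan and asks for a replan on trouble.
class MissionImplementer {
public:
    explicit MissionImplementer(MissionPlanner& p) : planner_(p) {}
    // Called each AI update step; returns true when the mission is done
    // and can be removed from its group's mission sequence collection.
    bool update() {
        if (blocked_) planner_.plan();  // replan when execution stalls
        // ... issue group scripting orders here ...
        return finished_;
    }
private:
    MissionPlanner& planner_;
    bool blocked_ = false;              // set by status monitoring
    bool finished_ = false;             // set when goals are satisfied
};
```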
Movement missions are generally tactical movement or road movement missions. Tactical movement will move in a tactical formation using advantageous routes to the goal endpoint. Road movement will simply travel by the most trafficable route to the goal location. Generally, movement missions will respond to React To Contact (RTC) events using an SOP, which may include attack, evade, stop movement, or retreat doctrines. Recon missions are similar but will move to advantageous locations for observation based on the terrain analysis, among other doctrinal differences.
Pathfinding to mission waypoints along a movement route is calculated using a modified version of the A* Algorithm (Hart, Nilsson, and Raphael 1968). The implementation considers multiple goals, for example the cost function can include factors for avoiding or approaching the enemy, attractiveness for traveling on roads, moving through high line-of-sight grid squares, or maintaining terrain cover. For example, if one of the mission goals is concealment during movement, lower movement cost can be assigned for terrain covered by forested areas or buildings. Pathfinder estimated time of arrival results are based on the speed over terrain of the slowest unit in the group.
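As an illustration, a multi-goal A* step cost in the spirit described above could shape the base traversal time with route preferences; the weights below are ours, not the paper's.

```cpp
// Per-grid-square data assumed available from the terrain analysis
// and spot table (names hypothetical).
struct GridSquare {
    double moveTimeSec;     // slowest unit's time to cross this square
    bool   onRoad;
    double concealment;     // normalized [0, 1] from terrain analysis
    double enemyProximity;  // normalized [0, 1] from the spot table
};

// Step cost fed to A*: base time shaped by route preferences.
double stepCost(const GridSquare& g, bool preferRoads, bool stayConcealed) {
    double c = g.moveTimeSec;
    if (preferRoads && g.onRoad) c *= 0.5;      // roads are attractive
    if (stayConcealed)           c *= (1.5 - g.concealment);
    c *= (1.0 + g.enemyProximity);              // avoid known enemy
    return c;
}
```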
An important consideration is that the pathfinding code must run in its own CPU thread. As a result, when a mission needs a pathfinding route, it sends the desired goal coordinates, group data, and route preference to a collection of pathfinder processes executing on the machine. The pathfinding request is queued, and the mission class waits until a result is returned. Pathfinding is by far the most computationally intensive element of the entire COAI architecture, other than the initial scenario and terrain analysis. Route movement of mission groups is 80% of what the COAI does. The importance of this cannot be overstated: a brick-house architecture for planning and implementing movement has been essential.
Various types of attack missions are implemented based on group composition: vehicular, foot, aircraft, etc. Generally, for attack missions an endpoint location is assigned as well as a casualty threshold. The group will generally conduct tactical movement toward the objective, execute the attack several times as necessary, and regroup between attacks. During movement, the react-to-contact standard operating procedure is enacted.
Generally, attacks are coded to take place using the well-established military metaphor for success that elaborates on the "4Fs": find 'em, fix 'em, flank 'em, and finish 'em (U. S. Army 2007). Planned attacks would therefore normally involve coordinated movements and attacks by two or more unit groupings, comprising at least a ≥ 3:1 combat power points advantage. Surprisingly, attacks are tractable to plan with acceptable realism once an enemy defender has been located. The fixing force approaches directly within weapons distance and begins firing. Meanwhile, the flanking force(s) conduct flanking movements around the left, right, or rear, and engage the enemy from those directions. The finishing force can be either the flanking force or another available group, which will move in for a final clobbering. A casualty loss threshold is established beforehand, and if it is reached the attack is broken off as unsuccessful and the forces are reallocated.

Isolated group defend missions are simpler: the group merely moves to the endpoint location and prepares a defense, normally by "digging in". If a casualty threshold is met, the group withdraws to a safer location.

Support missions include being held in reserve, artillery fire support, logistics and supply, and headquarters missions. Forces allocated to support missions, in addition to conducting their primary mission, relocate periodically to maintain a relative position behind the FEBA. Most support missions also periodically relocate based on their proximity to the enemy. Headquarters units, for example, will maintain a safe distance between themselves and the nearest enemy, or advance to maintain a general distance to the FEBA. Artillery monitors the current spot table and fires on lucrative targets of opportunity, or fires in support of attacking or defending units when advantageous.
In the special cases of artillery and attack aviation support missions, available direct fire artillery as well as available attack aviation assets are placed in pools. Group missions can request fire or aviation support based on their circumstances, in which case it is added to a request listing. Artillery and aviation assets periodically evaluate the listing and prioritize their response based on likely effectiveness and proximity. Once satisfied or determined infeasible, requests are removed from the queue listing.
Missions are assigned based on unit group type, which is known from the scenario design specification. In general, most unit groupings can be categorized into one of the following major groups: armor, mechanized, mechanized with dismounts, infantry, artillery, recon, screening, aviation (attack/transport/recon), refueling support, ammunition support, or headquarters. The sum of the combat power points for the constituent units is used to calculate overall combat power and force match-up ratios. Periodically, groups may find themselves unassigned to any particular mission; in this case, the COA class will score and produce their next best mission assignment.
As mentioned, COAs are implemented as a class object, and they own mission sequence collections of mission classes. Each unit group has its own mission sequence collection. The current mission is the mission in the first position of the mission sequence collection. This mission class is responsible for controlling its assigned mission group. It passes high-level scripting orders to the simulation scripting interface. The scripting interface controls sequences of orders, which include items such as move to waypoint, change formation, change facing, move at speed, and weapons tight or weapons free, among others.
When the COAI process is initialized with data, it begins scenario analysis, terrain analysis, and OOB analysis. Typically, this process can take several minutes. When preliminary analysis is completed, an endgame COA is produced as well as the group maneuver plan. Assuming surplus time and resources exist, an opening and/or middle-game COA may be implemented instead. At this point the COA is formally implemented, missions are spawned (which are responsible for planning their own independent execution, and updating themselves, or spawning new missions as replacements). Missions are then implemented. Current missions are posted to a Gantt chart in the user interface which includes each unit grouping. The Gantt chart depicts the mission stack for each group, as well as an estimated completion time for respective missions. The COA is then controlled until the next game phase is reached. The scenario ends at the scenario end time and the victor is calculated based on objective points totals.
It is important to give the training audience and training administrators a detailed window into the internal workings of the COAI; this is both so that they can understand it as well as appreciate the realism inherent in the modeling. Further, they can more intelligently tweak the COAI settings and options and locate defects and probable improvements. The COAI has its own process independent UI running in parallel with simulation code. Some of the available graphical interfaces include an event log, static terrain map, updated maps with group locations, objective table listing, COAI options, and scenario options windows.
It is important to note that the AI solution to the scenario problem does not have to be optimal in order to be realistic and acceptable as a computer opponent. It is well known that human opponents are far from optimal. However, they can be counted on for a unique solution to most specific problems that will vary significantly between occurrences. If optimal mission solutions were calculated, this would result in a decreased training benefit and replay value for the AI implementation. Using randomness, crucially, also greatly simplifies the coding and modeling requirement necessary. Missions can be randomly specified within certain acceptable limits; in type, time, and space.
3 RESULTS AND CONCLUSIONS
The model described in this paper constitutes a computer opponent AI system conceived for conducting professional military training. Although many approaches exist, it is important that the envisioned solution be capable of a low-overhead implementation in use. In other words, a large simulation staff and large budget must not be required for end-users to conduct their own training, and no special hardware, facilities, or preparation must be required. Bringing low-cost computer-assisted training to the targeted echelons (company, battalion, and brigade level training) requires this characteristic.
REFERENCES
AUTHOR BIOGRAPHIES
MICHAEL J. PELOSI is Professor of Software Engineering at UMUC. He received his Ph.D. from Nova Southeastern University, Ft. Lauderdale, FL. His research interests include software engineering and artificial intelligence. His email address is michael.pelosi@faculty.umuc.edu.
MICHAEL SCOTT BROWN is Program Director of Software Engineering at UMUC. He received his Ph.D. from Nova Southeastern University, Ft. Lauderdale, FL. His research interests include software engineering and artificial intelligence. His email address is michael.brown@umuc.edu.
A Comparison of PVS and Isabelle/HOL
DAVID GRIFFIOEN¹,²* MARIEKE HUISMAN²
¹ CWI, Amsterdam.
² Computing Science Institute, Univ. Nijmegen,
P.O. Box 9010, 6500 GL Nijmegen, The Netherlands.
{marieke,davidd}@cs.kun.nl
Abstract. There is an overwhelming number of different proof tools available, and it is hard to find the right one for a particular application. Manuals usually concentrate on the strong points of a proof tool, but to make a good choice one should also know (1) what the weak points are and (2) whether the proof tool is suited to the application at hand. This paper gives an initial impetus to a consumers' report on proof tools. The powerful higher-order logic proof tools PVS and Isabelle are compared with respect to several aspects: logic, specification language, prover, soundness, proof manager, user interface (and more). The paper concludes with a list of criteria for judging proof tools, which is applied to both PVS and Isabelle.
1994 Mathematics Subject Classification: 03B35 Mechanisation of proof and logical operations; 03B15 Higher-order logic and type-theory.
1998 Computing Reviews Classification System: F.4.3 Formal Languages; D.2.4 Software/Program Verification; F.3.1 Specifying and Verifying and Reasoning about Programs.
Keywords and Phrases: Proof Tools, Isabelle/HOL, PVS.
## 1 Introduction
There is an overwhelming number of different proof tools available (e.g. in the Database of Existing Mechanised Reasoning Systems one can find references to over 60 proof tools [Dat]). All have particular applications that they are especially suited for. Introductory papers on proof tools usually emphasise their strong points with impressive examples. But if one really wishes to start using one particular proof tool, this information is usually not enough. To make the right choice, one should also know (1) which are the weak points of the proof tool and (2) whether the proof tool is suited for the application in hand. The choice of a proof tool is very important: it can easily take half a year before one fully masters a tool and is able to work on significant applications.
It would be desirable to have some assistance in choosing the appropriate proof tool. When one wishes to buy a toaster, there is also a wide choice, but one is assisted by the reports of consumers' organisations. It would be desirable to have similar consumers' reports for proof tools. Such reports should not summarise the manuals, but should be based on practical experience with the tools. They should discuss several important aspects from a users' perspective, both theoretical (e.g., the logic used) and practical (e.g., the user interface), and they should contain a list of criteria on which all proof tools are judged. Such a consumers' report can assist in selecting an appropriate proof tool, but it can also be interesting for people who are already using a particular proof tool (and have no plans to change), because knowing about other proof tools also helps one understand the proof tool one usually works with.

* Supported by the Netherlands Organisation for Scientific Research (NWO) under contract SION 612-316-125.

We are aware that proof tools change over time and that such a consumers' report can only have temporary validity. However, it would be nice if it could have some influence on the direction in which proof tools are developing.
This paper gives the initial impetus to such a report. It describes two proof tools, PVS [Sha96] and Isabelle [Pau94]. We have chosen PVS and Isabelle as the basis for our comparison because both are known as powerful proof tools for higher-order logic which have shown their capabilities in non-trivial applications. Both PVS and Isabelle are very complex tools and it is impossible to take all features into account. Therefore, our opinion on the important advantages and disadvantages of working with PVS or Isabelle is to some extent subjective and influenced by our own histories and fields of research.
Section 1.1 briefly gives some background information on PVS and Isabelle. Next, Section 2 compares PVS and Isabelle/HOL. Section 3 discusses our experiences with PVS and Isabelle. Section 4 sketches what we think is the best of both tools. Finally, in Section 5 we apply a list of criteria to both PVS and Isabelle.
We based our experiences on PVS version 2.417 and on Isabelle versions 94-8 and 98.
Related Work We are not the first to compare different proof tools. A comparison of ACL2, a first-order logic prover based on Lisp, and PVS based on the verification of the Oral Message algorithm is described in [You97]. HOL is compared to PVS in the context of a floating-point standard [CM95]. In the first comparison, the specification language of PVS is described as too complex and sometimes confusing, while the second comparison is more enthusiastic about it. Gordon describes PVS from a HOL perspective [Gor95]. Other comparisons have been made between HOL and Isabelle/ZF (in the field of set theory) [AG95] and HOL and Coq [Zam97]. Three proof tool interfaces (including PVS) are compared from a human-computer interaction perspective in [MH96].
To the best of our knowledge, we are the first to compare PVS and Isabelle/HOL. Our comparison is not based on a particular example, but treats systematically several aspects of both tools.
### 1.1 Short overview of PVS and Isabelle
The PVS Verification System is being developed at SRI International Computer Science Laboratory. Work on PVS started in 1990 and the first version was made
available in 1993. A short overview of the history of the system can be found in [Rus]. PVS is written in Lisp and it is strongly integrated with (Gnu and X) Emacs. The source code is not freely available.
PVS has been applied to several serious problems. For example to specify and design fault-tolerant flight control systems, including a requirements specification for the Space Shuttle [CD96]. References to more applications of PVS can be found in [Rus].
Isabelle is being developed in Cambridge, UK, and in Munich. The first version of the system was made available in 1986. Isabelle uses several ideas of the LCF prover [GMW79]: formulae are ML values, theorems are part of an abstract data type, and backward proving is supported by tactics and tacticals. The aim of the designers of Isabelle was to develop a generic proof checker, supporting a variety of logics, with a high level of automation. Isabelle has been called "the next 700 theorem provers" [Pau90]. Isabelle is written in ML, and the source code is freely available.
Isabelle is used in a broad range of applications: formalising mathematics (including semantics), logical investigations, program development, specification languages, and verification of programs or systems. References to applications of Isabelle can be found in [Pfe].
## 2 A comparison of PVS and Isabelle/HOL
This section first describes several important aspects of a proof tool in general. The comparison of PVS and Isabelle will then be structured along these lines. The division is somewhat artificial, because strong dependencies exist between the various parts, but is helpful in the comparison. The emphasis will be on aspects that are important from a users’ perspective.
The first aspect that we distinguish is the logic that is used by the tool. In this paper we will restrict ourselves to (extensions of) typed higher-order logic.
Strongly related with the logic is the specification language. It is very important to have a good specification language, because a significant part of a verification effort comes down to specifying what one actually wishes to verify. It is not very useful to have a fully verified statement, if it is not clear what the statement means.
The next aspect that we distinguish is the prover. An important issue for the prover is which proof commands (tactics) are available (i.e. which steps can be taken in a proof). Strongly related with this is the choice of a tactical language. Tactics or proof strategies are functions which build new proof commands, using more basic ones. A sophisticated tactical language significantly improves the power of a prover. Another important aspect is whether decision procedures (such as for linear arithmetic and for abstract data types) are available.
A next aspect is the structure of the tool, i.e. whether there is a small kernel which does all logical inferences. When the code of the kernel is available (and small) it is possible to convince oneself of the soundness of the tool.
Another component is the **proof manager**, which determines *e.g.* how the current subgoals are displayed, whether the proof trace is recorded and how proof commands can be undone.
Theoretically non-existent, but very important for the actual use of a tool, is the **user interface**. Of course this does not influence the “computing power” of the tool, but a good user interface can significantly increase the effectiveness and usability of a proof tool.
### 2.1 The logic
**PVS** PVS implements classical typed higher-order logic, extended with predicate subtypes and dependent types. PVS has many built-in types, such as bools, lists, reals and integers; standard operations on these types are also hard-coded in the tool. Type constructors are available to build complex types *e.g.* function types, product types, records (labelled products) and recursively-defined abstract data types. The use of predicate subtypes and dependent types will be explained in more detail below.
**Isabelle** Isabelle has a meta-logic, which is a fragment of higher-order logic. Formulae in the meta-logic are built using implication \( \Rightarrow \), universal quantification \( \forall \) and equality \( \equiv \). All other logics (the object logics) are represented in this meta-logic. Examples of object logics are first-order logic, the Barendregt cube, Zermelo-Fraenkel set theory and (typed) higher-order logic.
In this paper we will restrict attention to typed higher-order logic (HOL) as object logic. The formalisation of HOL in Isabelle relies heavily on the meta-logic. HOL uses the polymorphic type system of the meta-logic. In its turn, the type system of the meta-logic is similar to the type system of ML, the implementation language. Implication, quantification and equality are immediately defined in terms of the meta-logic. Together with some appropriate axioms, these form the basis for the higher-order logic theory. All other definitions, theorems and axioms are formulated in terms of these basic constructs.
**Predicate subtypes and dependent types** Predicate subtypes and dependent types as in PVS are not common in mechanical proof checkers, but they can be very useful in writing down a succinct and correct specification.
A predicate subtype is a new type constructed from an existing type by collecting all the elements of the existing type that satisfy the predicate. Perhaps the most famous basic example of a predicate subtype is the type of non-zero numbers. This type is used in the declaration of the division operator in PVS. The code below\(^1\) is a fragment of the PVS prelude (which contains the theories that are built in to the PVS system).
```plaintext
nonzero_real: NONEMPTY_TYPE = {r: real | r /= 0}   % /= is inequality

+, -, * : [real, real -> real]
/       : [real, nonzero_real -> real]
```
\(^1\) All examples in this paper are available at [http://www.cs.kun.nl/~marieke/Comparison.html](http://www.cs.kun.nl/~marieke/Comparison.html).
```plaintext
Ex_Array[T: TYPE]: THEORY
BEGIN
  Ex_Array: TYPE = [# length : nat,
                      val    : [below(length) -> T] #]
END Ex_Array
```
Fig. 1. Dependent typing in PVS
When the division operator is used in a specification, type checking will require that the denominator is nonzero. As this is not decidable in general, a so-called Type Correctness Condition (TCC) is generated, which forces the user to prove that the denominator is indeed nonzero. A theory is not completely verified unless all of its type correctness conditions have been proven. In practice, most of the TCCs can be proven automatically by the tool. The use of predicate subtypes improves the readability of a specification and helps in detecting many semantical errors, as the user can state explicitly all the type constraints. Carreño and Miner come to the same conclusion in [CM95].
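For instance, a minimal sketch (our own, not a prelude fragment): type checking the following lemma generates a TCC requiring the denominator to be nonzero, which is provable from the antecedent.

```plaintext
inv_pos: LEMMA FORALL (x: real): x > 0 IMPLIES 1/x > 0

% Generated proof obligation (discharged automatically by the prover):
% inv_pos_TCC1: OBLIGATION FORALL (x: real): x > 0 IMPLIES x /= 0
```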
As mentioned, PVS offers another typing facility, namely dependent typing. In Figure 1 a theory of arrays is depicted. The type Ex_Array is a record with two fields: length, a natural number denoting the length of the array, and val, a function denoting the value at each position in the array. The domain of val is the predicate subtype below(length) of the natural numbers less than length. The type of val thus depends on the actual length of the array.\(^2\)
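As a small usage sketch (our own; it assumes the theory of Figure 1 is in scope), an array of the first three squares can be constructed as follows. Type checking ensures that val is only defined on indices below the declared length.

```plaintext
IMPORTING Ex_Array[nat]

squares: Ex_Array = (# length := 3,
                       val    := (LAMBDA (i: below(3)): i * i) #)
```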
### 2.2 The specification language
PVS The specification language of PVS is rich, containing many different type constructors, predicate subtypes and dependent types. As an example, a specification of the quicksort algorithm can be found in Figure 2. We discuss some specific points.
- PVS has a parametrised **module system**. A specification is usually divided into several theories and each theory can be parametrised with both types and values. Theories can import (multiple) other theories at any point in the theory, so that a value or type that has just been declared or defined can immediately be used as an actual parameter.
  **Polymorphism** is not available in PVS, but it is approximated by theories with type parameters. To define a polymorphic function, one can put it in a theory which is parametrised with the type variables of the function. However, this approach is not always convenient: when a theory is imported, all parameters must be given a value, so when a function does not use all type parameters of a theory, the unused types must still be instantiated.
\(^2\) Dependent typing and predicate subtyping in general are separate matters, but in PVS dependent types can only be constructed using predicate subtypes.
```plaintext
sort[T: TYPE, <= : [T,T -> bool]]: THEORY   % parametrised theory
BEGIN
  ASSUMING                                  % assuming clause
    total: ASSUMPTION total_order?(<=)      % infix operator
  ENDASSUMING

  l : VAR list[T]
  e : VAR T

  sorted(l): RECURSIVE bool =               % recursive definitions
    IF null?(l) OR null?(cdr(l))            % with measure
    THEN true
    ELSE car(l) <= car(cdr(l)) AND sorted(cdr(l))
    ENDIF
  MEASURE length(l)

  qsort(l): RECURSIVE list[T] =
    IF null?(l) THEN null
    ELSE LET piv = car(l)
         IN append(qsort(filter(cdr(l), (LAMBDA e: e <= piv))),
                   cons(piv,
                        qsort(filter(cdr(l), (LAMBDA e: NOT e <= piv)))))
    ENDIF
  MEASURE length(l)

  qsort_sorted: LEMMA sorted(qsort(l))
END sort
```
Fig. 2. A specification of the quicksort algorithm in PVS
- PVS has a rich **overloading** structure. Different functions can have the same name as long as they have different input types. Different functions in different theories can have the same name, even when they have the same (input) type. The theory name can be used as a prefix to distinguish between them. Names for theorems and axioms can be reused as well, as long as they are in different theories. Again, the theory name can be used to disambiguate.
- A theory can start with a so-called **assuming clause**, where one states assumptions, usually about the parameters of the theory. These assumptions are used as facts in the rest of the theory. When the theory is imported, TCCs are generated which force the user to prove that the assumptions hold for the actual parameters.
- **Recursive data types** and functions can be defined in PVS. An induction principle and several standard functions, such as map and reduce, are automatically generated from an abstract data type definition. PVS allows general recursive function definitions. All functions in PVS have to be total; therefore, termination of a recursive function has to be shown by giving a measure function which maps the arguments of the function to a type with a well-founded ordering. The tool generates TCCs that force the user to prove that this measure decreases with every recursive call.
- PVS has much fixed **syntax**. Many language constructs, such as IF ... and CASES ..., are built in to the language and the prover. There is a fixed list of symbols which can be used as infix operators; most common infix operators, such as + and <=, are included in this list. Sometimes PVS uses syntax which is not the most common, e.g. [A,B] for a Cartesian product of types A and B and (:x,y,z:) for a list of values x,y,z.

```plaintext
QSort = HOL + List + WF_Rel +          (* theory imports *)

consts                                 (* infix operator *)
  "<=" :: "['a, 'a] => bool"           (infixl 65)

axclass                                (* axiomatic type class *)
  ordclass < term
  total_ord "total (op <=)"

consts                                 (* primitive recursion *)
  sorted :: "[('a :: ordclass) list] => bool"
primrec sorted list
  sorted_nil  "sorted [] = True"
  sorted_cons "sorted (x#xs) =
                 ((case xs of [] => True | y#ys => x <= y) & sorted xs)"

consts                                 (* well-founded recursion *)
  qsort :: "[('a :: ordclass) list] => ('a :: ordclass) list"
recdef qsort "measure size"
  "qsort []       = []"
  "qsort (x # xs) = qsort [y : xs. y <= x] @
                    (x # qsort [y : xs. ~ y <= x])"

end
```
**Fig. 3.** A specification of the quicksort algorithm in Isabelle
**Isabelle** The specification language of Isabelle is inspired by functional programming languages (especially ML). In Figure 3 the quicksort example is shown in Isabelle syntax. We discuss some specific aspects.
- The **module system** allows importing multiple other theories, but it does not permit parametrisation. The type parameters of PVS are not necessary in Isabelle, because functions can be declared polymorphically. The value parameters of PVS can be thought of as an implicit argument for all functions in the theory. Making this argument explicit could be the way to 'mimic' the value parameters in Isabelle.
- **Axiomatic type classes** [Wen95,Wen97] are comparable to the assuming clause in PVS and to type classes in functional programming [WB89]. A type class gives polymorphic declarations for functions. In an *axiomatic* type class, required properties of these functions can be stated as well. These properties can be used as axioms in the rest of the theory. The user can make different instantiations of these axiomatic type classes by giving appropriate bodies for the functions and proving that the properties hold. Notice that a limited form of overloading can be realised using Isabelle's axiomatic type classes, but only for functions with a single polymorphic type.
- Isabelle automatically generates induction principles for each **recursive data type**. The user can give inductive and coinductive definitions. There is a special construct to define primitive recursive functions. Well-founded recursive functions can be defined as well, together with a measure function to show their termination.
- Isabelle **syntax** can easily be extended. In particular, Isabelle allows the user to define arbitrary infix and mixfix operators (a small sketch follows this list). There is a powerful facility to give priorities and to describe a preferred syntax. This allows the user to define that lists should be represented for input and output as e.g. `[1,2,3]`, while internally this is represented as `(cons 1 (cons 2 (cons 3 nil)))`. Language constructs like `if...then...else` are defined explicitly in terms of the basic operators.
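As a minimal sketch in Isabelle98 theory-file syntax (the xor operator and its definition are our own illustration), a new infix operator can be declared and defined as follows:

```plaintext
consts
  xor :: "[bool, bool] => bool"    (infixl 60)

defs
  xor_def "P xor Q == (P & ~Q) | (~P & Q)"
```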
### 2.3 The prover
**PVS** PVS represents theorems using the sequent calculus. Every subgoal consists of a list of assumptions $A_1, \ldots, A_n$ and a list of conclusions $B_1, \ldots, B_m$. One should read this as: the conjunction of the assumptions implies the disjunction of the conclusions i.e. $A_1 \land \ldots \land A_n \Rightarrow B_1 \lor \ldots \lor B_m$.
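For instance, a PVS subgoal is displayed roughly as follows (a sketch; antecedents are numbered negatively, conclusions positively):

```plaintext
{-1}  x > 0
  |-------
{1}   x /= 0
{2}   x < 0
```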
The proof commands of PVS can be divided into three different categories:\(^3\)
- **Creative proof commands.** These are the proof steps one also writes down explicitly when writing a proof by hand. Examples of such commands are *induct* (start a proof by induction), *inst* (instantiate a universally quantified assumption or an existentially quantified conclusion), *lemma* (use a theorem, axiom or definition) and *case* (make a case distinction). For most commands, there are variants which increase the degree of automation, e.g. the command *inst?* tries to find an appropriate instantiation itself.
- **Bureaucratic proof commands.** When writing a proof by hand, these steps are usually done implicitly. Examples are *flatten* (disjunctive simplification), *expand* (expanding a definition), *replace* (replace a term by an equivalent term) and *hide* (hide assumptions or conclusions which have become irrelevant).
- **Powerful proof commands.** These are the commands that are intended to handle all "trivial" goals. The basic commands in this category are *simplify* and *prop* (simplification and propositional reasoning). A more powerful example is *assert*, which uses the simplification command and the built-in decision procedures and does automatic (conditional) rewriting. PVS has some powerful decision procedures, dealing, among other things, with linear arithmetic. The most powerful command is *grind*, which unfolds definitions, skolemizes quantifications, lifts if-then-elses and tries to instantiate and simplify the goal.

\(^3\) This division is made by the authors, not by the developers of PVS. Nevertheless, it resembles the division made in [COR+95].
**Isabelle** The basic proof method of Isabelle is resolution. The operation RS is the standard resolution operation: it unifies the conclusion of its first argument with the first assumption of its second argument. As an example, resolving `[| ?P |] ==> ?P | ?Q` with `[| ?R; ?S |] ==> ?R & ?S` results in the theorem `[| ?P; ?S |] ==> (?P | ?Q) & ?S`.
Isabelle supports both forward and backward proving, although its emphasis lies on backward proving by supplying many useful tactics for it. A tactic transforms the proof goal into several subgoals and gives a justification for this transformation.
In Isabelle, every goal consists of a list of assumptions and one conclusion. The goal `[| A1; A2; ...; An |] ==> B` should be read as \( A_1 \Rightarrow (A_2 \Rightarrow \ldots (A_n \Rightarrow B)) \). Notice that \( \Rightarrow \) is the implication of the meta-logic.
Isabelle tactics usually do not return a single next state, but a lazy list with possible next states. Many tactics try to find a useful instantiation themselves and return a lazy list containing (almost) all possible instantiations (in a suitable order). When the first instantiation is not satisfactory the next instantiation can be tried with back. This possibility is mainly used by powerful tactics.
The proof commands of Isabelle can be divided in several categories as well, although these are different from the categories used earlier for PVS.
- **Resolution** is the basis for many tactics. The standard one is resolve_tac. It tries to unify the conclusion of a theorem with the conclusion of a subgoal. If this succeeds, it creates new subgoals to prove the assumptions of the theorem (after substitution).
- Another basic tactic is assume_tac, which tries to unify the conclusion with one of the assumptions.
- **Induction** is done by induct_tac, which does resolution with an appropriate induction rule.
- **Use an axiom or theorem** by adding it to the assumption list. There are several variants: with and without instantiation, in combination with resolution etc.
- **Simplification** tactics for (conditional) rewriting. For every logic a so-called simplification set can be built. This set contains theorems, axioms and definitions that can be used to rewrite a goal. It is possible to extend the simplification set (temporarily or permanently). Isabelle's simplifier uses a special strategy to handle permutative rewrite rules, i.e., rules where the left- and right-hand side are the same up to renaming of variables. A standard lexical order on terms is defined, and a permutative rewrite rule is only applied if it decreases the term according to this order. The most common example of a permutative rewrite rule is commutativity \((x \oplus y = y \oplus x)\). With normal rewriting (as done by PVS) this rule would loop, but ordered rewriting avoids this (see the sketch after this list).
- **Classical reasoning** is another powerful proof facility of Isabelle. There are various tactics for classical reasoning. One of them, `blast_tac`, uses a tableau prover, coded directly in ML. The proof that is generated is then reconstructed in Isabelle.
- **Bureaucratic** tactics are also available, such as `rotate_tac`, which changes the order of the assumptions. This can be necessary for rewriting with the assumptions, because this is done from top to bottom.
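To illustrate the ordered rewriting mentioned above, here is a sketch in Isabelle98-style ML (the theorem name add_commute and the simpset interface are assumed to be as in the Isabelle98 distribution):

```ml
goal Arith.thy "b + a = a + (b::nat)";
(* add_commute (x + y = y + x) is permutative; ordered rewriting applies
   it only when the rewritten term becomes smaller in the term order, so
   both sides reach the same normal form and simplification terminates *)
by (simp_tac (simpset() addsimps [add_commute]) 1);
```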
A theorem can contain so-called meta-variables, which can be bound while proving it. As an example, consider the specification of quicksort (Figure 3). Suppose that we instantiated the axiomatic type class with the natural numbers (defining \(<=\) as \(\le\)) and that the definition of quicksort is automatically rewritten. Now we can state for example the following goal
goal QSort.thy "qsort [4, 2, 3] = ?x";
where \(?x\) is a meta-variable. When simplifying this goal, the meta-variable is bound to \([2, 3, 4]\) (and the theorem is proven). The theorem is stored as `qsort [4, 2, 3] = [2, 3, 4]`. This feature makes Isabelle well-suited for transformational programming [AB96] and for writing a Prolog interpreter [Pau94].
**Tactical language** A tactical (or proof strategy) is a function to build complex tactics (or proof commands) using more basic ones. A well-known example is the tactical `then`. This tactical gets two tactics as arguments and applies them sequentially to the goal.
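In Isabelle's ML the sequencing tactical is written THEN; a minimal sketch (tactic and theorem names as in Isabelle98):

```ml
(* apply conjunction introduction to subgoal 1, then repeatedly try to
   close the resulting subgoals by assumption; THEN and REPEAT are tacticals *)
val split_and_close = resolve_tac [conjI] 1 THEN REPEAT (assume_tac 1);
```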
PVS has a very limited proof strategy language; roughly it is only possible to concatenate and repeat proof commands in several ways. When one wishes to go beyond this, for example to inspect the goal, this should be done in Lisp. The Lisp data structure that contains the proof goal is not officially documented; some accessor functions are known to work but the developers explicitly allow themselves to change PVS at this level of implementation. Probably it is possible to change the goal in Lisp without a logical justification.
In Isabelle the tactical language is ML, so a complete functional language is available. All logical inferences on terms of type `thm` (the theorems) are performed by a limited set of functions. In ML a type can be 'closed', which means that a programmer can express that only a number of 'trusted' functions are allowed to manipulate values of this type (in this case: theorems). In this way the full power of ML can be used to program proof strategies, while soundness is guaranteed via the interface.
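A minimal SML sketch of this idea (ours, not Isabelle's actual kernel interface): the type thm is abstract, so theorems can only be produced by the trusted operations listed in the signature.

```ml
signature KERNEL =
sig
  type term
  type thm                               (* abstract: no constructors exposed *)
  val assume       : term -> thm         (* A |- A *)
  val implies_intr : term -> thm -> thm  (* discharge an assumption: A ==> B *)
  val implies_elim : thm -> thm -> thm   (* modus ponens *)
end;
```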
**Proving with powerful proof commands** Both PVS and Isabelle can do simple calculations quite fast. For instance the theorem below is proven in (almost) zero time in PVS by `(ASSERT)`, using the built-in integer arithmetic.
calc: LEMMA 700 * 400 * 11 = 2 * 7 * 22 * 10000
In Isabelle/HOL we have a similar result. After loading the theories defining the integers we can prove the following goal in (almost) zero time using simplification. Note that integers have a sharp-sign # as prefix. Operations on integers are defined using their binary representation, so in contrast to PVS, arithmetic is not part of the kernel, but defined in the logic.
goal Bin.thy "#700 * #400 * #11 = #2 * #7 * #22 * #10000";
Linear (and some non-linear) arithmetic has standard support in PVS and the next theorem is also proven with a single command.
arith: LEMMA 7 + x < 8 + x AND 2*x*x <= 3*x*x
In Isabelle a package to cancel out common summands (and factors) is available. It is loaded by default for the naturals, but not for the integers. The following goal is proven in one step, using simplification.
goal Arith.thy "1 + x < 2 + x";
A well-known [COR+95] example of the simplification procedures of PVS is the proof of the characterisation of the summation function. The theorem below is proven by the single command (induct-and-simplify "k").
```plaintext
sum(k: nat): RECURSIVE nat =
  IF k = 0 THEN 0 ELSE k + sum(k-1) ENDIF
  MEASURE k

sum_char: LEMMA sum(k) = k*(k+1)/2
```
An impressive example of the classical reasoner of Isabelle is the following theorem, problem 41 of Pelletier. PVS cannot prove this in one command, while Isabelle can, using the classical reasoner (Blast_tac).
(ALL z. EX y. ALL x. J x y = (J x z & ~ J x x)) --> ~ (EX z. ALL x. J x z)
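Stated in Isabelle's goal syntax, this is proven in one step (a sketch; Blast_tac as in Isabelle98):

```ml
goal HOL.thy "(ALL z. EX y. ALL x. J x y = (J x z & ~ J x x)) --> ~ (EX z. ALL x. J x z)";
by (Blast_tac 1);
qed "pelletier41";
```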
### 2.4 System organisation and soundness
**PVS** The developers of PVS designed their prover to be useful for real-world problems. Therefore the specification language should be rich and the prover fast, with a high degree of automation. To achieve this, powerful decision procedures were added to PVS. These decision procedures are part of the kernel, which makes the kernel large and complex, and they have sometimes caused soundness problems. Further, PVS was once considered to be a prototype for a new SRI prover. Perhaps for these reasons PVS still seems to contain a lot of bugs, and frequently new bugs show up. An overview of the currently known bugs can be found at http://www.csl.sri.com/htbin/pvs/pvs-bug-list. It would be desirable if the bugs in PVS only influenced completeness and not soundness. Unfortunately, this is not the case, as some recent proofs of true=false have shown [Owr]. Most bugs do not influence soundness, but they can be very annoying.
Because of the soundness bugs in the past, it is reasonable to assume that PVS will continue to contain soundness bugs. The obvious question thus arises: why use a proof tool that probably contains soundness bugs? Our answer is threefold.

First, PVS is still a very critical reader of proofs. PVS lets fewer mistakes slip through than many of our human colleagues (and PVS is much more patient); compared to an average logician or mathematician, PVS is much more precise and sceptical.

Furthermore, history tells us that the fixed soundness bugs have hardly ever been exploited unintentionally; we know of only a single case.

Thirdly, most mistakes in a system that is to be verified are detected in the process of making a formal specification. Economically speaking, the specification is thus very important, and PVS has an expressive and human-friendly specification language. Therefore, specifying a system in the language of PVS gives extra confidence that the specification expresses what is 'meant'.
A lot of effort has been put into the development of PVS. For this reason SRI does not make the code of PVS freely available. As a consequence, to most users the structure of the tool is unknown and making extensions or bug fixes is impossible, although sometimes users go to SRI to implement a feature.
**Isabelle**
Isabelle was developed from quite a different perspective. The main objective was to develop a flexible and sound prover, and next to develop powerful tactics, so that large proof steps could be taken at once. Isabelle seems to be much more stable than PVS. It does not show unpredictable behaviour. Recently a new Isabelle version was released\(^4\). To our surprise some tactics (especially Auto_tac) were changed, so that our old proofs really had to be adapted, and not all of these changes were clearly documented.
### 2.5 The proof manager
**PVS**
All proofs in PVS are done in a special proof mode. The tool manages which subgoals still have to be proven and which steps are taken to construct a proof, so it is not the user's responsibility to maintain the proof trace. Proofs are represented as trees. There is a Tcl/Tk interface which gives a picture of the proof tree (see Figure 4). It helps the user to see which branches of the proof are not proven yet. One can click on a turnstile to see a particular subgoal; the proof commands can also be displayed in full detail.
When using a proof tool, most of the time the theorems and specification are under construction, as the processes of specifying and proving are usually intermingled. The notion of "unproved theorem" allows one to concentrate on the crucial theorems first and prove the auxiliary theorems later. PVS keeps track of the status of proofs, e.g. whether a proof uses unproved theorems.
\(^4\) Isabelle98
Line numbers can be used in PVS to specify that a command should work only on some of the assumptions/conclusions; e.g. expand "f" 2 expands f in the second conclusion. When a specification or theorem is slightly changed (e.g. a conjunct is added), the line numbers in the goal often change. It would be more robust if one could use commands expressing things like: expand all occurrences of f with zero as first argument, or expand f only in the assumptions where the function g occurs. This has the additional advantage that the intention of the proof step becomes clearer. The authors have written their own Lisp functions to calculate a list of line numbers that satisfy a simple regular expression. This is already helpful (especially in strategies), but many extensions are possible. For example, in the presence of overloading it would be useful to expand only the occurrences of f of a specific type.
**Isabelle** Isabelle does not give elaborate proof support. The user has to keep track of everything him/herself (including the undos). Proofs are structured linearly: there is just a list of all subgoals. This stimulates the use of tacticals such as **ALLGOALS**, but it is not so easy to see how "deep" one is in a proof, or in which branch. On the other hand, in Isabelle it is possible to undo an undo (or actually: a choplev, which steps back an arbitrary number of levels, or to a particular level). Moreover, it is also possible to look at the subgoals of an earlier level, without undoing the proof.
### 2.6 User interface
PVS's standard user interface is better developed than Isabelle's. It is strongly integrated with Emacs. Recently, a batch mode was added to PVS. The *de facto* interface for Isabelle is Isamode (also based on Emacs). There are some more advanced user interfaces based on Tcl/Tk, but they only work for particular versions of Isabelle.
### 2.7 Manuals and support
PVS has a number of different manuals, but none of these is completely up to date. There is an introductory manual with a fully elaborated (non-trivial) example to get started. On the mailing list one can ask starters' questions.
Isabelle also comes equipped with several manuals. These are more up to date and concise, but they often explain things very briefly (and sometimes cryptically). The introductory manual does not really give an interesting example, and it is hard to start using Isabelle on the basis of the manuals alone. The best way to start is to take the (annual) Isabelle course. There is good (personal) support from the developers. They usually reply very quickly (the same day) to emails with questions and problems. We found this really helpful.
### 2.8 Runtime speed
We did not compare the speed of the tools, because we think the game is not to "run" a proof, but to construct it. This construction consists of building a specification of a problem and proving appropriate theorems. This is hard and depends heavily on the user, his/her experience with the proof tool, etc. We do mention, though, that the "experienced speed" of the two tools is comparable. By this we mean the time it takes to type check a specification or to execute a smart tactic.
## 3 Our experiences
In this section we wish to discuss in some detail our own, more personal, experiences. After using PVS for several years, we became increasingly unhappy with it, because so many bugs appeared. Sometimes it felt as if we would spend more time working around small bugs than proving serious properties. In this period the first author visited Munich and became enthusiastic about Isabelle. However, reading the Isabelle manuals did not provide enough background to really get started with it. Therefore, in September 1997 the second author attended the Isabelle course in Cambridge. After this course, it seemed relatively easy to start working seriously with Isabelle.
To start with a well-understood, but non-trivial example, the Tree Identification Phase (TIP) [DGRV97] of the 1394 protocol was selected, as the first author had already worked extensively on it using PVS. The first challenge was to transform the PVS specification into Isabelle, because Isabelle’s specification language lacks e.g. records and function updates.
The next step was to start proving. We are used to PVS's proof manager, which records all the steps we take in a proof. Isabelle only provides a so-called listener, which records everything the user types in (including the typos and steps that were undone later), so the proof has to be filtered out afterwards. We found that it is faster to copy the proof steps immediately than to use the listener.
When we then really started proving, we noticed a big difference in the handling of conditional expressions (i.e. if...then...else). In PVS, conditionals
are built in, and the prover knows how to deal with them. In Isabelle, conditional expressions are explicitly defined and the prover does not have special facilities for them. We discussed this with Larry Paulson and Tobias Nipkow, which resulted in a solution for Isabelle94-8. In Isabelle98, more tactics to deal with conditional expressions are available by default.
After proving some invariants over the TIP protocol, we also studied whether a translation of object-oriented specifications into higher-order logic (part of a different project [HHJT98]) could be adapted to Isabelle. In the translation to PVS we made extensive use of overloading and this caused serious difficulties. In discussions with the Isabelle developers we tried several solutions, but none of these were satisfactory. Isabelle98 has the possibility to define different name spaces and this might help. Due to time constraints and lack of documentation we did not investigate this option.
## 4 The best of both worlds
When comparing PVS and Isabelle we realised that both tools had their advantages and disadvantages. Our ideal proof tool would combine the best of both worlds.
**The logic** Predicate subtyping and dependent typing give so much extra expressiveness and protection against semantical errors that this should be supported. The loss of decidability of type checking is easily (and elegantly) overcome by the generation of TCCs and the availability of a proof checker.
The meta-logic of Isabelle gives the flexibility to use different logics, even in a single proof. However, in our applications, we did not feel the need to use a logic other than HOL and the interference with the meta-logic sometimes complicated matters.
**The specification language** The specification language should be readable, expressive and easily extensible. For function application, we have a slight preference for the bracketless syntax of Isabelle.
It should be possible to parametrise theories with values. We have a preference for type-parametrised theories, because polymorphism is hard to combine with overloading. A disadvantage of type inference, in combination with implicitly (universally) quantified variables, is that typos introduce new variables and do not produce an error. As an example, suppose that one has declared a function myFunction :: nat => nat, but that by accident the following goal is typed in: "myFunction x < myFuntion (x+1)". This is internally equivalent to: "ALL myFuntion. myFunction x < myFuntion (x+1)". This error can only be detected by asking explicitly for the list of variables (and their types) in the goal.
**The prover** The ideal prover has powerful proof commands for classical reasoning and rewriting, including ordered rewriting. A tactic should return a lazy list of possible next states, as this is useful to try (almost) all possible instantiations.
Also, decision procedures (for example for linear arithmetic) should be available. Preferably, these decision procedures are not built in to the kernel, but written in the tactical language, so that they cannot cause soundness problems. The style of the interactive proof commands of PVS is preferred over that of Isabelle, because it is more intuitive. It is important to have a structured tactical language which allows the user to access the goal. For this purpose, the structure of the goal should be well-documented.
**System organisation** To ensure soundness of the proof tool, the system should have a small kernel. The code of the tool should be freely available, so that users can easily extend it for their own purposes and (if necessary) implement bug fixes.
**The proof manager and user interface** The tool should keep track of the proof trace. Proofs are best represented as trees, because this is more natural, compared to a linear structure. The tree representation also allows easy navigation through the proof, supported by a visual representation of the tree. When replaying the proof, after changing the specification, the tool can detect for which branches the proof fails, thanks to the tree representation.
## 5 Conclusions and future work
We tried to describe some important aspects of PVS and Isabelle which are not in the 'advertising of the tool', but are important in making a decision on which tool to use. To conclude, Figure 5 gives a list of criteria for judging a proof tool, filled in for PVS and Isabelle. This list is not complete; it is based on the available features of PVS and Isabelle and on our work with these proof tools. We hope that in the future users of other proof tools will produce a similar consumers' test of "their" proof tool too, so that a broad overview of users' experiences with different proof tools will become available.
Maybe such comparisons will lead to a proof tool which combines the best of all available proof tools. Looking only at PVS and Isabelle, it would be desirable to have a proof tool with the specification language, proof manager and user interface of PVS, but the soundness, flexibility and well-structuredness of Isabelle.
**Acknowledgements**
We thank Bart Jacobs and Frits Vaandrager for their comments on earlier drafts of this paper.
| | PVS 2.417 | Isabelle98/HOL |
|---|---|---|
| **logic** | | |
| dependent types | ++ | not available |
| predicate subtypes | ++ | not available |
| standard syntax | ++/+ | + |
| flexible syntax | - | ++ |
| module system | ++/+ | + |
| polymorphism | - | ++ |
| overloading | + | |
| abstract data types | ++/+ | ++/+ |
| recursive functions | ++/+ | ++/+ |
| proof command language | + | +/- |
| tactical language | - | ++ |
| automation | + | |
| arithmetic decision procedures | + | +/- |
| libraries | + | ++/+ |
| proof manager | ++ | +/- |
| interface | ++ | + |
| soundness | - | ++ |
| upwards compatible | +/- | +/- |
| easy to start using | + | - |
| manuals | +/- | +/- |
| support | + | ++ |
| time it takes to fix a bug | - | ? |
| ease of installation | ++ | ++ |
Fig. 5. A consumers' report of PVS and Isabelle

**References**
... Analysis of Systems, Passau, Germany, volume 1055 of LNCS. Springer-Verlag, April 1996.

[Dat] Database of Existing Mechanised Reasoning Systems.

[Owr] Bug numbers: 71, 82, 113 and 160.

[Wen95] Markus Wenzel. Using axiomatic type classes in Isabelle: a tutorial, 1995. http://www4.informatik.tu-muenchen.de/~wenzelm/papers.html.

[Wen97] Markus Wenzel. Type classes and overloading in higher-order logic. In Gunter and Felty [GF97].
Practical Smart Contract Sharding with Ownership and Commutativity Analysis
George Pîrlea∗
National University of Singapore
Singapore
gpirlea@comp.nus.edu.sg
Amrit Kumar
Zilliqa Research
United Kingdom
amrit@zilliqa.com
Ilya Sergey
Yale-NUS College
National University of Singapore
Singapore
ilya.sergey@yale-nus.edu.sg
Abstract
Sharding is a popular way to achieve scalability in blockchain protocols, increasing their throughput by partitioning the set of transaction validators into a number of smaller committees, splitting the workload. Existing approaches for blockchain sharding, however, do not scale well when concurrent transactions alter the same replicated state component—a common scenario in Ethereum-style smart contracts.
We propose a novel approach for efficiently sharding such transactions. It is based on a folklore idea: state-manipulating atomic operations that commute can be processed in parallel, with their cumulative result defined deterministically, while executing non-commuting operations requires one to own the state they alter. We present CoSplit—a static program analysis tool that soundly infers ownership and commutativity summaries for smart contracts and translates those summaries to sharding signatures that are used by the blockchain protocol to maximise parallelism. Our evaluation shows that using CoSplit introduces negligible overhead to the transaction validation cost, while the inferred signatures allow the system to achieve a significant increase in transaction processing throughput for real-world smart contracts.
CCS Concepts: • Computing methodologies → Distributed programming languages.
**1 Introduction**
The idea of Nakamoto consensus (aka blockchain) has been instrumental for enabling decentralised digital currencies, such as Bitcoin [48]. The applications of blockchains have further expanded with the widespread adoption of smart contracts [62]—self-enforcing, self-executing protocols governing an interaction between several mutually distrusting parties. The Ethereum blockchain has provided a versatile framework for defining smart contracts as blockchain-replicated stateful objects identified by their account numbers [65].
The open and decentralised nature of Nakamoto consensus comes at the price of throughput scalability. At a high level, in order for a sequence of transactions (a so-called block) to be agreed upon system-wide, the system's participants (so-called miners) have to validate those transactions, with each miner executing them individually [4]. As a result, the throughput of blockchain systems such as Bitcoin and Ethereum does not improve, and even slightly deteriorates, as more participants join the system: Bitcoin currently processes up to 7 transactions per second, while Ethereum's throughput is around 18 transactions per second. Even worse, popular smart contracts may cause high congestion, forcing protocol participants to exclusively process transactions specific to those contracts. This phenomenon has been frequent in Ethereum: in the past, multiple ICOs (Initial Coin Offerings, a form of crowdfunding contract) and games, such as CryptoKitties, have rendered the system useless for any other purpose for noticeable periods of time [14].
Sharding in Blockchains. One of the most promising approaches to increase blockchain throughput is to split the set of miners into a number of smaller committees, so they can process incoming transactions in parallel, subsequently achieving a global agreement via an additional consensus mechanism—an idea known as sharding. Sharding transaction executions, as well as sharding the replicated state, has been an active research topic recently, both in industry [23, 30, 39, 51, 60, 64, 67] and academia [1, 17, 36, 45, 66].
Many of those works focus exclusively on sharding the simplest kind of transactions—user-to-user transfers of digital funds—which are paramount in blockchain-based cryptocurrencies, while ignoring sharding of smart contracts [36, 45, 66, 67]. Existing proposals tackling smart contracts impose heavy restrictions on contract-manipulating transactions, for instance, requiring the accounts of both the contract and its user to be assigned to the same shard, or processing all such transactions in a specialised shard [17, 30, 39]. Other solutions assume a complex cross-shard communication protocol to reconcile possible conflicts [23, 31, 36, 64], or adopt a contract design very different from Ethereum's [1].
To the best of our knowledge, none of these approaches allows for parallel sharded executions involving the same smart contract. That is, none of them solves the congestion problem in Ethereum caused by highly popular contracts.
In this work, we describe a novel approach for significantly increasing the throughput of blockchains for smart contract-manipulating transactions. To achieve this, instead of treating contract implementations as “black boxes” (as do all the works mentioned above), we design a solution based on PL techniques, specifically, on static program analysis.
**Our Approach.** Why can user-to-user money transfers be sharded efficiently without complex inter-shard communication, and how can we generalise (perhaps, conservatively) this logic to shard arbitrary smart contracts?
Consider a transaction $tx_1$ that manifests a transfer of 10 units of some digital currency from the user $A$ to $B$, and a transaction $tx_2$ that states that $A$ transfers 20 units to $C$. In order to ensure that $A$ does not double-spend, both $tx_1$ and $tx_2$ have to be executed in the same shard—the one that owns $A$’s account and keeps track of $A$’s balance. However, neither $B$ nor $C$ need to be owned by $A$’s shard: as long as $tx_1$ and $tx_2$ are validated within $A$’s shard, the positive deltas to $B$ and $C$’s accounts can be simply broadcast through the network, so their balances are increased accordingly with no extra inter-shard interaction.
Now consider a transaction $tx_3$, in which $D$ transfers 15 units to $C$. Notice that it does not matter in which order $tx_1$ and $tx_3$ are going to be processed, as they commute: either of their relative orderings will increase $C$’s balance by 35.
The notions of state ownership and operation commutativity have been central in a number of works dedicated to reasoning about deterministic parallelism and proving correctness of concurrent programs [18, 20, 23, 30, 40, 44, 50]. In those works, the ownership discipline determines what parts of the shared state need to be manipulated sequentially by the same thread, while commutativity allows certain actions to be executed in concurrent threads in parallel, with a deterministic result. The virtues of commutative operations have also been studied in the systems community for scaling concurrent software [3, 13, 52, 55] and achieving faster consensus in replication protocols [11, 37, 42, 47]. However, to the best of our knowledge, no attempts to automatically leverage commutativity in user-defined replicated computations (e.g., smart contracts) have been made to date.
In this work, we present CoSplit—a static analysis tool that soundly infers both ownership and commutativity information from source code of smart contracts and translates it to sharding signatures. The signatures are used, upon the deployment of a contract, to define a sharding strategy for the contract-manipulating transactions via the following rules:
- All transactions touching parts of a contract’s state owned by a shard $S$ must be executed in this shard;
- Transactions executed in different shards are guaranteed to commute. Their cumulative result can be obtained by means of “joining” their respective contributions in a way prescribed by the sharding signature.
These two rules allow the system to enjoy a notion of consistency for parallel transaction executions adopted from works on the semantics of concurrent revisions [7, 8, 43]:
1. Potentially conflicting contract-manipulating transactions will be executed in some globally-agreed order.
2. Commuting transactions can be executed in parallel, as their effect does not depend on their order.
As we will discuss in Sec. 2, popular Ethereum-style contracts often allow for a “logical split” of their state into disjointly owned components, which is much more fine-grained than assigning an entire contract to a single shard. This split makes it possible to process transactions affecting those contracts in parallel in different shards, thus providing a practical solution to scale up the network throughput.
**Our Contributions.** The contributions of this work are:
- Identifying logical state ownership and operation commutativity as enabling mechanisms for sharding Ethereum-style contracts, and demonstrating adequacy of those notions for real-world Ethereum-style contracts (Sec. 2).
- A compositional static analysis that infers ownership and commutativity signatures for contracts written in Scilla [51] and translates them to shard allocation strategies (Sec. 3).
- An implementation of the analysis and of the algorithm for deriving sharding strategies in the tool called CoSplit and an end-to-end integration of CoSplit with a production-grade sharded blockchain protocol [45, 67] (Sec. 4). The archived software artefact is publicly available [53].
- Evaluation of parallelism enabled by CoSplit-inferred signatures, demonstrating a consistent increase in system throughput with increasing the number of shards (Sec. 5).
**2 Motivation and Key Ideas**
**2.1 Contract Usage in Ethereum**
To motivate the design of our approach for sharding, we first present the trends for smart contract usage in Ethereum. Since there are over 700 million Ethereum transactions to date, processing all the execution traces is too computationally expensive. Therefore, we selected a random sample of
As the left plot in Fig. 1 shows, ordinary user-to-user transfers are on a steady downward trend. Moreover, single-contract transactions take up to 55% of the recent blocks in our sample.
In this work we focus on sharding single-contract transactions. The right plot in Fig. 1 shows the dominance of a specific type of such transactions that represent token transfers in a special kind of contract—ERC20 token contracts [24]. ERC20 and other similar standardised contracts pose a big bottleneck to the network throughput: each of them requires sequential processing of all transactions that affect it.
**2.2 Towards Sharding an ERC20 Contract**
Fig. 2 shows a fragment of the implementation of an ERC20 token contract [24] in Ethereum’s high-level language Solidity [25]. The contract’s replicated state is represented by two mutable fields: the mapping balances that contains data about the amount of tokens owned by token holders; and the mapping allowances that captures the amounts of tokens authorised for third-party transfers by their holders. The state manipulations are done by transactions initiated by users (aka senders) calling one of the functions: transfer for transferring tokens, approve for granting the transfer rights for a certain amount of tokens to a third party, and transferFrom for transferring tokens on behalf of the user identified as sender. The subtractions in lines 16 and 21 will fail if the approved spender (resp. the sender) does not have enough allowance, thus preventing double-sends.
The design of the ERC20 contract provides ample opportunities for shard-based parallelism. Consider the left part of Fig. 3 that shows a fragment of the mutable ERC20 state: the balances field mapping account addresses A–E (top part of each box) to the respective token balances (bottom part of the box), and allowances, which is a mapping from addresses (e.g., A) to mappings of amounts allowed to transfer by third parties (e.g., D and E) on their behalf. Now consider the following four single-contract transactions accessing that state concurrently by invoking functions from Fig. 2: $tx_1 = \text{transfer}_A(B, v_1); tx_2 = \text{transfer}_C(A, v_2); tx_3 = \text{transferFrom}_D(A, C, v_3); tx_4 = \text{approve}_A(E, v_4)$. Here, a subscript denotes the transaction sender’s address (accessed via \_msgSender() in Fig. 2), while $v_i$ stand for various non-negative amounts, whose exact value is not important. All those transactions alter the contract state; the left part of Fig. 3 shows their corresponding footprints, i.e., components of the state that they interact with. It is easy to see that the footprints of, e.g., $tx_1$ and $tx_4$ are disjoint, thus, their effects on the contract’s state commute. Therefore, assuming the system provides an operation to join (i.e., merge) updates
---
1The Ethereum dataset and analysis are part of the archived artefact [53].
on the “logically disjoint” state components, it should be possible to execute, e.g., $tx_1$ and $tx_4$ in different shards.
**Sharding Strategy 1: Disjoint State Ownership.** Let us formulate the constraints for parallel execution of the transactions $tx_1$–$tx_4$ from Fig. 3, based on the knowledge that some of them commute thanks to their footprint disjointness. We will denote by $\text{Owns}(S, \{f_1, \ldots, f_n\})$ an ownership constraint, meaning that the shard $S$ logically owns the contract’s state components (fields or map entries) $f_1, \ldots, f_n$ and, thus, only this shard may alter the values of those components by sequentially processing all the corresponding transactions.
Now consider two shards, $S_1$ and $S_2$, and the following set of ownership constraints, where $\text{bal}$ and $\text{all}$ denote the corresponding fields balances and allowances:
$$\text{Owns}(S_1, \{\text{bal}[A], \text{bal}[B], \text{bal}[C], \text{all}[A][D]\}), \text{Owns}(S_2, \{\text{all}[A][E]\})$$
Clearly, $S_1$ and $S_2$ own disjoint portions of the contract’s state; thus, it is safe to assign transactions to shards as $S_1 \mapsto \{tx_1, tx_2, tx_3\}$ and $S_2 \mapsto \{tx_4\}$, and to obtain the final result deterministically by merging their non-conflicting changes. This sharding strategy scales to more shards with designated ownership of the contract’s components and to a larger number of transactions with logically disjoint footprints.
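The dispatch rule implied by these constraints is simple to state in code. Below is a minimal sketch in OCaml (the implementation language of CoSplit itself); the shard record, the string-based component names, and the function `can_dispatch` are our illustrative assumptions, not CoSplit’s actual interface:

```
(* Strategy 1 as a predicate: a transaction may run in a shard only if
   the shard owns every state component in the transaction's footprint. *)
module S = Set.Make (String)

type shard = { owns : S.t }

let can_dispatch (sh : shard) (footprint : S.t) : bool =
  S.subset footprint sh.owns

let () =
  let s1 = { owns = S.of_list [ "bal[A]"; "bal[B]"; "bal[C]"; "all[A][D]" ] } in
  let s2 = { owns = S.of_list [ "all[A][E]" ] } in
  let tx1 = S.of_list [ "bal[A]"; "bal[B]" ] in  (* footprint of transfer_A(B, v1) *)
  let tx4 = S.of_list [ "all[A][E]" ] in         (* footprint of approve_A(E, v4)  *)
  assert (can_dispatch s1 tx1 && not (can_dispatch s2 tx1));
  assert (can_dispatch s2 tx4)
```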
**Sharding Strategy 2: Commutativity of Addition.** Even though transaction $tx_2$ modifies the component $\text{bal}[A]$, it does so in a commutative fashion and, thus, cannot affect the outcome of any other of the listed transactions (ditto for $tx_1$ and $\text{bal}[B]$, and for $tx_3$ and $\text{bal}[C]$). With this observation, we can refine the transaction footprints and the notion of ownership (Fig. 3, right), allowing for a parallel execution with three shards:
$$\text{Owns}(S_1, \{\text{bal}[A], \text{all}[A][D]\}), \text{Owns}(S_2, \{\text{all}[A][E]\}), \text{Owns}(S_3, \{\text{bal}[C]\})$$
In the constraints above, $S_1$ no longer has to own $\text{bal}[B]$ or $\text{bal}[C]$, while shard $S_3$ now needs to own $\text{bal}[C]$. In order to perform transactions allocated as $S_1 \mapsto \{tx_1, tx_3\}$, $S_2 \mapsto \{tx_4\}$, $S_3 \mapsto \{tx_2\}$, obtaining the same result as in the previous case, we need to redefine the state join operation. Specifically, instead of overwriting the values in entries $\text{bal}[B]$ and $\text{bal}[C]$ upon “disjoint merging” as before, we will need to add up the deltas to those components resulting from token transfers in $tx_1$ and $tx_3$, similarly to handling ordinary transfers (Sec. 1).
**The Main Idea.** To summarise these observations: contracts such as ERC20, whose operations only manipulate a small part of the state, allow for parallel conflict-free execution of their operations, if these operations commute. The ownership constraints state which parts of a contract’s state a shard must have exclusive access to in order to execute its operations without conflicts with other shards altering the same contract concurrently. The join defines the way to deterministically reconcile outcomes of the parallel executions.
### 2.3 Commutativity and State Ownership
It is common to reason about operation commutativity in terms of action traces [13]. That said, our way of thinking is inspired by the logical abstractions used for compositional verification of heap-manipulating programs [9, 10, 50].
In our setup, we are interested in parallelising executions of a family of single-contract transactions over a state-space $\Sigma$ of a contract, collectively represented by a function $F_x : \Sigma \rightarrow \Sigma$. Here, $x$ denotes a vector of user inputs, i.e., specifying which contract’s function to call, as well as its inputs. Two transactions identified by different user inputs $x_1$ and $x_2$ commute iff for any state $\sigma$, $F_{x_1}(F_{x_2}(\sigma)) = F_{x_2}(F_{x_1}(\sigma))$.
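To make the definition concrete, the following OCaml toy (our own simplification: a state is a map from account names to integer balances, and transfers are total functions) checks the commutation equation for two sample transactions on a sample state:

```
module M = Map.Make (String)

(* Transfer v units from src to dst; absent accounts default to 0. *)
let transfer ~src ~dst ~v (s : int M.t) : int M.t =
  let get m k = Option.value ~default:0 (M.find_opt k m) in
  let s = M.add src (get s src - v) s in
  M.add dst (get s dst + v) s

(* F_x1 (F_x2 s) = F_x2 (F_x1 s), checked on one concrete state. *)
let commute f g s = M.equal ( = ) (f (g s)) (g (f s))

let () =
  let s = M.of_seq (List.to_seq [ ("A", 100); ("C", 50) ]) in
  let tx1 = transfer ~src:"A" ~dst:"B" ~v:10 in  (* transfer_A(B, 10) *)
  let tx2 = transfer ~src:"C" ~dst:"A" ~v:20 in  (* transfer_C(A, 20) *)
  assert (commute tx1 tx2 s)
```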
In order to enable parallelism, our goal is to identify a commutative, associative, and partial operation $\triangledown : \Sigma \rightarrow \Sigma \rightarrow \Sigma$, such that for any $\sigma_1$, $\sigma_2$, and $x$, if $F_x(\sigma_1)$ is defined (i.e., $\sigma_1$ contains at least the footprint of $F_x$) and $\sigma_1 \triangledown \sigma_2$ is defined, then $F_x(\sigma_1 \triangledown \sigma_2) = F_x(\sigma_1) \triangledown \sigma_2$. This equality is referred to as action locality in the program logics literature [10], and, when it holds, it enables compositional program analyses [21] and concurrency specifications [44].
The virtue of $F_x$’s locality for our purposes becomes apparent by observing the following chain of equalities for $\sigma = \sigma_1 \triangledown \sigma_2$, when $\sigma_1$ and $\sigma_2$ are such that $F_{x_1}(\sigma_1)$ and $F_{x_2}(\sigma_2)$ are defined:
$$F_{x_1}(F_{x_2}(\sigma)) = F_{x_1}(F_{x_2}(\sigma_1 \triangledown \sigma_2)) = F_{x_1}(\sigma_1 \triangledown F_{x_2}(\sigma_2))$$
$$= F_{x_1}(\sigma_1) \triangledown F_{x_2}(\sigma_2)$$
$$= F_{x_2}(F_{x_1}(\sigma_1 \triangledown \sigma_2)) = F_{x_2}(F_{x_1}(\sigma))$$
This reasoning demonstrates the desired commutativity, and also provides a recipe for computing the final result in a divide-and-conquer fashion by taking it to be $F_{x_1}(\sigma_1) \triangledown F_{x_2}(\sigma_2)$; the order does not matter, as $\triangledown$ is commutative and associative. One can think of $\triangledown$ as both the “logical split and join” operations, while a footprint of a transaction executing $F_x$ is the minimal part $\sigma'$ of the contract state which must be owned by the shard executing it, so that $F_x(\sigma')$ is defined.
Getting back to our motivating example of ERC20 sharding, Strategy 1 corresponds to $\triangledown$ taken as a disjoint union of the entry sets of the contract’s mapping fields (let’s call it OwnOverwrite). Strategy 2 corresponds to $\triangledown$ defined as a non-disjoint union with an implicit split of integer values in map entries—this way in the case of concurrent updates, the result can be obtained by summation of the per-shard portions of those values (we will call it IntMerge).
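A possible rendering of these two joins in OCaml, under the simplifying assumption that a state delta is a finite map from component names to integer values (the function names are ours):

```
module M = Map.Make (String)

(* OwnOverwrite: disjoint union of entry sets; undefined (None) if the
   two deltas touch the same component. *)
let own_overwrite (d1 : int M.t) (d2 : int M.t) : int M.t option =
  let clash = ref false in
  let joined = M.union (fun _ _ _ -> clash := true; None) d1 d2 in
  if !clash then None else Some joined

(* IntMerge: non-disjoint union in which concurrent integer deltas to
   the same component are summed up. *)
let int_merge (d1 : int M.t) (d2 : int M.t) : int M.t =
  M.union (fun _ v1 v2 -> Some (v1 + v2)) d1 d2

let () =
  let d1 = M.singleton "bal[B]" 10 and d2 = M.singleton "bal[B]" 15 in
  assert (own_overwrite d1 d2 = None);            (* footprints overlap *)
  assert (M.find "bal[B]" (int_merge d1 d2) = 25) (* deltas added up    *)
```

Both operations are commutative and associative up to map equality, which is what makes the order of joining per-shard results irrelevant.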
### 2.4 Pragmatic Considerations and Technical Setup
Ethereum is the most popular smart contract platform, and a number of other blockchain ecosystems also use the Ethereum Virtual Machine (EVM). While it would be desirable to implement our ideas directly in Ethereum, unfortunately, its infrastructure is currently unsuitable for our purposes:
---
2Readers familiar with state-of-the-art program logics for concurrency can recognise that we are looking for a suitable Partial Commutative Monoid (PCM) [10, 34, 44], which would enable framing of contract operations $F_x$.
---
**Protocol-level Support.** At the time of writing, the available prototype of Ethereum 2.0 [64] does not support cross-shard transactions or smart contracts.
**Language-level Support.** EVM bytecode, Ethereum’s low-level language, is difficult to analyse soundly, due to the lack of modularity and structured control flow [65]. One could engineer an analysis inferring ownership constraints for Solidity [25], a high-level Ethereum language, by possibly decompiling EVM contracts [29]. However, a number of Solidity’s features (e.g., inter-contract calls) and unpredictable performance of EVM decompilers make it a challenging target for an efficient sound static analysis [33].
Bringing Ethereum infrastructure to the state necessary to deploy the described ideas would be an effort going well beyond the scope of this paper. Instead, we implemented our approach as a static analysis for Scilla [57]—a strongly-typed ML-style language for smart contracts. Scilla is supported natively (via a definitional interpreter) by an industry-scale blockchain [45, 67] that (a) provides rudimentary infrastructure for sharding, (b) is available open-source, and (c) is widely used and contains dozens of contracts implemented in Scilla by users and available for evaluating our approach.
**3 CoSplit Analysis in a Nutshell**
**3.1 The Language**
Scilla [57] is a minimalistic memory- and type-safe functional language, similar to OCaml and Haskell, designed for an account-based (i.e., Ethereum-style) smart contract model. Scilla provides a very small set of state-manipulating primitives for altering contract state (i.e., reading from the blockchain state and changing the values of contract fields). Its pure (i.e., side effect-free) fragment corresponds to System F [27, 54] without recursion (but with bounded iteration). All of the standard library, as well as user-defined contract-agnostic computations, is implemented in Scilla as pure functions. This design choice removes the need for inter-contract calls for the sake of code reuse and makes contract analysis scale, as pure functions need to be analysed only once.
Contracts in Scilla are encoded as communicating state-transition systems in the style of I/O-automata [46]. That is, all interaction between contracts is done by means of message passing. A contract’s state changes as a result of executing its transitions as reactions to received messages from the users or other contracts. While transitions in Scilla contracts are similar to functions in Solidity, they provide stronger encapsulation and atomicity guarantees, in particular, disallowing reentrancy. This model allows one to analyse each contract’s transitions in isolation from any other contract’s code, thus making it possible to derive their signatures statically without over-approximating the effects of external calls.
Figure 5. FungibleToken Transfer transition in Scilla.
Figure 6. Components of CoSPLIT abstract domain.
**3.3 State Footprints**
Fig. 6 shows the CoSplit abstract domain. The first stage of the analysis computes an over-approximation of the state footprints of contract transitions, expressed as a set of effects (denoted \( \epsilon \)). Effects describe how the transition interacts with the blockchain state. For instance, the AcceptFunds effect (contributed by the accept statement) changes both the contract’s and the sender’s native token balances.3 Similarly, the SendMsg effect (contributed by send) might invoke transitions of other contracts or send native tokens. Finally, the Read and Write effects describe which portions of the contract’s own state may be accessed by the transition. For each transition, CoSplit iterates over every statement in the transition’s code and determines the static over-approximation of that statement’s effect. In some cases, this over-approximation is the uninformative effect \( \top \). Due to Scilla’s design, the translation between statements and effects is direct. As an example, we show the analysis rules for map reads and writes, which can be found in the top box of Fig. 7. These rules are applied when analysing lines 2, 9, and 14 in Fig. 5. The parts shown in grey boxes, including the rule for the non-effectful \( \text{BIND} \) statement, appear due to the second stage of the analysis, explained below.
The \( \text{MAPGET} \) and \( \text{MAPUPDATE} \) rules extend the transition’s summary \( \Sigma \) with the appropriate Read or Write effect, which identifies the portion \( f \) (for field) of the contract state that is operated on. For map accesses, \( f \) includes the name of the map and the symbolic names of keys \( l_k \) used to index into the map. For accesses to non-map contract fields, only the name of the field is included.
Whereas contract fields can always be described, the keys used to index into a map can be the result of a computation and may even depend on contract state. As such, we only assign an informative effect \( \epsilon \) to accesses when the keys used to index into the map are transaction parameters, as they are in Fig. 5. Moreover, for nested maps, we require that the access is bottom-level, i.e., it touches a primitive value rather than a map. These constraints are captured by \( \text{CanSummarise} \).
If the access cannot be summarised, the \( \top \) effect is given. Generally, an access can be statically described whenever the keys do not depend on the contract state, but we limit ourselves to keys that are transition parameters to simplify transaction dispatch, which is described in Sec. 4.3.
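The gist of this stage can be captured by a small OCaml model (type and function names are ours; the real analysis works over Scilla’s typed AST):

```
type key = Param of string | Computed   (* provenance of a map key *)

type eff =
  | Read of string * string list    (* field, symbolic parameter keys *)
  | Write of string * string list
  | Top                             (* uninformative effect *)

(* CanSummarise: every key is a transition parameter, and the access
   reaches a bottom-level (non-map) value. *)
let can_summarise ~bottom_level keys =
  bottom_level
  && List.for_all (function Param _ -> true | Computed -> false) keys

let summarise_access ~is_write ~bottom_level field keys : eff =
  if can_summarise ~bottom_level keys then
    let ks = List.filter_map (function Param p -> Some p | Computed -> None) keys in
    if is_write then Write (field, ks) else Read (field, ks)
  else Top

let () =
  (* balances[_sender] := ... — the key is a transition parameter *)
  match summarise_access ~is_write:true ~bottom_level:true
          "balances" [ Param "_sender" ] with
  | Write ("balances", [ "_sender" ]) -> ()
  | _ -> assert false
```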
**3.4 Contribution Types**
The summaries produced so far are sufficient to enable the disjoint state ownership sharding strategy. But, as we have seen, we can execute in parallel even transactions with effects over the same state, as long as the effects commute. The second stage of the analysis annotates the effects produced by the first stage with contribution types (denoted \( \tau \)), which help determine whether they commute. Concretely, for every expression \( e \), CoSplit computes a contribution type: an over-approximation of the set of arithmetic operations and of the set of contract state components, as of the beginning of the transition execution, that contribute to \( e \)’s result.
The type \( \tau \) (cf. Fig. 6) ascribed to an expression \( e \) records which contribution sources, i.e., parts of the contract state (Field \( f \)), transition parameters/constants (Const \( c \)), and function parameters (Formal \( i \)), flow into \( e \)’s result, what operations are applied to those sources, and how many times each source contributes to \( e \)’s result. The precision component \( p \) in types records whether over-approximation of the set of operations has taken place due to joining control flows, i.e., whether the analysis has lost precision. This lets us answer questions like “Can the transition’s effect on field \( f \) be represented as an addition of a constant to \( f \)’s old value?” If the \( \tau \) ascribed to the value written to field \( f \) has as its only exact contribution Field \( f \mapsto (1, \text{Builtin add}) \), along with some constants and transition parameters, the answer is yes.

Figure 8. Set of effects of the Transfer transition:
- Condition(balances[_sender], amount)
- Write(balances[_sender], ⟨amount and balances[_sender] contribute once each, via sub⟩)
- Write(balances[_to], ⟨amount and balances[_to] contribute once each, via add⟩)
- SendMsg(funds = zero; destination = _to)
- SendMsg(funds = zero; destination = _sender)
**The Importance of Cardinalities.** The most important component in the type is the cardinality of Field \( f \). If \( f \) did not show up in \( \tau \), then the written value would be constant, and different writes would not commute, as they might have potentially different values (e.g., different transition parameters). Conversely, if \( f \)’s contribution were non-linear, i.e., its cardinality were \( \omega \) (“many”), then even though \( f \) is modified by a commutative operation (addition), the effect would not commute. For example, the linear function \( f(x) = x + 1 \) commutes with \( g(x) = x + 2 \), but not with \( h(x) = x + x + 1 \), as \( f(h(a)) \neq h(f(a)) \). The linearity (“used-once”) information attached to contribution sources lets us ensure that operations are used in ways that guarantee commutative effects. We lift the \( \odot \) operator on cardinalities to types by adding the cardinalities of matching sources and set-unioning their operations, ascribing to the result the \( \sqcup \) of their precisions. For the \( \otimes \) operator, which is defined only between a type and a single contribution, we multiply the cardinalities of the arguments and modify the other components analogously to the \( \odot \) lifting.
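A compact OCaml model of the cardinality arithmetic (our rendering of the three-point domain \( \{0, 1, \omega\} \); the exact corner-case behaviour of the paper’s operators is our assumption):

```
type card = Zero | One | Many   (* Many models the paper's omega *)

(* Point-wise addition: a source contributing once in each of two
   combined expressions is used twice overall, hence Many. *)
let add_card a b =
  match a, b with
  | Zero, c | c, Zero -> c
  | _ -> Many

(* Multiplication, used when a contribution is nested inside another. *)
let mul_card a b =
  match a, b with
  | Zero, _ | _, Zero -> Zero
  | One, One -> One
  | _ -> Many

let () =
  (* h(x) = x + x + 1 uses x twice: its cardinality is Many, so the
     resulting write is non-linear and hence non-commutative. *)
  assert (add_card One One = Many);
  assert (mul_card One One = One)
```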
**Computing Contribution Types.** The values read from the mutable contract state, literals, and transition and contract parameters are all contribution sources. For example, the Literal and MapGet rules in Fig. 7 show how new contribution sources are introduced. A read into a binder \( i_t \) from a location that was not overwritten extends the typing context \( \Gamma \) by giving \( i_t \) the linear contribution type \( \tau \) shown in the rule. The type shows that the value of \( i_t \) is the value of the respective “pseudo-field” \( k_t \) (i.e., a map entry) at the beginning of the transition execution. Contributions from multiple sources are combined, with their cardinalities added up point-wise via the \( \odot \) operator (cf. Fig. 6). For example, the contribution type of a match expression is computed using the MatchC operator:
\[ \text{MatchC}(x, \tau_x, \overline{\text{pat}}, \overline{\tau}) \triangleq \tau_{\text{cond}} \odot \bigsqcup \overline{\tau}, \quad \text{where } \tau_{\text{cond}} \triangleq \begin{cases} \bot & \text{if matching on } x \text{ merely peels off a constructor} \\ \text{AdaptC}(\tau_x) & \text{otherwise} \end{cases} \]
The additional contribution \( \tau_{\text{cond}} \) accounts for whether matching over the scrutinee \( x \) induces non-trivial data flow (in which case its contribution is determined via AdaptC), or simply “peels off” a constructor of an option value (in which case it has no contribution). The latter is a very common special scenario (see, e.g., lines 11–13 of Fig. 5), and without this machinery the analysis would lose too much precision.
**3.5 Calculating Sharding Signatures**
From the set of transition summaries of a given contract (one such summary is shown in Fig. 8), and provided with user input as to which transitions to attempt to shard and which fields can be treated weakly for reading (cf. Sec. 4.2.3), CoSplit derives a sharding signature, consisting of a set of constraints \( \pi \) for each transition in the contract and a join operation \( \psi_f \) for each field \( f \).
(constraint) \( c \) ::= Owns(\( f \)) | UserAddr(\( x \)) | NoAliases(\( x, y \)) | SenderShard | ContractShard | \( \bot \)
(join) \( \psi \) ::= OwnOverwrite | IntMerge
| Effect | Constraint |
|---|---|
| SendMsg(\( \top \)) | \( \bot \) |
| AcceptFunds | SenderShard |
| SendMsg(funds \( \neq \) 0, _) | ContractShard |
| SendMsg(funds = 0, destination = \( x \)) | UserAddr(\( x \)) |
| Read/Write(\( m[x] \)), Read/Write(\( m[y] \)) | NoAliases(\( x, y \)) |
Algorithm 3.1: Derive Sharding Signature
```
input : effect summaries, selected transitions, weak reads
output: transition (ownership) constraints, field join operations
Σ  ← effect summaries of selected transitions
wr ← reads the user accepted might be stale
cfs ← GetConstantFields(Σ)
foreach summary s ∈ Σ do
    s ← s.remove(Read(f)) for all f ∈ cfs
    s ← s.MarkConstantsInTypes(cfs)
    lcws[s] ← GetTransitionCommWrites(s)
cws, joins ← TryConsolidateJoinsGlobally(lcws)
Σ ← Σ.RemoveSpuriousReads(cws)
if joins ≠ ∅ ∧ wr ⊇ StaleReads(Σ, joins) then
    oc ← {}
    foreach summary s ∈ Σ do
        c ← GenEnvironmentConstraints(s)
        foreach Read(f) ∈ s do c ← c ∪ {Owns(f)}
        foreach Write(f) ∈ (s \ cws) do c ← c ∪ {Owns(f)}
        oc ← oc ∪ {c}
    return (oc, joins)
```
The top part of Fig. 9 enumerates the constraints that summaries can impose, as well as the join operations we currently support. Constraints are static symbolic representations of conditions that must be satisfied at runtime. They refer to mutable fields or transition parameters as symbolic values, e.g., Owns(\( f \)) and UserAddr(\( x \)). For instance, the Owns(balances[_sender]) constraint denotes that a shard executing the transition must own the _sender portion of the balances state component, where _sender is replaced at runtime by the actual value given by the transaction. The other constraints are imposed by the blockchain environment, e.g., SenderShard, which must be satisfied if the contract accepts funds, or arise as preconditions for the soundness of our analysis, e.g., that keys used for map accesses do not alias. \( \bot \) corresponds to an unsatisfiable constraint, meaning that the transition cannot be executed in parallel with other transactions over the same contract.
Algorithm 3.1 shows the procedure for deriving the sharding signature. The contract developer selects a set of transitions that should be executed in parallel, and the algorithm inspects their summaries to determine what constraints must be satisfied to enable parallel execution. First, it identifies which (if any) contract fields are not written to in the selected transitions, marks their reads as non-effectual (i.e., removes them from the summary), and marks their contributions as constant. Second, looking at each summary in isolation, it identifies, using the types, which writes have a commutative effect, e.g., Write(balances[_to]) in Fig. 8. Then, it determines whether a join operation exists for every field, i.e., whether the writes across transitions are compatible, and marks reads that only flow into commutative writes, e.g., Read(balances[_to]) (since balances[_to] does not appear in any other type), as non-effectual. Finally, if the developer accepts that reads from fields that are commutatively written to might return stale data (cf. Sec. 4.2.3), the algorithm translates effects into constraints via the mapping in Fig. 9, requiring ownership of every field that is read or non-commutatively written to.
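The final translation step (the mapping in Fig. 9) is essentially a per-effect case analysis. A hedged OCaml sketch, with constructor names of our own choosing:

```
type constr =
  | Owns of string
  | UserAddr of string
  | SenderShard
  | ContractShard
  | Unsat                          (* the ⊥ constraint *)

type eff =
  | Read of string
  | CommWrite of string            (* write with a commutative effect *)
  | NonCommWrite of string
  | AcceptFunds
  | SendMsg of { funds_zero : bool; dest : string option }
  | Top

(* Reads and non-commutative writes demand ownership; commutative
   writes need none; an unsummarised effect makes the transition
   unshardable. *)
let constraints_of_effect : eff -> constr list = function
  | Read f | NonCommWrite f -> [ Owns f ]
  | CommWrite _ -> []
  | AcceptFunds -> [ SenderShard ]
  | SendMsg { funds_zero = false; _ } -> [ ContractShard ]
  | SendMsg { funds_zero = true; dest = Some x } -> [ UserAddr x ]
  | SendMsg { funds_zero = true; dest = None } | Top -> [ Unsat ]
```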
**4 Enabling Parallelism with CoSplit**

In this section, we show how the signatures inferred by CoSplit, as described in Sec. 3, can be used to allow for parallel transaction execution in a sharded blockchain.
**4.1 The Sharding Model**
We integrated CoSplit with the Zilliqa blockchain [67], one of the first sharded chains in production. It implements the Elastico protocol for secure sharding [45] and relies on an optimised version of the Practical Byzantine Fault Tolerance (PBFT) protocol for consensus in the network [12, 61]. At the time of writing, the Zilliqa mainnet has processed 9.6 million transactions and contains 28 types of unique smart contracts (some of them have many deployed copies). Below, we outline the relevant parts of its architecture and transaction processing logic, referring the reader to the corresponding manuscripts for details (on, e.g., security and epoch-based mining) [45, 67], which are not critical for our presentation.
Network Architecture. The Zilliqa network consists of three main components: the lookup nodes, the shards, and the Directory Service (aka the DS committee) (cf. Fig. 10).
Lookup nodes are the entry-point to the network. Any transaction created by a user has to be sent to the lookup nodes, which thereupon group several transactions together in a packet and dispatch them to one of the shards or the DS committee for processing. Each shard (and similarly the DS committee, which in fact is a special shard) stores the full blockchain state and runs PBFT to reach consensus on validated transactions. It then proposes a MicroBlock (MB) that contains information on the transactions that it has processed. MicroBlocks are then sent to the DS committee together with a StateDelta (SD) which encodes the changes in the state of the accounts that were touched by the transactions within a MicroBlock. Once all the MicroBlocks and the corresponding StateDeltas reach the DS committee, the latter combines them all in the form of a FinalBlock (FB) and a FinalStateDelta (FSD). The FinalBlock and FinalStateDelta are then sent back to each shard so that all the shards have the same view of the full global state—Zilliqa shards transaction execution, but not state storage.
The Default Sharding Strategy. A client-issued transaction can be processed either by one of the shards or by the DS committee (Fig. 10). Zilliqa employs a simple deterministic transaction assignment strategy to shards to ensure that double spends are detected within a shard without complex cross-shard communication [39]. User-to-user payment transactions are deterministically assigned to shards based on the sender’s address. That is, all transactions from the...
same user get handled in the same shard, so any double spend from a specific user can be detected within a single shard in the same way it gets handled in a non-sharded architecture.
For smart contracts, Zilliqa implements an inefficient, conservative strategy. Specifically, the network statically assigns both contracts and end users to shards. Transactions to a contract invoked by users residing in the same shard as the contract are handled within the shard, while transactions to a contract invoked by users from an outside shard are handled in the DS committee. To ensure that shards and the DS do not end up manipulating the state of the same contract concurrently, the protocol requires the DS committee to process transactions assigned to it only after the shards have finished processing their transactions.
Given this simple deterministic assignment, the parallelism achieved for smart contract transaction processing is quite limited. In fact, the more shards there are, the more transactions will need to be processed by the DS committee.
**4.2 Revising the Account-Based Blockchain Model**
In order to employ the described sharding model for CoSplit-enabled parallelism, we need to revise a few core aspects in the design of Ethereum-style blockchains.
4.2.1 Relaxing the Nonce Mechanism. Ethereum’s account-based model (adopted by Zilliqa and similar systems) uses the nonce mechanism for defining a total order on all transactions emitted by a particular user. Nonces are calculated by counting the number of transactions sent from a user address and are digitally signed, addressing the following design aspects: (a) strict, gap-free, user-defined ordering of transactions, and (b) prevention of replay attacks. Thanks to the nonces, the user can send many transactions with subsequent numbers, and they are going to be processed in this exact order—the protocol will not process a transaction with a nonce \( n + 1 \) before the one with nonce \( n \).
Because of aspect (a), nonces pose a bottleneck to sharded executions. While in plain Zilliqa all transactions from a single user are guaranteed to be handled in the same shard, the nonce mechanism prevents parallel execution of transactions with the same origin in different shards, as the total order of nonces cannot be communicated. We notice that, in practice, no applications rely on a specific order of user transactions before they are committed by the protocol.\(^6\) Therefore, it suffices for transactions to be processed in an increasing nonce order, without waiting for all “gaps” to be filled, treating them similarly to ballots in Paxos [41]. This relaxation requires a very small change in the protocol logic. With it, we retain aspect (b) of the nonce mechanism, while allowing for parallel executions. For instance, this way we can execute in parallel two disjoint sets of commuting transactions from the same user with nonces \( \{1, 3, 5\} \) and \( \{2, 4\} \), respectively.
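A sketch of the relaxed rule in OCaml (our model; the real protocol tracks nonces per account in the blockchain state): a nonce must be strictly greater than the last applied one for its sender, which permits gaps while still rejecting replays.

```
(* last_applied maps a user's address to the highest nonce applied so
   far in this shard; any nonce at or below it is a replay and is
   rejected, but gaps (e.g., applying 3 right after 1) are allowed. *)
let try_apply (last_applied : (string, int) Hashtbl.t) ~user ~nonce =
  let last = Option.value ~default:0 (Hashtbl.find_opt last_applied user) in
  if nonce > last then (Hashtbl.replace last_applied user nonce; true)
  else false

let () =
  let t = Hashtbl.create 16 in
  (* one shard handles nonces {1; 3; 5} without waiting for 2 and 4 *)
  assert (List.for_all (fun n -> try_apply t ~user:"A" ~nonce:n) [ 1; 3; 5 ]);
  assert (not (try_apply t ~user:"A" ~nonce:3))  (* replay is rejected *)
```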
4.2.2 Parallel Gas Accounting. Gas accounting is a mechanism to charge users for executing transactions [65]. Such deductions must be treated sequentially to avoid overspending. We circumvent this bottleneck to parallelism by splitting a user’s balance across shards (with a larger fraction given to the shard handling money transfers from that user), so gas costs can be charged without coordinating balance changes.
4.2.3 Weak Reads. In the Transfer transition (Fig. 8), we saw that the write into balances[_to] has a commutative effect. As a result, the processing shard does not need to own the field to execute the transition. However, allowing commutative writes means that transitions executing in a different shard might read stale values of balances[_sender]. In our example, this is fine—the sender may have more tokens than she thinks. Yet, in general, introducing commutative writes weakens the semantics of reads, and a static analysis cannot determine whether this is “fine” for specific contracts. As a rule of thumb, a read can safely be marked as weak if the contract semantics is “monotone” in the corresponding value with respect to some lattice (as in [40]), i.e., the behaviour of the contract is not affected if a higher value is read, and other shards can increase the value, but not decrease it. Ideally, the programming language itself would allow contract developers to mark certain reads as weak, but neither Scilla nor Solidity, both designed with sequential semantics in mind, currently has this feature. For now, we require that weak reads be provided as input to Algorithm 3.1.
**4.3 CoSplit in Action**
CoSplit is used in two modes: offline and online. In the former, a user who is about to deploy her contract to the blockchain provides hints to the tool in order to choose the most suitable sharding signature (Fig. 11). In the latter, CoSplit is run automatically by the miners as a part of the validation pipeline for contracts proposed for deployment.
\(^6\)Furthermore, the UTxO blockchain model adopted by, e.g., Bitcoin [48] promotes this kind of weak notion of consistency, in which the user cannot predict the order in which her transactions are committed.
We implemented CoSplit in OCaml as a pluggable checker, adding an optional phase to the existing Scilla type checking pipeline. Put together, the analysis, query solver, transaction dispatcher, and state delta merger measure 2900 lines of OCaml code. All interaction between CoSplit and the nodes of the Zilliqa network happens via JSON-RPC; that is, the approach can be reused by any other system that provides a way to serialise/deserialise the state of Scilla contracts.

In our evaluation of CoSplit, we focus on two aspects of the tool: the quality of the analysis for sharding signatures (Sec. 5.1) and the impact of using the signatures on the system throughput when executing transactions to popular contracts (Sec. 5.2) from the Zilliqa blockchain.7

**5.1 Evaluating the Analysis**

5.1.1 Analysis Performance. All Scilla contracts, upon deployment to the blockchain, are validated by the miners that are forced to parse their code and run the type-checker. We ran the contract deployment pipeline (parsing,
5.1.2 Analysis Efficacy.
The bar chart on the right summarises the number of transitions (from 1 to 18) for our 49 contracts. While it may be more likely that a contract with a large number of transitions can be sharded efficiently, having many transitions might also indicate having complex logic, making it difficult to infer a useful signature. To quantify the efficacy of the analysis, we introduce some new terminology.
Definition 5.1 (Hogged fields). A contract’s transition $T$ hogs a field $f$ in a sharding signature $sg$ iff $sg$’s ownership constraints require a shard to fully own $f$ to execute $T$.
Definition 5.2. A sharding signature $sg$ is good enough (GE) for its selection of $k$ contract transitions, iff either
- $k = 1$ and the selected transition does not hog fields, or
- $k > 1$ and any field is hogged by at most one transition.
Intuitively, a sharding signature is good enough if it allows for the existence of a contract state, in which some $k$ of the transitions can be run in parallel by different shards. Fig. 13a shows the sizes of the largest good enough signatures for our contract selection. It is worth noting that a larger GE signature (in terms of the number $k$) might perform worse under real-world load than one with a smaller $k$, which shards different but more frequently used transitions. The following definition outlines the signatures worth comparing.
Definition 5.3 (Maximal GE signature). A GE sharding signature is maximal if its selection of transitions is not a proper subset of some other GE signature’s selection.
A contract might have a number of maximal signatures of various sizes. The plot in Fig. 13b depicts those numbers for our contracts. Computing the maximal signatures at mining time is impractical, as it requires making $\sum_{k=1}^{n} \binom{n}{k}$ queries to the sharding solver (Fig. 11). Luckily, this computation can be done offline by a contract implementer, who decides prior to deployment which of the signatures to propose. The miners need to validate only that one signature.
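For illustration, the offline enumeration can be written as a brute-force search in OCaml, assuming an oracle `is_ge` that stands in for a query to the sharding solver (this sketch is ours and deliberately exponential, matching the cost estimate above):

```
(* All subsets of the list of transition names. *)
let rec subsets = function
  | [] -> [ [] ]
  | x :: rest ->
      let ss = subsets rest in
      ss @ List.map (fun s -> x :: s) ss

(* Keep the GE selections that are not strictly contained in another. *)
let maximal_ge ~(is_ge : string list -> bool) transitions =
  let ge = List.filter (fun s -> s <> [] && is_ge s) (subsets transitions) in
  let subset_of a b = List.for_all (fun x -> List.mem x b) a in
  List.filter
    (fun s -> not (List.exists (fun s' -> s <> s' && subset_of s s') ge))
    ge

let () =
  let is_ge s = List.length s <= 2 in  (* stand-in for the solver query *)
  let ms = maximal_ge ~is_ge [ "Transfer"; "Mint"; "Burn" ] in
  assert (List.length ms = 3)          (* the three two-element selections *)
```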
These findings suggest that CoSplit indeed uncovers many opportunities for parallel execution of smart contracts.
**5.2 Evaluating Sharded Executions**
We evaluate the integration of CoSplit with Zilliqa by measuring the impact on throughput for five representative Scilla contracts. The chosen contracts are either (1) the most popular contracts on Zilliqa (e.g., UD), or (2) equivalents of their popular counterparts on Ethereum, both current (e.g., FungibleToken and NonfungibleToken) and past (e.g., Crowdfunding).
The table above summarises contract sizes, number of transitions, largest GE strategy, and the number of maximal GE signatures. We set out to answer two main questions:
- What throughput improvement, in terms of transactions per second, can CoSplit help to achieve (Sec. 5.2.1)?
- What is the impact of the overheads imposed by CoSplit-enabled sharding (Sec. 5.2.2)?
**Experimental Setup.** To obtain the throughput figures, we deployed small-scale testnets in various configurations on Amazon EC2 containers. Each node runs on a t2.medium machine with 2 logical CPUs and 4GB of RAM, running Ubuntu 16.04. These specifications reflect the minimum requirements needed to run a Zilliqa node. For our benchmarks, we fix the shard size to be 5 nodes per shard and measure the effect on throughput of increasing the number of shards. We use the same shard and DS gas limits as the Zilliqa mainnet.
**Selection of Sharding Signatures.** We deploy each of the five contracts in two configurations, one with no sharding information (baseline), and one with a “reasonable” sharding signature, informed by expected usage of the contract. For our experiments, we make the choices as follows:
- For FungibleToken (FT, Zilliqa’s ERC20), we shard the Mint, Transfer and TransferFrom, but not IncreaseAllowance, Burn, or other administrative transitions.
- For NonfungibleToken (NFT, Zilliqa’s ERC-721 [59]), we shard Mint and Transfer (which includes transfer-from functionality), but not Burn and Approve.
- For ProofIPFS, we shard the transition that notarises a hash, but not the one that removes it from the contract.
- For the Unstoppable Domains (UD) registry, we shard granting a new domain name and updating the record associated with a name, but not transfers of ownership.
- For the Crowdfunding contract, there is only one possible choice: to shard the Donate and Claimback (if the goal was not reached) transitions.

We argue that our choices reflect what a reasonable contract deployer would select and, as such, the measured throughput reflects probable scenarios.
We remark that we had to slightly rewrite the NFT and UD contracts, compared to their mainnet definitions, to make them shardable. These rewrites did not affect the semantics of the contracts. We discuss the details of the changes, as well as the potential to automate such modifications, in Sec. 6.
5.2.1 Measuring Throughput. After deployment, we subject the contracts, in sequence, to different workloads sustained over 10 epochs (roughly 8.5 minutes) and measure the resulting throughput. As Fig. 14 shows, for most of the workloads in the benchmark, we obtain a roughly linear TPS increase as the number of shards goes up. The two exceptions are the “FT fund” and the “ProofIPFS register” workloads. The former transfers fungible tokens from a single source to multiple destinations (all transactions go to the source’s shard). The latter notarises a hash, but also keeps a list of notarised items for each user, and thus accesses two separate fields, which typically will be owned by different shards, hence many transactions need to be processed by the DS Committee. We note that for workloads that do not shard well, performance does not degrade as compared to the baseline, and in some cases (e.g., ProofIPFS) marginally improves.
The “FT transfer” workload sends tokens from random sources to random destinations. In the baseline configuration, the throughput is the same as for the single-source workload. The CoSplit-empowered sharding strategy, on the other hand, fully utilises the shards’ processing capacity and we get an almost linear increase in throughput as the number of shards increases. A similar effect arises for crowdfunding donations. Interestingly, the “NFT mint” workload (which creates new tokens) is also single-source, just like “FT fund”. However, the relevant transition does not affect state depending on the identity of the transaction sender, but only on the identifier of the created token. As such, we can obtain linear scaling even for a single-source workload. This is only possible because of the changes to the account-based model that we detailed in Sec. 4.2. Finally, Unstoppable Domains is the most popular smart contract on the Zilliqa mainnet, accounting for over half of the smart contract executions. We manage to shard the most popular transitions on this contract, which account for 90% of usage, and show linear increases in throughput for them as well.
5.2.2 Introduced Overheads. Integrating CoSplit adds overheads to transaction dispatch and state delta merging. Concretely, we see transaction dispatch time increase from an average of 8 µs to an average of 475 µs, and state delta merging time increase from 0.8 µs to 48.65 µs per changed state field. This amounts to a roughly 60× slowdown of these operations, mostly as a result of serialisation and deserialisation costs, but it is fully justified by the resulting increase in overall system throughput. The overall performance gain comes from the fact that applying a delta is much faster than executing all the transactions that resulted in it. For instance, for FungibleToken, the effects of 50 seconds of transaction execution time can be merged in roughly 0.5 seconds.
5.2.3 Ownership versus Commutativity. In terms of the contribution of the two sharding strategies to the throughput improvement, we observe that contracts that manipulate non-fungible quantities (e.g., NFTs, domains, notary) benefit from the disjoint state ownership analysis and contracts that manipulate fungible quantities (e.g., FT) benefit from the commutativity analysis. Contracts that have a mixture of non-fungible and fungible quantities, e.g., a voting contract
that keeps track of who voted and of the total number of votes, can benefit from both strategies.
**6 Discussion and Future Work**
**Handling Integer Overflows.** CoSplit’s signature inference does not take possible integer overflows into account. Overflows (and underflows) may cause a problem when IntMerge is used to join state deltas from different shards that individually do not cause an overflow, but do so when joined. At the moment, our implementation ignores this issue. A working solution would be to modify the Scilla interpreter, providing it with information about the number $N$ of shards. Specifically, it should perform additional post-hoc validation of a transaction, which will fail if the difference between the initial (per-epoch) value $z$ of any affected integer-valued component and its value after the transaction is executed exceeds $\left\lfloor \frac{\mathrm{MAX\_INT} - z}{N} \right\rfloor$. The information about such components is already available in the sharding signatures. Furthermore, a user might be given an option to pay a higher gas fee in order to reduce the risk of her transaction being rejected due to this conservative check. Such transactions will be routed directly to the DS committee and, thus, processed sequentially.
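A sketch of the proposed check in OCaml (the function name and its signature are ours; the symmetric underflow case is elided):

```
(* With N shards, a single shard may increase an integer component by
   at most floor((max_int - z) / N) over its epoch-initial value z, so
   that the joined per-shard deltas cannot overflow. *)
let validate_delta ~num_shards ~epoch_initial:z ~new_value =
  new_value - z <= (max_int - z) / num_shards

let () =
  assert (validate_delta ~num_shards:4 ~epoch_initial:100 ~new_value:200);
  assert (not (validate_delta ~num_shards:4
                 ~epoch_initial:(max_int - 8) ~new_value:max_int))
```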
**Automated Contract Repair.** As mentioned in Sec. 3.3, the analysis can describe map accesses only if the keys used to index into the maps are transition parameters. During the evaluation, we discovered a small number of contracts that CoSplit cannot shard due to this limitation, but which can be made shardable by a simple refactoring. For instance, an NFT contract’s Transfer transition ensures that the transaction’s sender is authorised by the token owner to initiate the transfer by checking for inclusion in approvals[tokenOwner], where tokenOwner is read from the contract’s state. Such a transition cannot be sharded. However, if we make tokenOwner a parameter and rewrite the transition to check that the supplied value matches the value in the contract’s state before attempting the transfer (akin to compare-and-swap), the transition becomes shardable. In future work, we plan to address this using program repair techniques, suggesting the shardable contract version to the developer before deployment.
**CoSplit and Other Blockchains.** As demonstrated in Sec. 3, the core analysis of CoSplit does not rely on any specific features of Scilla but is easy to implement for it due to the language’s minimalism and restrictions (e.g., very limited set of side effects). Should other account-based blockchains, e.g., Tezos [28] and Ethereum [65], provide a sharded architecture in the future, we believe a similar analysis could be implemented for them as well. The key challenge of developing CoSplit for Michelson, the language of Tezos [63], is in reasoning about its stack-based executions, tracking the provenance and cardinality of pushed and popped values. Ethereum’s EVM could be supported through decompilation into a high-level language [29]. The approach will most likely have to be restricted to contracts with no external calls.
**7 Related Work**
**Sharding Contracts in Blockchains.** Several industry proposals outline approaches for smart contract sharding, yet none of them provide an efficient solution for sharding same-contract transactions. For example, the Elrond protocol moves a smart contract to the same shard where its static dependencies lie [23], which takes at least 5 rounds of consensus for a transaction to be finalised. Harmony [30] allows one to deploy contracts in individual shards, with no cross-shard communication allowed. Ethereum 2.0 proposes a cross-shard yanking scheme where the contract code and data is moved into a shard at runtime [64]. The shard then locks the contract to block any parallel execution of other transactions affecting it. At the time of writing, none of these solutions appear to be fully implemented.
The Chainspace protocol allows for sharded execution of smart contracts (as well as state sharding) by representing state evolution as a directed acyclic graph [1], similar to Bitcoin’s UTxO transaction model. In Chainspace, contracts have to be written in a specific fashion, so their execution happens off the chain. For an unspecified kind of transaction, the authors of Chainspace report linearly increasing throughput of approximately 75 TPS for every two shards of four nodes each [1, Fig. 6]. Importantly, Chainspace does not address the scalability problem with same-contract transactions (which we do): under high contention for the same contract the rate of aborted transactions rises. This is because its protocol, S-BAC (a combination of PBFT [12] and Two-Phase Commit for inter-shard communication), implements a variant of optimistic concurrency control, whereas CoSplit allows for pessimistic (race-free) concurrency.
**Smart Contracts and Concurrency.** Dickerson et al. [18] describe a commutativity-based approach where miners execute smart contracts in parallel locally, using software transactional memory techniques [31, 32]. Unlike our approach that detects possible transaction conflicts pessimistically by means of a static analysis prior to contract deployment, the work by Dickerson et al. exercises the optimistic approach, where conflicts are identified on-the-fly by the miners, with the corresponding executions aborted. It is not clear how to apply these ideas from optimistic concurrency for efficient sharding of general smart contracts in a Byzantine setting, where an adversary can craft cheap (in terms of gas costs) conflict-producing transactions that force the shards to re-execute expensive transactions from the same batch. A more fine-grained identification of conflicts would most probably require an analysis similar to ours, as conjectured in [56]. At the same time, our solution is complementary to Dickerson et al.’s work and other similar approaches. For instance, one can add single-node parallelism on top of CoSplit-enabled sharding, and the analysis can help identify which transactions are guaranteed not to require rollback and re-execution, even within a single shard.
Recent work by Bartoletti et al. makes an observation similar to ours that commutativity (which they dub swappability) of transactions manipulating Ethereum contracts enables their parallel execution [6]. Their syntactic criterion for inferring swappability is, however, more coarse-grained than our analysis, and is based on determining disjointness of transaction footprints, without taking into account commutativity of operations such as addition. As such, their approach would not allow sharding individual non-conflicting updates in an ERC20 contract as we do. Bartoletti et al.’s approach has not been implemented in practice.
**Inferring Commutativity.** Reasoning about commutativity between program parts is an important problem with applications including parallelising compilers [38, 55], speculative execution [31], and race detection [19]. Most existing techniques for inferring commutativity are based on analysing dynamic executions [2, 19, 26] or on solving SMT constraints [5] and, thus, cannot be used efficiently as a part of transaction validation. Our analysis is close in spirit to the work in [55] on static analysis for determining operation commutativity for compile-time parallelisation. Our analysis uses simpler abstractions, allowing it to be implemented in a compositional fashion and to have linear execution cost.
**8 Conclusion**
We presented a new approach to shard the execution of smart contracts in an account-based blockchain model, based on inferring ownership constraints and commutativity for state-manipulating contract operations. In our approach, smart contracts are first processed by a static analysis tool that produces their sharding signatures, which are then used for shard allocation that maximises parallelism. We have demonstrated that our approach, when integrated into the sharded production-scale blockchain, allows for linear scaling of the transaction throughput for a selection of common smart contracts that were considered unavoidable execution bottlenecks in existing blockchain systems.
**Acknowledgments**
We thank the OSDI’20 and PLDI’21 PC and AEC reviewers, and our shepherd, Gustavo Petri, for their valuable feedback on earlier drafts of this paper and the artefact. We thank Aquinas Hobor, Yaoqi Jia, and Prateek Saxena who participated in the discussions on smart contract sharding at earlier stages of this work. We are grateful to Haichuan Lian, Antonio Nicolas Nunez, and Jun Hao Tan, for their help in integrating CoSplit with the Zilliqa protocol, and to Bryan Tan for conducting preliminary experiments. We benefited a lot from discussions and feedback on preliminary versions of this work by Andrea Costea, Kiran Gopinathan, Jacob Johannsen, Vaivvaswatha Nagaraj, and Anton Trunov. Ilya Sergey’s work has been supported by the grant of Singapore NRF National Satellite of Excellence in Trustworthy Software Systems (NSoE-TSS) and by the NUS Crystal Centre.
**References**
GauchoChat: Towards Proactive, Controllable, and Personalized Social Conversation
Hong Wang, Weizhi Wang, Rajan Saini, Marina Zhukova, Xifeng Yan
Department of Computer Science
University of California, Santa Barbara
Santa Barbara, CA 93106
{hongwang600,weizhiwang,rajansaini,mzhukova,xyan}@ucsb.edu
Abstract
In this paper, we introduce GauchoChat, a social bot developed for the Amazon Alexa Prize SocialBot Grand Challenge 5. Leveraging recent advances in generative language models as the primary response generator, GauchoChat introduces three main innovations that lead to proactive, controllable, and personalized conversational interactions, ultimately improving user experience and satisfaction. GauchoChat introduces an LLM-based Promptist to dynamically select a set of prompting strategies based on the current user intent, persona, and emotion, resulting in user-specific responses for high-quality user engagement. Additionally, GauchoChat explores a proactive topic-switching mechanism for transitioning from reactive conversations to proactive engagement with users. The proposed topic-switching module intelligently determines when to switch conversation topics and integrates externally-sourced materials into the conversation. Finally, we developed real-time image retrieval to display image content on multimodal Alexa devices. By implementing these solutions, GauchoChat ensures that its conversations remain engaging, diverse, and well-informed, fostering a proactive dialogue experience. We present the system design and architecture of GauchoChat, as well as evaluations demonstrating its effectiveness.
1 Introduction
Conversational Artificial Intelligence (AI) has been a long-standing area of interest in Natural Language Processing (NLP), and it is regarded as a significant milestone towards Artificial General Intelligence (AGI). In conventional NLP research, constructing a conversational social chatbot (SocialBot) is formulated as the task of creating an open-domain dialogue system (Wang et al., 2022b; Hosseini-Asl et al., 2020; Johnston et al., 2023). However, a practical socialbot is a complex system with multiple sub-modules. Beyond the basic dialogue system module, a socialbot must also manage and retrieve large-scale external knowledge bases to provide richer content, consider the user's emotion during the real-time conversation, and respond emotionally from the perspective of its personified role.
Previous methods in the SocialBot Grand Challenge mainly focused on constructing a neural dialogue response generator, trained on curated dialogue datasets in a fully supervised fashion. Since the introduction of LLMs (OpenAI, 2022, 2023; Chiang et al., 2023; Komeili et al., 2022), general-purpose natural language generation is no longer the major bottleneck in constructing strong social chatbots: LLMs can now be used to guide the user and direct the dialogue rather than merely generate candidate responses.
During SocialBot Grand Challenge 5, the state of the art in conversational AI went through a revolution. Large Language Models (LLMs) enabled by reinforcement learning from human instruction and feedback have come to dominate tasks and applications in human language technology. The emergence of ChatGPT (OpenAI, 2022) demonstrates that constructing a SocialBot now goes beyond the vanilla function of chatting with humans. The next-generation SocialBot is required to engage users with richer content grounded in knowledge bases, provide multi-modal user-bot interaction, and even offer emotional and mental support from the perspective of a friend. To enable such advanced capabilities, we propose GauchoChat as our systematic solution to the Alexa SocialBot Grand Challenge 5. In the following sections, we present the high-level design principles, the system architecture, and the evaluations of the proposed GauchoChat system.
1.1 Design Philosophy and Goals
As the proposed GauchoChat can be regarded as a complex SocialBot System, we will first provide an overview of the Design Philosophy of GauchoChat:
- **General**: Previous chatbots were mainly designed for specific purposes, such as shopping, reservations, daily tasks, and more. In contrast, the proposed GauchoChat system is a general-purpose social chatbot capable of engaging in social conversations across various domains, simulating roles like friends, assistants, and family members within human social networks.
- **Personalized**: GauchoChat takes into consideration the personality and preferences of the user during the response generation process. Additionally, the chatbot maintains a consistent language style and interlocutor role throughout the interaction.
- **Proactive**: Unlike traditional reactive information providers, GauchoChat is proactive and can initiate interesting topics, share jokes, and offer emotional support during conversations.
- **Modular**: The entire system is a pipelined robust chatbot service application consisting of multiple well-designed, independent modules that collaborate with each other in a pipelined order. Each module has a unique functionality and clear input-output flow. Importantly, the system remains functional even if any of the modules are temporarily disabled from the pipeline.
- **Scalable**: The proposed system is not just an experimental research demo; it aims to be a mature application capable of meeting latency requirements while being easily reproduced and deployed to support a large number of user requests.
- **Multimodal**: In addition to its voice capabilities, GauchoChat offers users a captivating and immersive multimodal experience through dynamic content showcased on Alexa screen devices.
1.2 High-level Conversational Principles
Our bot should be capable of engaging users on a range of high-level topics, including travel and vacation planning, sport and wellness, food and cooking, news and current events, entertainment and pop culture, technology and gadgets, personal development, and self-improvement. To maximize engagement, we have developed a set of core principles that inform our conversational approach. These include using open-ended questions to encourage users to share their thoughts and opinions, providing follow-up responses that demonstrate active listening and an interest in the user's perspective, using short and focused prompts that avoid generic small talk, and incorporating multimedia elements such as images and on-screen prompts to enhance the user's experience. By adhering to these principles, we aim to create a dynamic and engaging conversational experience that keeps users coming back to talk to our bot.

Empathy is another crucial component of any successful social interaction, and it is particularly important in the context of our bot. We recognize that users may be looking for more than just chit-chat; they may also be seeking emotional support or a sense of connection. To that end, we have incorporated the bot's ability to show empathy during conversation, including using language that conveys understanding and validation, such as acknowledging the user's feelings and experiences.
2 System Design and Architecture
2.1 Overview
The proposed GauchoChat system is a multimodal and controllable socialbot system that can engage users in personalized and proactive high-quality conversations. The system overview is illustrated in Figure 1. GauchoChat relies on a primary response generator based on an LLM, Vicuna-13b (Chiang et al., 2023), to generate the final system response. In addition to generating the final system response, the LLM also performs other dialogue management tasks, including knowledge retrieval query generation and image search query generation. All significant prompts used in GauchoChat for both response generation and dialogue management are presented in Appendix A.1.
To achieve proactive, controllable, and personalized social conversation, we propose two novel methods, LLM-as-Promptist and proactive topic switching, to control and construct personalized, user-specific prompts as input to the LLM. Given the generated personalized initial response, we then propose various autonomous conversational control modules to manage the grounded knowledge base and system policy for the current turn. We demonstrate the high correlation between interesting conversation topics and user engagement, and thus propose to achieve active language modeling by steering the conversational topic flow when user boredom is detected. Additionally, multi-modal engagement provides compelling user-machine interaction.
2.2 Proactive Topic Switching
Most large language models, such as GPT-4 (OpenAI, 2023), are trained to perform completions via instruction tuning. While strong performance in responding to human instructions has led to a wide variety of emergent capabilities, it does not make LLM-based socialbots active agents. Although LLMs can generate plausible responses to queries and utterances, these are all reactive behaviors. As Figure 2 illustrates, when faced with passive users, the reactive instruction-completion manner makes chatbots fail to propose novel ideas or topics that arouse user interest, causing conversations to stall. Therefore, proactively driving the conversation by controlling the topic flow remains a key unresolved challenge. We imagine an agent that can model its conversational partner's emotional state, make predictions about ways to interest them, and keep them engaged with knowledge from the external world. This can be crystallized into three key challenges: determining (1) when to switch to a new conversation topic, (2) what topic would appeal to the customer, and (3) how to integrate externally-sourced materials into the conversation (see Figure 3 and Figure 4 below). In the socialbot challenge, we achieve active language modeling via a proactive topic-switching mechanism.
The topic-switching and controlling steps proceed as follows:
**STEP 1.** We determine when to switch topics by building multiple classifiers that infer the user’s level of interest, the topic’s maturity, etc. These classifiers monitor the conversation’s state so that a new topic can be proposed at any moment. For example, the level of interest can often be inferred from speech patterns and context. Low-effort statements such as "I don’t know," “sure,” and "maybe" indicate that the user is not investing cognitive effort into the conversation, likely due to a lack of interest. If this happens, we should move on to something else.
We repurpose our LLM for utterance boredom classification and have it monitor every user utterance for signs of boredom (indicated by a lack of cognitive effort). To increase the accuracy of user boredom detection, we run multiple prompts in parallel, with each corresponding to a classifier, followed by an ensemble-based method to make the topic switch decision.
\[
s_i = \text{Ensemble}(l_j), \quad l_j = \text{LMClassify}(\text{prompt}_j, u_i),
\]
where \(s_i\) is the ensemble boredom label at the \(i\)-th dialogue turn, and \(u_i\) is the user utterance of the \(i\)-th turn (or a few recent turns). The ensemble function is responsible for figuring out the boredom label during the last few turns. An example of the switch decision flow is shown in Figure 4.
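To make the ensemble concrete, the following is a minimal sketch assuming a hypothetical `llm_classify(prompt, utterance)` helper that wraps the LLM call; the prompt wording echoes the boredom-detection prompts in Appendix A.1, but the helper name and majority-vote rule are illustrative, not the team's exact implementation:

```python
from collections import Counter

# Illustrative prompt variants; the deployed system uses its own set (Appendix A.1).
BOREDOM_PROMPTS = [
    "Given a conversation transcript, classify if the person is likely to be "
    "bored or not. Do not output anything other than 'bored' or 'not bored'.",
    "Based on the person's responses, do they seem disinterested or bored? "
    "Consider short responses, lack of detail, repetition, and low engagement. "
    "Return 'bored' or 'not bored'.",
]

def llm_classify(prompt: str, utterance: str) -> str:
    """Hypothetical wrapper around the LLM; returns 'bored' or 'not bored'."""
    raise NotImplementedError

def should_switch_topic(recent_utterances: list[str]) -> bool:
    """Run every prompt as a separate classifier, then majority-vote the labels."""
    context = "\n".join(recent_utterances)
    labels = [llm_classify(p, context) for p in BOREDOM_PROMPTS]
    return Counter(labels).most_common(1)[0][0] == "bored"
```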
**STEP 2:** Once a decision is made that the topic needs to be changed, we need to decide on a new conversational topic to improve user engagement. Getting this right is essential because the conversation will stop if the new topic is not relevant or interesting to the customer. Although heuristics, such as universal popularity, can be useful, a prediction conditioned on information in the dialogue history (such as a love for bowling) will be more nuanced and relevant.
One way to propose a new interesting topic is to ask the LLM to "generate a response that proposes a new topic while leaving the conversation open to continue along the previous track." The LLM can also be used to generate a hook for the new topic and monitor the user’s reaction. If the user shows further signs of disengagement, the model will recover the conversation, and another topic can be proposed in the next turn while taking this new dislike into account.
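As a sketch of this step, one might wrap the instruction quoted above into a prompt like the following; `llm_generate` is a hypothetical helper around the response generator, and the exact wording and the dislike list are illustrative:

```python
def llm_generate(prompt: str) -> str:
    """Hypothetical wrapper around the Vicuna-13b response generator."""
    raise NotImplementedError

def propose_new_topic(dialogue_history: str, disliked_topics: list[str]) -> str:
    """Ask the LLM for a response that pivots to a new topic while leaving
    the previous conversational thread open."""
    instruction = (
        "Generate a response that proposes a new topic while leaving the "
        "conversation open to continue along the previous track. "
        f"Avoid these topics the user disliked: {', '.join(disliked_topics)}.\n"
        f"Conversation so far:\n{dialogue_history}\nAssistant:"
    )
    return llm_generate(instruction)
```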
Figure 3: A high-level illustration of our topic-switching action during a simulated exemplar conversation (the text was generated by the team as an exemplar conversation).

**STEP 3:** Once a topic is selected, it becomes essential to engage the customer with new and relevant content around this topic. We maintain external knowledge bases, pre-collected from a wide variety of sources, including scientific publications, news articles, podcast transcripts, and more. There are several methods to select articles; we adopt both random rotation and a more sophisticated embedding-based retrieval method. We deploy an LLM-based embedding tool to generate the vectorized representations \( \{e_i\}_{i=1}^{|A|} \) for the set of articles. Afterward, a short description of the new topic is generated and encoded into \( t \). A nearest-neighbour search is conducted to find articles that match the customer's interests and preferences,
\[
\arg\max_{1 \le i \le |A|} \frac{e_i \cdot t}{\|e_i\| \, \|t\|},
\]
where \( |A| \) denotes the size of the pre-collected knowledge base; the articles with the highest cosine similarity are retrieved. We then inject a logically-ordered summary of this knowledge into the response-generation prompt so that our bot can select content based on the recent conversation and share this new content with the user. Figure 3 above demonstrates the simplified logic of the knowledge retrieval and injection process.
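A minimal numpy sketch of this retrieval step, assuming article and topic embeddings have already been computed with some embedding model (the function and parameter names are illustrative):

```python
import numpy as np

def retrieve_top_articles(article_embeddings: np.ndarray,
                          topic_embedding: np.ndarray,
                          k: int = 3) -> np.ndarray:
    """Return indices of the k articles whose embeddings e_i have the highest
    cosine similarity to the topic description embedding t."""
    e = article_embeddings                      # shape (|A|, d)
    t = topic_embedding                         # shape (d,)
    sims = (e @ t) / (np.linalg.norm(e, axis=1) * np.linalg.norm(t) + 1e-9)
    return np.argsort(-sims)[:k]                # highest-similarity first
```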
2.3 Language Model as a Promptist
As there is no universal prompt for LLMs to engage in satisfying conversations with diverse users, it is crucial to dynamically adjust the prompting strategy to meet users' needs. In this paper, we propose a combination of a base prompt that outlines the bot's primary objective and an add-on prompt that defines the specific strategy to employ in the conversation. The strategy prompt candidates are derived both from pre-defined candidates that cover diverse user groups and from candidates generated by LLMs based on the current dialogue context. The question then becomes how to choose the prompting strategy that maximizes user engagement. We model this as a bandit problem, where a policy learns to select the best candidate strategy from the pool to achieve the highest reward, such as the highest satisfaction score and the longest conversation. An illustrative example of dynamic prompting, controlling, and selection for user-engaging response generation is presented in Figure 6.

Figure 4: A set of feature-based classifiers for the topic-change decision (the text was generated by the team as an exemplar conversation).

Figure 5: Topical knowledge retrieval pipeline of GauchoChat when a topic-switch flag is raised.
In this subsection, we first describe a general metric for evaluating user preference towards system responses based on the number of dialogue turns, since human annotation is labor-intensive and expensive. We then present the training details and objectives for the reward model that scores generated responses. Finally, we describe how we train the LLM Promptist policy and how we incorporate it into the GauchoChat pipeline.
The user is a sport fan. Talk with the user with passion. Combine with some existing news and interesting story. Talk about some skills. Have a dive-deep discussion with him.
Indeed! His range and accuracy are unmatched. Remember when he broke his own record, hitting 402 threes in a season?
Oh definitely. That is incredible. It truly is. Curry’s impact on the game is undeniable. Did you hear about his mini-golf show “Holey Moley”?
What is that?
“Holey Moley” is a mini-golf competition show on ABC, executive produced by Steph Curry. It’s a fun, family-friendly show.
Figure 6: An illustrative example showcasing the LLM Promptist’s ability to dynamically select a combination of prompt strategies during a conversation (the utterances were generated by the team).
**Metric for evaluating user preference and engagement.** It is natural to use the user preference score as one key metric, since feedback rating scores are collected daily via Alexa devices.
Beyond that, we hypothesize that longer dialogues indicate that the user is more engaged and that the generated responses are of higher quality. We therefore propose Return as the metric of user preference and engagement: a reward score derived from the user's interactions and preference score when the conversation finishes.
\[
\text{Return} = \begin{cases}
\alpha \cdot \text{turn} + \text{score} & \text{if } \text{turn} \geq 5, \\
-\alpha \cdot \text{max\_turn} + \text{score} & \text{otherwise,}
\end{cases}
\]
(3)
where max_turn is the maximum number of turns over all the user-bot dialogues we collected from Alexa users, and \(\alpha = 0.2\) is a scalar that normalizes the turn count to the scale of the rating score so that the two terms contribute comparably. In addition, since turn-level scores are unavailable, we assume that every generated response within one dialogue contributes equally to user engagement and thus assign the same Return score to each response in that dialogue.
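A direct transcription of Equation (3) as a sketch; the five-turn cutoff and \(\alpha = 0.2\) are the values stated above, while the function name is illustrative:

```python
ALPHA = 0.2  # normalizes turn counts to the rating scale (value from the paper)

def compute_return(turn: int, score: float, max_turn: int) -> float:
    """Return metric from Equation (3): rewards long, well-rated dialogues
    and penalizes dialogues that end in fewer than 5 turns."""
    if turn >= 5:
        return ALPHA * turn + score
    return -ALPHA * max_turn + score
```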
**Training for Reward Model.** Aligning language model generation with human intents and instructions is critically important for capturing user preference and enhancing user satisfaction (Ouyang et al., 2022). After labeling each generated response \(a_i\) in the collected user-bot interaction feedback dataset \((a_i, p_i) \in D\), we can cast the training of a reward model as a regression task. The pre-trained RoBERTa-large (Liu et al., 2019) language model is used as the backbone of the reward model. To construct the inputs to the reward model, we concatenate the previous dialogue history \(p_i\) with the current generated response \(a_i\). We add a linear layer (input dimension equal to the embedding size, output dimension one) on top of the pre-trained RoBERTa-large model and take the output encoding of the first [CLS] token as the reward model's score for each sample. We tune the model on the collected user feedback data with the mean-squared error (MSE) loss:
\[ L_{\text{mse}} = \| r(a_i, p_i) - f_\theta(a_i, p_i) \|^2 \]
(4)
In this way, given the user utterances and the generated system response, the reward model can produce a simulated user rating for that response.
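A minimal PyTorch sketch of this reward model, assuming the Hugging Face `transformers` library; the paper specifies RoBERTa-large, a linear head on the first-token encoding, and an MSE loss, while the class name, optimizer choice, and toy training step are illustrative:

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class RewardModel(nn.Module):
    def __init__(self, name: str = "roberta-large"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(name)
        # Linear head: embedding size -> a single scalar reward.
        self.head = nn.Linear(self.encoder.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]   # encoding of the first ([CLS]) token
        return self.head(cls).squeeze(-1)

# One training step with the MSE objective of Equation (4), on a toy example.
tokenizer = AutoTokenizer.from_pretrained("roberta-large")
model = RewardModel()
opt = torch.optim.AdamW(model.parameters(), lr=1e-5)

history, response, label = "user: hi\nbot: hello!", "Nice to meet you!", 4.5
batch = tokenizer(history, response, return_tensors="pt", truncation=True)
pred = model(batch["input_ids"], batch["attention_mask"])
loss = nn.functional.mse_loss(pred, torch.tensor([label]))
loss.backward(); opt.step(); opt.zero_grad()
```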
**LLM Promptist with Reward Model.** It is costly to fully tune the language model used for response generation to align with human instruction. Instead, we propose a lightweight LLM Promptist to control and sample customized strategies for response generation. This lightweight Promptist is also built on a pre-trained RoBERTa model. We manually construct a list of 20 prompts representing different generation strategies, designed from various perspectives of user interests, chatting styles, system policies, and user intentions, as shown in Table 1. These prompts guide the language model to generate personalized and interesting responses aligned with user instruction and preference.
<table>
<thead>
<tr>
<th>Index</th>
<th>Prompt</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Speak in a conversational tone, as if you are having a face-to-face conversation.</td>
</tr>
<tr>
<td>2</td>
<td>Share a thought-provoking quote and ask the user for their interpretation.</td>
</tr>
<tr>
<td>3</td>
<td>Share a fun fact and ask the user for their opinion.</td>
</tr>
<tr>
<td>4</td>
<td>Ask an icebreaker question.</td>
</tr>
<tr>
<td>5</td>
<td>Pose a thought-provoking question.</td>
</tr>
<tr>
<td>6</td>
<td>Provide factual information without asking questions.</td>
</tr>
<tr>
<td>7</td>
<td>Incorporate humor into your response.</td>
</tr>
<tr>
<td>8</td>
<td>Suggest a book based on the user’s interests.</td>
</tr>
<tr>
<td>9</td>
<td>Suggest a movie based on the user’s interests.</td>
</tr>
<tr>
<td>10</td>
<td>Proactively share your personal opinion about the subject.</td>
</tr>
<tr>
<td>11</td>
<td>Offer an empathetic and supportive response to make the user feel valued.</td>
</tr>
<tr>
<td>12</td>
<td>Present a hypothetical situation to encourage the user to think creatively.</td>
</tr>
<tr>
<td>13</td>
<td>Share a quote from a famous person and ask the user if they agree with it.</td>
</tr>
<tr>
<td>14</td>
<td>Encourage the user to ask you questions and engage in a dialogue.</td>
</tr>
<tr>
<td>15</td>
<td>Try to keep the conversation going for as long as possible.</td>
</tr>
<tr>
<td>16</td>
<td>Encourage storytelling.</td>
</tr>
<tr>
<td>17</td>
<td>Encourage self-reflection.</td>
</tr>
<tr>
<td>18</td>
<td>Encourage sharing personal achievements.</td>
</tr>
<tr>
<td>19</td>
<td>Share a personal fact.</td>
</tr>
<tr>
<td>20</td>
<td>Add a related joke, ensuring it is safe for kids.</td>
</tr>
</tbody>
</table>
Table 1: 20 hand-crafted prompts used in dynamic prompting.
Formally, given a dialogue history \( p_i = \{ u_{i1}, u_{i2}, \ldots, u_{in} \} \), the LLM Promptist samples a set of prompting strategies \( e_i = \{ e_i^1, e_i^2, \ldots, e_i^{20} \} \), \( e_i^j \in \{0,1\} \), from a candidate pool \( E_{cand} \) of 20 hand-crafted prompts. The sampled prompting strategies are then assembled into the main customized prompt and forwarded to the LLM response generator, Vicuna-13b, to generate the answer \( a_i \), with the goal of maximizing a reward \( r_i = R_\theta(a_i \mid p_i) \). The set of prompting strategies is sampled from a Bernoulli distribution whose probabilities are produced by a policy
\[ e_i \sim \pi_\phi(e_i \mid p_i), \]
(5)
where \( \phi \) are the policy’s parameters. The answer is generated through: \( a_i = LM(e_i, p_i) \) using the selected prompting strategies and the dialogue history as the input prompt. The reward is then computed by the trained reward model \( r_i = R_\theta(a_i | p_i) \).
We optimize the reward with respect to the parameters of the policy network using the Policy Gradient method (Sutton et al., 1998). In our implementation, we use the REINFORCE policy gradient algorithm (Williams, 1992):
\[ \nabla_\phi \, \mathbb{E}_{e_i \sim \pi_\phi(e_i \mid p_i)} [R_\theta(e_i, p_i)], \]
(6)
where \( \pi_\phi \) is the policy of prompt controller, \( R_\theta(\cdot) \) is the reward model function, and \( (e_i, p_i) \) is the pair of prompting strategies and dialogue history.
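A sketch of the REINFORCE update for the Bernoulli policy over the 20 prompt strategies (PyTorch); the policy architecture, learning rate, and dummy reward below are placeholders for the RoBERTa-based Promptist and reward model described above:

```python
import torch
import torch.nn as nn

N_STRATEGIES = 20  # the hand-crafted prompts of Table 1

class PromptistPolicy(nn.Module):
    """Placeholder policy: maps an encoding of the dialogue history to
    independent Bernoulli probabilities over the 20 prompt strategies."""
    def __init__(self, hidden: int = 768):
        super().__init__()
        self.proj = nn.Linear(hidden, N_STRATEGIES)

    def forward(self, history_encoding: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.proj(history_encoding))

policy = PromptistPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-4)

history_encoding = torch.randn(1, 768)     # stand-in for RoBERTa features of p_i
probs = policy(history_encoding)
dist = torch.distributions.Bernoulli(probs)
e = dist.sample()                          # e_i ~ pi_phi(e_i | p_i)

# a_i = LM(e_i, p_i) and r_i = R_theta(a_i | p_i) happen outside this snippet;
# a dummy reward is used here purely for illustration.
reward = torch.tensor(3.7)

# REINFORCE: maximize E[R]  =>  minimize -R * log pi(e_i | p_i)
loss = -(reward * dist.log_prob(e).sum())
loss.backward(); opt.step(); opt.zero_grad()
```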
2.4 Multimodal Interface
The multimodal nature of Alexa devices opens up a unique opportunity for visually-enhanced conversation (Wang et al., 2022a). While working on the multimodal interface, we developed a set of design principles that prioritize customer engagement, safety, and ease of use. We created flexible APL (Alexa Presentation Language) templates to display content related to the conversation and implemented a dynamic image retrieval algorithm to ensure that visual content enhances the customer’s experience.
2.4.1 User Interface Design Principles
Aiming to create an engaging multimodal experience for every Alexa customer, we developed a set of user interface design principles to guide our implementation of visual content and to ensure it is consistent, intuitive, and safe.
First, we want to create a sense of familiarity and trust with customers by using the official Alexa Prize logo, competition title, and Alexa color palette. Our visual content is designed to be displayed in the background using backgroundColorOverlay, with minimal text to avoid distractions and maintain the flow of the conversation. Second, we ensure that all displayed images are safe, child-friendly, and copyright-free, and we use dynamic retrieval to display visual content relevant to the conversation, enhancing the customer's engagement with the bot. Third, we aim to improve the customer experience by providing ideas for the next conversation topic, using a mix of visual and text hints to guide Alexa customers. These hints are designed to be intuitive and easy to use for everyone.
Overall, our user interface design principles prioritize engagement, safety, and ease of use, and we believe they help us create a truly appealing conversational experience for Amazon Alexa customers. To deliver an engaging multimodal experience with the Social Bot, we use APL templates that dynamically display images and text, presented in Figure 7 and Appendix Table 3. These templates have a flexible design that can be customized to different conversation flows, as shown in Figures 12-14 and 16-18.

2.4.2 Dynamic Image Retrieval
A dynamic solution for retrieving images on the fly is essential for an exceptional customer experience. The novelty of these visuals inspires customers to be more creative in their interactions with Alexa and makes the experience more immersive. While calling an API to retrieve and display an image has been possible for quite a while, LLMs have only recently become capable of dynamically generating a summary of the conversation that can be used as an image search query on a multimodal device. Accordingly, we designed the system to retrieve and display images that are on-topic, diverse, and safe to use. Our image retrieval works in real time and returns images relevant to the most recent conversation topic. The image retrieval pipeline is shown in Figure 8.
Our prompt instruction for Vicuna is high-level but carefully worded. The phrase "child-friendly" adds a layer of protection against displaying inappropriate images. We experimented with the number of recent conversation turns included in the prompt and found that using the last two turns gave the most reliable results. An example prompt and the resulting search tag:
Prompt: Return a general, child-friendly search tag to find an image relevant to the most recent topic of the following conversation. Do not output anything other than this. If it is a person or organization, do not return anything. Conversation: bot: "Sure, what would you like to talk about? Music, sports, games, anything in particular?", user: "travel", bot: "Traveling is always a great topic. Where would you like to go next?", user: "i am thinking new york city"
Result: *new york city travel*.
The demonstration of image retrieved with a search tag ‘new york travel’ is shown in Figure 9.
Figure 8: Simplified logic of image retrieval
Figure 9: Demonstration of image retrieved with a search tag ‘new york travel’ (the conversation was generated by the team for illustrative purposes).
Then, we forward these queries to an image-hosting service for retrieval. We use **Pixabay** because its images are copyright-free, it offers a safe-search option, and its API request limits are reasonable. When no images are found (**Pixabay** is smaller than **Google Images** or **Unsplash**), we display a default 'Response' template with a neutral background. Although our approach is simple, we found it to be quite effective: whether the conversation is centered around "dance forms of Thailand", "Star Wars", or "smoothie recipes", relevant images are shown each time (Figures 19-21). In fact, its simplicity is a major advantage, lowering the barrier to integration into existing systems. Our inference costs are low (0.61 seconds on average to generate the search query, 0.7 seconds to perform the image search), and the prompt can easily be changed or replaced.
Our solution is not tied to a particular LLM or image resource and can be replicated with other models and image resources such as **Wikimedia Commons**, **Unsplash**, or **Getty Images**. We demonstrate how LLMs can help retrieve relevant images dynamically and how this capability can be further improved. Although the bot does not yet engage in conversations about image content, multimodal models will make this possible in the future, enabling a wide variety of conversation topics, such as analyzing a map or discussing objects in a picture. LLMs can also predict the next user response based on recent turns and display it as a conversation hint in the footer of the APL template.
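A sketch of the two-stage pipeline (query generation, then image search); `llm_generate` is a hypothetical wrapper, and while the `key`, `q`, and `safesearch` parameters follow Pixabay's documented public API, the error handling and fallback logic here are illustrative:

```python
import requests

PIXABAY_URL = "https://pixabay.com/api/"
PIXABAY_KEY = "YOUR_API_KEY"  # placeholder

def llm_generate(prompt: str) -> str:
    """Hypothetical wrapper around the response-generation LLM."""
    raise NotImplementedError

def make_search_tag(last_two_turns: str) -> str:
    prompt = (
        "Return a general, child-friendly search tag to find an image relevant "
        "to the most recent topic of the following conversation. Do not output "
        "anything other than this. If it is a person or organization, do not "
        f"return anything. Conversation: {last_two_turns}"
    )
    return llm_generate(prompt).strip()

def fetch_image_url(tag: str) -> str | None:
    resp = requests.get(PIXABAY_URL, params={
        "key": PIXABAY_KEY, "q": tag, "safesearch": "true", "per_page": 3,
    }, timeout=4)
    hits = resp.json().get("hits", [])
    # Fall back to the neutral 'Response' template when nothing is found.
    return hits[0]["webformatURL"] if hits else None
```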
3 Experiments
3.1 Impact of Proactive Topic Switching
In order to mitigate bias from adversarial users, we discard conversations containing words in Google's profanity list. From the remaining conversations, we measure the proportion of successful topic switches. Specifically, each time our topic-switching module activates, we consider the switch "successful" if the user's next turn contains any of the following engagement-indicating keywords: "yes", "sure", "definitely", "yeah", "okay", "I haven't", and "love". These keywords are derived from the most frequent user responses to topic switches in our conversation transcripts (excluding obvious negatives, like "no" and "not really"). We prefer this approach over LLM-based evaluation because it (1) avoids exposing user utterances, (2) avoids training-dataset-induced biases common to all LLMs, (3) is fully interpretable (no reliance on black boxes), and (4) scales across all of our transcripts.
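A sketch of this evaluation, with the keyword list taken verbatim from the text above; the profanity filter is represented by a placeholder predicate:

```python
ENGAGEMENT_KEYWORDS = {"yes", "sure", "definitely", "yeah", "okay",
                       "i haven't", "love"}

def contains_profanity(text: str) -> bool:
    """Placeholder for filtering against Google's profanity list."""
    raise NotImplementedError

def switch_success_rate(switch_events: list[tuple[str, str]]) -> float:
    """switch_events: (conversation_text, user_response_after_switch) pairs.
    Returns the proportion of successful topic switches."""
    kept = [(c, r) for c, r in switch_events if not contains_profanity(c)]
    hits = sum(any(k in r.lower() for k in ENGAGEMENT_KEYWORDS)
               for _, r in kept)
    return hits / len(kept) if kept else 0.0
```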
Although it is intuitive that successful topic switches should be positively correlated with better user engagement, we also perform a statistical analysis of this correlation using user feedback data collected during the semi-final phase. The results in Table 2 show that ratings are highly correlated with successful topic switching; we therefore expect consistent, well-timed topic switches to increase both engagement and ratings.
<table>
<thead>
<tr>
<th>Conversation rating</th>
<th>Frequency of successful topic switches</th>
</tr>
</thead>
<tbody>
<tr>
<td>1.0-2.0</td>
<td>37.5%</td>
</tr>
<tr>
<td>2.0-3.0</td>
<td>50%</td>
</tr>
<tr>
<td>3.0-4.0</td>
<td>50%</td>
</tr>
<tr>
<td>4.0-5.0</td>
<td>64.2%</td>
</tr>
</tbody>
</table>
Table 2: Correlation between successful topic switches and higher conversation ratings. The data was collected during the semi-final phase (06/10/2023-06/23/2023).
We introduced the proactive topic switching and controlling module to our experimental traffic on May 21st. The last-3-day (l3d) rating time series of our bot's experimental traffic around the introduction date is shown in Figure 10. We can observe that our ratings increased significantly after the introduction of the proactive topic-switch module; the regression trend line also demonstrates the positive effect of topic switching.
Note that the decline starting on May 28 resulted from adding a latency timeout to our full topic-switching pipeline, which caused our bot to make false promises of interesting information whenever knowledge retrieval took more than 4 seconds. The timeout was required to stay within Alexa's hard 10-second limit. Retrieval was later moved to a background task, alleviating this issue.
3.2 Impact of LLM Promptist
We introduced the proposed LLM-based Promptist method to our bot on March 19th; the time series of the last-3-day (l3d) rating around the introduction date is shown in Figure 11. A clear increase in the l3d rating can be seen after the introduction of the LLM Promptist method.
Figure 11: The time series of average 3-day rating since the introduction of proposed dynamic prompting method. The dashed line is the trend line for the l3d rating.
4 Conclusion
In this paper, we propose GauchoChat, a novel socialbot system, as our solution to the Alexa SocialBot Grand Challenge 5. The proposed system is built around a primary LLM-based response generator and various autonomous control modules. We make three major technical contributions that engage users in proactive and personalized conversation. The whole system demonstrated its robustness and effectiveness across several evaluation periods in terms of user ratings and conversation duration.
References
Michael Johnston, Cris Flagg, Anna Gottardi, Sattvik Sahai, Yao Lu, Samyuth Sagi, Luke Dai, Prasoon Goyal, Behnam Hedayatnia, Lucy Hu, Di Jin, Patrick Lange, Shaohua Liu, Sijia Liu, Daniel Pressel, Hangjie Shi, Zhejia Yang, Chao Zhang, Desheng Zhang, Leslie Ball, Kate Bland,
A Appendix
A.1 Key Prompts
**Main Dialogue Prompt** Have a conversation with the user like a friend in English. Ask engaging questions. Keep responses to 20-30 tokens. Share your opinions. Talk like a talk show host. If the user doesn’t react actively, try changing the subject to a creative new topic.
Here’s some external knowledge you can use when making your response:
“<EXTERNAL KNOWLEDGE FROM RETRIEVAL>”
<DIALOGUE HISTORY>
Assistant:
**Knowledge Summarization Prompt** Here’s an article on <topic name>. This article will eventually get fed into a prompt used in a dialogue system, for retrieval-augmented generation. Summarize it in a dramatic way into 3 paragraphs so that the dialogue system can introduce the knowledge well, turn by turn. <LONG SCRAPED ARTICLE>
**Boredom Detection Prompt Example 1** Given a conversation transcript, classify if the person is likely to be bored or not. Do not output anything other than "bored" or "not bored". I believe that the user is "
Assistant:
**Boredom Detection Prompt Example 2** Based on the person’s responses, do they seem disinterested or bored? Consider the following factors: short length of response, lack of details, repetitive response, a lack of engagement. Please return ‘bored’ or ‘not bored’. Do not output anything other than this.
**Image Retrieval Query Generation Prompt** Generate a description of an image in English that captures the most relevant elements of the given conversation.
"<DEMONSTRATION EXAMPLES>"
<DIALOGUE HISTORY>
**Query:**
A.2 APL Template Design Overview
<table>
<thead>
<tr>
<th>Screen</th>
<th>Components</th>
<th>Description</th>
<th>Examples</th>
</tr>
</thead>
<tbody>
<tr>
<td>1. Welcome Screen</td>
<td>Alexa Prize logo, competition name, background image, welcome message, “Waving Hand” emoji, footer with text hint.</td>
<td>The first screen customers see when they start chatting with our Social Bot. Features the Alexa Prize logo, competition name, a neutral background image, and a welcome message. Includes a “Waving Hand” emoji for connection and a footer displaying a hard-coded text hint in the form of “Try, ‘xxx’”.</td>
<td>Figure 12</td>
</tr>
<tr>
<td>2. Topic Selection</td>
<td>Alexa Prize logo, competition name, background image, conversation topics with images, text hints.</td>
<td>Displayed when the customer wants to change the topic. Features the Alexa Prize logo, competition name, a neutral background image, and 6 common conversation topics with relevant images and text hints to motivate customers to switch topics.</td>
<td>Figure 13</td>
</tr>
<tr>
<td>3. Image Response</td>
<td>Alexa Prize logo, competition name, dynamically retrieved background image, bot response, footer with text hint.</td>
<td>Displayed when there is a relevant image for the conversation topic. Features the Alexa Prize logo, competition name, a dynamically retrieved background image, and the bot response. Footer displays a selected hard-coded text hint related to the topic.</td>
<td>Figure 14</td>
</tr>
<tr>
<td>4. Text Response</td>
<td>Alexa Prize logo, competition name, neutral background image, bot response (2 lines of text), footer with text hint.</td>
<td>Displayed when there is no relevant image. Features the Alexa Prize logo, competition name, a neutral background image, and the bot response summarized in 2 lines of text. Footer displays a randomly selected hard-coded text hint.</td>
<td>Figure 15</td>
</tr>
<tr>
<td>5. Visual Hints</td>
<td>Alexa Prize logo, competition name, background image (visual hint), footer with hint.</td>
<td>Displayed every 5th conversation turn to guide the customer to a specific topic. Features the Alexa Prize logo, a background image (visual hint), and a footer hint in the form of “Try, ‘xxx’”.</td>
<td>Figure 16, 17</td>
</tr>
<tr>
<td>6. Feedback</td>
<td>Alexa Prize logo, competition name, neutral background image, feedback question, two buttons.</td>
<td>Displayed when there is no relevant image or text response. Features the Alexa Prize logo, competition name, a neutral background image, and a feedback question with two buttons.</td>
<td>Figure 18</td>
</tr>
</tbody>
</table>
Table 3: APL Template Design Overview
A.3 Multimodal Engagement Examples
Figures 12-18 show examples of the various APL templates that are filled in during response generation, as described in Section 2.4. Figures 19-23 present instantiated versions of these templates in the developer console.
Figure 12: Start Screen on Alexa Echo Show 8
Figure 13: Topic Selection on Alexa Echo Show 8
Figure 14: Image Response on Alexa Echo Show 8
Figure 15: Text Response on Alexa Echo Show 8
Figure 16: Visual Hints on Alexa Echo Show 8
Figure 17: Visual Hints on Alexa Echo Show 8
Figure 18: Feedback Button on Alexa Echo Show 8
Figure 19: Demonstration of image retrieval on a popular topic (the conversation was generated by the team for illustrative purposes).
Figure 20: Demonstration of image retrieval on a popular topic (the conversation was generated by the team for illustrative purposes).
Figure 21: Demonstration of image retrieval on a niche topic (the conversation was generated by the team for illustrative purposes).
Parallel and Distributed Computing
in Education (Invited Talk)
Peter H. Welch
Computing Laboratory, University of Kent at Canterbury, CT2 7NT.
P.H.Welch@ukc.ac.uk
Abstract. The natural world is certainly not organised through a central thread of control. Things happen as the result of the actions and interactions of unimaginably large numbers of independent agents, operating at all levels of scale from nuclear to astronomic. Computer systems aiming to be of real use in this real world need to model, at the appropriate level of abstraction, that part of it for which it is to be of service. If that modelling can reflect the natural concurrency in the system, it ought to be much simpler.
Yet, traditionally, concurrent programming is considered to be an advanced and difficult topic — certainly much harder than serial computing which, therefore, needs to be mastered first. But this tradition is wrong.
This talk presents an intuitive, sound and practical model of parallel computing that can be mastered by undergraduate students in the first year of a computing (major) degree. It is based upon Hoare’s mathematical theory of Communicating Sequential Processes (CSP), but does not require mathematical maturity from the students — that maturity is pre-engineered in the model. Fluency can be quickly developed in both message-passing and shared-memory concurrency, whilst learning to cope with key issues such as race hazards, deadlock, livelock, process starvation and the efficient use of resources. Practical work can be hosted on commodity PCs or UNIX workstations using either Java or the occam multiprocessing language. Armed with this maturity, students are well-prepared for coping with real problems on real parallel architectures that have, possibly, less robust mathematical foundations.
1 Introduction
At Kent, we have been teaching parallel computing at the undergraduate level for the past ten years. Originally, this was presented to first-year students before they became too set in the ways of serial logic. When this course was expanded into a full unit (about 30 hours of teaching), timetable pressure moved it into the second year. Either way, the material is easy to absorb and, after only a few (around 5) hours of teaching, students have no difficulty in grappling with the interactions of 25 (say) threads of control, appreciating and eliminating race hazards and deadlock.
Parallel computing is still an immature discipline with many conflicting cultures. Our approach to educating people into successful exploitation of parallel mechanisms is based upon focusing on parallelism as a powerful tool for simplifying the description of systems, rather than simply as a means for improving their performance. We never start with an existing serial algorithm and say: ‘OK, let’s parallelise that!’ And we work solely with a model of concurrency that has a semantics that is compositional – a fancy word for WYSIWYG – since, without that property, combinatorial explosions of complexity always get us as soon as we step away from simple examples. In our view, this rules out low-level concurrency mechanisms, such as spin-locks, mutexes and semaphores, as well as some of the higher-level ones (like monitors).
**Communicating Sequential Processes (CSP)**[1–3] is a mathematical theory for specifying and verifying complex patterns of behaviour arising from interactions between concurrent objects. Developed by Tony Hoare in the light of earlier work on monitors, CSP has a compositional semantics that greatly simplifies the design and engineering of such systems – so much so, that parallel design often becomes easier to manage than its serial counterpart. CSP primitives have also proven to be extremely lightweight, with overheads in the order of a few hundred nanoseconds for channel synchronisation (including context-switch) on current microprocessors [4, 5].
Recently, the CSP model has been introduced into the Java programming language [6–10]. Implemented as a library of packages [11, 12], JavaPP[10] enables multithreaded systems to be designed, implemented and reasoned about entirely in terms of CSP synchronisation primitives (channels, events, etc.) and constructors (parallel, choice, etc.). This allows 20 years of theory, design patterns (with formally proven good properties – such as the absence of race hazards, deadlock, livelock and thread starvation), tools supporting those design patterns, education and experience to be deployed in support of Java-based multithreaded applications.
## 2 Processes, Channels and Message Passing
This section describes a simple and structured multiprocessing model derived from CSP. It is easy to teach and can describe arbitrarily complex systems. No formal mathematics need be presented – we rely on an intuitive understanding of how the world works.
### 2.1 Processes
A process is a component that encapsulates some data structures and algorithms for manipulating that data. Both its data and algorithms are private. The outside world can neither see that data nor execute those algorithms. Each process is alive, executing its own algorithms on its own data. Because those algorithms are executed by the component in its own thread (or threads) of control, they express
the behaviour of the component from its own point of view\(^1\). This considerably simplifies that expression.
A *sequential process* is simply a process whose algorithms execute in a single thread of control. A *network* is a collection of processes (and is, itself, a process). Note that recursive hierarchies of structure are part of this model: a network is a collection of processes, each of which may be a sub-network or a sequential process.
But how do the processes within a network interact to achieve the behaviour required from the network? They can’t see each other’s data nor execute each other’s algorithms – at least, not if they abide by the rules.
### 2.2 Synchronising Channels
The simplest form of interaction is synchronised message-passing along channels. The simplest form of channel is zero-buffered and point-to-point. Such channels correspond very closely to our intuitive understanding of a wire connecting two (hardware) components.

In Figure 1, A and B are processes and \(c\) is a channel connecting them. A wire has no capacity to hold data and is only a medium for transmission. To avoid undetected loss of data, channel communication is synchronised. This means that if A transmits before B is ready to receive, then A will block. Similarly, if B tries to receive before A transmits, B will block. When both are ready, a data packet is transferred – directly from the state space of A into the state space of B. We have a synchronised distributed assignment.
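To make the rendezvous semantics concrete, here is a minimal Python sketch of a zero-buffered, point-to-point channel; the paper's actual vehicles are occam and the JCSP Java library, so this class is only an illustration of the blocking behaviour described above:

```python
import queue

class Channel:
    """Zero-buffered, point-to-point channel: write() blocks until the
    reader has taken the value, and read() blocks until a value arrives."""
    def __init__(self):
        self._data = queue.Queue(maxsize=1)
        self._ack = queue.Queue(maxsize=1)

    def write(self, value):
        self._data.put(value)     # offer the packet to the reader
        self._ack.get()           # block until the reader has taken it

    def read(self):
        value = self._data.get()  # block until the writer transmits
        self._ack.put(None)       # release the writer
        return value
```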
### 2.3 Legoland
Much can be done, or simplified, just with this basic model – for example the design and simulation of self-timed digital logic, multiprocessor embedded control systems (for which occam [13-16] was originally designed), GUIs etc.
Here are some simple examples to build up fluency. First we introduce some elementary components from our ‘teaching’ catalogue – see Figure 2. All processes are cyclic and all transmit and receive just numbers. The Id process cycles through waiting for a number to arrive and, then, sending it on. Although inserting an Id process in a wire will clearly not affect the data flowing through it, it does make a difference. A bare wire has no buffering capacity. A wire containing an Id process gives us a one-place FIFO. Connect 20 in series and we get a 20-place FIFO – sophisticated function from a trivial design.

\(^1\) This is in contrast with simple ‘objects’ and their ‘methods’. A method body normally executes in the thread of control of the invoking object. Consequently, object behaviour is expressed from the point of view of its environment rather than the object itself. This is a slightly confusing property of traditional ‘object-oriented’ programming.

Fig. 2. Extract from a component catalogue
Succ is like Id, but increments each number as it flows through. The Plus component waits until a number arrives on each input line (accepting their arrival in either order) and outputs their sum. Delta waits for a number to arrive and, then, broadcasts it in parallel on its two output lines – both those outputs must complete (in either order) before it cycles round to accept further input. Prefix first outputs the number stamped on it and then behaves like Id. Tail swallows its first input without passing it on and then, also, behaves like Id. Prefix and Tail are so named because they perform, respectively, prefixing and tail operations on the streams of data flowing through them.
It's essential to provide a practical environment in which students can develop executable versions of these components and play with them (by plugging them together and seeing what happens). This is easy to do in occam and now, with the JCSP library[11], in Java. Appendices A and B give some of the details. Here we only give some CSP pseudo-code for our catalogue (because that's shorter than the real code):
Id (in, out)   = in ? x --> out ! x --> Id (in, out)

Succ (in, out) = in ? x --> out ! (x+1) --> Succ (in, out)

Plus (in0, in1, out)
  = ((in0 ? x0 --> SKIP) || (in1 ? x1 --> SKIP));
    out ! (x0 + x1) --> Plus (in0, in1, out)

Delta (in, out0, out1)
  = in ? x --> ((out0 ! x --> SKIP) || (out1 ! x --> SKIP));
    Delta (in, out0, out1)

Prefix (n, in, out) = out ! n --> Id (in, out)

Tail (in, out) = in ? x --> Id (in, out)
[Notes: ‘free’ variables used in these pseudo-codes are assumed to be locally declared and hidden from outside view. All these components are sequential processes. The process (in ? x --> P (...)) means: “wait until you can engage in the input event (in ? x) and, then, become the process P (...)”. The input (?) and output (!) operators bind more tightly than -->.]
### 2.4 Plug and Play
Plugging these components together and reasoning about the resulting behaviour is easy. Thanks to the rules on process privacy\(^2\), race hazards leading to unpredictable internal state do not arise. Thanks to the rules on channel synchronisation, data loss or corruption during communication cannot occur\(^3\). What makes the reasoning simple is that the parallel constructor and channel primitives are deterministic. Non-determinism has to be explicitly designed into a process and coded – it can’t sneak in by accident!
Figure 3 shows a simple example of reasoning about network composition. Connect a Prefix and a Tail and we get two Ids:
```
(Prefix (in, c) || Tail (c, out))  =  (Id (in, c) || Id (c, out))
```
Equivalence means that no environment (i.e. external network in which they are placed) can tell them apart. In this case, both circuit fragments implement a 2-place FIFO. The only place where anything different happens is on the internal wire and that’s undetectable from outside. The formal proof is a one-liner from the definitions of the parallel (||), communication (!, ?) and and-then-becomes (-->) operators in CSP. But the good thing about CSP is that the mathematics engineered into its design and semantics cleanly reflects an intuitive human feel for the model. We can see the equivalence at a glance and this quickly builds confidence, both for us and our students.
\(^2\) No external access to internal data. No external execution of internal algorithms (methods).
\(^3\) Unreliable communications over a distributed network can be accommodated in this model – the unreliable network being another active process (or set of processes) that happens not to guarantee to pass things through correctly.
Fig. 3. A simple equivalence
Fig. 4. Some more interesting circuits
Figure 4 shows some more interesting circuits with the first two incorporating feedback. What do they do? Ask the students! Here are some CSP pseudo-codes for these circuits:
```
Numbers (out)
  = Prefix (0, c, a) || Delta (a, out, b) || Succ (b, c)

Integrate (in, out)
  = Plus (in, c, a) || Delta (a, out, b) || Prefix (0, b, c)

Pairs (in, out)
  = Delta (in, a, b) || Tail (b, c) || Plus (a, c, out)
```
Again, our rule for these pseudo-codes means that a, b and c are locally declared channels (hidden, in the CSP sense, from the outside world). Appendices A and B list occam and Java executables – notice how closely they reflect the CSP.
Back to what these circuits do: Numbers generates the sequence of natural numbers, Integrate computes running sums of its inputs and Pairs outputs the sum of its last two inputs. If we wish to be more formal, let c⟨i⟩ represent the i-th element that passes through channel c – i.e. the first element through is c⟨1⟩. Then, for any i >= 1:
```
Numbers:    out⟨i⟩ = i - 1
Integrate:  out⟨i⟩ = Sum {in⟨j⟩ | j = 1..i}
Pairs:      out⟨i⟩ = in⟨i⟩ + in⟨i+1⟩
```
Be careful: the above is only part of the specification of these circuits – how the values in their output stream(s) relate to the values in their input stream(s). We also have to be aware of how flexible they are in synchronising with their environments, as they generate and consume these streams. The base-level components Id, Succ, Plus and Delta each demand one input (or pair of inputs) before generating one output (or pair of outputs). Tail demands two inputs before its first output, but thereafter gives one output for each input. This effect carries over into Pairs. Integrate adds 2-place buffering between its input and output channels (ignoring the transformation in the actual values passed). Numbers will always deliver to anything trying to take input from it.
If necessary, we can make these synchronisation properties mathematically precise. That is, after all, one of the reasons for which CSP was designed.
### 2.5 Deadlock – First Contact
Consider the circuit in Figure 5. A simple stream analysis would indicate that:
```
Pairs2:  a⟨i⟩   = in⟨i⟩
Pairs2:  b⟨i⟩   = in⟨i⟩
Pairs2:  c⟨i⟩   = b⟨i+1⟩ = in⟨i+1⟩
Pairs2:  d⟨i⟩   = c⟨i+1⟩ = in⟨i+2⟩
Pairs2:  out⟨i⟩ = a⟨i⟩ + d⟨i⟩ = in⟨i⟩ + in⟨i+2⟩
```
But this analysis only shows what would be generated if anything were generated. In this case, nothing is generated since the system deadlocks. The two Tail processes demand three items from Delta before delivering anything to Plus. But Delta can’t deliver a third item to the Tails until it’s got rid of its second item to Plus. But Plus won’t accept a second item from Delta until it’s had its first item from the Tails. Deadlock!
In this case, deadlock can be designed out by inserting an Id process on the upper (a) channel. Id processes (and FIFOs in general) have no impact on stream contents analysis but, by allowing a more decoupled synchronisation, can impact on whether streams actually flow. Beware, though, that adding buffering to channels is not a general cure for deadlock.
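In JCSP terms, the repaired circuit might be wired as follows – a sketch assuming Java versions of Delta, Tail, Plus and Id analogous to the Appendix B listings, with the channel names taken from the stream analysis above:

```
class Pairs2Fixed implements CSProcess {
  private ChannelInputInt in;
  private ChannelOutputInt out;
  public Pairs2Fixed (ChannelInputInt in, ChannelOutputInt out) {
    this.in = in;
    this.out = out;
  }
  public void run () {
    One2OneChannelInt a0 = new One2OneChannelInt ();
    One2OneChannelInt a = new One2OneChannelInt ();
    One2OneChannelInt b = new One2OneChannelInt ();
    One2OneChannelInt c = new One2OneChannelInt ();
    One2OneChannelInt d = new One2OneChannelInt ();
    new Parallel (
      new CSProcess[] {
        new Delta (in, a0, b),
        new Id (a0, a),        // the extra buffer that breaks the deadlock cycle
        new Tail (b, c),
        new Tail (c, d),
        new Plus (a, d, out),
      }
    ).run ();
  }
}
```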
So, there are always two questions to answer: what data flows through the channels, assuming data does flow, and are the circuits deadlock-free? Deadlock is a monster that must – and can – be vanquished. In CSP, deadlock only occurs from a cycle of committed attempts to communicate (input or output): each process in the cycle refusing its predecessor’s call as it tries to contact its successor. Deadlock potential is very visible – we even have a deadlock primitive (STOP) to represent it, on the grounds that it is a good idea to know your enemy!
In practice, there now exist a wealth of design rules that provide formally proven guarantees of deadlock freedom[17-22]. Design tools supporting these rules – both constructive and analytical – have been researched[23, 24]. Deadlock, together with related problems such as livelock and starvation, need threaten us no longer – even in the most complex of parallel systems.
### 2.6 Structured Plug and Play
Consider the circuits of Figure 6. They are similar to the previous circuits, but contain components other than those from our base catalogue – they use components we have just constructed. Here is the CSP:
```
Fibonacci (out)
  = Prefix (1, d, a) || Prefix (0, a, b) ||
    Delta (b, out, c) || Pairs (c, d)

Squares (out)
  = Numbers (a) || Integrate (a, b) || Pairs (b, out)

Demo (out)
  = Numbers (a) || Fibonacci (b) || Squares (c) ||
    Tabulate3 (a, b, c, out)
```
Fig. 6. Circuits of circuits
One of the powers of CSP is that its semantics obey simple composition rules. To understand the behaviour implemented by a network, we only need to know the behaviour of its nodes – not their implementations.
For example, Fibonacci is a feedback loop of four components. At this level, we can remain happily ignorant of the fact that its Pairs node contains another three. We only need to know that Pairs requires two numbers before it outputs anything and that, thereafter, it outputs once for every input. The two Prefixes initially inject two numbers (0 and 1) into the circuit. Both go into Pairs, but only one (their sum) emerges. After this, the feedback loop just contains a single circulating packet of information (successive elements of the Fibonacci sequence). The Delta process taps this circuit to provide external output.
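Here, for reference, is a JCSP rendering of Fibonacci – a direct transliteration of the CSP above, assuming Java versions of Pairs and Delta in the style of Appendix B:

```
class Fibonacci implements CSProcess {
  private ChannelOutputInt out;
  public Fibonacci (ChannelOutputInt out) {
    this.out = out;
  }
  public void run () {
    One2OneChannelInt a = new One2OneChannelInt ();
    One2OneChannelInt b = new One2OneChannelInt ();
    One2OneChannelInt c = new One2OneChannelInt ();
    One2OneChannelInt d = new One2OneChannelInt ();
    new Parallel (
      new CSProcess[] {
        new Prefix (1, d, a),
        new Prefix (0, a, b),
        new Delta (b, out, c),
        new Pairs (c, d),
      }
    ).run ();
  }
}
```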
Squares is a simple pipeline of three components. It’s best not to think of the nine processes actually involved. Clearly, for i >= 1:

```
Squares:  a⟨i⟩   = i - 1
Squares:  b⟨i⟩   = Sum {j - 1 | j = 1..i} = Sum {j | j = 0..(i-1)}
Squares:  out⟨i⟩ = Sum {j | j = 0..(i-1)} + Sum {j | j = 0..i} = i*i
```

So, Squares outputs the increasing sequence of squared natural numbers. It doesn’t deadlock because Integrate and Pairs only add buffering properties and it’s safe to connect buffers in series.
Tabulate3 is from our base catalogue. Like the others, it is cyclic. In each cycle, it inputs in parallel one number from each of its three input channels and, then, generates a line of text on its output channel consisting of a tabulated (15-wide, in this example) decimal representation of those numbers.

```
Tabulate3 (in0, in1, in2, out)
  = ((in0 ? x0 --> SKIP) || (in1 ? x1 --> SKIP) || (in2 ? x2 --> SKIP));
    print (x0, 15, out); print (x1, 15, out); println (x2, 15, out);
    Tabulate3 (in0, in1, in2, out)
```
Connecting the output channel from Demo to a text window displays three columns of numbers: the natural numbers, the Fibonacci sequence and perfect squares.
It’s easy to understand all this – thanks to the structuring. In fact, Demo consists of 27 threads of control, 19 of them permanent with the other 8 being repeatedly created and destroyed by the low-level parallel inputs and outputs in the Delta, Plus and Tabulate3 components. If we tried to understand it on those terms, however, we would get nowhere.
Please note that we are not advocating designing at such a fine level of granularity as normal practice! These are only exercises and demonstrations to build up fluency and confidence in concurrent logic. Having said that, the process management overheads for the occam Demo executables are only around 30 microseconds per output line of text (i.e. too low to see) and three milliseconds for the Java (still too low to see). And, of course, if we are using these techniques for designing real hardware[25], we will be working at much finer levels of granularity than this.
### 2.7 Coping with the Real World – Making Choices
The model we have considered so far – parallel processes communicating through dedicated (point-to-point) channels – is deterministic. If we input the same data in repeated runs, we will always receive the same results. This is true regardless of how the processes are scheduled or distributed. This provides a very stable base from which to explore the real world, which doesn’t always behave like this.
Any machine with externally operable controls that influence its internal operation, but whose internal operations will continue to run in the absence of that external control, is not deterministic in the above sense. The scheduling of that external control will make a difference. Consider a car and its driver heading for a brick wall. Depending on when the driver applies the brakes, they will end up in very different states!
CSP provides operators for internal and external choice. An external choice is when a process waits for its environment to engage in one of several events – what happens next is something the environment can determine (e.g. a driver can press the accelerator or brake pedal to make the car go faster or slower). An internal choice is when a process changes state for reasons its environment cannot determine (e.g. a self-clocked timeout or the car runs out of petrol). Note that for the combined (parallel) system of car-and-driver, the accelerating and braking become internal choices so far as the rest of the world is concerned.
occam provides a constructor (ALT) that lets a process wait for one of many events. These events are restricted to channel input, timeouts and SKIP (a null event that has always happened). We can also set pre-conditions – run-time tests on internal state – that mask whether a listed event should be included in any particular execution of the ALT. This allows very flexible internal choice within a component as to whether it is prepared to accept an external communication\(^4\). The JavaPP libraries provide an exact analogue (Alternative.select) for these choice mechanisms.
If several events are pending at an ALT, an internal choice is normally made between them. However, occam allows a PRI ALT which resolves the choice between pending events in order of their listing. This returns control of the operation to the environment, since the reaction of the PRI ALTing process to multiple communications is now predictable. This control is crucial for the provision of real-time guarantees in multi-process systems and for the design of hardware. Recently, extensions to CSP to provide a formal treatment of these mechanisms have been made[26, 27].
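For example, here is a sketch of a prioritised two-way merge using the JCSP Alternative, in the style of the Replace listing in Appendix B (the class and channel names are ours):

```
import jcsp.lang.*;
import jcsp.lang.ints.*;

// Copies numbers from either input to out, giving priority to hi when
// both are ready (priSelect resolves pending guards in listed order).
class PriMerge implements CSProcess {
  private AltingChannelInputInt hi, lo;
  private ChannelOutputInt out;
  public PriMerge (AltingChannelInputInt hi, AltingChannelInputInt lo,
                   ChannelOutputInt out) {
    this.hi = hi;
    this.lo = lo;
    this.out = out;
  }
  public void run () {
    Alternative alt = new Alternative (new Guard[] {hi, lo});
    final int HI = 0, LO = 1;  // guard indices (prioritised order)
    while (true) {
      switch (alt.priSelect ()) {
        case HI: out.write (hi.read ()); break;
        case LO: out.write (lo.read ()); break;
      }
    }
  }
}
```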

\(^4\) This is in contrast to monitors, whose methods cannot refuse an external call when they are unlocked and have to wait on condition variables should their state prevent them from servicing the call. The close coupling necessary between sibling monitor methods to undo the resulting mess is not WYSIWYG[9].
Figure 7 shows two simple components with this kind of control. Replace listens for incoming data on its in and inject lines. Most of the time, data arrives from in and is immediately copied to its out line. Occasionally, a signal from the inject line occurs. When this happens, the signal is copied out but, at the same time, the next input from in is waited for and discarded. In case both inject and in communications are on offer, priority is given to the (less frequently occurring) inject:
```
Replace (in, inject, out)
  = (inject ? signal --> ((in ? x --> SKIP) || (out ! signal --> SKIP))
     [PRI]
     in ? x --> out ! x --> SKIP
    );
    Replace (in, inject, out)
```
Replace is something that can be spliced into any channel. If we don’t use the inject line, all it does is add a one-place buffer to the circuit. If we send something down the inject line, it gets injected into the circuit – replacing the next piece of data that would have travelled through that channel.
Fig. 8. Two controllable processes
Figure 8 shows RNumbers and RIntegrate, which are just Numbers and Integrate with an added Replace component. We now have components that are resettable by their environments. RNumbers can be reset at any time to continue its output sequence from any chosen value. RIntegrate can have its internal running sum redefined.
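One plausible wiring for RNumbers – consistent with the description of Figure 8, though the exact channel placement is our assumption – splices the Replace into the feedback loop of Numbers:

```
class RNumbers implements CSProcess {
  private ChannelOutputInt out;
  private AltingChannelInputInt reset;
  public RNumbers (ChannelOutputInt out, AltingChannelInputInt reset) {
    this.out = out;
    this.reset = reset;
  }
  public void run () {
    One2OneChannelInt a = new One2OneChannelInt ();
    One2OneChannelInt b = new One2OneChannelInt ();
    One2OneChannelInt c = new One2OneChannelInt ();
    One2OneChannelInt d = new One2OneChannelInt ();
    new Parallel (
      new CSProcess[] {
        new Prefix (0, c, a),
        new Delta (a, out, b),
        new Succ (b, d),
        new Replace (d, reset, c),  // a reset value replaces the circulating one
      }
    ).run ();
  }
}
```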
Like Replace, Scale (Figure 7) normally copies numbers straight through, but scales them by its factor m. An inject signal resets the scale factor:
```
Scale (m, in, inject, out)
  = (inject ? m --> SKIP
     [PRI]
     in ? x --> out ! m*x --> SKIP
    );
    Scale (m, in, inject, out)
```
Figure 9 shows RPairs, which is Pairs with the Scale control component added. If we send just +1 or -1 down the reset line of RPairs, we control whether it’s adding or subtracting successive pairs of inputs. When it’s subtracting, its behaviour changes to that of a differentiator – in the sense that it undoes the effect of Integrate.
**Fig. 9.** Sometimes Pairs, sometimes Differentiate
This allows a nice control demonstration. Figure 10 shows a circuit whose core is a resettable version of the Squares pipeline. The Monitor process reacts to characters from the keyboard channel. Depending on its value, it outputs an appropriate signal down an appropriate reset channel:
```
Monitor (keyboard, resetN, resetI, resetP)
  = (keyboard ? ch -->
     CASE ch
       'N': resetN ! 0 --> SKIP
       'I': resetI ! 0 --> SKIP
       '+': resetP ! +1 --> SKIP
       '-': resetP ! -1 --> SKIP
    );
    Monitor (keyboard, resetN, resetI, resetP)
```
Fig. 10. A user controllable machine
When Demo2 runs and we don’t type anything, we see the inner workings of the Squares pipeline tabulated in three columns of output. Keying in an ‘N’, ‘I’, ‘+’ or ‘-’ character allows the user some control over those workings\(^5\). Note that after a ‘-’, the output from RPairs should be the same as that taken from RNumbers.
### 2.8 A Nastier Deadlock
One last exercise should be done. Modify the system so that output freezes if an ‘F’ is typed and unfreezes following the next character.
Two ‘solutions’ offer themselves and Figure 11 shows the wrong one (Demo3). This feeds the output from Tabulate3 back to a modified Monitor2 and then on to the screen. The Monitor2 process PRI ALTs between the keyboard channel and this feedback:
```
Monitor2 (keyboard, feedback, resetN, resetI, resetP, screen)
  = (keyboard ? ch -->
     CASE ch
       ... deal with 'N', 'I', '+', '-' as before
       'F': keyboard ? ch --> SKIP
     [PRI]
     feedback ? x --> screen ! x --> SKIP
    );
    Monitor2 (keyboard, feedback, resetN, resetI, resetP, screen)
```
\(^5\) In practice, we need to add another process after Tabulate3 to slow down the rate of output to around 10 lines per second. Otherwise, the user cannot properly appreciate the immediacy of control that has been obtained.
Traffic will normally be flowing along the feedback-screen route, interrupted only when Monitor2 services the keyboard. The attraction is that if an ‘F’ arrives, Monitor2 simply waits for the next character (and discards it). As a side-effect of this waiting, the screen traffic is frozen.
But if we implement this, we get some worrying behaviour. The freeze operation works fine and so, probably, do the ‘N’ and ‘I’ resets. Sometimes, however, a ‘+’ or ‘-’ reset deadlocks the whole system – the screen freezes and all further keyboard events are refused!
The problem is that one of the rules for deadlock-free design has been broken: any data-flow circuit must control the number of packets circulating! If this number rises to the number of sequential (i.e. lowest level) processes in the circuit, deadlock always results. Each node will be trying to output to its successor and refusing input from its predecessor.
The Numbers, RNumbers, Integrate, RIntegrate and Fibonacci networks all contain data-flow loops, but the number of packets concurrently in flight is kept at one.\(^6\)
In Demo3 however, packets are continually being generated within RNumbers, flowing through several paths to Monitor2 and, then, to the screen. Whenever Monitor2 feeds a reset back into the circuit, deadlock is possible – although not certain. It depends on the scheduling. RNumbers is always pressing new packets into the system, so the circuits are likely to be fairly full. If Monitor2 generates a reset when they are full, the system deadlocks. The shortest feedback loop runs from Monitor2 through RPairs and Tabulate3 and back to Monitor2 – hence, it is the ‘+’ and ‘-’ inputs from keyboard that are most likely to trigger the deadlock.
\(^6\) Initially, Fibonacci has two packets, but they combine into one before the end of their first circuit.
The design is simply fixed by removing that feedback at this level – see Demo4 in Figure 12. We have abstracted the freezing operation into its own component (and catalogued it). It’s never a good idea to try to do too many functions in one sequential process. That needlessly constrains the synchronisation freedom of the network and heightens the risk of deadlock. Note that the idea being pushed here is that, unless there are special circumstances, parallel design is safer and simpler than its serial counterpart!
Demo4 obeys another golden rule: every device should be driven from its own separate process. The keyboard and screen channels interface to separate devices and should be operated concurrently (in Demo3, both were driven from one sequential process – Monitor2). Here are the driver processes from Demo4:
```
Freeze (in, freeze, out)
  = (freeze ? x --> freeze ? x --> SKIP
     [PRI]
     in ? x --> out ! x --> SKIP
    );
    Freeze (in, freeze, out)
```
```
Monitor3 (keyboard, resetN, resetI, resetP, freeze)
  = (keyboard ? ch -->
     CASE ch
       ... deal with 'N', 'I', '+', '-' as before
       'F': freeze ! ch --> keyboard ? ch --> freeze ! ch --> SKIP
    );
    Monitor3 (keyboard, resetN, resetI, resetP, freeze)
```
### 2.9 Buffered and Asynchronous Communications
We have seen how fixed-capacity FIFO buffers can be added as active processes to CSP channels. For the occam binding, the overheads for such extra processes are negligible.
With the JavaPP libraries, the same technique may be used, but the channel objects can be directly configured to support buffered communications – which saves a couple of context switches. The user may supply objects supporting any buffering strategy for channel configuration, including normal blocking buffers, overwrite-when-full buffers, infinite buffers and black-hole buffers (channels that can be written to but not read from – useful for masking off unwanted outputs from components that, otherwise, we wish to reuse intact). However, the user had better stay aware of the semantics of the channels thus created!
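Such strategies can also still be expressed as catalogued active processes. For example, here is our own sketch of a black hole as a process:

```
// BlackHoleInt: accepts (and discards) everything offered on its input
// channel -- useful for masking off an unwanted output from a component
// that, otherwise, we wish to reuse intact.
class BlackHoleInt implements CSProcess {
  private ChannelInputInt in;
  public BlackHoleInt (ChannelInputInt in) {
    this.in = in;
  }
  public void run () {
    while (true) {
      in.read ();  // always ready, never passes anything on
    }
  }
}
```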
Asynchronous communication is commonly found in libraries supporting interprocess message-passing (such as PVM and MPI). However, the concurrency model usually supported is one for which there is only one thread of control on each processor. Asynchronous communication lets that thread of control launch an external communication and continue with its computation. At some point, that computation may need to block until that communication has completed.
These mechanisms are easy to obtain from the concurrency model we are teaching (and which we claim to be general). We don’t need anything new. Asynchronous sends are what happen when we output to a buffer (or buffered channel). If we are worried about being blocked when the buffer is full or if we need to block at some later point (should the communication still be unfinished), we can simply spawn off another process\(^7\) to do the send:
```
(out ! packet --> SKIP  |PRI|  someMoreComputation (...));
continue (...)
```
The continue process only starts when both the packet has been sent and someMoreComputation has finished. someMoreComputation and the sending of the packet proceed concurrently. We have used the priority version of the parallel operator (|PRI|, which gives priority to its left operand) to ensure that the sending process initiates the transfer before someMoreComputation is scheduled. Asynchronous receives are implemented in the same way:
```
(in ? packet --> SKIP  |PRI|  someMoreComputation (...));
continue (...)
```
### 2.10 Shared Channels
CSP channels are strictly point-to-point. occam3 introduced the notion of (securely) shared channels and channel structures. These are further extended in the KRoC occam and JavaPP libraries and are included in the teaching model.
\(^7\) The occam overheads for doing this are less than half a microsecond.
A channel structure is just a record (or object) holding two or more CSP channels. Usually, there would be just two channels – one for each direction of communication. The channel structure is used to conduct a two-way conversation between two processes. To avoid deadlock, of course, they will have to understand protocols for using the channel structure – such as who speaks first and when the conversation finishes. We call the process that opens the conversation a client and the process that listens for that call a server\(^8\).
Fig. 13. A many-many shared channel
The CSP model is extended by allowing multiple clients and servers to share the same channel (or channel structure) – see Figure 13. Sanity is preserved by ensuring that only one client and one server use the shared object at any one time. Clients wishing to use the channel queue up first on a client-queue (associated with the shared channel) – servers on a server-queue (also associated with the shared channel). A client only completes its actions on the shared channel when it gets to the front of its queue, finds a server (for which it may have to wait if business is good) and completes its transaction. A server only completes when it reaches the front of its queue, finds a client (for which it may have to wait in times of recession) and completes its transaction.
Note that shared channels – like the choice operator between multiple events – introduce scheduling dependent non-determinism. The order in which processes are granted access to the shared channel depends on the order in which they join the queues.
Shared channels provide a very efficient mechanism for a common form of choice. Any server that offers a non-discriminatory service\(^9\) to multiple clients should use a shared channel, rather than ALTing between individual channels from those clients. The shared channel has a constant-time overhead – ALTing is linear in the number of clients. However, if the server needs to discriminate between its clients (e.g. to refuse service to some, depending upon its internal state), ALTing gives us that flexibility. The mechanisms can be efficiently combined. Clients can be grouped into equal-treatment partitions, with each group clustered on its own shared channel and the server ALTing between them.
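For illustration, here is a sketch of a non-discriminatory server reading from a shared channel. We assume the library’s any-to-one channel variant (Any2OneChannelInt) for the wiring; all other names are ours:

```
// Logger: one server, many clients, one shared channel. The channel's
// client queue serialises the clients -- no ALT is needed in the server.
class Logger implements CSProcess {
  private ChannelInputInt in;  // reading end of the shared channel
  public Logger (ChannelInputInt in) {
    this.in = in;
  }
  public void run () {
    while (true) {
      int x = in.read ();  // serves whichever client reached the queue first
      System.out.println ("logged: " + x);
    }
  }
}

// Wiring sketch (Any2OneChannelInt assumed):
//
//   Any2OneChannelInt shared = new Any2OneChannelInt ();
//   new Parallel (
//     new CSProcess[] {
//       new Client (0, shared), new Client (1, shared), new Client (2, shared),
//       new Logger (shared),
//     }
//   ).run ();
```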
\(^8\) In fact, the client/server relationship is with respect to the channel structure. A process may be both a server on one interface and a client on another.
\(^9\) Examples of such servers include window managers for multiple animation processes, data loggers for recording traces from multiple components of some machine, etc.
For deadlock freedom, each server must guarantee to respond to a client call within some bounded time. During its transaction with the client, it must follow the protocols for communication defined for the channel structure and it may engage in separate client transactions with other servers. A client may open a transaction at any time but may not interleave its communications with the server with any other synchronisation (e.g. with another server). These rules have been formalised as CSP specifications [21]. Client-server networks may have plenty of data-flow feedback but, so long as no cycle of client-server relations exists, [21] gives formal proof that the system is deadlock, livelock and starvation free.
Shared channel structures may be stretched across distributed-memory (e.g. networked) multiprocessors [15]. Channels may carry all kinds of object – including channels and processes themselves. A shared channel is an excellent means for a client and server to find each other, pass over a private channel and communicate independently of the shared one. Processes will drag pre-attached channels with them as they are moved and can have local channels dynamically (and temporarily) attached when they arrive. See David May’s work on Icarus [30, 31] for a consistent, simple and practical realisation of this model for distributed and mobile computing.
## 3 Events and Shared Memory
Shared memory concurrency is often described as being ‘easier’ than message passing. But great care must be taken to synchronise concurrent access to shared data, else we will be plagued with race hazards and our systems will be useless. CSP primitives provide a sharp set of tools for exercising this control.
### 3.1 Symmetric Multi-Processing (SMP)
The private memory/algorithm principles of the underlying model – and the security guarantees that go with them – are a powerful way of programming shared-memory multiprocessors. Processes can be automatically and dynamically scheduled between available processors (one object code fits all). So long as there is an excess of (runnable) processes over processors and the scheduling overheads are sufficiently low, high multiprocessor efficiency can be achieved – with guaranteed freedom from race hazards. With the design methods we have been describing, it’s very easy to generate lots of processes with most of them runnable most of the time.
### 3.2 Token Passing and Dynamic CREW
Taking advantage of shared memory to communicate between processes is an extension to this model and must be synchronised. The shared data does not belong to any of the sharing processes, but must be globally visible to them – either on the stack (for occam) or heap (for Java).
The JavaPP channels in previous examples were only used to send data values between processes — but they can also be used to send objects. This steps outside the automatic guarantees against race hazard since, unconstrained, it allows parallel access to the same data. One common and useful constraint is only to send immutable objects. Another design pattern treats the sent object as a token conferring permission to use it — the sending process losing the token as a side-effect of the communication. The trick is to ensure that only one copy of the token ever exists for each sharable object.
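Here is a minimal sketch of that token-passing discipline, using the object-carrying channels of Appendix B; the SharedData class and its update method are hypothetical stand-ins for the application’s data:

```
// The token is the (sole) reference to a SharedData object. Receiving it
// confers permission to mutate; sending it passes that permission on.
class TokenWorker implements CSProcess {
  private ChannelInput in;    // object-carrying channels (see Appendix B)
  private ChannelOutput out;
  public TokenWorker (ChannelInput in, ChannelOutput out) {
    this.in = in;
    this.out = out;
  }
  public void run () {
    while (true) {
      SharedData token = (SharedData) in.read ();  // acquire sole ownership
      token.update ();                             // safe: no other holder exists
      out.write (token);                           // pass the token on ...
      token = null;                                // ... and drop our reference
    }
  }
}
```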
Dynamic CREW (Concurrent Read Exclusive Write) operations are also possible with shared memory. Shared channels give us an efficient, elegant and easily provable way to construct an active guardian process with which application processes synchronise to effect CREW access to the shared data. Guarantees against starvation of writers by readers — and vice-versa — are made. Details will appear in a later report (available from [32]).
### 3.3 Structured Barrier Synchronisation and SPMD
Point-to-point channels are just a specialised form of the general CSP multiprocess synchronising event. The CSP parallel operator binds processes together with events. When one process synchronises on an event, all processes registered for that event must synchronise on it before that first process may continue. Events give us structured multiway barrier synchronisation[29].

We can have many event barriers in a system, with different (and not necessarily disjoint) subsets of processes registered for each barrier. Figure 14 shows the execution traces for three processes (P, M and D) with time flowing horizontally. They do not all progress at the same – or even constant – speed. From time to time, the faster ones will have to wait for their slower partners to reach an agreed barrier before all of them can proceed. We can wrap up the system in typical SPMD form as:
```
|| i = 0 FOR 3
   S (i, ..., b0, b1, b2)
```
where b0, b1 and b2 are events. The replicated parallel operator runs 3 instances of S in parallel (with i taking the values 0, 1 and 2 respectively in the different instances). The S process simply switches into the required form:
```
S (i, ..., b0, b1, b2)
  = CASE i
      0 : P (..., b0, b1)
      1 : M (..., b0, b1, b2)
      2 : D (..., b1, b2)
```
and where P, M and D are registered only for the events in their parameters. The code for P has the form:
```
P (..., b0, b1)
  = someWork (...); b0 --> SKIP;
    moreWork (...); b0 --> SKIP;
    lastBitOfWork (...); b1 --> SKIP;
    P (..., b0, b1)
```
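KRoC occam provides EVENTs directly (see Appendix A). Where no event primitive is to hand, the same multiway synchronisation can be sketched with ordinary channels – the following barrier-as-a-process is our own illustration, not a library facility:

```
// Barrier as an active process: every worker signals arrival on its own
// channel; only when all have arrived are they all released.
class BarrierProcess implements CSProcess {
  private ChannelInputInt[] arrive;
  private ChannelOutputInt[] release;
  public BarrierProcess (ChannelInputInt[] arrive, ChannelOutputInt[] release) {
    this.arrive = arrive;
    this.release = release;
  }
  public void run () {
    while (true) {
      for (int i = 0; i < arrive.length; i++) arrive[i].read ();      // all arrive
      for (int i = 0; i < release.length; i++) release[i].write (0);  // all go
    }
  }
}

// Worker side, per synchronisation:
//   arrive.write (0);  // "I am here"
//   release.read ();   // wait until everyone is
```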
### 3.4 Non-Blocking Barrier Synchronisation
In the same way that asynchronous communications can be expressed (section 2.9), we can also achieve the somewhat contradictory sounding, but potentially useful, non-blocking barrier synchronisation.
In terms of serial programming, this is a two-phase commitment to the barrier. The first phase declares that we have done everything we need to do this side of the barrier, but does not block us. We can then continue for a while, doing things that do not disturb what we have set up for our partners in the barrier and do not need whatever it is that they have to set. When we need their work, we enter the second phase of our synchronisation on the barrier. This blocks us only if there is one, or more, of our partners who has not reached the first phase of its synchronisation. With luck, this window on the barrier will enable most processes most of the time to pass through without blocking:
```
doOurWorkNeededByOthers (...);
barrier.firstPhase ();
privateWork (...);
barrier.secondPhase ();
useSharedResourcesProtectedByTheBarrier (...)
```
With our lightweight CSP processes, we do not need these special phases to get the same effect:
```
doOurWorkNeededByOthers (...);
(barrier --> SKIP  |PRI|  privateWork (...));
useSharedResourcesProtectedByTheBarrier (...)
```
The explanation as to why this works is just the same as for the asynchronous sends and receives.
### 3.5 Bucket Synchronisation
Although CSP allows choice over general events, the occam and Java bindings do not. The reasons are practical – a concern for run-time overheads\(^\text{10}\). So, synchronising on an event commits a process to wait until everyone registered for the event has synchronised. These multi-way events, therefore, do not introduce non-determinism into a system and provide a stable platform for much scientific and engineering modelling.
Buckets[15] provide a non-deterministic version of events that is useful when the system being modelled is irregular and dynamic (e.g. motor vehicle traffic[33]). Buckets have just two operations: jump and kick. There is no limit to the number of processes that can jump into a bucket – where they all block. Usually, there will only be one process with responsibility for kicking over the bucket. This can be done at any time of its own (internal) choosing – hence the non-determinism. The result of kicking over a bucket is the unblocking of all the processes that had jumped into it\(^\text{11}\).
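A bucket can be realised in Java as a passive object with carefully designed synchronisation. The following monitor-based sketch is our own, not a listing from the libraries described here:

```
// Bucket: any number of processes may jump (and block); kick releases
// them all. The generation counter guards against spurious wake-ups and
// ensures a kick only releases processes that had already jumped.
class Bucket {
  private int generation = 0;
  public synchronized void jump () {
    int myGeneration = generation;
    while (myGeneration == generation) {
      try {
        wait ();
      } catch (InterruptedException e) {
        // ignore and re-test the condition
      }
    }
  }
  public synchronized void kick () {
    generation++;
    notifyAll ();  // unblock every process that had jumped in
  }
}
```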
## 4 Conclusions
A simple model for parallel computing has been presented that is easy to learn, teach and use. Based upon the mathematically sound framework of Hoare’s CSP, it has a compositional semantics that corresponds well with our intuition about how the world is constructed. The basic model encompasses object-oriented design with active processes (i.e. objects whose methods are exclusively under their own thread of control) communicating via passive, but synchronising, wires. Systems can be composed through natural layers of communicating components so that an understanding of each layer does not depend on an understanding of the inner ones. In this way, systems with arbitrarily complex behaviour can be safely constructed – free from race hazard, deadlock, livelock and process starvation.
A small extension to the model addresses fundamental issues and paradigms for shared memory concurrency (such as token passing, CREW dynamics and bulk synchronisation). We can explore with equal fluency serial, message-passing and shared-memory logic and strike whatever balance between them is appropriate for the problem under study. Applications include hardware design (e.g. FPGAs and ASICs), real-time control systems, animation, GUIs, regular and irregular modelling, distributed and mobile computing.
occam and Java bindings for the model are available to support practical work on commodity PCs and workstations. Currently, the occam bindings are the fastest (context-switch times under 300 nanoseconds), lightest (in terms of memory demands), most secure (in terms of guaranteed thread safety) and quickest to learn. But Java has the libraries (e.g. for GUIs and graphics) and will get faster. Java thread safety, in this context, depends on following the CSP design patterns – and these are easy to acquire\(^\text{12}\).

\(^\text{10}\) Synchronising on an event in occam has a unit time overhead, regardless of the number of processes registered. This includes being the last process to synchronise, when all blocked processes are released. These overheads are well below a microsecond for modern microprocessors.

\(^\text{11}\) As for events, the jump and kick operations have constant time overhead, regardless of the number of processes involved. The bucket overheads are slightly lower than those for events.
The JavaPP JCSP library[11] also includes an extension to the Java AWT package that drops channel interfaces on all GUI components\(^\text{13}\). Each item (e.g. a Button) is a process with a configure and action channel interface. These are connected to separate internal handler processes. To change the text or colour of a Button, an application process outputs to its configure channel. If someone presses the Button, it outputs down its action channel to an application process (which can accept or refuse the communication as it chooses). Example demonstrations of the use of this package may be found at [11]. Whether GUI programming through the process-channel design pattern is simpler than the listener-callback pattern offered by the underlying AWT, we leave for the interested reader to experiment and decide.
All the primitives described in this paper are available for KRoC occam and Java. Multiprocessor versions of the KRoC kernel targeting NoWs and SMPs[12] will be available later this year. SMP versions of the JCSP[11] and CJT[12] libraries are automatic if your JVM supports SMP threads. Hooks are provided in the channel libraries to allow user-defined network drivers to be installed. Research is continuing on portable/faster kernels and language/tool design for enforcing higher-level aspects of CSP design patterns (e.g. for shared-memory safety and deadlock freedom) that currently rely on self-discipline.
Finally, we stress that this is undergraduate material. The concepts are mature and fundamental – not advanced – and the earlier they are introduced the better. For developing fluency in concurrent design and implementation, no special hardware is needed. Students can graduate to real parallel systems once they have mastered this fluency. The CSP model is neutral with respect to parallel architecture so that coping with a change in language or paradigm is straightforward. However, even for uni-processor applications, the ability to do safe and lightweight multithreading is becoming crucial both to improve response times and simplify their design.
The experience at Kent is that students absorb these ideas very quickly and become very creative\(^\text{14}\). Now that they can apply them in the context of Java, they are smiling indeed.
\(^\text{12}\) Java active objects (processes) do not invoke each other’s methods, but communicate only through shared passive objects with carefully designed synchronisation properties (e.g. channels and events). Shared use of user-defined passive objects will be automatically thread-safe so long as the usage patterns outlined in Section 3 are kept – their methods should not be synchronized (in the sense of Java monitors).
\(^\text{13}\) We believe that the new Swing GUI libraries from Sun (that will replace the AWT) can also be extended through a channel interface for secure use in parallel designs – despite the warnings concerning the use of Swing and multithreading[34].
\(^\text{14}\) The JCSP libraries used in Appendix B were produced by Paul Austin, an undergraduate student at Kent.
## References
## Appendix A: occam Executables

Space only permits a sample of the examples to be shown here. This first group is from the ‘Legoland’ catalogue (Section 2.3):
```
PROC Id (CHAN OF INT in, out)
  WHILE TRUE
    INT x:
    SEQ
      in ? x
      out ! x
:

PROC Succ (CHAN OF INT in, out)
  WHILE TRUE
    INT x:
    SEQ
      in ? x
      out ! x PLUS 1
:

PROC Plus (CHAN OF INT in0, in1, out)
  WHILE TRUE
    INT x0, x1:
    SEQ
      PAR
        in0 ? x0
        in1 ? x1
      out ! x0 PLUS x1
:

PROC Prefix (VAL INT n, CHAN OF INT in, out)
  SEQ
    out ! n
    Id (in, out)
:
```
Next come four of the 'Plug and Play' examples from Sections 2.4 and 2.6:
```
PROC Numbers (CHAN OF INT out)
  CHAN OF INT a, b, c:
  PAR
    Prefix (0, c, a)
    Delta (a, out, b)
    Succ (b, c)
:

PROC Integrate (CHAN OF INT in, out)
  CHAN OF INT a, b, c:
  PAR
    Plus (in, c, a)
    Delta (a, out, b)
    Prefix (0, b, c)
:

PROC Pairs (CHAN OF INT in, out)
  CHAN OF INT a, b, c:
  PAR
    Delta (in, a, b)
    Tail (b, c)
    Plus (a, c, out)
:

PROC Squares (CHAN OF INT out)
  CHAN OF INT a, b:
  PAR
    Numbers (a)
    Integrate (a, b)
    Pairs (b, out)
:
```
Here is one of the controllers from Section 2.7:
```
PROC Replace (CHAN OF INT in, inject, out)
  WHILE TRUE
    PRI ALT
      INT x:
      inject ? x
        PAR
          INT discard:
          in ? discard
          out ! x
      INT x:
      in ? x
        out ! x
:
```
Asynchronous receive from Section 2.9:
```
SEQ
  PRI PAR
    in ? packet
    someMoreComputation (...)
  continue (...)
```
Barrier synchronisation from Section 3.3:
```
PROC P (..., EVENT b0, b1)
  ... local state declarations
  SEQ
    ... initialise local state
    WHILE TRUE
      SEQ
        someWork (...)
        synchronise.event (b0)
        moreWork (...)
        synchronise.event (b0)
        lastBitOfWork (...)
        synchronise.event (b1)
:
```
Finally, non-blocking barrier synchronisation from Section 3.4:
```
SEQ
  doOurWorkNeededByOthers (...)
  PRI PAR
    synchronise.event (barrier)
    privateWork (...)
  useSharedResourcesProtectedByTheBarrier (...)
```
## Appendix B: Java Executables
These examples use the JCSP library for processes and channels[11]. A process is an instance of a class that implements the CSProcess interface. This is similar to, but different from, the standard Runnable interface:
```
package jcsp.lang;

public interface CSProcess {
  public void run ();
}
```
For example, from the ‘Legoland’ catalogue (Section 2.3):
```
import jcsp.lang.*;       // processes and object-carrying channels
import jcsp.lang.ints.*;  // integer versions of channels

class Succ implements CSProcess {

  private ChannelInputInt in;
  private ChannelOutputInt out;

  public Succ (ChannelInputInt in, ChannelOutputInt out) {
    this.in = in;
    this.out = out;
  }

  public void run () {
    while (true) {
      int x = in.read ();
      out.write (x + 1);
    }
  }
}
```
```
class Prefix implements CSProcess {

  private int n;
  private ChannelInputInt in;
  private ChannelOutputInt out;

  public Prefix (int n, ChannelInputInt in, ChannelOutputInt out) {
    this.n = n;
    this.in = in;
    this.out = out;
  }

  public void run () {
    out.write (n);
    new Id (in, out).run ();
  }
}
```
JCSP provides a Parallel class that combines an array of CSProcesses into a CSProcess. Its execution is the parallel composition of that array. For example, here are two of the ‘Plug and Play’ examples from Sections 2.4 and 2.6:
```
class Numbers implements CSProcess {

  private ChannelOutputInt out;

  public Numbers (ChannelOutputInt out) {
    this.out = out;
  }

  public void run () {
    One2OneChannelInt a = new One2OneChannelInt ();
    One2OneChannelInt b = new One2OneChannelInt ();
    One2OneChannelInt c = new One2OneChannelInt ();
    new Parallel (
      new CSProcess[] {
        new Delta (a, out, b),
        new Succ (b, c),
        new Prefix (0, c, a),
      }
    ).run ();
  }
}
```
```
class Squares implements CSProcess {

  private ChannelOutputInt out;

  public Squares (ChannelOutputInt out) {
    this.out = out;
  }

  public void run () {
    One2OneChannelInt a = new One2OneChannelInt ();
    One2OneChannelInt b = new One2OneChannelInt ();
    new Parallel (
      new CSProcess[] {
        new Numbers (a),
        new Integrate (a, b),
        new Pairs (b, out),
      }
    ).run ();
  }
}
```
Here is one of the controllers from Section 2.7. The processes ProcessReadInt and ProcessWriteInt just read and write a single integer (into and from a public value field) and, then, terminate:
```
class Replace implements CSProcess {

  private AltingChannelInputInt in;
  private AltingChannelInputInt inject;
  private ChannelOutputInt out;

  public Replace (AltingChannelInputInt in,
                  AltingChannelInputInt inject,
                  ChannelOutputInt out) {
    this.in = in;
    this.inject = inject;
    this.out = out;
  }

  public void run () {
    Alternative alt = new Alternative (new Guard[] {inject, in});
    final int INJECT = 0, IN = 1;  // guard indices (prioritised)
    ProcessWriteInt forward = new ProcessWriteInt (out);  // a CSProcess
    ProcessReadInt discard = new ProcessReadInt (in);     // a CSProcess
    CSProcess parIO = new Parallel (new CSProcess[] {discard, forward});
    while (true) {
      switch (alt.priSelect ()) {
        case INJECT:
          forward.value = inject.read ();
          parIO.run ();
          break;
        case IN:
          out.write (in.read ());
          break;
      }
    }
  }
}
```
JCSP also has channels for sending and receiving arbitrary Objects. Here is an asynchronous receive (from Section 2.9) of an expected Packet:
```
// set up processes once (before we start looping ...)
ProcessRead readObj = new ProcessRead (in);  // a CSProcess
CSProcess someMore = new someMoreComputation (...);
CSProcess async = new PriParallel (new CSProcess[] {readObj, someMore});

while (looping) {
  async.run ();
  Packet packet = (Packet) readObj.value;
  continue (...);
}
```
Published in AI Magazine, 2016. DOI: 10.1609/aimag.v37i3.2671 (peer-reviewed version).
The Answer Set Programming Paradigm
Tomi Janhunen and Ilkka Niemelä
Helsinki Institute for Information Technology HIIT
Aalto University School of Science
Department of Computer Science
PO Box 15400, FI-00076 Aalto, Finland
**Abstract**
In this paper, we give an overview of the answer set programming paradigm, explain its strengths, and illustrate its main features in terms of examples and an application problem.
**Introduction**
Answer set programming (ASP, for short) is a declarative programming paradigm for solving search problems and their optimization variants. In ASP a search problem is modeled as a set of statements (a program) in a logic programming type of a language in such a way that the answer sets (models) of the program correspond to the solutions of the problem. The paradigm was first formulated in these terms by Marek and Truszczyński (1999) and Niemelä (1999). The ASP paradigm has its roots in knowledge representation and nonmonotonic logics research as described by Marek et al. (2011) in a historic account on the development of the paradigm. A more recent and more technical overview of ASP has been contributed by Brewka et al. (2011).
The ASP paradigm is most widely used with the formalism of logic programming under the semantics given by answer sets (Gelfond and Lifschitz 1988; 1990). The term answer sets was proposed by Gelfond and Lifschitz (1991) for sets of literals, by which programs in an extended syntax are to be interpreted where the classical negation operator and disjunctions of literals are allowed in the heads of program rules. Lifschitz’ article (2016) in this special issue gives an introduction to the notion of an answer set and the language of ASP, as well as a comparison to Prolog systems. An alternative approach to ASP has been to use directly first-order logic as the basis and extend it with inductive definitions. The details can be found in the articles by Denecker and Vennekens (2014), Denecker and Ternovska (2008), East and Truszczynski (2006), and the one by Bruynooghe et al. (2016) in this special issue.
A main reason for the increasing interest in ASP is the availability of fast software tools that makes it possible to tackle problems of practical importance. Most of the current software tools employ two steps commonly referred to as grounding and solving, reflecting the definition of answer sets for programs with variables (Lifschitz 2016). The idea is to separate concerns so that the grounding phase takes care of the evaluation of more complicated data structures and variable instantiations using logic programming and deductive database techniques, and then the solving phase focuses on search for answer sets for a much simpler type of programs by employing advanced search methods. The papers by Kaufmann et al. (2016) and by Gebser and Schaub (2016) in this special issue provide more information on the solving and grounding techniques.
There is a growing number of successful applications of ASP including molecular biology (Gebser et al. 2010a; 2010b), decision support system for space shuttle controllers (Balduccini, Gelfond, and Noguera 2006), phylogenetic inference (Erdem 2011; Koponen et al. 2015), product configuration (Soininen and Niemelä 1998; Finkel and O’Sullivan 2011) and repair of web-service work flows (Friedrich et al. 2010). Erdem et al. (2016) give an account of the applications of ASP in this special issue.
On the one hand, ASP is closely related to logic programming and Prolog and, on the other hand, to constraint programming (CP), propositional satisfiability (SAT), and linear/integer programming (LP/IP). Unlike Prolog-style logic programming, ASP is fully declarative: neither the order of rules in a program nor the order of literals in the rules matters. Moreover, Prolog systems are tailored to find proofs or answer substitutions for individual queries, whereas ASP systems find answer sets corresponding to complete solutions to a problem instance. The basic idea in ASP is very close to the paradigm of CP, SAT, or LP/IP, where problems are represented by constraints and where systems are tailored to find satisfying variable assignments corresponding to complete solutions.
However, there are significant differences. The ASP paradigm allows for a very systematic approach to problem representation through uniform encodings where the problem statement can be developed independently of the data on a particular instance. This leads to a large degree of elaboration tolerance. The ASP approach enables a structured representation of problems where more complicated constraints are composed of simpler ones using rules. On the other hand, rules enable one to encode conditions that are challenging (like representing disjunctive constraints or other basic relational operations on constraints) or not available at all (like recursive constraints) when comparing to CP or SAT.
The rest of this paper is organized as follows. We first introduce the basic ASP paradigm and discuss the main features and attractive properties of the paradigm. The use of the paradigm and its main features are then illustrated by developing ASP encodings for an application problem step by step. The application considered in this paper is about designing a locking scheme for a building so that certain safety requirements are met. Having introduced the basic paradigm, we briefly address the main ways to implement ASP, either using native answer-set solvers or translators enabling the use of solver technology from neighboring disciplines. The paper ends with a summary and a discussion of future prospects. In addition, we illustrate the potential computational hardness of our application problem by explaining its connection to the NP-complete decision problem Exact-3-SAT.
**Basic ASP Paradigm**
The conceptual model of the ASP paradigm is depicted in Figure 1. We start by explaining how to understand search problems at an abstract level and then illustrate how ASP is typically employed to solve such problems using the approach illustrated in the figure. Finally, we address a number of features and attractive properties of the paradigm.
**Problem Solving.** The ASP paradigm provides a general purpose methodology for solving search and optimization problems encountered in many real world applications. To get started, the key step is to identify and formalize the problem to be solved, i.e., to work out a problem statement. Typically this consists of clarifying what the potential solutions of the problem are like and then setting the conditions that solutions should satisfy. Solving the problem means that given the data and the instance of the problem we should find one or more solutions satisfying the given conditions (see the topmost arrow in Figure 1). For illustration, we use the task of finding a seating arrangement for a dinner as the first simple example. The respective problem statement could be read as formulated below.
**Example 1 (Seating Arrangement Problem)**
A certain group of people, say persons \( p_1, \ldots, p_n \), are invited for dinner. There are tables \( t_1, \ldots, t_k \) with the respective capacities \( c_1, \ldots, c_k \) available for seating such that \( c_1 + \cdots + c_k \geq n \). The host has some prior knowledge about the relationships of the guests: there are both friends and enemies among the invitees. This information should be taken into account when designing the arrangement. A solution to this problem is a mapping \( s(p_i) = t_j \) of persons \( p_i \) to tables \( t_j \) so that the mutual relationships are respected.
The problem statement above uses mathematical symbols to abstract the details of the problem such as the number and the identity of persons involved and the collection of tables available for seating. This reflects an important methodological feature, namely the separation of instance data from the actual problem statement. The point is that the problem can be stated without listing all details for a particular instance of the problem. In case of the seating arrangement problem, the instance data would consist of the names of invitees together with lists of tables and their capacities, and the pairs of persons who are known to be either friends or enemies. More concretely put, suppose that we have a group of 20 people: Alice, Bob, John, etc. There are four tables, seating 7, 6, 5, and 4 people, respectively. Moreover, we know that Alice likes Bob, Bob likes John and so on. Given all such pieces of information, the goal is
- to find at least one solution that fulfills the criteria set in the problem statement of Example 1, or
- to show that no solution exists.
Given what we know so far, we can expect solutions where Alice, Bob, and John are seated together at one of the four tables available. However, if we state additionally that Alice and John dislike each other, for instance, the seating problem instance under consideration has no solutions.
**ASP Encoding.** But how do we achieve the goal stated above using ASP and get the problem solved? As suggested by Figure 1, we should formalize the problem statement by writing down a (logic) program. Before we can really do this, we should have a basic understanding of syntax, also introduced in the article by Lifschitz (2016) in this issue. In ASP, programs consist of rules, i.e., statements of the form
\[
\text{head} :- \text{body}_1, \text{body}_2, \ldots, \text{body}_m.
\]
The intuitive reading of the rule above is that the head can be inferred if (and only if) the body conditions \( \text{body}_1, \text{body}_2, \ldots, \text{body}_m \) have been inferred by any other rules in the program. The conditions in the rule are either atomic statements (a.k.a. atoms) like \( \text{seat}(a, 1) \) for Alice being seated at Table 1, or count-bounded sets of atoms
\[
l \; \{ \text{atom}_1; \ldots; \text{atom}_k \} \; u
\]
where at least \( l \) but at most \( u \) atoms among \( \text{atom}_1, \ldots, \text{atom}_k \) should be inferable. The cardinality constraint above can also be expressed in terms of a counting aggregate
\[
\#\text{count}\{\text{atom}_1; \ldots; \text{atom}_k\}
\]
where appropriate bounds can be incorporated using the relation symbols \( <, \leq, >, \geq, \) and \( = \). Atoms can also be negated using the operator \( \text{not} \) for default negation. A rule with an empty body (\( m = 0 \)) stands for a fact whose head holds unconditionally. As a further special case, a rule without a head stands for a constraint whose body \( \text{body}_1, \text{body}_2, \ldots, \text{body}_m \) must not be satisfied. In this article, we do not consider extensions of rules by classical negation or disjunctions in rule heads (Gelfond and Lifschitz 1991). We are now ready to describe the typical steps of writing down a program in ASP, resulting in the encoding given as Listing 1.\textsuperscript{1}
---
1The encodings presented in this paper are directly executable using contemporary ASP grounders and solvers compatible with the ASP-core-2 language specification (Calimeri et al. 2012).
Listing 1: Encoding the Seating Problem in ASP
```prolog
1  % Instance
2  person(a). person(b). person(j). ...
3  likes(a,b). likes(b,j). ...
4  dislikes(a,j). dislikes(j,a). ...
5  tbl(1,7). tbl(2,6). tbl(3,5). tbl(4,4).
6
7  % Rules and constraints
8  1 { seat(P,T): tbl(T,_) } 1 :- person(P).
9  :- #count{ seat(P,T): person(P) } > C, tbl(T,C).
10 :- likes(P1,P2), seat(P1,T1),
11    seat(P2,T2), T1 != T2,
12    person(P1), person(P2).
13 :- dislikes(P1,P2), seat(P1,T), seat(P2,T),
14    person(P1), person(P2).
```
The first step is to describe the problem instance in terms of facts, as in lines 2–5 above. The second step concerns the actual program formalizing the problem statement. Writing down the rules is of course a creative activity, which one learns best by doing, but in ASP one can concentrate on defining the relevant concepts (relations) in terms of rules, as well as thinking about the conditions under which certain relations should hold. Note that ASP builds on the closed world assumption (CWA): the given information is treated as complete information and the problem is solved under this assumption. To understand the outcome of the formalization in Listing 1, let us give the
intuitive readings for the rules involved. The rule in line 8 stipulates that every person \( P \) must be seated at exactly one table \( T \). A few constraints follow. The capacities of tables are enforced in line 9: it is unacceptable if more than \( C \) persons are seated at a table \( T \) which seats at most \( C \) persons. Moreover, if person \( P1 \) likes person \( P2 \), they should not be seated at different tables \( T1 \) and \( T2 \). This constraint is expressed in lines 10–12. The other way around, if \( P1 \) dislikes \( P2 \), they should not be seated at the same table \( T \). The respective rule is given in lines 13–14. The rules and constraints in lines 8–14 explained so far form a uniform encoding of the seating problem, as the representation is independent of any problem instance described by facts of the type in lines 2–5.
So far, we have demonstrated the modeling philosophy of ASP in terms of a simple application. The later section on locking design provides further insights into modeling and typical design decisions made. Yet further information is available in the articles of Bruynooghe et al. (2016) and Gebser and Schaub (2016) in this special issue.
**ASP Solving.** It remains to explain how the encoding from Listing 1 solves the problem instance in practice. First, the rules of the program have to be instantiated and evaluated with respect to the present facts. This means, e.g., that the rule in line 8 yields an instance
\[
1 \{ \text{seat}(a,1); \text{seat}(a,2); \text{seat}(a,3); \text{seat}(a,4) \} 1.
\]
when \( P \) is replaced by \( a \) and \( T \) ranges over the available tables 1, 2, 3, and 4. This particular instance concerns the seating of Alice. While instantiating the rules, some evaluations also take place. For example, when handling the rule in line 9 for table 1 with capacity 7, the bound \( C \) of the constraint is substituted by the value 7. The ground program, also indicated in Figure 1, is typically generated by running a dedicated tool, i.e., a grounder, on the input. After that, the search for answer sets can be performed by invoking an answer set solver. Finally, the solution(s) of the original problem instance are obtained by extracting the relevant part(s) from the answer set(s) found. For the encoding under consideration, this means that whenever an occurrence of \( \text{seat}(P,T) \) is contained in an answer set, then person \( P \)
is supposed to be seated at table $T$. Using the notions from Example 1, we would have the required mapping $s(P) = T$ from persons to tables. If no answer set can be found, then the problem instance has no solutions. This is actually the case for the instance described by lines 2–5 in Listing 1, since it is impossible to place Alice, Bob, and John at the same table due to their relations. However, if the facts in line 4 are removed, obtaining answer sets is still feasible, the relationships of other guests permitting.
**Beyond Basic ASP.** The basic paradigm illustrated in Figure 1 solves the problem at hand by eventually finding one or more solutions to the problem, or by showing that no solution exists. If there are multiple solutions to the problem, then it may be desirable to select the best solution among the alternatives using some criterion such as price, capacity, etc. This turns the problem into an optimization problem. In ASP, objective functions for such problems can be defined in terms of optimization statements like
\[
\#\text{minimize}\{w_1 : \text{atom}_1; \ldots; w_n : \text{atom}_n\}.
\]
The statement above assigns weights $w_1, \ldots, w_n$ to atoms $\text{atom}_1, \ldots, \text{atom}_n$ respectively, and the goal is to minimize the sum of weights for atoms contained in an answer set—when evaluated over all answer sets. As regards the seating arrangement problem, the respective optimization problem could deal with obviously inconsistent settings like the one described above. Rather than satisfying all constraints resulting from the mutual relations of persons, the goal would be to satisfy as many as possible. In the preceding example, this would mean that either Alice is seated at the same table as Bob, or Bob is seated with John, but Alice and John are placed at different tables.
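To make this concrete, here is a minimal sketch (our own illustration, not an encoding from the paper; the predicate `apart/2` is hypothetical) of how the hard constraint on friends from Listing 1 could be relaxed into a penalized one:

```prolog
% Record, instead of forbidding, the cases where two friends end up
% at different tables (replaces the hard constraint in lines 10-12).
apart(P1,P2) :- likes(P1,P2), seat(P1,T1), seat(P2,T2), T1 != T2.

% Minimize the number of violated friendship constraints.
#minimize{ 1,P1,P2 : apart(P1,P2) }.
```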
Besides the optimization of solutions, there are also other reasoning modes of interest. It is sometimes interesting to see how much the solutions are alike. In cautious reasoning, the idea is to check whether a certain atom is present in all answer sets or absent from some answer set. For instance, if $\text{seat}(a,1)$ is for some reason contained in all answer sets, then Alice will be unconditionally seated at the first table and no options remain to this end. Cautious reasoning corresponds to basic query evaluation over answer sets and it can be implemented by adding a constraint to the program. In the case of our example, the constraint would read `:- seat(a,1).`, indicating that we would like to find a counter-example, i.e., an answer set not containing $\text{seat}(a,1)$. Alternatively, cautious reasoning can be implemented by solvers as a special reasoning mode while searching for answer sets. Brave reasoning is the dual of cautious reasoning: then the presence in some or absence from all answer sets is required. Again, this can be implemented by adding a constraint or as a special reasoning mode.
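As a small concrete sketch (ours, not from the paper), the two dual checks for the atom `seat(a,1)` would be encoded as follows:

```prolog
% Cautious check: if adding this constraint makes the program
% unsatisfiable, seat(a,1) is contained in every answer set.
:- seat(a,1).

% Brave check (use instead of the above): if adding this constraint
% makes the program unsatisfiable, seat(a,1) is in no answer set.
:- not seat(a,1).
```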
It is also possible to enumerate answer sets and, hence, count their number. For certain applications, the number of solutions could actually be an interesting piece of information. In product configuration (see, e.g., (Soininen and Niemelä 1998)), this could be the number of variants that a production line should be able to produce. There are also complex use cases of ASP. In incremental solving, the idea is to compute partial solutions to a problem (or show their non-existence) by calling an ASP solver several times and by extending the instance data on the fly. Various kinds of planning problems (with an increasing plan length) typically fall into this category. The latest developments even suggest multi-shot solving (Gebser et al. 2014) where solver calls are freely mixed and the ground programs used upon solver calls may evolve in more complex ways.
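For orientation, in a contemporary system such as clingo these reasoning modes are exposed as command-line options; the invocations below are our illustration (the file name `seating.lp` is hypothetical), not something prescribed by the paper:

```
clingo seating.lp 0                     # enumerate all answer sets
clingo seating.lp --enum-mode=cautious  # intersection of all answer sets
clingo seating.lp --enum-mode=brave     # union of all answer sets
clingo seating.lp --opt-mode=optN 0     # enumerate all optimal answer sets
```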
**Constraints over Infinite Domains.** Since grounding is an inherent part of the ASP workflow, the basic paradigm is based on Boolean or finite-domain variables only. However, certain applications call for variables over infinite domains such as integers and reals. For instance, there have been proposals to extend ASP rules by linear inequalities (Gebser, Ostrowski, and Schaub 2009; Liu, Janhunen, and Niemelä 2012; Mellarkod, Gelfond, and Zhang 2008) as well as difference constraints (Janhunen, Liu, and Niemelä 2011). From the modeling perspective, the goal of such extensions is to increase the expressive power of ASP suitably so that new kinds of applications become feasible. For instance, referring back to the seating problem in Listing 1, we could refine the specification for each person $P$ by introducing integer variables $e(P)$ and $l(P)$ denoting the points of time when $P$ enters and leaves the table in question. Using difference constraints, we could state a specification given as Listing 2. Intuitively, the rules in lines 1 and 2 insist that person $P$ stays at the table from 5 to 90 minutes. The constraint in lines 3–5 refines the last one from Listing 1. It is not allowed that any two persons $P_1$ and $P_2$ who dislike each other are seated at the same table at the same time. It is important to notice that when the constraint in line 1 is instantiated for Alice, the resulting constraint is `:- l(a) - e(a) < 5.` Thus, the infinity of the underlying domain is not reflected in the size of the resulting ground program. Naturally, the interpretation of $l(P)$ and $e(P)$ as integer variables must be dealt with by the implementation of such constraints.
**Application: Locking Design**
Having introduced the ASP paradigm on a general level, we now illustrate its main features in terms of an application problem where the goal is to design a locking scheme for a building. This is to be understood comprehensively, i.e., we are not just interested in locks but also anything else that can affect accessibility in a building. For simplicity, we consider a single floor. A sample floor plan of such a building is depicted in Figure 2. There are 12 rooms altogether, numbered from 1 to 12 in the figure. Given this domain, our objectives
Listing 2: Examples of difference constraints
```prolog
1 :- l(P) - e(P) < 5, person(P).
2 :- l(P) - e(P) > 90, person(P).
3 :- l(P1) - e(P2) > 0, l(P2) - e(P1) > 0,
4    dislikes(P1,P2), person(P1), person(P2),
5    seat(P1,T), seat(P2,T), tbl(T,_).
```
are as follows. First, we describe the domain in a uniform way by selecting adequate predicates for the representation of domain information. Second, we take one concrete design goal from this domain into consideration. To this end, we concentrate on the configuration of locks installed on (potential) doors between the rooms in such a way that certain accessibility criteria are met. A particular safety requirement is that the floor can be effectively evacuated in case of an emergency. The idea is to develop ASP encodings for a design problem like this and, at the same time, illuminate the basic line of thinking and typical primitives used when modeling in ASP.
Uniform Encoding. The goal is to choose predicate symbols and the respective relations that are needed to represent an instance of the application problem at hand. To abstract away the physical coordinates of the rooms, we instead represent the adjacency relation of rooms in terms of a predicate adj/2. For simplicity, we also assume that this relation captures the potential of installing doors between any adjacent rooms. The floor plan of Figure 2 can be represented by the constants 1..12 for the rooms and the following facts:
\[
\text{adj}(1,2). \quad \text{adj}(1,3). \quad \text{adj}(2,3). \quad \text{adj}(2,4). \quad \ldots \quad \text{adj}(11,12).
\]
In total, there are 21 such facts and they are sufficient for the purposes of our examples to describe the interconnections of the rooms. For space efficiency, the adjacency information is represented asymmetrically, i.e., \(\text{adj}(X,Y)\) is reported only if \(X < Y\). In addition, the rooms having exits are reported using a unary predicate \(\text{exit}/1\). For the running example in Figure 2, this is captured by the fact \(\text{exit}(5)\). Now, if the given floor plan were changed in one way or another, or a completely different floor plan were taken into consideration, this should be reflected in the facts describing the problem instance. The other rules describing the application problem are based on these two predicates, hence making the encoding uniform. As typical in ASP encodings, some subsidiary domain predicates are defined in order to make the description of the actual problem easier. Some domain rules for the locking design problem are collected in Listing 3 and explained below.
Relational Operations. The rules in lines 1–2 of Listing 3 are used to extract room information from the adjacency information by a simple projection operation. As a result, \(\text{room}(R)\) is true only for those values of \(R\) that actually appear in the adjacency information. In principle, a door between two rooms provides symmetric access from one room to another. Thus, the adjacency relation is not well-suited as such for the description of accessibility, and we form the union of the adjacency relation with its inverse using the rules in lines 3–4. The relation \(\text{pot}/2\) stands for potential access depending on instrumentation such as locks, handles, press buttons, etc.
Defaults. To illustrate the use of defaults in encodings, we have included the rules in lines 5–6 of Listing 3. The rule in line 5 defines the condition \(\text{otherexit}/0\) meaning that some other room than the room 1 has an exit. The rule in line 6 ensures that, by default, there is an exit at room 1. This is to hold unless another exit has been declared for the particular problem instance. There can be multiple exits. For instance, if there are two exits at rooms 1 and 5, this can be stated explicitly using facts \(\text{exit}(1)\) and \(\text{exit}(5)\). Adding these facts overrules the default in line 6 because \(\text{otherexit}\) can be inferred by the rule in line 5.
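Listing 3 itself did not survive in this version of the text; the following sketch is our reconstruction from the descriptions above (line numbering chosen to match the references in the text, so the exact formulation should be treated as an assumption):

```prolog
1 room(R) :- adj(R,_).            % lines 1-2: project rooms from adjacency
2 room(R) :- adj(_,R).
3 pot(R1,R2) :- adj(R1,R2).       % lines 3-4: union of adjacency and its inverse
4 pot(R1,R2) :- adj(R2,R1).
5 otherexit :- exit(R), R != 1.   % line 5: some room other than room 1 has an exit
6 exit(1) :- not otherexit.       % line 6: by default, room 1 has an exit
```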
Defining the Search Space. Typical ASP encodings include a part where the solution candidates for the problem being formalized are generated. This can be achieved by expressing a number of choices that aim at capturing the varying aspects of solutions. As regards syntax, such choices can be expressed in terms of choice rules whose heads are count-bounded sets of atoms. Bounds can also be omitted if an arbitrary choice is of interest. As explained above, the access from a room to another can be asymmetric due to physical constructions. In particular, this is true for emergency situations where persons try to leave the building as soon as possible but might have no keys to unlock any door. For simplicity, we introduce a two-argument predicate \(\text{evac}/2\) that is used to express the existence of an evacuation route from a room to another. Given adjacent rooms \(R_1\) and \(R_2\), such a design choice can be made in terms of a choice rule
\[
\{ \text{evac}(R_1,R_2) \} :- \text{pot}(R_1,R_2).
\]
The intuitive reading is that if \(\text{pot}(R_1,R_2)\) is true, then the truth value of \(\text{evac}(R_1,R_2)\) is subject to a choice. Hence, the selection of evacuation routes between rooms is formalized.
### Listing 4: ASP Encoding of the Evacuation Plan
```prolog
1 reach(R,R)   :- room(R).
2 reach(R1,R2) :- reach(R1,R3),
3                 evac(R3,R2),
4                 room(R1), pot(R3,R2).
5
6 ok(R) :- room(R), reach(R,X), exit(X).
7 :- not ok(R), room(R).
8
9 #minimize{1,R1,R2: evac(R1,R2), pot(R1,R2)}.
```
Note that the analogous normal rule `evac(R1,R2) :- pot(R1,R2).`
would falsify `evac(R1, R2)` by default if `pot(R1, R2)` were false, e.g., rooms R1 and R2 were not adjacent. Since the relation `pot/2` is symmetric, this gives rise to four different scenarios if `pot(R1, R2)` and thus also `pot(R2, R1)` is true. Evacuation in one direction is possible if either `evac(R1, R2)` or `evac(R2, R1)` holds. If they are both true, this allows for bidirectional evacuation between R1 and R2. If such an option is not considered safe, it is easy to introduce an integrity constraint to exclude such a possibility in general:
```
:- evac(R1, R2), evac(R2, R1), pot(R1, R2).
```
If both `evac(R1, R2)` and `evac(R2, R1)` are false, then there is no connection between rooms R1 and R2 in case of an emergency. It remains to ensure that there exists an overall evacuation plan, i.e., that it is possible to reach at least one exit of the building from every room.
### Recursive Definitions.
The existence of an evacuation plan is governed by constraints that concern the mutual reachability of rooms, to be formalized using a predicate `reach/2`. The first two rules of Listing 4 give a recursive definition for this predicate. Every room R is reachable from itself: the corresponding base case is given in line 1. The recursive case is formulated in lines 2–4: the reachability of R2 from R1 builds on the reachability of an intermediate room R3 from R1 and the condition that R3 can be evacuated to R2 (cf. line 3).
### Constraining Solutions.
The essential constraint on the evacuation plan is given in lines 6–7 of Listing 4. Any given room R is considered to be OK, if some exit X is reachable from it (line 6). The auxiliary predicate `ok/1` is defined in order to detect this aspect for each room. The actual constraint (line 7) excludes scenarios where some of the rooms would not be OK. Last, we want to minimize the number of evacuation connections by the objective function given in line 9. Using the encoding devised so far and an ASP solver, it is possible to check for the floor plan of Figure 2 that the minimum number of connections is 11. This is clear since there are 12 rooms in total each of which (except room 5) must be connected to some other room for the purpose of evacuation. But ASP solvers can find out more for our running example. For instance, it is possible to enumerate and count all possible evacuation plans with 11 connections. In fact, there are 22 020 such plans and further constraints can be introduced to identify the most suitable ones. It is indeed the case that the current requirements allow for very long evacuation routes through the building of Figure 2 such as
```
7 → 6 → 11 → 12 → 10 → 9 → 8 → 4 → 2 → 1 → 3 → 5
```
Given this observation, the lengths of routes seem important. Thus, we now pay special attention to the number of evacuation steps, i.e., moves from one room to another: from the perspective of each room, the number of steps needed to reach an exit ought to be limited.
### Elaboration Tolerance.
It is straightforward to modify the recursive encoding so that the number of steps is reflected. The revised encoding is presented as Listing 5. The domain for steps is first declared by the rule in line 1 where the maximum number of steps s is determined from the command line of the grounder. The base case in line 3 simply states that each room R is reachable from itself in zero steps. The main modification in the recursive case (lines 4–5) concerns counting: the number of steps S is increased by one to S+1 whenever a further step is made. However, since both S and S+1 must be members of the domain of steps, the maximum value is effectively determined by the constant s in line 1. Given the floor plan of Figure 2 and s=2, no evacuation plans can be found. By increasing s by one, solutions with 11 connections are found again and there are only 152 plans where the number of evacuation steps is at most three.
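Listing 5 is garbled in this version as well; the sketch below is reconstructed from the description above (again, line numbering follows the references in the text, and the details are our assumption):

```prolog
1 step(0..s).
2
3 reach(R,R,0) :- room(R).
4 reach(R1,R2,S+1) :- reach(R1,R3,S), evac(R3,R2),
5                     step(S), step(S+1), room(R1), pot(R3,R2).
6 ok(R) :- room(R), reach(R,X,S), exit(X), step(S).
7 :- not ok(R), room(R).
8 #minimize{1,R1,R2: evac(R1,R2), pot(R1,R2)}.
```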
In summary, we have now tackled one particular aspect of locking design, i.e., ensuring that an evacuation plan exists for a building. In reality, further requirements are imposed on evacuation plans, making the problem computationally more and more challenging. For instance, it can be shown that if we incorporate conditions which can make rooms along an evacuation route mutually exclusive, e.g., for certain security reasons, it is unlikely that we are able to find a polynomial-time algorithm for solving the problem (mathematically, the problem becomes NP-complete). This amply justifies the use of powerful search methods like ASP for tackling the problem. For readers interested in computational complexity, we sketch the justifications of computational hardness in the sidebar.
### Computing Answer Sets
So far, we have concentrated on the conceptual model of Figure 1 with an emphasis on the modeling side. As regards the actual computation of answer sets, grounding and solving are carried out by dedicated tools: grounders instantiate the program, and native answer-set solvers, such as those of the Potassco collection\textsuperscript{3}, DLV\textsuperscript{4}, and WASP\textsuperscript{5}, perform the search for answer sets.
**Translation-Based ASP.** The other constraint-based disciplines discussed in the introduction offer similar solver technology at the user's disposal for handling, in particular, the search phase. However, they cannot be used straightforwardly, as ground programs are not directly understood by such solvers, and certain kinds of transformations become indispensable. The idea of translation-based ASP is to translate (ground) logic programs into other formalisms so that a variety of solvers can be harnessed to the task of computing answer sets. Such an approach can be understood as a refinement of the search step in Figure 1. There are existing translations from ASP, e.g., to SAT (Janhunen 2004) and its extension SMT (Niemelä 2008), and to mixed integer programming (MIP) (Liu, Janhunen, and Niemelä 2012). These translations indicate the realizability of ASP in other formalisms and they have all been implemented by translators in the ASPTOOLS\textsuperscript{6} collection. They offer another way of implementing the search phase in ASP using off-the-shelf solvers as black boxes. This approach is already competitive in certain application problems and it can be seen as an effort to combine the expressive power of the modeling language offered by ASP with the high performance of existing solvers. Translations are also useful when implementing language extensions in a single target language. For instance, the idea of (Janhunen, Liu, and Niemelä 2011) is to translate programs enriched by difference constraints into difference logic altogether. The strength is that a single solver is sufficient for the search phase but, on the other hand, the original structure of constraints may be lost.
**Cross Translation.** The translations mentioned above are based on very similar technical ideas but yield representations of the ground program in completely different formats. Since the development of several translators brings about extra programming work, it would be highly desirable to integrate the variety of translators into a single tool having options for different back-end formats. This is not straightforward, however, due to the wide variety of formats under consideration. The issue is partly solved by a recent translation from ASP to SAT modulo acyclicity (Gebser, Janhunen, and Rintanen 2014) where graph-based constraints are interconnected with ordinary logical constraints (i.e., clauses). The translation can be implemented by instrumenting a ground logic program with certain additional rules and meta information formalizing the underlying recursion mechanism in terms of the acyclicity constraint. This leads to a new implementation strategy for translation-based ASP: the choice of the target formalism can be postponed until the last step of translation where the constraints are output in a particular solver format. This idea is analogous to cross compilation in the context of compiling conventional programming languages, and hence we coin the term cross translation for ASP. In the current implementation of this idea, a back-end translator transforms the instrumented program into other kinds of constraints understood by SMT, MIP, and pseudo-Boolean (PB) solvers, for instance. Interestingly, by implementing an additional acyclicity check inside a native ASP solver, the instrumented program can also be processed directly by the solver (Bomanson et al. 2015), which offers yet another approach to answer set computation.
Summary and Future Prospects
This paper provides an introduction to the ASP paradigm and explains its main features, first in general terms and then in terms of examples. We also discuss the two mainstream approaches to implementing the search for answer sets, using either native solvers or translators combined with solver technology offered by neighboring disciplines.
Towards Universal Modeling. There is a clear trend in the area of constraint-based modeling where methods and techniques are being transferred from one discipline to another. Various ideas from knowledge representation, logic programming, databases, and Boolean satisfiability served as a starting point for the ASP paradigm. But there are signs of knowledge transfer in the other direction as well. For instance, ASP solvers have been integrated into logic programming systems such as XSB (Rao et al. 1997). Advanced query evaluation mechanisms of ASP (Faber, Greco, and Leone 2007) are also relevant for deductive databases. The very idea of answer sets has been brought to the context of CP by introducing so-called bound-founded variables (Aziz, Chu, and Stuckey 2013). Quite recently, the algorithms for projected answer set enumeration have been exported for model counting in the context of SAT (Aziz et al. 2015).
We foresee that the exchange and incorporation of ideas and technologies in this way is gradually leading towards a universal approach where the user may rather freely pick the right language for expressing the constraints of his or her interest. The underlying reasoning system is then supposed to (i) take care of the required translations transparently and (ii) forward the resulting constraints to a solver
---
\textsuperscript{3}potassco.sourceforge.net/
\textsuperscript{4}www.dlvsystem.com/
\textsuperscript{5}github.com/alviano/wasp.git
\textsuperscript{6}research.ics.aalto.fi/software/asp/
architecture that can realize the search for answers. The first attempts to define a modular framework for multi-language modeling have already been made (Järvisalo et al. 2009; Lierler and Truszczynski 2014; Tasharrofi and Ternovska 2011). However, a lot of work remains to be done in order to realize the universal modeling scenario. Our experience from integrating various kinds of tools suggests that finding a universal format for the constraints of interest is one of the key issues for tool interoperability. There are existing formats, such as the DIMACS format in SAT, the Smodels format in ASP, and the FlatZinc format in CP, that can be used as starting points for designing the universal format.
Acknowledgments. The support from the Finnish Centre of Excellence in Computational Inference Research (COIN) funded by the Academy of Finland (under grant #251170) is gratefully acknowledged. The authors thank Martin Gebser, Michael Gelfond, Torsten Schaub, and Mirek Truszczynski for their comments on a preliminary draft of this article.
References
Bruynooghe, M.; Denecker, M.; and Truszczynski, M. 2016. First order logic with inductive definitions for model-based problem solving. AI Magazine (this issue).
Erdem, E.; Gelfond, M.; and Leone, N. 2016. Applications of ASP. AI Magazine (this issue).
Koponen, L.; Oikarinen, E.; Janhunen, T.; and Säilä, L. 2015. Optimizing phylogenetic supertrees using answer set programming. Theory and Practice of Logic Programming 15(4–5).
Sidebar: Locking Design Can Be Computationally Challenging
It is not surprising that finding a locking scheme satisfying given conditions can become computationally challenging when more involved conditions need to be satisfied. Here we consider the problem of finding a locking scheme that allows an evacuation plan such that for each room there is exactly one evacuation direction and the evacuation routes respect a given set of room conflicts, i.e., a set of pairs of rooms \((R_{i,1}, R_{i,2})\) such that when following the evacuation routes if you enter room \(R_{i,1}\), then you cannot enter room \(R_{i,2}\). We show that this locking design problem is NP-complete indicating that it is unlikely that a polynomial time algorithm for solving this problem can be found. See, for example, (Papadimitriou 1994) for an introduction to computational complexity and the required concepts used below.
Technically, the NP-completeness of a problem can be shown by establishing a reduction computable in polynomial time from a known NP-complete problem to the problem and showing that it can be checked in polynomial time that a potential solution satisfies the required conditions for the problem. As such a known NP-complete problem we use the Exact-3-SAT problem where we are given a conjunction of 3-literal clauses and the problem is to find a truth assignment that satisfies exactly one literal in each of the clauses.
Reduction from Exact-3-SAT. Any given 3-SAT instance \(C_1 \land \ldots \land C_n\) can be transformed into a floor plan illustrated in Figure 3. For each 3-literal clause \( C_i = l_{i,1} \lor l_{i,2} \lor l_{i,3} \), we introduce a corridor \( C_i \) connected to rooms \( R_{i,1}, R_{i,2} \), and \( R_{i,3} \) that are connected to corridor \( C_{i+1} \). Moreover, rooms \( R_{i,1}, R_{i,2} \), and \( R_{i,3} \) do not have doors in between. The (only) exit is located next to corridor \( C_{n+1} \), which means that all corridors and rooms must eventually be evacuated through it. Moreover, each room \( R_{i,j} \) is labeled by the respective literal \( l_{i,j} \), the idea being that \( l_{i,j} \) is satisfied if \( C_i \) is evacuated via the room \( R_{i,j} \). Consequently, if there are two rooms labeled by complementary literals (i.e., a Boolean variable \(x\) and its negation \(\neg x\)), then those rooms are in conflict. This means that evacuation routes involving any pair of conflicting rooms are not feasible. It is also easy to see that the floor plan in Figure 3 and the associated set of conflicts can be computed in polynomial time.
It can be shown that a 3-SAT instance \(C_1 \land \ldots \land C_n\) has
a satisfying truth assignment such that each clause has exactly one literal satisfied if and only if for the corresponding floor plan there is a locking scheme that allows an evacuation plan such that (i) for each room there is exactly one evacuation direction and (ii) the evacuation routes respect the set of room conflicts arising from the complementary literals. The key observation is that for the corresponding floor plan evacuation is possible only if there is a route from $C_1$ to $C_{n+1}$ such that for each $i = 1, \ldots, n$ the route visits exactly one of the rooms $R_{i,1}$, $R_{i,2}$, and $R_{i,3}$ and all room conflicts are respected. A satisfying truth assignment such that each clause has exactly one literal satisfied gives directly such a route and if such a route is available, it gives directly an appropriate truth assignment where literals corresponding to the visited rooms in the route are satisfied.
Moreover, it is clear that given a locking scheme with exactly one evacuation direction for each room, it can be checked in polynomial time that evacuation is possible and that all room conflicts are respected.
Getting started: performing basic operations on Beagle2
- Basics about the system
- Basics about programming environment
- Modules and Programming Environment (PrgEnv)
- How to work on the filesystem
- Description of the filesystem
- HIPAA
- Lustre
- Useful commands on lustre
- Striping
- Useful commands for striping
- How to move data to and from Beagle
- How to submit jobs
- Projects
- Basics about job submission on Beagle2
- Job Submission Best Practices
- Batch jobs
- Commands for submitting and inquiring about jobs
- PBS (batch) scripts
- Aprun
- Memory usage
- Running Swift on Beagle2
- Additional resources:
- In case you need help/support
Note: All policies and approaches are subject to change. We will do our best to keep users informed of such changes, but it is not always possible to do so.
Basics about programming environment
The operating system on Beagle2 is the native Cray Linux Environment (CLE).
On login nodes: CLE is very similar to a conventional Linux environment.
On compute nodes, CLE is available as:
- CLE Static, which only allows the use of statically linked software; it is the basic OS used for large simulations in the "Extreme Scalability Mode" (ESM)
- CLE with Dynamic Shared Objects and Libraries (DSL) — see How to develop/port programs for/to Beagle
The `xtnodestat` command shows:
- The current configuration of Beagle2's nodes: which blades are compute and which are service, and where they are located in the machine.
- Information about the current workload of the machine.
Type `man xtnodestat` for more details. Please note that nodes shown as free by `xtnodestat` are not always available for your use.
Modules and Programming Environment (PrgEnv)
*Programming environments* support the creation, modification, execution and debugging of programs. The programming environments available on Beagle2 are the Cray programming environment and the GNU programming environment. The programming environment is managed by the `module` command.
When working with the Cray Linux Environment, you will usually have to load a "module", see Environment User’s Guide
**Module** is a "package" on a Cray system that enables you to dynamically modify the user environment by installing or uninstalling "modulefiles". Module contains commands to configure the shell environment for a particular compiler or library. It allows multiple versions of software to be installed simultaneously; the user can choose which version to use while compiling code or running their jobs.
The **default programming environment** on Beagle2 is PrgEnv-cray. If you want to switch to PrgEnv-gnu:
`module swap PrgEnv-cray PrgEnv-gnu`
The `module` command provides a number of capabilities to the user, including:
- `module load` : load a module
- `module unload` : unload a module
- `module swap` : unload one modulefile and load another (`module switch` produces the same effect)
- `module list` : list the modulefiles that are currently loaded
- `module avail` : determine which modulefiles can be loaded; lists all available modules on the system
- `module use dir` : prepend directory `dir` to the `MODULEPATH` environment variable, for when you want to add a directory to the list where the `module` command looks for new modules
- `module use --append dir` : append directory `dir` to `MODULEPATH`
- `module unuse dir` : remove directory `dir` from the `MODULEPATH` environment variable
Note: in situations where a new compiler has to be used, `module swap` might be the more appropriate strategy.
The modules that a user has loaded are persistent as long as you're logged in.
To add modules permanently to your environment, you can add module commands to a file in your home directory called `.modules`. For example, if you want to always use the GNU programming environment you would add:

```
ams@login1:~> cat ~/.modules
module unload PrgEnv-cray
module load PrgEnv-gnu
```
How to work on the filesystem
Description of the filesystem
Beagle2 currently mounts the following filesystems:
/home : CI home directories (read-only on compute nodes; will soon be removed)
- Reliable for small storage of data like source code, shell scripts, etc.
- Slow; it is not tuned for high-performance parallel jobs.
- Should not be used for calculations on Beagle2!
- 10 GB quotas, and they are enforced!
- Referenced by the environment variable $HOME
/lustre/beagle2 : local Lustre filesystem (this is where batch jobs should do most of their I/O)
- It is a parallel distributed filesystem.
- Fast; high-performance.
- Scratch filesystem. NO BACKUP.
- Files in Lustre are subject to purging. It is the users' responsibility to protect themselves from data loss!
- Referenced by the environment variable $LUSTREDIR
- 450 TB of usable space
- While there are currently no restrictions in terms of usage and capacity, these conditions will likely change.
- Allows users to control the striping parameters when storing data on the filesystem. Tuning these parameters correctly can lead to better computation performance; see below.
/soft : local Cray software repository (read-only)
/ufs : internal filesystem for ALPS scheduler (read-write)
/tmp, /var, /opt, /dev and so on are in general read-only from any node, and usually even more restricted on the compute nodes.
NOTE: Home directories are not mounted on the compute nodes (for performance reasons), so you'll always want to be working out of the Lustre scratch filesystem (/lustre/beagle2/<your_user_name>). Make sure to copy everything you're working on out of your home directory to your Lustre directory, and work out of that Lustre directory whenever you're on Beagle2.
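The striping discussion promised above is not included in this excerpt. For reference, these are the standard Lustre commands for inspecting and setting striping (generic Lustre usage, not Beagle2-specific recommendations; the directory name is a placeholder):

```
# Show the striping of a file or directory
lfs getstripe /lustre/beagle2/<your_user_name>/mydir

# Stripe new files created in mydir across 4 OSTs with a 1 MB stripe size
lfs setstripe -c 4 -S 1m /lustre/beagle2/<your_user_name>/mydir
```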
Research and HIPAA Privacy Protections
HIPAA's Regulatory Scope
HIPAA’s protections focus on “individually identifiable health information,” which HIPAA defines as information in “any form or medium” that “relates to the past, present, or future physical or mental health or condition of an individual; the provision of healthcare to an individual; or the past, present, or future payment for the provision of health care to an individual” (Security and Privacy 2013).
HIPAA’s protections reach only a subset of individually identifiable health information -- formally called protected health information or simply “PHI” -- created in or by what HIPAA calls covered entities. Covered entities include individual healthcare providers, healthcare provider organizations, health plans, and health information clearinghouses that engage in electronic healthcare transactions (see Health and Human Services Covered Entity Decision Charts). HIPAA’s protections for PHI extend to non-U.S. citizens’ data as well.
Some identifiable health information used for research originates outside of covered entities, and so may not be covered by HIPAA. However, you must check with your organization’s privacy authorities before assuming your situation falls outside HIPAA’s scope.
What Kinds of Users and Uses Are Covered?
HIPAA regulations set requirements for use and disclosure of PHI by covered entities, and by extension on all members of a covered entity’s workforce that have contact with PHI. HIPAA’s data protection requirements also apply “in the same manner” to business associates (and by extension to the workforce of such business associates) that perform functions using PHI on a covered entity’s behalf.
Researchers may be part of the workforce of a covered entity, or may be covered entities themselves if they are also healthcare providers. If so, they are directly affected by the HIPAA’s research rules. Researchers who meet neither of these conditions are still indirectly affected by HIPAA rules if a covered entity is the source of their data and those data meet the definition of PHI.
HIPAA’s rules on use and disclosure are generally “purpose-based” -- that is, the intended use sets the rules more than the type of data itself. The research rules discussed here are different than those for, say, treatment or treatment-related payments (relatively liberal), or for marketing or fundraising (relatively strict). A few types of data, such as psychotherapy notes do receive special protection under HIPAA. State laws also often have many categories of data with special protections, with which you should be familiar (or be in contact with an organizational official who has that knowledge).
What Constitutes "Research"?
Like the Common Rule, HIPAA defines research as a “systematic investigation, including research development, testing, and evaluation, designed to develop and contribute to generalizable knowledge” (Protection of Human Subjects 2009; Security and Privacy 2013). Note that some kinds of investigative activities that use patient data are excluded in this definition. For example:
1. Quality assessment and improvement, including outcomes evaluation and development of clinical guidelines or protocols, fall under the category of healthcare operations under HIPAA -- provided the primary aim is not obtaining generalizable knowledge.
2. Activities that aim primarily for generalizable knowledge of population health can fall into the category of public health activity under HIPAA.
The regulations are complex. So, as with the covered entity status, a determination by an organization’s IRB, designated privacy official(s), or legal counsel is usually required to assure that an activity is “not research” and therefore subject to different HIPAA rules.
Who Enforces the HIPAA Research Protections?
A covered entity may choose to rely on an IRB to assess compliance with both the FDA and Common Rule requirements and HIPAA research requirements. Alternatively, HIPAA provides that covered entities may create a Privacy Board to handle some research-related issues, notably determinations about eligibility for waivers, alterations, and exemptions from authorization processes. A covered entity may also leave some decisions about compliance with the research provisions of HIPAA to its designated privacy officer. It is critical that you understand the allocation of responsibilities at your organization.
Research subjects, like patients generally, have recourse to both your organization’s authorities and to federal and state agencies in the event they wish to file complaints about or have questions regarding an organization’s protective efforts.
As with any other planned activity related to protected health information, research must be mentioned in a privacy notice that HIPAA requires be provided by covered entities to their patients/customers. The privacy notice must include the ways in which data subjects may register complaints and report problems, either locally or with federal authorities. Every researcher should be familiar with their organization’s privacy notice, particularly the persons or departments it identifies as enforcement authorities for the organization.
HIPAA Research-Related Rules
If the data in question meet the definition of PHI and are being used for purposes that fall within HIPAA’s definition of research, HIPAA generally requires explicit written authorization (consent) from the data subject for research uses.
However, HIPAA allows for research-related access to individuals’ identifiable health data without authorization under certain circumstances:
1. The research involves only minimal risk.
2. The research is used solely for activities preparatory to research.
3. Only deceased individuals’ information is used.
4. It is “grandfathered” research where all legal permissions were in place before HIPAA took effect.
Data that do not identify individuals can be used for research without specific authorization if:
1. Only fully de-identified data are used.
2. A “limited data set” is used, under an approved “data use agreement.”
Each of these conditions is described in the sections below.
Waivers or Alterations of the Authorization Requirement Due to Minimal Risk
An organization’s IRB or Privacy Board (and in some organizations a designated privacy official) may determine that a waiver or alteration of the authorization requirement is appropriate. The conditions are modeled on the criteria for a waiver of informed consent in the Common Rule.
Use or disclosure of the PHI must involve no more than minimal risk to the privacy of the research subjects, based on the presence of the following elements:
- An adequate plan to protect any data identifiers from improper use and disclosure.
- An adequate plan to destroy data identifiers at the earliest opportunity consistent with conduct of the research (unless there is a health or research justification for retaining the identifiers, or such retention is otherwise required by law).
- Adequate written assurances that the PHI will not be reused or disclosed to any other individual or entity, except as required by law for authorized oversight of the research project, or for other research for which the use or disclosure of PHI would be permitted by HIPAA.
- The research could not practicably be conducted without access to and use of the PHI.
- The research could not practicably be conducted without the waiver or alteration to the authorization.
More about what counts as a data identifier is provided in the sections below on de-identified data and limited data sets.
Activities Preparatory to Research; Decedents’ Information Exceptions
HIPAA provides for two more exceptions to the authorization requirement for identifiable data:
- Where the PHI will be used solely for reviews preparatory to research (for example, for protocol development or identifying potential subjects) and will not leave the covered entity.
- Where the PHI refers solely to deceased individuals (the covered entity may ask for documentation of death of all data subjects).
In each case, the researcher must make a written or oral representation to the covered entity’s designated officials that such access is necessary for the research purposes -- someone from the IRB, the Privacy Board, or a privacy officer / designee -- who would then determine the appropriateness of the request.
Grandfathered Research
If all informed consents and other legal permissions required at the time were in place before HIPAA took effect (April 2003 in most cases), and have not changed since, a new HIPAA authorization is not required even for identified data. Obviously, this is no longer a commonly used pathway to bypass authorizations.
De-identified Data
A researcher may use fully de-identified health data without any authorization from individual data subjects. As the name implies, de-identified information must have all direct and indirect identifiers removed, to eliminate (or at least make highly improbable) re-identification using statistical techniques. De-identified information is no longer considered PHI, because by definition it is no longer individually identifiable.
HHS issued its Guidance Regarding Methods for De-identification of Protected Health Information in 2012. This guidance provides a detailed description of alternative methods, and should be considered required reading for anyone contemplating a de-identification strategy.
Under the HIPAA regulations, successful de-identification may be based on an “Expert Determination” by an “individual with appropriate knowledge” of statistical techniques who has analyzed the data set and can attest that the risk of re-identification is “very small.” (Very small is not defined in the regulations.) Alternatively, covered entities may use the “Safe Harbor” method of removing 18 types of identifying elements specified in the HIPAA regulations. In either case, the covered entity must have no actual knowledge that re-identification is possible or likely, for example by linking to other known data sets.
Limited Data Sets and Data Use Agreements
De-identification trades privacy protection for research productivity. Sometimes the trade-off is too steep, and a fully de-identified data set will not meet a research need. As an alternative, a covered entity may disclose PHI in a limited data set (LDS) to a researcher who has entered into an appropriate data use agreement. An LDS must have all direct identifiers removed; however, it may still include information that could “indirectly” identify the subject using statistical methods. That is, the disclosure risk is greater than “very small.”
The data use agreement for an LDS must:
- Delineate the permitted uses and disclosures of such information by the recipient, consistent with the purposes of research;
- Limit the individuals that can use or receive the data; and
- Require the recipient to agree not to re-identify the data or contact the individuals.
Minimum Necessary Uses and Disclosures
Uses and disclosures of data for research that are allowed to bypass the authorization requirement are still subject to the minimum necessary standard -- that is, the uses/disclosures must be no more than the minimum required for the described research purpose. A covered entity may rely on a researcher's documentation -- or the assessment of an IRB or Privacy Board -- that the information requested is the minimum necessary for the research purpose.
By contrast, research information obtained using an authorization is not bound by the minimum necessary standard -- on the theory that the data subject has given explicit permission in accordance with the signed authorization. However, be aware that while HIPAA may not require a minimum necessary justification at all times, an IRB’s evaluation of risks and burdens on human research subjects arguably does.
Disclosure Accounting
Individuals whose health information is covered by HIPAA have the right to an “accounting of disclosures” of their PHI. In this context, a “disclosure” occurs when PHI is communicated to an outside individual or entity, including another covered entity. Access within the covered entity -- for example, by members of a research team who are all part of the same organization’s workforce -- is considered a “use” not a disclosure. There is no accounting requirement for these internal uses for research.
In addition to being limited to external disclosures, disclosure accounting is not required for:
- Disclosures made under authority of a consent/authorization, on the theory that individuals are aware of what they have expressly permitted for that research.
- Disclosures to the individual directly about him/herself.
- Limited data set disclosures subject to a data use agreement.
- De-identified information that no longer qualifies as PHI.
When an accounting is required, it must include disclosures during the six years prior to the data subject’s request, and include certain types of information depending on the size of the protocol.
While HIPAA may not require it, many organizations will require that researchers maintain logs of all disclosures from research data collections as a security measure, including transfers to other individuals within the covered entity. Electronic data storage will increasingly offer this capability cheaply and automatically; older collections will require manual logging.
Characteristics of Authorizations
If a research activity meets none of the bypassing criteria above, an authorization (consent) is required. When they are required, authorizations must be:
- In “plain language” so that individuals can understand the information contained in the form, and therefore are able to make an informed decision.
- Executed in writing, and signed by the research subject (or an authorized personal representative).
Authorizations must include a specific description of the PHI to be used or disclosed, the name(s) or other identification of individuals involved in the research, and description of each purpose of the requested use or disclosure.
HIPAA authorizations are normally required to have an explicit expiration date. In the context of research, it is sufficient to specify an expiration “event” -- such as “the end of the study.” A research authorization can also have no expiration date at all, as would be the case for a research database or repository, or other future use, though this absence must be clearly indicated.
HIPAA authorizations cannot normally be combined with other types of documents (such as a privacy notice). However, HIPAA research authorizations can be combined with any other legal permission related to the study, including an informed consent that meets Common Rule or FDA regulations or another type of authorization.
As with any informed consent document, researchers are strongly urged to rely on standard models rather than creating their own authorization forms, lest they make a critical error in format or content. Most organizations will already have standard documents available; check with your IRB, Privacy Board, or privacy officer.
If there are multiple documents that limit information use or disclosure, the most restrictive one applies. Whether in a single instrument or several, the core requirement is to provide enough information for the data subject to make an informed choice.
Revocations of Authorizations
Like other kinds of HIPAA authorizations, those for research may be revoked by the subject at any time, provided that the revocation is in writing. Revocation of an authorization is not valid to the extent that the covered entity has taken actions relying on it, such as in the provision of prior treatment. Such revocations may be limited “as necessary to maintain the integrity of the research study.”
Recruiting into Research
It is still permissible under HIPAA to discuss recruitment into research with patients for whom such involvement might be appropriate. This common practice is considered to fall within the definition of treatment, at least when the conversation is undertaken by one of the patient's healthcare providers.
Remember, however, that a data subject’s information cannot generally be disclosed to a third party -- even another care provider -- for a research use without an authorization from the individual or an approved waiver, alteration, or exception to authorization.
HHS guidance on HIPAA has affirmed that recruitment efforts can qualify as a “preparatory to research” activity that would allow a researcher to identify potential research participants, and even contact them for purposes of seeking their authorization (HHS 2004). However, such efforts must be approved, and the PHI used for this purpose cannot leave the covered entity during this activity.
"Retrospective" Research
As electronic health data collections grow in scale and scope it is an increasingly common practice to “browse” them, looking for interesting patterns that could translate into research possibilities. Indeed, bio-repositories of tissue and data created just for this purpose are increasingly common, and the scope and scale of such repositories grow daily. (Retrospective analysis of paper charts hasn’t gone away either.)
Use or disclosure of PHI for retrospective research studies may be done only with patient authorization -- or with a waiver, alteration, or exception determination from an IRB or Privacy Board. It should not be difficult to meet one of the criteria for the latter for such exploratory efforts. Alternatively, the data collection itself may have been created with an explicit authorization from subjects for future research. However, remember that you generally cannot proceed on your own without some approval from an IRB, Privacy Board, or other designated governing entity.
Security Rule
Efforts to meet the Common Rule, FDA, and HIPAA regulations’ privacy requirements are only part of the researcher’s task. HIPAA also has a Security Rule that complements its Privacy Rule. The Security Rule requires that PHI collections receive appropriate information security protections for as long as they exist. If you do not know how to do that, find a resource at your organization that does. In addition to a privacy officer, HIPAA requires designation of a security official, who should be able to help assure appropriate data protection.
It is important to note that HIPAA’s requirements include reporting of security breaches and data exposures. In addition to notifying affected individuals, HHS must be notified of exposures of PHI; in addition to potentially triggering an investigation, exposures involving more than 500 persons are posted on the HHS ‘Breach Portal’ website for all the world to see. State laws may also include breach-reporting requirements.
Conclusion
Although the specifics are lengthy, the net administrative burden that HIPAA adds to existing Common Rule and FDA regulations is generally not a large one. Compared to protocol approval generally -- and the details of informed consent particularly -- a HIPAA authorization is relatively easy. Additionally, as noted, there are several pathways around the authorization requirement.
To approve a study under the Common Rule and FDA requirements, IRBs have long been required to determine that there are adequate provisions to protect the privacy of subjects and to maintain the confidentiality of data. Where researchers are meeting those requirements, HIPAA should change very little beyond the additional “paperwork.”
As noted, HIPAA applies to covered entities and their business associates, and to the PHI that originates in or by them. Research conducted by organizations that do not qualify as such, using data that do not derive from any covered entity source, is not reached by HIPAA. In such cases, the requirements of the Common Rule and FDA remain as protections for human subjects’ privacy and other interests. The issue then is not "PHI" but what the Common Rule defines as identifiable "private information."
Here are the key points:
1. HIPAA privacy protections supplement those of other federal regulations (viz., the Common Rule and FDA), state law, and certification/accreditation requirements.
2. HIPAA protects identifiable health information (PHI) originating or held in covered entities or their business associates. De-identified data is not protected, and not all identifiable health information is considered PHI either.
3. Under HIPAA, research activity using PHI generally requires authorization. However, there are several alternatives that allow bypassing the authorization requirement.
4. Minimum necessary standards, disclosure accounting requirements, and the characteristics of authorizations (when required) must be understood by researchers when HIPAA applies.
5. Privacy protection includes a commitment to data security throughout the lifecycle of your data.
6. If you are unsure about the particulars at your organization or have questions, consult with your organization’s IRB, Privacy Board, or privacy official. For data security issues, consult with your organization’s security official.
Acknowledgements
The author would like to thank the following individuals for their editorial and content review of this and prior versions: Jaime Arango, Evelyne Bital, Helenemarie Blake, Joey Casanova, Anita Cava, Amanda Coltes-Rojas, Ken Goodman, Karen Hansen, Margaret Rankovic, Daniel Smith, and Sally Mann.
Lustre
Useful commands on Lustre:
- `lfs df` display filesystem usage and configuration information
- `lfs find [directory | file name]` find a file or directory
- `lfs quota -u $LOGNAME /login/beagle` display your quota
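For example, a quick check of filesystem usage, a file search, and your quota might look like this (the search path and filename pattern are illustrative):
```bash
lfs df -h                                    # per-target usage for the filesystem
lfs find /lustre/beagle2/$USER -name '*.dat' # locate matching files under your scratch
lfs quota -u $LOGNAME /login/beagle          # your current quota and usage
```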
Striping
Useful commands for striping:
- `lfs setstripe` create a file or directory with a specific striping pattern
- `lfs getstripe` display file striping patterns
To find out more, use `man lfs`.
The default striping is 2: each file created is split across 2 OSTs (potentially double read/write bandwidth)
- Usually good values are between one and four.
- Striping can be set either on file or directory level.
- Cannot change the stripe pattern on an existing file.
- Can change the stripe pattern on a directory.
- Striping must be set on a directory before files in it are created.
- New files inherit the striping of the parent directory.
NOTE: Striping over too many OSTs will cause unnecessary overhead and lead to a loss in performance! We do NOT recommend changing striping settings unless you absolutely know what you are doing. Striping config is already set to Cray recommendations for a volume of that size.
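If you do have a workload that justifies a change (for example, many processes streaming one very large file), a minimal sketch of setting and verifying striping on a new directory is shown below; the directory path is an assumption:
```bash
# Files created in this directory afterwards inherit the stripe count of 4.
mkdir -p /lustre/beagle2/$USER/wide_io
lfs setstripe -c 4 /lustre/beagle2/$USER/wide_io
lfs getstripe /lustre/beagle2/$USER/wide_io   # verify the pattern
```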
How to move data to and from Beagle
Beagle2 is not HIPAA-compliant: do NOT put PHI (Protected Health Information) data on Beagle2!
Make sure that you are properly handling PHI data; the consequences of mishandling could be considerable, both for you and for the institutions you work for.
Factors for choosing a data movement tool:
Make sure you have permission to move the data from its source to its target if you are not its owner or the sole owner.
Consider carefully the structure of Beagle's filesystem before deciding where you move your data:
- **Relatively small files** (say < 1 GB) that should be considered permanent: `/home/<username>` *(disk quota 10 GB)*.
- **Larger data to be used for calculations**, but which does not need to be backed up locally: `/lustre/beagle2/` *(currently there is no disk quota)*.
**Recommended data movement tools:**
- **scp/sftp**
- quick to initiate but
- slow and not scalable.
- **Globus Online**
- Provides high-performance and is easy to use from either a command line or web browser.
- Provides fault tolerant, fire-and-forget transfers.
- For moving larger data.
- When scp is too slow/unreliable
- See also Globus Tools and Grid Services
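As a quick illustration of the scp route, a one-off transfer of a small archive might look like the following; the login hostname is an assumption, so check the site documentation for the actual address:
```bash
# Simple but slow: fine for small files, not for bulk data.
scp results.tar.gz username@login.beagle2.ci.uchicago.edu:/lustre/beagle2/username/
```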
**Globus Online** addresses the challenges faced by researchers in moving, sharing, and archiving large volumes of data among distributed sites. With Globus Online, you hand-off data movement tasks to a hosted service that manages the entire operation, monitoring performance and errors, retrying failed transfers, correcting problems automatically whenever possible, and reporting status to keep you informed so that you can focus on your research. Command line and web-based interfaces are available. The command line interface, which requires only ssh to be installed on the client, is the method of choice for script-based workflows. Globus Online also has a REST-style transfer API.
After you register, simply use the **Beagle2 endpoint** "ci#beagle" as well as other sources or destinations. The Beagle2 endpoint's server nodes are tuned especially for WAN data movement tasks. With a growing collection of Globus Online endpoints, you'll be using the highest-performing WAN-tuned systems with simplicity.
By default any file transfer command will be initiated on the service/login node. The user can also bundle commands into a batch script and submit it to the scheduler. Users can also build multiple batch scripts with job dependencies to move data to the machine using a few processors, run the jobs with many processors, and then move the results off the machine. Here's an example of such a batch script chain.
```bash
#!/bin/bash
# Stage input in on 1 core, run on 128 cores, then stage results out;
# each job starts only after the previous one finishes successfully.
JOB1=`qsub -l mppwidth=1 copy_input.pbs`
JOB2=`qsub -l mppwidth=128 -W depend=afterok:$JOB1 run.pbs`
JOB3=`qsub -l mppwidth=1 -W depend=afterok:$JOB2 copy_results.pbs`
```
**How to submit jobs**
Projects
A valid HPC project is required to submit jobs.
To join an HPC project visit http://www.ci.uchicago.edu/hpc/projects
- `projects` lists the projects you are a member of (run this when you log in on Beagle).
- `projects --available` lists the projects that are available for your use.
- `projects --set my_project_code` sets one of the projects available to you as your default project.
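A typical first-login session might look like this; the project code is a placeholder:
```bash
projects                   # which projects am I a member of?
projects --available       # which projects can I use?
projects --set CI-ABC123   # make the (hypothetical) project CI-ABC123 my default
```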
Basics about job submission on Beagle2
To run a batch job on Beagle2:
1. Prepare a PBS script that specifies the application you want to run and the resources it will require.
**Note:** Your application’s executable line must start with one of the application launch commands (aprun for ESM jobs; ccmrun for CCM jobs).
2. Submit your job PBS script using the TORQUE `qsub` command.
3. Monitor your job’s progress using the TORQUE `qstat` command or the Moab `showq` command.
When jobs are executed, they are allocated at least one node. Each node has 32 cores on Beagle2.
If you want to run a different computation on each of the cores of a node, the Swift scripting language should be used (see the Swift web site).
We use PBS scripts with Moab as the scheduler (see HPC Scheduling) and TORQUE as the resource manager (see HPC Job Management).
A PBS script consists of PBS directives, comments, and executable statements (`aprun`).
Every executable needs to be initiated by the aprun command.
It is necessary to properly match your aprun parameters with your PBS parameters.
In order to actually run on the compute nodes qsub has reserved for you, you must use aprun
Job_ID is assigned after the qsub command is executed. Use it to control your job!
Batch jobs are submitted using the qsub command, e.g., qsub myjob.pbs, where myjob.pbs is a script that will be described below.
Reservations:
Jobs can be sent either to the queues available on Beagle2 or users can ask for reservations: nodes specifically set aside for a task. In general reservations are awarded when a job has specific needs that cannot be easily met with the standard queues.
To request a reservation, it is necessary to send an email to beagle-support@ci.uchicago.edu
Job Submission Best Practices
How many tasks per node? -- On Beagle2 the number of cores per node is 32. Take this into account when submitting jobs.
What if tasks are memory intensive? -- Each compute node has 64GB and 32 cores. If the memory requirements for your tasks are on the order of gigabytes, request far fewer than 32 tasks per node.
How much wall-time to request? -- Try to request a relatively small walltime for your jobs. The scheduler employs a technique called backfilling that may be advantageous for jobs with shorter walltimes. If the application is a long-running one, a checkpointing mechanism can be used to submit it in fragments.
Batch jobs
Commands for submitting and inquiring about jobs
Batch jobs are controlled by PBS (batch) scripts written by the user and submitted to a batch system that manages the compute resource and schedules the job to run based on a set of policies.
NOTE: job_id, the numerical identifier associated with a batch job, is assigned after the qsub command is executed.
- qsub batch jobs are submitted using the qsub command, e.g., qsub myjob.pbs, where myjob.pbs is a script that will be described below.
- qdel job_id to delete a job. Users can only delete their own jobs.
- qhold job_id to request that the scheduler place one or more holds on a job. A job that has a hold is not eligible for execution (only for jobs the user owns).
- qrls job_id to release holds on batch jobs. A job may be blocked by one or more types of holds: USER, OTHER, and SYSTEM. A USER hold can be removed by the job's owner.
- qalter new_options job_id to modify the job's attributes. If any of the specified attributes cannot be modified for a job, none of that job's attributes will be modified.
- qmove new_queue job_id to move a job from one queue type to another one.
- qstat shows the jobs the resource manager, Torque, knows about (i.e., all those submitted using qsub).
- qstat -a show all jobs in submit order
- qstat -a -u username show all jobs of a specific user in submit order
- qstat -f job_id receive a detailed report on the job status
- qstat -n job_id show the status of a job along with the nodes it is running on
- qstat -q gives the list of the queues available on Beagle2
- showq shows all jobs in priority order. It tells you which jobs Moab, the scheduler, considers eligible to run or is currently running.
- showres shows all the reservations currently in place or that have been scheduled (e.g., maintenance reservations, training reservations and specific user reservations) See Adaptive Computing: showres for more details.
- showbf shows what resources are available for immediate use as backfill. See Adaptive Computing: showbf for more details.
- showstart displays the estimated start time of a job. It is important to realize that this prediction is not strictly deterministic, because jobs can finish earlier than forecast. The command always assumes the job is the next to run, so it is only useful for the top job in the queue. See Adaptive Computing: showstart for more details.
NOTE: The behaviors of all these commands can be affected by the use of command line arguments; see the man pages for more details, e.g., by typing `man qsub` for the qsub command when logged in on Beagle2.
For more Moab commands and their descriptions, see the Adaptive Computing Scheduler Commands page
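Putting the main commands together, a typical submit-and-monitor session might look like this; the job ID shown is illustrative:
```bash
qsub myjob.pbs        # submit; prints the assigned job ID (e.g., 123456)
qstat -a -u $USER     # your jobs as TORQUE sees them, in submit order
showq                 # all jobs in Moab's priority order
showstart 123456      # rough estimate of when the queued job will start
qdel 123456           # delete the job if it is no longer needed
```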
To submit batch job:
From the directory that contains the script file, type:
```
qsub myjob.pbs
```
NOTE: Scripts submitted via qsub use default bash shells, so you need to make sure you load modules or set any environment variables you use in the submit script.
**PBS (batch) scripts**
A PBS job script is a text file you prepare that specifies which application to run and the resources required to run it. A detailed FAQ about PBS scripts is available from [HPC Job Management](#) where users can learn the basics of building their scripts. **Note:** The TORQUE directives in your PBS script **must precede your executable lines** (lines that begin with one of the application launch commands, `aprun` for ESM jobs or `ccmrun` for CCM jobs, or `module` load commands); if directives occur on subsequent lines, they will be ignored. More specifically to Beagle, these are some of the instructions that can be given:
```
#PBS -A my_project_code to set the project to which this run will be charged
#PBS -N job_name
#PBS -l mppwidth=nodes*cores_per_node is the number of processing elements (instance of an executable) requested and corresponds to the number of MPI or executable tasks. Default is one.
#PBS -l mppdepth=threads_per_MPI_task . Default is one. Use for OpenMP. The number cannot be larger than the number of cores per node (32). In some situations multiple threads can be run on same core, see Cray Doc:aprun or type `man aprun` for details.
NOTE: It is necessary to add `export OMP_NUM_THREADS=<number_of_threads>` in the PBS script before the `aprun` line when `-d` is used for OpenMP.
#PBS -l mppnppn=Number of processing elements (or MPI tasks) per node. PE is one instance of an executable propagated by the Application Level Placement Scheduler.
```
Using a smaller mppnppn number will result in fewer MPI tasks or executables being scheduled per node (running multiple executables per node requires scripts). That will give each core/PE more memory but leave cores unused on the node, or allow for mixed MPI/OpenMP executables (multiple OpenMP threads on multiple cores per MPI task).
NOTE: We **recommend against using the mppnppn directive** in the batch script. If you want fewer than the default 32 MPI tasks per node or want to use OpenMP, you should request all 32 cores on the desired number of nodes with the mppwidth parameter, and with `aprun -N` you will specify the number of cores per node.
```
#PBS -l walltime=hh:mm:ss, i.e., in hours, minutes and seconds. Be mindful that specific queues might not allow all job-time lengths.
#PBS -q queue_name, to submit a job to a specific queue (use `qstat -q` to find which are the available queues). Batch is the default queue.
#PBS -o job_output_file_name to connect as specific file to the output of the PBS script
#PBS -j oe join output and error file.
#PBS -l advres=res_id, if a user is running a job that requires a reservation. In order to send computations to the reservation, it is necessary to add this line to the PBS script.
#PBS -V Please don’t use this option! This can propagate large numbers of environment variable settings from the submitting shell into a job.
```
NOTE: In the script, these instructions can be followed by other instructions and, at the end, by the `aprun` command to run the executable. Otherwise you would be attempting to run your calculations on a login node and not on the reserved compute nodes!
For pure MPI scripts running a single program, the **total number of nodes** requested is the number of PEs requested divided by the number of PEs per node, rounded up.
For MPI/OpenMP tasks the **total number of nodes** will be ceiling(mppdepth × mppwidth / 32). Type `man aprun` for details.
NOTE: Since Moab assigns entire nodes to jobs, the **total number of cores requested should be a multiple of 32**. If it is smaller, Moab will effectively round it up to the closest multiple of 32 in the sense of locking up those resources.
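A worked instance of this arithmetic, with illustrative values:
```bash
# Pure MPI:     mppwidth=100, 32 PEs per node -> ceiling(100/32) = 4 nodes
# MPI + OpenMP: mppwidth=64, mppdepth=4       -> ceiling(64*4/32) = 8 nodes
# Moab assigns whole nodes, so request cores in multiples of 32.
```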
Type `man pbs_resources` when logged into Beagle for more information and options.
**Example of PBS script:**
```
#!/bin/bash
#PBS -N myjob
#PBS -l walltime=10:00:00
#PBS -l mppwidth=544 ## ceiling(100 tasks / 6 tasks per node) = 17 nodes x 32 cores per node = 544
#PBS -j oe ## join standard output and standard error -- recommended!
. /opt/modules/default/init/bash
cd $PBS_O_WORKDIR
aprun -n 100 -N 6 ./myexecutable
```
- Job directive lines begin with #PBS. These directives tell the batch system how many nodes to reserve for your job and how long to reserve those nodes.
- $PBS_O_WORKDIR holds the path to the directory from which you submitted your job. While not required, most batch scripts have "cd $PBS_O_WORKDIR" as the first command after the directives.
- The aprun command is used to start execution of your code on Beagle2's compute nodes.
- Remember you can request up to 500 compute nodes for your batch jobs.
NOTE: All options may be specified as either (1) qsub command-line options (see below) or (2) as directives in the batch script as #PBS options (for batch jobs). We recommend putting your directives (options) in the script instead. Then you will have a record of the directives you used, which is useful for record-keeping as well as debugging should something go wrong.
Aprun
All codes that execute on Beagle2's compute nodes must be started with the "aprun" command. Without the "aprun" command, the code will run (if it runs at all) on the shared MOM node that executes your batch job commands.
To run aprun similar instructions should be used as given to the PBS script for qsub. Here are the equivalent aprun options.
| aprun Option | qsub Option | Description |
|---|---|---|
| -n | -l mppwidth | Width (number of PEs), i.e., the number of MPI tasks. There are 32 cores per node on Beagle2. |
| -d | -l mppdepth | Depth (the number of threads to run for each PE), i.e., the number of OpenMP threads per MPI task. For an OpenMP job you must also set the environment variable OMP_NUM_THREADS to this same value. Make sure that this value multiplied by the value for -N does not exceed 32. |
| -N | -l mppnppn | Number of PEs per node, i.e., the number of MPI tasks to run on each node. |
| -B | | Reuse the width, depth, nppn, and memory specified with qsub: no need to specify aprun options -n, -d, -N, and -m; aprun will exit with an error if the user specifies these with the -B option. |
| -S | | Specifies the number of PEs to allocate per NUMA node. You'll get better performance if you distribute your MPI tasks among the 4 NUMA nodes (each NUMA node has 8 cores). Value can be 1-8. Default is 8. |
Example of batch script for running an MPI/OpenMP code on 6 nodes (illustrative values: 48 MPI tasks x 4 OpenMP threads = 192 cores = 6 nodes):
```
#!/bin/bash
#PBS -l mppwidth=192
#PBS -l walltime=1:00:00
. /opt/modules/default/init/bash
cd $PBS_O_WORKDIR
export OMP_NUM_THREADS=4
aprun -n 48 -N 8 -d 4 ./myexecutable
```
Memory usage
Our compute nodes have 64 GB of physical memory (2 GB per core), but not all the memory is available to user programs. “System overhead” requires memory to run the node, message-passing library buffers consume memory, and so does loading the executable into memory. Thus the precise memory available to an application varies. So if you are using all 32 cores per node, you will get a bit less than 2 GB per MPI task on average.
If you see an error message, “OOM killer terminated this process.” in your job output, it means that your code has exhausted the memory available on the node (OOM stands for “out of memory”). One simple thing you can try when your code runs into an OOM error is to use more nodes and fewer cores per node. You can choose to launch fewer than 32 tasks per node to increase the memory available for each MPI task. Note that your account will be charged for all 32 cores per node, regardless of how many cores you actually use.
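As a sketch of that workaround (the counts are illustrative): the same 64 MPI ranks spread over 8 nodes at 8 ranks per node gives each rank roughly 8 GB instead of roughly 2 GB:
```bash
#PBS -l mppwidth=256        # 8 nodes x 32 cores; all 256 cores are charged
aprun -n 64 -N 8 ./myexecutable
```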
For aprun options refer to our wiki page or man page.
https://wiki.uchicago.edu/display/Beagle/Getting+started%3A+performing+basic+operations+on+Beagle2
https://wiki.uchicago.edu/display/Beagle/Examples+of+PBS+scripts
For example, if you would like to run 64 MPI tasks and use only 16 cores per compute node:
```
#PBS -l mppwidth=128
aprun -n 64 -N 16 -S 4 ./a.out
```
This example uses `#PBS -l mppwidth=128` because 128 cores are required and this number must be a multiple of 32 (64 MPI tasks / 16 tasks per compute node x 32 cores per compute node = 128). Use the `-S 4` option to place the 16 MPI tasks per compute node on cores from all four NUMA nodes (4 PEs per NUMA node) to ensure best performance and access to all compute node memory. We need this option because the default is for aprun to pack the NUMA nodes, meaning 16 tasks on just two NUMA nodes.
Here -S specifies the number of PEs to allocate per NUMA node. Each NUMA node has 8 cores; the value for -S can be 1-8, and the default is 8.
If you are using OpenMP please refer to this page:
https://wiki.uchicago.edu/display/Beagle/Examples+of+PBS+scripts
For more information see the CrayDoc page http://docs.cray.com/cgi-bin/craydoc.cgi?mode=Show;q=f=man/alpsm/31/cat1/aprun.1.html or type man aprun.
Running Swift on Beagle2
Swift is now installed on Beagle2 as a module. Swift supports a many-task computing environment for Beagle2. In this model, Swift scripts and the Swift runtime are used to submit and manage large numbers of small process executions on Beagle2's massive number of cores. Swift is able to do this without overloading the Beagle2 scheduler by using a user space scheduler called Coasters.
- The Swift web site is here.
- Swift documentation is here.
- To get started with Swift on Beagle2 follow the steps outlined here.
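As a minimal, hedged sketch of a first run (the module name and script file are assumptions; follow the linked instructions for the supported site configuration):
```bash
module load swift      # Swift is available as a module on Beagle2
swift myscript.swift   # run a Swift script; the Coasters scheduler places tasks
```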
Additional resources:
- Workload Management and Application Placement for the Cray Linux Environment from CrayDoc
- HPC Scheduling and HPC Job Management on job management
In case you need help/support
- please email beagle-support@lists.uchicago.edu. This will create a ticket in our ticketing system so that we can best track and resolve your issues.
RPER SOFTWARE - A SOCIAL MANAGEMENT TOOL FOR RAPID PARTICIPATORY EMANCIPATORY RESEARCH: PLANNING, DESIGN AND IMPLEMENTATION
Luiz Flávio Felizardo 1
Marcelo Osório Wallau 2
José Roberto Pereira 3
ABSTRACT
Theoretical framework: This study is grounded in Social Management, a paradigm that focuses on society’s deliberative process for public decisions. It also employs the Rapid Participatory and Emancipatory Research (RPER) method, an adaptation of rapid and participatory appraisals, to apply social management in rural contexts.
Research objectives: Identify the requirements, plan, design, assess the complexity, and implement a system to support the RPER application.
Methodology: The waterfall model for software development lifecycle was used to carry out the system’s planning. The discipline of Business Process Management (BPM) was necessary for the requirements mapping and the Function Point Analysis (FPA) technique to measure the software complexity from a user perspective.
Results: The RPER application process was fully mapped, and several features that could be implemented for the software were uncovered. These functionalities address practically all the steps involved in the method’s application. In addition, the software measurement was completed, and 542 function points were found. After this, the design for the graphical user interface was then created. Finally, the software was developed using technologies such as Express for building the back-end RESTful API with Node.js, React library to create the front-end’s componentized user interface, TypeScript as the main programming language and PostgreSQL as the relational database.
Originality: It is notable that some software has already been used to try to promote social participation in public matters. However, studies specific to the use of information and communication technology (ICT) to resolve social issues that, at the same time, deal specifically with participatory techniques are non-existent. There is an abundance of software tools designed to support quantitative research in the agriculture field, yet there remains a notable deficiency in software tailored to assist qualitative research and practices.
Theoretical and practical contributions: The use of a web system for participatory approaches can bring advantages. On the theoretical side, this research might provide insights into these methods’ evolution. It will also provide a foundational framework for understanding the intersection of ICT and participatory techniques, paving the way for future research in this area. More practical benefits include the wider distribution and dissemination of results, data transparency, the unification or centralization of the research made using the methods, the organization of data, the possibility of automating report generation, better communication and collaboration between team members, and data safety with periodic backups. Additionally, the software could serve as a platform for preparing new researchers, with a help and tips section for each of the method’s techniques.
Keywords: Agricultural Information Systems, Rural Community Engagement, Information Technology, Participatory Software Design, Social Management.
1 Universidade Federal de São João del-Rei, São João del Rei, Minas Gerais, Brazil. E-mail: felizardo@ufsj.edu.br Orcid: https://orcid.org/0000-0002-6324-7313
2 University of Florida, Gainesville, Florida, United States, E-mail: mwallau@ufl.edu Orcid: https://orcid.org/0000-0001-9898-3399
3 Universidade Federal de Lavras, Minas Gerais, Brazil. E-mail: jpereira@ufla.br Orcid: https://orcid.org/0000-0003-1570-2016
1 INTRODUCTION
Social Management is a relatively new paradigm that focuses on society's deliberative process for public decisions. This paradigm is committed to the promotion of the common good. It is conceptualized as a dialogical management action focused on the public interest (Cançado, Pereira, & Tenório, 2015, p. 101), and its main categories are the well-understood public interest
(Tocqueville, 2003), the public sphere (Habermas, 1991), and social emancipation (Freire, 1985). The paradigm itself is similar to the concept of deliberative governance developed by Dryzek (2010) and his collaborators. Deliberative action incorporates policies and institutional measures to promote the common good.
Araújo (2012) describes Social Management as a multi-paradigmatic and polysemic field, suggesting that it is still evolving and highlighting its multidisciplinary nature. On the other hand, Cançado (2011) and Cançado, Pereira, and Tenório (2015) strongly argue that Social Management has made substantial progress over the years. They emphasize its well-established theoretical foundations and, to validate its standing as a genuine science, compare its maturity to the standards set by renowned scholars like Popper, Kuhn, Lakatos, and others. Through this comparison, they show that Social Management indeed fits the characteristics of a recognized scientific field. The concept of Social Management remains open to some degree of interpretation due to the academic debates that shape its evolution. Despite this, a central theme is its emphasis on participation and adherence to Weber's (2017) ideal type, suggesting a guiding path characterized by transparent, inclusive, intelligible, dialogical, coercion-free, and emancipatory collective decision-making.
A fitting example to elucidate the application of Social Management concepts is in the agriculture field. The famous Hardin's "Tragedy of the Commons" dilemma (Hardin, 1968) suggests that individuals, acting in their own self-interest, will inevitably overuse shared resources, leading to depletion or ruin. In the context of agriculture, this might manifest as depleting water resources, over-farming, or over-grazing, resulting in land degradation. Ostrom (1990) offers a more nuanced perspective, arguing that communities can, and often do, develop cooperative mechanisms to manage and sustain common resources effectively. Ostrom's principles, such as clearly defined boundaries, collective choice arrangements, and effective monitoring, could be applied to rural agricultural settings to prevent over-exploitation and ensure sustainable use.
Implementing such principles requires a holistic approach, and this is where a method of Social Management can be employed. The Rapid Participatory and Emancipatory Research (RPER; Pereira, 2017; Teixeira, Alcântara, Garcia & Pereira, 2019) consists of intervention techniques that allow qualitative and quantitative information to be obtained from a collectivity in a short period. This information is then used to identify problems, their causes, and possible solutions, with the goal of promoting social change and sustainable development. The RPER method has been the foundation of several empirical studies, some of which focus on agricultural communities and water related issues in different countries and regions (Teixeira, Cruz, Machado, & Pereira, 2020; Teixeira et al., 2019; Pereira, 2017; Alcântara, Pereira, & Vieira, 2018; Teixeira Cruz, 2017; Pereira, 2001; Teixeira, Marques, & Pereira, 2017; Pereira, & Little, 1998). As stated by Pereira (2017, pg. 76), this method was tailor-made to systematically address the intricate realities of social groups like rural land reform settlements, associations, and agriculture related cooperatives, for example. The RPER represents a progressive evolution from the classic Rapid Rural Appraisal (RRA) and Participatory Rural Appraisal (PRA) methods, merging foundational principles from both and focusing on critical theory and participatory strategies to apply the concepts of Social Management.
Information and communication technology (ICT) has been used to try and promote social participation in public matters by enhancing societal well-being with platforms that empower communities to address challenges and participate directly in public decision-making processes (MySociety, 2013; Walravens, 2015; Peña-López, 2017; Felizardo, Pereira, & Silva, 2019). However, to the best of our knowledge, studies specific to the use of ICT to resolve social issues, that support participatory techniques, especially in the agriculture domain, are non-existent. In agricultural research, there is a strong trend toward developing software and hardware tools for quantitative application and research. For example, tools have been
developed for high-throughput phenotyping and seed quality testing (Tu et al., 2023), as well as for identifying soil-constrained areas in row crop fields (Orton, McClymont, Page, Menzies, & Dang, 2022). Hyper spectral imaging-based plant phenotyping is another area of focus (ElManawy, Sun, Abdalla, Zhu, & Cen, 2022), and the examples are many (Kim, 2021; Pacciofetti, Córdoba, & Balzarini, 2020; Jacquin et al., 2019; Álvarez, Oliva, & Valera, 2012; Zapa et al., 2012). Despite showing significant advancements with all these tools for the agriculture field, there remains a glaring deficiency in software solutions specifically tailored to assist qualitative research and participation methods.
The use of a web system for participatory approaches, particularly the one proposed in this work, can bring several advantages. From a theoretical perspective, it not only provides insights into the evolution of these participatory methods, but also establishes a foundational framework that bridges the gap between ICT and participatory techniques, setting the stage for subsequent research in this domain. Practical benefits include the wider distribution and dissemination of results, data transparency, the unification or centralization of the research made using the methods, the organization of data, the possibility of automating report generation for faster feedback to the community, better communication and collaboration between team members, and data safety with periodic backups. Additionally, the software could serve as a platform for preparing new researchers, with a help and tips section for each of the method’s techniques.
The primary objective of this work is to identify the requirements, plan, design, assess the complexity, and implement a system to support the Rapid Participatory and Emancipatory Research (RPER). The software can receive input from users, deal with separate roles like project coordinators, team members and visitors and generate automated report documents. All the technologies used for implementing the software are open-source and freely available, including the software itself, which is in a public repository of a cloud-based service for version control. In this manuscript, we bring the theoretical framework and development process of the software, discuss functionality and utilization strategies, and propose potential directions for subsequent research in this field.
2 THEORETICAL FRAMEWORK
2.1 Social Management
Even though the term social management's first appearance in the social sciences field was in the 1960s (Porket, 1967), as reported by Felizardo et al. (2021), the most recognizable reference came from the text of Rovida (1985), which deals with self-managed experiences in the Spanish civil war (Cançado, Tenório, & Pereira, 2011; Tenório, 2012). Nonetheless, in Rovida's (1985) text, social management appears with the meaning of proletarian democracy for locals. However, the term is also used to describe the management of collective farms in the communist Soviet Union, also known as Sovkhoz.
According to Cançado (2011), the main references on the Social Management concept construction are the works of Tenório (2008a, 2008b, 2010, 2012), França Filho (2003, 2008), Fischer (2002), Fischer and Melo (2003, 2006), Boullosa (2009), and Boullosa and Schommer (2008, 2009). In this list, it is also important to include the own work of Cançado (2011), the book entitled "Social Management, the epistemology of a paradigm" from Cançado, Pereira, and Tenório (2015) and, more recently, Tenório and Araújo (2021) alongside Davel, Xavier, and Cançado (2020). Academic work in social management are extensive and involves a set of scientific articles, books, dissertations, thesis, and other bibliographic materials that are being produced in different education and research institutions with different theoretical approaches and empirical studies, thus, controversies arose in the field.
As an example of this kind of dispute, Araújo (2012) defends the concept of Social Management as multi-paradigmatic, polysemic, and a field under construction. He states that it is a field of knowledge in a preliminary stage in which the multidisciplinary character prevails. On the other hand, Cançado (2011) and Cançado, Pereira, and Tenório (2015) argue that Social Management has already achieved much progress and has a consistent theoretical body approaching its first paradigm, or in other words, with specific theoretical foundations. In order to demonstrate that Social Management passes the criteria to be accepted as a science, the authors compare the maturity of Social Management with the criteria proposed by Popper, Kuhn, Lakatos, Feyerabend, Chalmers, Boaventura de Souza Santos, and Pedro Demo. In this way, they attempt to demonstrate scientifically that social management is a field of knowledge that can be characterized as a science. The academic debate about the divergences is still in progress, as can be observed in the works of Araújo (2012), Cançado (2013), and Tenório and Araújo (2021), for example.
As recently stated by Tenório and Araújo (2021), social management arises, in opposition to strategic management, trying to achieve a fairer society. A society that is democratically articulated in the management of its interests, other than the interests of the market. It is, therefore, the opposition to strategic management as it, according to Tenório (1998), tries to replace technobureaucratic, monological management with participatory and dialogical management, one that the decision-making process is exercised amongst different social subjects. The distinctions between these management approaches are highlighted by other numerous scholarly works (Pimentel, 2014; Cançado, Villela, & Sausen, 2016; Tenório & Araújo, 2021; do Carmo et al., 2023). In social management, the decision-making authority is shared among the participants in the action using a dialogical managerial process. This seminal concept by Tenório (1998) is one of the most cited in the literature on this subject and it assumes Habermas' (1984, 1987) communicative action and the deliberative democracy concept as its analytical premises.
Tenório and Araújo (2021) stated that, despite the concept of social management having been on the agenda of the South American academy for quite some time, its understanding is not unanimous, and the concept is still not fully known. Notwithstanding, the authors insist that social management, since the early 1990s, has been an opposition and alternative to strategic management. Thus, it is a schism, a heterodox perspective against the mainstream, a concept of resistance not taken as an end in itself or as a goal of politics, but as a beginning and as a possibility, as the relationship between oppression and resistance, with no appeal to the sense of maximum agency of the modern subject.
The concept of social management is not fully formed and continues to evolve, with the existing academic debates shaping its progression and refinement. However, there is a common and convergent point in every work: social management is based on participation. In addition, it has a flexible delimitation and is based on the ideal Weberian type (Weber, 2017). This means that it has a path to be followed as a guide, but the end will possibly never be fully achieved. This path, however, is guided by certain characteristics of collective decision-making: no coercion, maximum transparency, intelligibility, dialogicity, and an aim of emancipation.
2.2 Rapid Participatory Emancipatory Research (RPER)
To accurately appraise the reality of an organization, a rural community, small groups, or a collectivity from the perspective of Social Management, in relation to organizational change and sustainable development, it is necessary to use participatory methods based on dialogical processes of transformation of reality. It was with this aim in mind that the Rapid Rural Appraisal (RRA) emerged, and by 1979 it had its own thematic workshop and conference
(Barnett, 1979; Workshop on Rapid Rural Appraisal, 1979; Conference on Rapid Rural Appraisal, 1979). After some time, Chambers (1981) provided one of the foundational expositions on RRA, elucidating the method’s rationale and repertoire, emphasizing its significance in obtaining reliable data swiftly and efficiently. However, as the decade unfolded, there was an increasing awareness of the necessity for a more collaborative approach. This thought led to the evolution of RRA into the Participatory Rural Appraisal (PRA) in the late 1980s. This approach prioritized the active involvement of local communities in the research process, ensuring their voices and insights were central to the findings. In a seminal work, Chambers (1994) traced the origins and practices of PRA, highlighting the transformative shift from the more observational RRA to the inclusive and collaborative nature of PRA. This evolution underscored the realization that sustainable and impactful development necessitates the active engagement of the communities it seeks to benefit.
The RPER was established on the foundational principles of both RRA and PRA methods. It integrates the tenets of critical theory, predominantly from the Habermasian communicative action theory (Habermas, 1984, 1987), and is also deeply influenced by Paulo Freire's approach to dialogical education (Freire, 2018). Thus, RPER became a path for the application of Social Management theory (Teixeira et al., 2019). According to the method's creator, Pereira (2017), RPER is not entirely characterized as action-research; it relies on an interdisciplinary team external to the collectivity and uses participatory techniques in the research process and in the construction of inter-subjectivities. According to the author, in the RPER, the main role of the interdisciplinary team is to guide participants to identify their own problems, their causes, and possible solutions, recognizing their demands within a principle of dialogical otherness. Thus, the participatory approach of this method is based on the knowledge, aspirations, and creative capacity of the participants, in addition to the involvement of other social actors. Therefore, in the methodological process of the RPER, a dialogical communicative action occurs and fosters commitment among the social actors involved. This gives the research a characteristic of a participatory development process. In addition to that, the interdisciplinary characteristics of the external team enable dialogic interaction with participants in correspondence to various aspects of their socio-economic, political, cultural, and environmental reality. That makes it possible for the participants to capture, understand, register, and communicate properly about different problems.
In general terms, the objectives of the RPER are grounded in a process in which the awareness of the participants allows them to move from a situation of dependency (also known as a tutorial situation) to a sustained and emancipated one, as described in Freire's (2018) dialogical education perspective. The main objectives of the method are: 1) to identify and analyze the participants' generated themes so as to motivate them methodologically to problematize their own reality, establishing their priorities and distinguishing the actions they themselves can carry out from those that would be the responsibility of local, state, or federal institutions; 2) to collect information of qualitative and quantitative natures in order to develop action strategies for the participants; and 3) to identify structural or potential organizational limitations of the participants (Pereira, 2017).
RPER is used to instrumentalize the concept of Social Management. Its methodological assumption is the participation of the community that will experience the research process in conjunction with the interdisciplinary team. It is an approach and intervention methodology that is not tutorial in format but has the capacity to promote participation and commitment from those involved. More details on the stages of the RPER are given in the Results and Discussion section of this work, since they are part of the software requirements analysis and implementation.
3 METHODOLOGY
In order to carry out the implementation of the software, a development lifecycle model was used, that is, a structure containing the processes, activities, and tasks related to the development, operation, and maintenance of a software product, covering the life of the system from the definition of its requirements to the end of its use (ISO/IEC/IEEE 12207, 2017). There is no absolute consensus on software development lifecycle models, but traditional models are sequential and include the waterfall, spiral, and V-shaped models (Ehrler, Lovis & Blondon, 2019). For this study, the waterfall model was chosen due to its track record, simplicity, and systematic nature (Bassil, 2012; Kumar & Bhatia, 2014). Precisely because of these characteristics, the waterfall model was and still is used by many software development companies and industrial manufacturers as their main technique to plan, build, and maintain products (Munassar & Govardhan, 2010; Susilo, 2018; Firzatullah, 2021).
The waterfall model was first introduced by Benington (1956) and modified by Royce (1970). Benington's original waterfall model recommended that software be developed in the following stages: operational analysis, operational specification, design and coding specification, development, and testing. Anticipating difficulties and unforeseen events, Royce (1970) improved this model by adding feedback at the end of each stage, so that each previous stage could be revisited; he also suggested a preliminary requirements phase (Figure 1).
The model phases can be summarized as follows (Royce, 1970; Bassil, 2012):
**Requirements Phase** - Also known as planning or system requirements. This initial step consists of conducting a preliminary analysis to identify the problem, the objectives, and the needs or requirements of the software to be built. The business prerequisites are recognized here and, if possible, an initial measurement of the software is already carried out.
**Analysis Phase** - This phase represents a complete and comprehensive description of the behavior of the software to be developed. Here, functional and non-functional requirements are defined in more detail, including classes, their relationships, functions, software attributes, interface requirements, and database requirements.
**Design Phase** - It is the process of planning and solving problems for a software solution, including the initial visuals. In this phase, the developers define the plan for a solution that includes algorithm design, software architecture design, graphical user interface design, among others.
**Development Phase** - Refers to the realization of business requirements and design specifications as an executable program, database, website, desktop application, and/or mobile application; that is, a concrete software component is built in this phase through programming and implementation.
**Validation and Testing Phase** - It is the process of verifying whether a software solution meets the original requirements and specifications and if it fulfills the intended objective. In addition, the testing phase is the time to perform code debugging, in which errors and system failures are sought and corrected.
**Maintenance Phase** - It is the process of modifying a software solution after delivery and deployment to refine the output, correct errors, and improve performance or quality.
The general procedure starts with the identification of the requirements and needs of the project through the authors' experience in software development and knowledge of the RPER method. The target audience for the software was defined as the entire interdisciplinary team responsible for applying the method. All the requirements of the process that would somehow be affected by the technology to be developed are detailed in the Results section.
The discipline of Business Process Management (BPM) was used to carry out the requirements mapping. BPM includes concepts, methods, and techniques to support the representation and execution of business processes (Weske, 2007). The BPM approach has been increasingly applied in the business scenario in recent years and has proven to be a powerful way of solving, or contributing significantly to the solution of, a series of organizational problems, allowing for the improvement of business processes and, consequently, of the results obtained (Baklizky & Fantinato, 2012). The union of business management and information technology allows for alignment between the processes and the strategic objectives to be achieved. In business process modeling, the main objective is to produce a description of reality, for example, of the way in which a business transaction is carried out, in order to understand it and, eventually, modify it to incorporate improvements. Consequently, it is important to have a notation that models the essence of the business as clearly as possible (Rodríguez, Fernández-Medina & Piattini, 2007). This notation is the Business Process Model and Notation (BPMN), and two of its elements are used in this study: the activity, represented by a rectangle with rounded edges, and the sequence flow, represented by an arrow indicating the process flow. Each activity is mapped to the requirements that can be accomplished or supported by the software.
After the initial requirements mapping, the Function Point Analysis (FPA) technique was used to measure the software. This is a complete technique for measuring software from the system requirements point of view, even in the early requirements planning and analysis stages. FPA is one of the Functional Size Measurement (FSM) methods and was introduced by Albrecht (1979) as a way of measuring the amount of complexity and functionality in a software project. The FPA procedure accounts for a variety of transactions, including data received, sent, or processed by the system and its access to internal and external databases (Rohayani, Gaol, Soewito & Hendric, 2017). Despite accounting for these details, the analysis should rely only on business requirements that are clear to users and remain independent of technical details such as the choice of programming languages and technologies.
4 RESULTS AND DISCUSSION
4.1 Functionality Planning for the RPER’s Steps
The RPER's methodological intervention process follows a script, which can be defined as shown in Figure 2 (Pereira, 2017). Almost all steps can benefit from the software implementation in different ways; the details of each step are presented next.
Formation of the interdisciplinary team - At this stage, it is already possible to predict the need for the system to support the registration of users. Thus, a CRUD for users is needed; CRUD stands for the four basic data-manipulation operations, that is, create, read, update, and delete data from a table in the database. In addition, there must also be a registration of roles and an association between roles and users. These roles include RPER member and RPER coordinator, the latter being able to add or remove other users as members of the interdisciplinary RPER team, as sketched below.
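As an illustrative sketch only, the entities behind this functionality could be typed as follows in TypeScript, the language later adopted for the implementation (see section 4.4). The paper does not publish its data model, so all names and fields here are hypothetical:

```typescript
// Hypothetical sketch of the user/role entities behind the users CRUD.
// Field names are illustrative; the actual schema is not published here.
interface User {
  id: string;
  name: string;
  email: string;
  passwordHash: string;
}

type Role = "viewer" | "member" | "coordinator";

// Associates a user with a role in one specific RPER application.
interface RperMembership {
  userId: string;
  rperId: string;
  role: Role;
}

// Only a coordinator may add or remove members of the interdisciplinary team.
function canManageMembers(membership: RperMembership): boolean {
  return membership.role === "coordinator";
}
```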
Preparation and training of the interdisciplinary team - To support this step, the system offers the following functionalities: a) a page with a step-by-step guide to the RPER method, described using text and graphical visualizations; and b) a help button on each system screen with useful information about the corresponding functionality and step.
Elaboration of the generating themes framework table to be used as a guide to the techniques that will make up the RPER - Although not specified in the Figure 2 scheme, this is an important part of team preparation, so we made sure it was added to the system. As in most steps, the system allows members of a particular RPER to add content to a blank editor page, where they can work together on the same document. On this page, the help link can also provide examples of tables produced in past applications of the method.
*Collection and systematization of secondary information about the collectivity in focus and the historical context of the region* - Here, the system allows the upload of images, text, and tables by members of that particular RPER application. The content is saved in the database, and all users have access to it so they can collaborate to form a single document for each RPER, similarly to the preceding step on the generating themes.
*Direct contact and mobilization with the participating actors or community* - In this step the system plays no role; everything must be done in person or, occasionally, by phone or video conference.
*Fieldwork with the community following the methodological process using previously defined participatory techniques, including interviews* - Especially because of the likely absence of an internet connection in most places where the RPER method is applied, the system cannot be counted on to work in loco, since it is web-based software with options shared among several users at the same time. Nonetheless, fieldwork functionalities were created to enable the insertion, storage, and organization of all data collected during the application, even afterward: CRUD functions for all fieldwork activities enable the insertion of images, tables, charts, and text obtained with each technique used. The complete list and purpose of each technique can be found in Table 1. In summary, the software serves as a place where the interdisciplinary team members can save: the information on all collective participants acquired during their presentation; data obtained during the historical mapping; inputs acquired during the transect walk; details of public and private organizations that have links with the community, for the stage named Venn diagram; facts on the seasonal calendar; findings about the community's daily routine habits; records about the input and output technique; whole transcripts and insights from interviews and focus groups, with guiding pre-set questions; the reality and objective matrix data; and, finally, the results of the priority election step.
The RPER fieldwork phase should be carried out over a period of three to five consecutive days by an interdisciplinary team of approximately five researchers from different backgrounds. Before the fieldwork, the group of researchers must already have contacted the community where the method will be applied. It is important that everybody knows some basic information about the community, such as its type, structure, members, and leaders, among other aspects. After that, the application team and the community should schedule a date for the fieldwork. As mentioned, several techniques can be applied during this phase; Table 1 presents a summary of the field activities that can be carried out during the application and their purposes. It is important to emphasize that the method is flexible, enabling changes in the choice and organization of the participatory techniques to capture the reality experienced by the participants of any given collectivity. The system is prepared for this: each activity can be marked as not applicable if necessary, and an optional "other fieldwork" field allows activities not initially foreseen to be added and included in the final report.
Table 1 - Rapid Participatory Emancipatory Research (RPER) fieldwork stages and techniques

| Technique name | Technique purpose |
| --- | --- |
| Presentation of the interdisciplinary team and members of the community | Identify who the participants are (name, age, marital status, occupation, and other information). |
| Historical mapping | Draw a map of the location that represents the organization or social phenomenon in the perception of the participants. |
| Transect walk | The team must walk across the map drawn in the previous step to verify on the spot the description made by the participants, photographing and/or filming the landscape. |
| Venn diagram | Identify and evaluate public and private organizations that have importance and performance in the organization in the perception of the participants. |
| Seasonal calendar | Arrange all the organization's activities during the previous year in a graph. |
| Input and output | Analyze the situation of the production system in relation to the market context that involves the economic activities developed by the organization. |
| Semi-structured interviews | Allow for the objective comparison of opinions while also providing an opportunity to spontaneously explore topics relevant to each collectivity member. Requires the interviewer to have prior knowledge about the interviewee and the topic to be addressed. |
| Focus group | Obtain qualitative information on the collectivity's generating themes, following the principle of a focused, previously determined discussion. |
| Daily routine | Identify day-to-day activities and the division of labor in the organization while planning future activities. |
| Reality and objective matrix | Identify problems, their causes, and possible solutions in the perception of the participants themselves. |
| Priorities election | Identify the social, economic, political, and technical-productive priorities of the participants through a democratic election. |
| Other fieldwork | While many fieldwork techniques are already encompassed in the method, its theory clearly allows for adaptability. Depending on the collective's needs, certain techniques can be included or omitted. |
Source: adapted from Pereira (2017).
**Systematization, analysis, and interpretation of all information collected** - This page can be used after the analysis to insert content about the data gathered and studied. On it, the team members can also insert text and figures related to the interpretation of the data.
**Elaboration of the final report** - It is important that all the information collected is inserted into the system beforehand so it can be used to generate the final report. Initially, the system contains a button to generate a Microsoft Word report with a pre-defined style and introduction.
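The paper does not name the library behind the Word export; as one plausible approach, a minimal sketch using the open-source `docx` npm package is shown below. The `Step` shape and the status filter are assumptions based on the status rules described in the next paragraph:

```typescript
import * as fs from "fs";
import { Document, HeadingLevel, Packer, Paragraph } from "docx";

// Hypothetical shape of a stored RPER step; only completed or
// in-progress steps are included in the generated report.
interface Step {
  title: string;
  status: "not applicable" | "unstarted" | "in progress" | "completed";
  text: string;
}

async function generateReport(steps: Step[], path: string): Promise<void> {
  const printable = steps.filter(
    (s) => s.status === "completed" || s.status === "in progress"
  );
  // Each step becomes a heading followed by its body text.
  const doc = new Document({
    sections: [
      {
        children: printable.flatMap((s) => [
          new Paragraph({ text: s.title, heading: HeadingLevel.HEADING_1 }),
          new Paragraph(s.text),
        ]),
      },
    ],
  });
  fs.writeFileSync(path, await Packer.toBuffer(doc));
}
```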
Apart from the already mentioned not applicable status available for each step and fieldwork activity, the system offers three additional status options for each page: unstarted, completed, and in progress. These indicate the current stage of each step within the RPER application. These statuses serve two primary purposes in the system. First, the software features a progress bar that automatically calculates the completion percentage of the RPER application, counting steps marked as completed or not applicable. Second, the statuses guide the automated report generation. Only pages with the status completed or in progress are selected for printing to form the final document.
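A minimal sketch of the progress computation described above, assuming each of the 18 steps carries one of the four statuses (the function name is ours, not the software's actual code):

```typescript
type StepStatus = "not applicable" | "unstarted" | "in progress" | "completed";

// Percentage shown by the progress bar: steps marked completed or
// not applicable count as done, out of all steps in the application.
function progressPercent(statuses: StepStatus[]): number {
  if (statuses.length === 0) return 0;
  const done = statuses.filter(
    (s) => s === "completed" || s === "not applicable"
  ).length;
  return Math.round((done / statuses.length) * 100);
}

// Example: 18 steps with 9 completed or not applicable yields 50.
```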
For the data analysis and interpretation, everything should be carried out by the interdisciplinary team members following the RPER principles, that is, using content analysis (Bardin, 1977). This analysis is considered a fundamental phase of the research process in the RPER method. It is another step where the software can facilitate and speed up the process, since much of the information should already be collected by then and the team members can work cooperatively and simultaneously before triggering the final report elaboration. Bardin (1977) defines content analysis as a set of communication analysis techniques that use systematic and objective procedures to describe message content. According to the author, content analysis follows three phases: 1) pre-analysis; 2) exploration of the material; and 3) treatment, inference, and interpretation of results. The purpose of this analysis is to reveal the meaning of the ideas and values expressed by the participants during the research process.
During the RPER, the interpretation of the participating actors' ideas and values is enriched by the discussion about the materials collected and by the triangulation of information,
which confers quality, validity, and fidelity of the information to the researched reality. Possible discrepancies between the actual situation of the community and the thematic universe in which they fit must also be analyzed, contrasting the information presented by the participants and the technical-scientific knowledge of the interdisciplinary team. Thus, the RPER seeks to explore, qualitatively and quantitatively, the generating themes, considering the whole set and attributing a "holistic" character to the information collection process during the analyses. Finally, during the data interpretation stage, it is necessary to distance the team from the place where the research was carried out to put into practice the process of critical reflection on the information collected.
Figure 3 offers a comprehensive view of the RPER application process using a BPM diagram, illustrating the software features associated with each activity. Each functionality is numbered for easy reference and to track their frequency of appearance.
Figure 3 - Business Process Management (BPM) diagram of the Rapid Participatory Emancipatory Research (RPER) intervention method process and the respective planned functionalities for the system.
Source: Prepared by the authors.
4.2 RPER Software Function Point Analysis (FPA)
Albrecht (1979) initially proposed Function Point Analysis (FPA) as a strategic approach to quantitatively assess both the complexity and the functionality inherent in a software endeavor. Within the framework of FPA, it is imperative to consider an array of transactions. These encompass data that the system receives, sends, or processes, as well as its interactions with both internal and external database structures (Rohayani et al., 2017).
Within the realm of software development, the foundational operations associated with data management are encapsulated by the acronym CRUD, denoting Create, Read, Update, and Delete, as already mentioned in this manuscript. In function point analysis, these operations are equivalently recognized as the foundational activities for interacting with the system's Internal Logical Files (ILFs), that is, persistent data structures within a software application that store and manage the system's internal data (IFPUG, 2004).
Table 2 presents the desired system functionalities according to the BPM established in the previous sections and their corresponding calculated function points, offering a comprehensive view of each individual functionality. Every row of the table describes a specific functionality with its name, a brief description, and a numerical designation corresponding to the functionalities mapped with BPM in the previous section.
Table 2 - Planned functionalities for the RPER software and their corresponding function points (FP)

| Functionality | Overview | FP |
| --- | --- | --- |
| 1) Users CRUD | CRUD operations for users: creation of users in the system and the updating of information such as name, password, and profile picture, for example. | 19 |
| 2) Roles & Users/Roles Association | A user is given viewer status for all registered RPERs upon first joining the system, but users can become members of, or even coordinate, a RPER; this functionality controls that kind of association. | 22 |
| 3) Step-by-Step Explanation Page | A static page containing information to guide current and future interdisciplinary team members on how to proceed with a RPER application. | 3 |
| 4) Help Button "?" Pages | Also static pages with information, but instead of a macro view of the application, the help button shows information on each individual step of the method, including examples of past uses. | 54 |
| 5) Text Editor with Image Handling | Collaborative text editor for each step of the method, with which users can insert, update, delete, and/or just inspect text, images, and tables on each page of the system. This is the main functionality of the software, where users input most of the data and information gathered. | 360 |
| 6) Status Tracking Info | With four possible status options (not applicable, unstarted, completed, and in progress), each step of the application is controlled by the team members to keep track of progress and to select what will be printed in the report. | 57 |
| 7) Automated Report | Users can generate a report document with all the information placed into the system. Initially the software only creates a Microsoft Word file, but the options could be extended in the future. | 4 |
| 8) Summary Page with Progress Bar | A dynamic page with summarized information about the RPER, such as its avatar image, team members, and progress, monitored by evaluating the status of each step. | 10 |
| 9) RPER CRUD | CRUD operations for the RPER itself: it is possible to create the method for an application in a collectivity and choose its avatar picture; by default, the coordinator is the person who initiates the RPER. | 13 |
Total Function Points: 542
Source: Prepared by the authors.
Starting with the users CRUD functionality, this was evaluated based on several components. External Inputs (EI) for the creation, updating, and deletion of users accounted for 9 points. The display of user details, categorized as an External Inquiry (EQ), contributed another 3 points. Additionally, the Internal Logical File (ILF) for user data storage added 7 points, culminating in a total of 19 function points for this segment.
To avoid confusion, an EQ, worth 3 function points, typically involves retrieving and presenting data without significant transformation. An External Output (EO) involves processing logic that changes the behavior or form of the data, such as performing calculations before presenting it, which is why each EO is counted as 4 function points. This is also why displaying user details is treated as an inquiry rather than an output operation.
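To make the arithmetic concrete, the sketch below recomputes the users CRUD total from the low-complexity weights used in this section (EI = 3, EQ = 3, EO = 4, ILF = 7 points). A full IFPUG count would also distinguish average- and high-complexity items, so this is an illustration rather than a complete FPA implementation:

```typescript
// Low-complexity IFPUG weights as used in this section's examples.
const WEIGHTS = { EI: 3, EO: 4, EQ: 3, ILF: 7 } as const;

type Counts = Partial<Record<keyof typeof WEIGHTS, number>>;

// Sum weight * count over all transaction and file types present.
function functionPoints(counts: Counts): number {
  return (Object.keys(WEIGHTS) as (keyof typeof WEIGHTS)[]).reduce(
    (sum, kind) => sum + WEIGHTS[kind] * (counts[kind] ?? 0),
    0
  );
}

// Users CRUD: 3 EIs (create, update, delete) + 1 EQ (display) + 1 ILF
// = 3*3 + 3 + 7 = 19 function points, matching Table 2.
console.log(functionPoints({ EI: 3, EQ: 1, ILF: 1 })); // 19
```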
Roles and users/roles association is an important functionality within the software, responsible for managing users' roles. The processes of role assignment and removal, both categorized as EIs, together contributed 8 points, since they fall under the average complexity classification for interacting with two ILFs at once. The ILFs, which hold user data and the associations between specific RPER applications and their members, summed to 14 points, 7 each. Thus, the aggregate function points for this segment amounted to 22.
EIs refer to processes in which data enters the software system from an external application or user. These inputs do not necessarily have to update the ILF, but often they do. The primary intent behind an EI is to maintain some form of ILF or to influence the behavior of the system. An EI is characterized by its complexity, which is determined by the number of data elements it manages and the number of ILFs it references. In the context of the RPER software's roles and users/roles association functionality, an EI is the process where a coordinator assigns or removes a member role to/from a user. This role will influence the system by updating the user's permissions or access rights based on it. These processes of assigning and removing roles are typical examples of EIs in the function point analysis.
The software also offers a step-by-step explanation of the method and help pages that provide guidance for each of the 18 steps of a RPER application: 3 preparation pages, 12 fieldwork pages, and 3 post-fieldwork pages, each holding separate information. Viewing the unique content of these pages is classified under External Inquiries (EQ). At 3 function points for each of the 18 pages (54 in total), plus 3 for the step-by-step page, these functionalities amount to 57 points.
Next is the text editor with image handling, which facilitates content management across all 18 steps. This functionality involves 162 points from EIs for adding, updating, and deleting content (3 operations × 3 function points × 18 content pages). The display of this content, categorized under EQ, added another 54 points, 3 for each page. The ILF for storing this content in the database contributed 126 points, since 18 unique tables are used to save the content of each page. This led to an aggregate of 360 function points for this functionality.
Status tracking information enables the software to monitor and present the status of each of the 18 RPER steps. This functionality amassed 54 points from EIs dedicated to dynamic status updates and an additional 3 points from an EQ used to display the current status of each step of a given method, totaling 57 points. In another functionality, a user can generate a report merging the information from all the steps into a single Microsoft Word file; despite being technically challenging, this task falls under a single EO, contributing a mere 4 points to the overall function point count.
The summary page with progress bar displays information on the RPER avatar image, team members, and roles, which accounts for 3 points as an EQ. It also enables members to update the RPER avatar photo, adding 3 points as an EI, and it calculates and presents the progress percentage for the RPER application, 4 points as an EO. All of this results in 10 function points for this segment. Lastly, the RPER creation operation, which allows the management of the RPER itself, was assessed. In this version of the software, users can only create the RPER, and all possible updates to it were already covered by previous functionalities, so we only add the EI for the creation operation (3 points), the display of RPER details in the RPERs list, which falls under an External Inquiry (3 points), and the ILF for storing RPER data (7 points), bringing the total for this section to 13 function points.
The RPER software, with its set of functionalities and interactions, encompasses a total of 542 function points, providing a robust measure of its complexity and underscoring the intricacies involved in its development and maintenance. The Function Point Analysis technique captures the effort required to produce the software; however, converting these points into hours of work is highly subjective, varying, for example, with the programming language chosen and the experience of the technicians who will build the system.
Nonetheless, expert knowledge provides a metric suggesting that in modern programming languages, one function point equates to approximately 10 hours of effort from a well-trained professional, which fits with what is indicated by other researchers. While in the past this value reached fourteen hours per point (Morris, 2001), more recent studies indicate that this value ranges from about eight to eleven hours depending on the project type, software system, application area and technology involved (Chrobot, 2011; Czarnacka, 2012). The International Software Benchmarking Standards Group (ISBSG, 2023) supports this estimate, particularly for "medium 2" size projects (spanning from 300 to 1000 function points). Consequently, completing such a system would require roughly 5420 hours of dedicated full-time work, equating to about two years and eight months. Subsequent sections compare this estimation with the actual development time spent.
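As a back-of-the-envelope check of this estimate, assuming the 10 hours-per-point figure cited above and a 40-hour week for one full-time developer:

```typescript
// Effort estimate from function points, assuming ~10 hours per point,
// the mid-range figure cited above for "medium 2" size projects.
const HOURS_PER_FP = 10;
const HOURS_PER_WEEK = 40; // one full-time developer

function estimateEffort(functionPoints: number) {
  const hours = functionPoints * HOURS_PER_FP;
  const weeks = hours / HOURS_PER_WEEK;
  return { hours, years: weeks / 52 };
}

// 542 FP -> 5420 hours, roughly 2.6 years of full-time work,
// consistent with the two years and eight months quoted above.
console.log(estimateEffort(542));
```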
4.3 Software Interface Prototyping and Wireframe
There are several free tools that help in the system prototyping stage. In this work we used Figma (2021), a vector graphics editor and prototyping tool that is primarily web-browser based, although desktop versions are available for macOS and Windows. The software is focused on graphical user interface development and user experience design, also known as UI/UX (Franco, 2021). In addition, it has vector tools for proficient illustration and code generation, and it can be applied to image manipulation (Kadam, Ahirrao, & Kotecha, 2021): Figma allows resizing, cropping, color adjustment, and the application of image filters such as contrast, shadows, mirror, blur, exposure, and highlight, among many others.
The designed graphical interface can be visualized in Figures 4 and 5. Figure 4 shows the logo created for the software, the login screen, and the account creation screen. In Figure 5, two screens of the system are presented. The screen on the left lists all RPERs in progress or finished in a product-card layout, where the title of the application and a representative photo are highlighted; on the same interface, it is possible to see the search bar, the sorting functionalities, and the button to add a new RPER. The screen on the right of Figure 5 shows an example of a RPER already in progress and the menu with the mapped features accounted for in the previous sections of this work. It is also possible to see the featured image, also known as the RPER avatar, other images inserted in this example project, and the application progress based on the steps already finished or underway. Each step's status is shown as a small circle before the step's name: an empty circle indicates an unstarted step, while a full circle indicates a completed task. This design was produced before any coding; nevertheless, the final software closely followed this plan.
4.4 Software Back-End and Front-End Implementation
The back-end of the RPER software serves as the backbone, ensuring data integrity, security, and communication with one or multiple front-end options (e.g., web application, mobile app). It functions as the foundational infrastructure where data storage, processing, and business logic reside. In our case, it also delineates a set of conventions for creating, retrieving, updating, and deleting data, thereby ensuring the seamless interplay of data and operations between the front-end and back-end systems. The most important technologies employed in this step of the development were: Node.js, an open-source server environment; Express, a free and open-source back-end web application framework; PostgreSQL, a free and open-source relational database management system (RDBMS); and TypeScript, an open-source high-level programming language that builds on JavaScript.
Node.js is a multi-platform, open-source runtime environment that executes JavaScript or TypeScript both on the client-side and the server-side. This facilitates the creation of a dynamic web system even before it is relayed to the user's browser. Node.js harmonizes web application development around a singular programming language, simplifying its coding process. In recent years, Node.js has garnered significant accolades. For instance, LinkedIn's mobile application transitioned from "Ruby on Rails" to Node.js, leading to a reduction from 30 data servers to a mere three, all the while retaining the same user traffic (Paul, 2012). Other industry leaders like Netflix, PayPal, and Uber also leverage this technology (Lin & El Gebaly, 2016). In performance evaluations juxtaposing Node.js against traditional server environments, systematic tests have consistently shown it outpacing its competitors (Chitra & Satapathy, 2017). Carter (2014) further praised Node.js as a platform designed for rapid and easy system development with significant scalability for network applications.
The choice of Express, a minimal and flexible Node.js web framework, ensured a strong foundation for building the application's programming interface (API). Express, combined with Node.js, enabled the creation of a powerful representational state transfer API (RESTful API), which serves as the bridge between the software's front-end and its PostgreSQL database. Express is used at large companies such as Twitter, now "X" (StackShare, 2023).
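As a minimal, hypothetical sketch of what such RESTful endpoints look like with Express and TypeScript (the routes and the in-memory store below are illustrative; the actual system persists data in PostgreSQL):

```typescript
import express, { Request, Response } from "express";

const app = express();
app.use(express.json());

// Illustrative in-memory store; the real system uses PostgreSQL.
const steps = new Map<string, { rperId: string; status: string }>();

// Hypothetical endpoint: list the steps of one RPER application.
app.get("/rpers/:id/steps", (req: Request, res: Response) => {
  const result = [...steps.entries()]
    .filter(([, s]) => s.rperId === req.params.id)
    .map(([stepId, s]) => ({ stepId, status: s.status }));
  res.json(result);
});

// Hypothetical endpoint: update the status of a step.
app.put("/steps/:stepId/status", (req: Request, res: Response) => {
  const step = steps.get(req.params.stepId);
  if (!step) {
    res.sendStatus(404);
    return;
  }
  step.status = req.body.status;
  res.sendStatus(204);
});

app.listen(3333);
```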
Data management and storage are the essence of the RPER software. Given the intricate nature of participatory appraisals and the depth of data they produce, a reliable and efficient database system was paramount. PostgreSQL, a powerful open-source relational database, was the chosen system. Given its reputation for extensibility, performance, and Structured Query Language (SQL) compliance (Makris, Tserpes, Spiliopoulos, Zissis, & Anagnostopoulos, 2021), PostgreSQL provides the necessary tools to handle the large amounts of data that the many RPER applications to come could generate. Its compliance with atomicity, consistency, isolation, and durability (ACID) requirements ensures that all transactions are processed reliably, a critical factor for a research-oriented application like this. As an example of its robustness, companies like Instagram and Spotify use PostgreSQL in their applications (Thomson Data, 2023).
Lastly, TypeScript, a typed superset of JavaScript (Microsoft, 2023), was used as the primary programming language for both the back-end and the front-end development. It offers strong programming concepts, including classes and interfaces, facilitating the building of large-scale JavaScript projects (Wu, Sun, Gong, Chen, Liao, & Jin, 2020). According to various studies, both JavaScript and TypeScript have been leading languages for quite some time (Frederickson, 2018; Stackoverflow, 2017; Stackoverflow, 2022). TypeScript's static typing, combined with its powerful object-oriented programming capabilities, ensures that our codebase remains maintainable, a critical aspect for any software expected to last and evolve over time.
For our front-end, we faced a decision between developing a mobile app and a web application. While we initially leaned towards a mobile app, we strategically pivoted to a web application: web apps provide instant cross-platform access, facilitating quicker deployment and wider user engagement. Despite not being developed specifically as a mobile app, our web system was designed with responsive principles and adjusts seamlessly to various screen sizes and resolutions, ensuring optimal viewing on phones, tablets, and computers alike. The framework chosen for the front-end was React (Meta, 2023), a free and open-source library for building component-based user interfaces, primarily developed and maintained by Meta (formerly Facebook). This not only expedited development but also simplifies potential future transitions: as React Native, a popular tool for developing mobile apps, leverages React's core principles, it offers a smoother pathway to expand into a mobile app later, facilitating continuity and a unified user experience across platforms.
The front-end of the RPER software focuses on user experience, ensuring that data visualization, user input, and overall interaction are smooth and intuitive. The choice of React for building user interfaces laid the groundwork for a componentized and efficient front-end architecture; this modular approach allows for reusable components, enhancing the software's maintainability and scalability. Besides TypeScript and React, other important tools were used in the front-end development, such as styled-components, SunEditor, and Axios. We briefly explain how these tools were used in our application.
Styled-components played a crucial role in the software's aesthetics and user experience. This library for React and React Native allowed us to use tagged template literals to style components, ensuring a clean and organized code structure. This approach also eliminated the need for mapping between styles and components, reducing potential errors and simplifying the styling process while ensuring the software's web pages were both responsive and interactive. For rich text editing capabilities, SunEditor, a lightweight yet powerful what-you-see-is-what-you-get (WYSIWYG) editor, was incorporated, allowing users to produce detailed reports, documentation, and other essential research documents with ease. Finally, communication with the back-end was facilitated by Axios, a promise-based HTTP client for the browser and Node.js. Axios made it simpler to send asynchronous HTTP requests to REST endpoints, ensuring that data retrieval, posting, and other CRUD operations were handled smoothly.
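A compact, hypothetical sketch of how styled-components and Axios fit together in a React component follows; the endpoint URL, component name, and data shape are assumptions for illustration, not the software's actual code:

```tsx
import axios from "axios";
import { useEffect, useState } from "react";
import styled from "styled-components";

// styled-components: styles declared with a tagged template literal.
const Card = styled.div`
  border-radius: 8px;
  padding: 16px;
  box-shadow: 0 1px 4px rgba(0, 0, 0, 0.2);
`;

interface RperSummary {
  id: string;
  title: string;
  progress: number; // completion percentage computed by the back-end
}

// Hypothetical listing component for the RPER cards screen.
export function RperList() {
  const [rpers, setRpers] = useState<RperSummary[]>([]);

  useEffect(() => {
    // Axios sends an asynchronous GET request to the REST API.
    axios.get<RperSummary[]>("/api/rpers").then((res) => setRpers(res.data));
  }, []);

  return (
    <>
      {rpers.map((r) => (
        <Card key={r.id}>
          {r.title} ({r.progress}% complete)
        </Card>
      ))}
    </>
  );
}
```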
All the technologies used for implementing the software are open-source and freely available, including the software itself, which is hosted in a public repository of a cloud-based version control service. In summary, the combination of these technologies provided a robust, scalable, and user-friendly software solution tailored specifically to the needs of the RPER. Given that community participation lies at the heart of the topic, it was imperative to choose free and open-source tools. This not only echoes the principles of participatory research but also ensures that the platform remains adaptable to future needs, inviting contributions and fostering a sense of community ownership over its evolution.
The coding of the back-end and front-end started in June 2021 and proceeded regularly until August 2023, a little over two years. It involved two experienced developers and, occasionally, a third junior developer who helped with the front-end. The effort and time invested closely matched the initial projections from the function point analysis, especially considering that none of the developers was dedicated full time to the project. By the time this manuscript was drafted, the back-end and the front-end each had approximately eight thousand lines of code; in the back-end these lines were spread across roughly three hundred files, while the front-end comprised about one hundred and thirty files, including pages and components. This count covers only developed files, excluding code from imported libraries and reused functions from external contributors.
5 CONCLUSIONS AND FUTURE WORK
This work used a professional approach to software development to create an initial version of an application in the Social Management field to enhance the capabilities of this theory, augment the RPER methodology with innovative information technology, and extend its reach. Based on the number of function points calculated, and the hours required for development, it is evident that the system is of significant size. Despite the complexity of development, it is more important to note how the software can support the RPER method application in nearly every step of the way. This will consequently benefit society and the rural communities where the research was intended to be applied.
The thorough text description, coupled with illustrative software screens, makes it easier for readers to grasp the developed work, facilitating their journey from conceptual understanding to tangible insights. Thanks to the development methodology adopted, the waterfall model, it is possible during system maintenance to revisit previous steps and change the design itself or any of the functionalities already mapped, as needed.
Beyond the immediate theoretical merits of applying social management, other accomplishments are expected from the use of a web system in RPER applications. These benefits include the distribution and dissemination of results, greater transparency, the unification and centralization of research conducted with the method, the organization of data, automated report generation, and better communication and collaboration between team members. Perhaps its most profound impact, however, lies in its potential socio-economic ramifications, particularly in rural community engagement and the enhancement of qualitative research in agriculture.
As future work, we recommend the continuation of this system's development, maintenance, and possible adaptations, such as integrating artificial intelligence capabilities and adapting it to other appraisal practices. As highlighted in the results, every technology used to build the system is openly accessible, and we made sure the RPER software is as well, guaranteeing the platform's flexibility for future demands and encouraging community contributions. Other suggestions include a frequently asked questions page where users can pose questions and specialists give their responses, integrated video lessons with tips about the techniques and application steps, and an interactive questionnaire for users to gauge their proficiency and readiness to apply the method. Furthermore, we suggest the construction of other systems using the concepts, techniques, and technologies presented in this work, especially systems that bolster qualitative research in the agriculture domain, a methodological niche with a shortfall in information technology engagement.
Figure 4 - Rapid Participatory and Emancipatory Research (RPER) Software Graphical User Interface (GUI): Logo, Login, and Signup Screens.
Source: Prepared by the authors.
Figure 5 - Rapid Participatory and Emancipatory Research (RPER) Software Graphical User Interface (GUI): Listing and Example Screens.
Source: Prepared by the authors.
REFERENCES
ISBSG - International Software Benchmarking Standards Group. (2023). Analysis of project cost per function point. ISBSG Delivering IT Confidence: The global and independent source of data and analysis for the IT industry.
HAL Id: lirmm-02100287, available at https://hal-lirmm.ccsd.cnrs.fr/lirmm-02100287 (submitted on 17 Jan 2020).
Compiler-assisted Adaptive Program Scheduling in big.LITTLE Systems
Marcelo Novaes
Department of Computer Science
UFMG
Brazil
marcelonovaes@dcc.ufmg.br
Vinicius Petrucci
Department of Computer Science
UFBA
Brazil
vinicius.petrucci@dcc.ufba.br
Abdoulaye Gamatié
LIRMM
CNRS
France
abdoulaye.gamatie@lirmm.fr
Fernando Quintão
Department of Computer Science
UFMG
Brazil
fernando@dcc.ufmg.br
Abstract
Energy-aware architectures provide applications with a mix of low (LITTLE) and high (big) frequency cores. Choosing the best hardware configuration for a program running on such an architecture is difficult, because program parts benefit differently from the same hardware configuration. State-of-the-art techniques to solve this problem adapt the program’s execution to dynamic characteristics of the runtime environment, such as energy consumption and throughput. We claim that these purely dynamic techniques can be improved if they are aware of the program’s syntactic structure. To support this claim, we show how to use the compiler to partition source code into program phases: regions whose syntactic characteristics lead to similar runtime behavior. We use reinforcement learning to map pairs formed by a program phase and a hardware state to the configuration that best fits this setup. To demonstrate the effectiveness of our ideas, we have implemented the Astro system. Astro uses Q-learning to associate syntactic features of programs with hardware configurations. As a proof of concept, we provide evidence that Astro outperforms GTS, the ARM-based Linux scheduler tailored for heterogeneous architectures, on the parallel benchmarks from Rodinia and Parsec.
1 Introduction
Contemporary hardware found in mobile phones and data centers sports multiple ways to reduce energy consumption. Two of these techniques are the combination of low- and high-power cores (the so-called big.LITTLE architectures) [7], and the ability to adjust power and speed dynamically (DVFS) [15]. This design gives us the possibility to allocate to each parallel application the hardware configuration that best suits it. A hardware configuration consists of a number of cores, their type, and their frequency level. We say that a configuration $H_1$ suits a program better than another configuration $H_2$ if $H_1$ runs said program more efficiently than $H_2$, according to some metric such as runtime or energy consumption. Nevertheless, even though we can today choose, among several configurations, the one that best fits the needs of a certain program, we still have no clear technique to perform this choice seamlessly.
We call the task of allocating parts of a parallel program to processors the code placement problem. State-of-the-art approaches solve this problem dynamically or statically. Dynamic solutions [18, 20, 22] are implemented at the runtime level, at the operating system, or via a middleware. Static approaches [11, 19, 21, 31] are implemented at the compiler level. The main advantage of the dynamic approach is the fact that it can use runtime information to weight the choices it makes. Static techniques, in turn, provide reduced runtime cost and better leverage of program characteristics. In this paper, we claim that it is possible to join these two approaches, achieving a synergy that, otherwise, could not be attained by each technique individually.
To support this claim, we start from a technique that has proven effective to schedule computations in big.LITTLE architectures: reinforcement learning. Nishtala et al. [20] showed that reinforcement learning helps to find good hardware configurations for applications subject to varying dynamic conditions. The beauty of this approach is adaptability: it provides the means to explore a vast universe of states, formed by different hardware setups and runtime data changing over time. Given enough time, well-tuned heuristics find a set of scheduling decisions that suits the underlying hardware. Yet, “enough time” can be too long. The universe of runtime states is unbounded, and program behavior is hard to predict without looking into its source code. To speed up convergence, we resort to the compiler.
The compiler gives us two benefits. First, it lets us mine program features, which we can use to train the learning
algorithm. Second, it lets us instrument the program. This instrumentation allows the program itself to provide feedback to the scheduler, concerning the code region currently under execution. Based on previous knowledge, collected statically, about characteristics of that region, the scheduler can take immediate action. An action consists in choosing a new state to represent program behavior, and collecting the reward related to that choice. Such feedback is then used to fine-tune and improve scheduling decisions. As we show in Section 4, convergence is faster, and runtime shorter.
To validate our ideas, we have materialized them into a framework to instrument and execute applications in heterogeneous architectures: the Astro System. Astro collects syntactic characteristics from programs and instruments them using LLVM [14]. Experiments in programs from PARSEC [4] and Rodinia [6] running on an Odroid XU4 show that we can obtain speedups of more than 10% over the default GTS scheduler used in ARM-based systems. Such numbers result from the following contributions:
Observations: in Section 2, we demonstrate that the performance of a program running on a heterogeneous architecture varies depending on which part of its text we consider. This observation points us to the key insight: the possibility of augmenting an adaptive runtime apparatus with program characteristics.
Compiler: in Section 3.1, we explain how to collect and discretize program features, and in Section 3.2, we explain how to instrument a program so as to use said features to fine-tune an adaptive code placement algorithm.
Runtime: in Section 3.3, we show how to integrate the static information that we collect with an adaptive runtime environment. Once we train a program, we generate code that maps different parts of it to suitable hardware configurations.
2 Empirical Observations
This section motivates our work through three empirical observations. First, different hardware configurations yield very different tradeoffs between power consumption and runtime speed for a program (Figure 1). Second, this behavior happens because programs have power phases: depending on the operations that they perform, they might consume more or less power per time unit (Figure 2). Third, the best hardware configuration for a program might not suit the needs of a different application (Figure 4). Central to the discussion in this section is the notion of a hardware configuration:
Definition 2.1 (Hardware Configuration). A heterogeneous architecture is formed by a set \( P = \{p_1, p_2, \ldots, p_n\} \) of \( n \) processors. A hardware configuration is a function \( H : P \mapsto \text{Boolean} \). If \( H(p_i) = \text{True} \), then processor \( p_i \) is said to be active in \( H \), otherwise it is said to be inactive.
**First Observation.** The same application might benefit differently from different hardware configurations. This benefit is measured in terms of processing time and energy consumption. Figure 1 shows how two benchmarks from the PARSEC suite – Freqmine and Streamcluster – fare on an Odroid XU4 board featuring 4 Cortex-A15 2.0 GHz cores and 4 Cortex-A7 1.4 GHz cores. Following a nomenclature adopted by ARM, we shall call the A15 cores *bigs*, and the A7 cores *LITTLEs*. By switching on and off the different cores, we have 24 different hardware configurations.\(^1\)
Each dot in the figure represents the average of 10 executions on the same configuration, using the smallest\(^2\) input available in PARSEC. Variance is almost negligible, staying under 1% in every sample, for the two benchmarks. The X-axis shows the sum of the execution times of processors active in a particular configuration; hence, it is not clock time. Energy is measured with the Odroid XU3 on-board power measurement circuit and refers to work performed within the processors only; thus, peripherals are not considered.
Figure 1 lets us conclude that the energy and runtime footprint of applications vary greatly across different hardware
\(^1\)We have $24 = 5 \times 5 - 1$ configurations, because we do not count the setup in which all cores are off.
\(^2\)This experiment would take 12 days using the largest inputs.
configurations. For instance, the most time-efficient configuration for Freqmine is 0L4B, i.e., four bigs and no LITTLEs (2.90 secs, 10.43 J). However, the most energy-efficient configuration is 4L0B (4.01 secs and 8.65 J). Results are not the same for Streamcluster. The best energy configuration is 0L1B (0.48 secs, 0.69 J). This is also the most time-efficient configuration. Freqmine shows more parallelism than Streamcluster; therefore, it benefits more from a larger number of cores. This diversity of scenarios happens because programs have phases. Energy and runtime behavior are similar within the same phase, and potentially different across different phases.
**Second Observation.** The instantaneous power consumed by a program is not always constant. In other words, a program has power phases. Figure 2 (a) shows a program which we have crafted to emphasize the different phases that a program undergoes during its execution. This program performs the following actions: (i) read two matrices from text files; (ii) multiply them; and (iii) print all the matrices in the standard output. In between each of these actions, we have interposed commands to read data from the standard input.
Figure 3 shows the power profile of this program. This chart has been produced with JetsonLeap [3], an apparatus that lets us measure the energy consumed by programs running on the Nvidia TK1 Jetson board. JetsonLeap is formed by three components: the target Nvidia board (Figure 2 (b)), a data acquisition device, which reads the instantaneous power consumed by the board (Figure 2 (c)), and a synchronization circuit, which lets us communicate to the power meter which program event is running at each instant (Figure 2 (d)).
Distinct phases exist within the same program because it might use the hardware resources differently, depending on which part of it is running. By reading performance counters, we know that during matrix multiplication, the CPU is at maximum usage. During the input/output operations, this utilization drops slightly, and other components of the hardware, such as its serial port, are more exercised instead. This fall is steep once the program is waiting for user inputs. The CPU is not the only hardware component that accounts for power dissipation. The JetsonLeap apparatus measures energy for the entire hardware. Thus, the underutilization of the CPU does not mean that overall power consumption will decrease. Nevertheless, variations in the CPU usage are likely to cause variations in the power profile of the program.
Discovering such program phases by means of purely dynamic techniques is possible, yet difficult. As we shall demonstrate in Section 4, we can use profiling techniques, à la Hipster [20], to identify variations in program behavior. However, this approach has two shortcomings. First, distinct program parts, with very different resource demands in terms of memory, CPU, disk and such, can display similar dynamic characteristics. For instance, we could imagine a scenario in which function read_user_data, in Figure 2, is implemented via busy waiting. In this case, instead of the valleys observed in Figure 3, we would encounter a power line similar to that produced by CPU-intensive functions like mulMatrix. Second, profiling-based techniques face a trade-off between precision and overhead. Fast detection asks for high sampling rates, thus burdening the application which we originally intended to optimize. On the other hand, purely dynamic techniques remain unaware
```c
int main(int argc, char** argv) {
int M1, N1, M2, N2;
// Read first matrix from file 'argv[1]'
int** m1 = readMatrix(argv[1],&M1,&N1);
read_user_data();
// Read second matrix from file 'argv[2]'
int** m2 = readMatrix(argv[2],&M2,&N2);
read_user_data();
int** m3 = mulMatrix(m1,m2,M1,N1,N2);
read_user_data();
// Print all the matrices in the
// standard output
printMatrix(m1, M1, N1);
printMatrix(m2, M2, N2);
printMatrix(m3, M1, N2);
read_user_data();
}
```
**Figure 2.** (a) Simple matrix multiplication implemented in C. (b) The Nvidia TK1 board. (c) NI 6009 Data Acquisition Device. (d) Synchronization circuit.
**Figure 3.** (a) Power profile of program seen in Figure 2. The NI 6009 sample rate was 1000 samples/sec. (b) Zoom of the power profile obtained during the last phase of the program.
of structural properties of the code. Thus, we claim that effective adaptation demands knowledge of program characteristics. Such information is readily available to the compiler; however, it is hard to acquire precisely with techniques unaware of the program’s structure.
3 The Astro System
This section describes the design and implementation of our approach to solve the problem of finding good hardware configurations for programs. We state this problem as follows:
Definition 3.1. Scheduling of Programs in Heterogeneous Architectures (SPha)
**Input**: a program $P$, its input $I$, hardware configurations $H_1, \ldots, H_n$, energy threshold $E$, and performance threshold $S$.
**Output**: $P'$, a new version of $P$, which switches between configurations, and processes $I$ using $E\%$ less energy, with a slowdown of no more than $S\%$.
In this paper, we solve SPha using an assortment of techniques, which give us the means to generate code that is well adapted to different architectures and workloads. Figure 5 provides a general overview of these techniques, emphasizing the different stages over which we go in the process of solving SPha. Section 3.1 describes program instrumentation, a necessary step to partition a program into phases. Section 3.2 goes over actuation; and Section 3.3 discusses the generation of the final program. However, before we move into the particulars of our solution to SPha, we provide a brief introduction to Q-Learning, the flavour of reinforcement learning that we have adopted.
**Q-Learning.** Q-learning is a reinforcement learning algorithm [28]. Given some notion of state (Definition 3.2) and reward (Definition 3.7), it finds an optimized policy to perform the best action (Definition 3.9). Q-learning is attractive because there is no need to know in advance the precise results of the actions before we perform them; that is, we learn about the environment as we perform actions on it. A Markov Decision Process (MDP) drives Q-learning. An MDP is given by a set of states \( S \), a set of possible actions \( A \), a reward function \( R : S \times A \rightarrow \mathbb{R} \), and a state transition mapping \( T : S \times A \rightarrow S \) that describes the effects of taking each action in each state of the environment. The Markov property says that the results of an action depend only on the state where the action was taken, regardless of any other prior states.
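To make the update rule concrete, below is a minimal tabular Q-learning sketch in C. It is illustrative only: Astro approximates Q with a neural network (Section 3.2.2), and the state/action sizes and names here are assumptions of this sketch, derived from the 4 program phases, 81 hardware phases, and 24 configurations discussed in this paper.

```c
/* Hypothetical sizes: 4 program phases x 81 hardware phases x 24 current
 * configurations form the discretized state space; one action per target
 * configuration. */
#define N_STATES  (4 * 81 * 24)
#define N_ACTIONS 24

static double Q[N_STATES][N_ACTIONS];

/* One Q-learning step: after taking `action` in `state`, observing `reward`
 * and landing in `next_state`, move Q(s,a) toward the bootstrapped target
 * r + gamma * max_a' Q(s',a'). */
void q_update(int state, int action, double reward, int next_state,
              double alpha /* learning rate */, double gamma /* discount */) {
    double best_next = Q[next_state][0];
    for (int a = 1; a < N_ACTIONS; a++)
        if (Q[next_state][a] > best_next)
            best_next = Q[next_state][a];
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action]);
}
```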
3.1 Phase Partitioning
A running program might cause the hardware to go over an infinite number of different states. Because this universe is unbounded, Definition 3.2 discretizes the notion of a *State*. In that definition, \( S \) is a *Program Phase* and \( D \) is a *Hardware Phase*. Program phases are discussed in Section 3.1.1, and hardware phases are discussed in Section 3.1.2.
**Definition 3.2 (State).** A state is a triple \((H, S, D)\) representing a hardware configuration \( H \), a program phase \( S \) and a hardware phase \( D \).
3.1.1 Program Phases
Static Program Phases depend only on the syntax of a program. Definition 3.3 formalizes this notion. A static program phase is not equivalent to a *program region*, because different regions can present the same set of feature ranges. Example 3.4 clarifies the meaning of these definitions.
**Definition 3.3 (Program Phase).** A code-level feature (also called code feature or simply feature) is a syntactic characteristic of a program, such as number of \( n \)-nested loops or instruction mix. A feature range is a contiguous interval of values that a feature can assume, and that partitions the feature space into equivalence classes. A program phase \( S \) is a group of feature ranges, covering different features.
**Example 3.4.** The density of arithmetic and logical instructions is a code-level feature, which we obtain by dividing the number of such opcodes by the total number of program instructions. We can define different feature ranges covering this metric, such as \([0, 0.25], [0.25, 0.50] \) and \([0.5, 1.00] \). The number of nested loops yields another feature. In this case, possible ranges are \([0, 1], [2, 3] \) and \([4, +\infty] \). Finally, an expectation on the number of I/O routines called in a function gives us a third feature. A heuristic to estimate it is \( \sum_i 10^{n_i} \), summing over every I/O call \( i \) nested into \( n_i \) loops. Potential intervals for this metric are \([0, 1], [1, 10], [10, 100] \) and \([100, +\infty] \). The \( 3 \times 3 \times 4 \) possible combinations of these ranges give us 36 program phases. If we collect these features for each function in the program code, then we can map any of them to one of these program phases.
In this paper, we mine (i.e., collect) features from the intermediate program representation that the compiler manipulates before producing executable code. We have implemented a *Phase-Extractor* using the LLVM compiler. The result of mining program features is a map that assigns phases to program regions. This map depends on the choice of program region. Many different granularities of regions are possible, such as instruction, basic block, loop, Single-Entry-Single-Exit block [9], etc. We have chosen to work mostly at the granularity of functions. The “mostly”, in this case, refers to the fact that we also change phases before and after library calls that cause the program to block waiting for some event (see the Barrier phase, in the discussion that follows). Pragmatically, this amounts to saying that the instrumented program adds logic to change phases at the entry point of functions, and around certain library calls.
**Example 3.5.** Figure 6 shows the five functions in Figure 2, classified according to features seen in Example 3.4. We are assigning these functions hypothetical values. Because we have three features, we can map them into a three-dimensional space. Each phase corresponds to a cube in this space. Figure 6 shows the sub-space that corresponds to the phase: \( \text{Arith.Density} \in [0, 0.25], \text{I/O Weight} \in [0, 1] \) and \( \text{NestingFactor} \in [0, 1] \). Function main, in our example, fits in this phase.
**Our Choice of Program Phases.** In our implementation, we combine the following eight code features to determine program phases. The first five are "densities", i.e., they represent a certain quantity of instructions normalized by the total number of instructions in the target function; the last three are boolean flags:
• IO-Dens: proportion of library calls that perform I/O operations;
• Mem-Dens: proportion of instructions that access memory (loads and stores);
• Int-Dens: proportion of arithmetic and logic instructions that operate on integer types;
• FP-Dens: proportion of arithmetic and logic instructions that operate on floating-point types;
• Locks-Dens: proportion of lock instructions;
• Barrier: true when the program invokes a multi-thread barrier that forces it to wait for some blocking event;
• Net: true when the program invokes a library call that forces it to wait for some network-related event;
• Sleep: true when the program invokes a sleep library call that forces it to wait unconditionally.
We have defined four program phases, which appear as combinations of the features above. This choice is arbitrary. We have opted for a simple partitioning, involving only a handful of features, for convenience: this choice already lets us support the main thesis of this paper, namely that static features greatly enhance the dynamic scheduling of computations in heterogeneous hardware. The program phases that we shall consider in Section 4 are listed below; a small classification sketch follows the list:
• Blocked: Barrier = true or Net = true or Sleep = true or Locks-Dens > 0.5;
• I/O Bound: IO-Dens + Mem-Dens > 0.5 and not(Blocked) and Locks-Dens = 0;
• CPU Bound: Int-Dens + FP-Dens > 0.5 and not(Blocked);
• Other: in case none of the previous relations hold.
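The sketch below renders these four predicates in C. The feature struct and names are hypothetical; in Astro, the densities would come from the compiler's Phase-Extractor, and the tests are evaluated in the order shown, so a function matching several predicates is assigned the first one.

```c
typedef enum { BLOCKED, IO_BOUND, CPU_BOUND, OTHER } phase_t;

typedef struct {
    double io_dens, mem_dens, int_dens, fp_dens, locks_dens;
    int barrier, net, sleep;   /* boolean flags */
} features_t;

/* Classify one function's static features into a program phase,
 * following the predicates listed above. */
phase_t classify(const features_t *f) {
    int blocked = f->barrier || f->net || f->sleep || f->locks_dens > 0.5;
    if (blocked)
        return BLOCKED;
    if (f->io_dens + f->mem_dens > 0.5 && f->locks_dens == 0)
        return IO_BOUND;
    if (f->int_dens + f->fp_dens > 0.5)
        return CPU_BOUND;
    return OTHER;
}
```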
3.1.2 Hardware Phases
While the program phases seen in Section 3.1.1 depend only on syntactic program characteristics, hardware phases depend on the dynamic state of the hardware:
**Definition 3.6 (Hardware Phase).** A Performance Counter is any monitor that collects dynamic information about the hardware state, such as CPU performance and cache miss rate. The domain over which the performance counter ranges can be partitioned into phases. Given a collection of performance counters \( \{ C_1, C_2, \ldots, C_n \} \), where each \( C_i \) is partitioned into \( R_i \) phases, then a hardware phase is any combination within the product \( R_1 \times R_2 \times \ldots \times R_n \).
The monitoring of hardware phases does not require program instrumentation. Instead, an actuator reads the state of hardware performance counters periodically. Modern architectures already provide an array of performance counters that can be queried. In this paper, we consider four kinds of counters to define hardware phases:
• IPC: instructions per cycle, in the ranges \([0, 0.5), [0.5, 1.0), [1.0, +\infty)\);
• CMA: cache misses per cache access, in the ranges \([0, 1\%), [1\%, 5\%), [5\%, +\infty)\);
• CMI: cache misses per instruction executed, in the ranges \([0, 0.1\%), [0.1\%, 0.5\%), [0.5\%, +\infty)\);
• CPU: utilization of the CPU, in the ranges \([0, 20\%), [20\%, 50\%), [50\%, +\infty)\).
Each counter is partitioned into three buckets. Therefore, we consider a total of \(3 \times 3 \times 3 \times 3 = 81\) hardware phases, as the sketch below illustrates.
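A possible encoding in C, assuming each counter value is first normalized to the units above (fractions for the miss ratios and CPU utilization); the single-integer phase index is an assumption of this sketch, not something the paper prescribes.

```c
/* Map a counter value into one of three buckets given two thresholds. */
static int bucket(double v, double lo, double hi) {
    return v < lo ? 0 : (v < hi ? 1 : 2);
}

/* Combine the four bucketed counters into one of the 81 hardware phases. */
int hardware_phase(double ipc, double cma, double cmi, double cpu) {
    int b_ipc = bucket(ipc, 0.5, 1.0);     /* instructions per cycle       */
    int b_cma = bucket(cma, 0.01, 0.05);   /* cache misses per access      */
    int b_cmi = bucket(cmi, 0.001, 0.005); /* cache misses per instruction */
    int b_cpu = bucket(cpu, 0.20, 0.50);   /* CPU utilization              */
    return ((b_ipc * 3 + b_cma) * 3 + b_cmi) * 3 + b_cpu;  /* 0..80 */
}
```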
3.2 Actuation
The heart of the Astro system is the Actuation Algorithm outlined in Figure 7. Actuation consists of phase monitoring, learning and adapting. These three steps happen at regular intervals, called check points, which, in Figure 7, we denote by \(i\) and \(i+1\). The rest of this section describes these events.
3.2.1 Monitoring
To collect information that will later be used to solve SPha, Astro reads four kinds of data. Figure 7 highlights this data:
• From the Operating System (OS): current hardware configuration \(H\) and instructions \(p\) executed since last check point.
• From the Program (Log): the current program phase \(S\).
• From the device’s performance counters (PerfMon): the current hardware phase \(D\).
• From the power monitor (PowMon [32]): the energy \(e\) consumed since the last checkpoint.
The monitor collects this data at periodic intervals, whose granularity is configurable; currently, it is 500 milliseconds. The recording of the program phase is aperiodic, following from instrumentation inserted in the program by the compiler. As discussed in Section 3.1.1, information is logged at the entry point of functions, and around library calls that might cause the program to enter a dormant state. The hardware configuration is updated whenever it changes. The metrics $e$ and $p$ let us define the notion of reward as follows:
```c
int main(int argc, char** argv) {
  save_feature_range(
    0.12,  /* Arithmetic density */
    0.8,   /* I/O weight */
    0,     /* Nesting factor */
    False  /* compiler knows next function blocks */
  );
  // Read first matrix from file 'argv[1]'
  int** m1 = readMatrix(argv[1],&M1,&N1);
  ...
}
```
(a)
```c
int main(int argc, char** argv) {
  ...
  determine_active_configuration(1);
  int** m1 = readMatrix(argv[1],&M1,&N1);
  ...
}
```
(b)
```c
int main(int argc, char** argv) {
  ...
  determine_active_conf(STA, DYN);
  ...
}
```
(c)
Figure 8. (a) Instrumentation to mine features. (b) Final static instrumentation, inserted in production code. (c) Final hybrid instrumentation, which also reads runtime information.
Definition 3.7 (Reward). The reward is the set of observable events that determine how well the learning algorithm is adapting to the environment. The reward is computed from a pair $(e, p)$, formed by the Energy Consumption Level $e$, measured in Joules per second (Watt), and the CPU Performance Level $p$, measured in number of instructions executed per second.
The metric used in the reward is a weighted form of performance per watt, namely $\text{MIPS}^y/\text{Watt}$, where $y$ is a design parameter that gives a performance-boosting effect in the system. This is usually a trade-off between performance and energy consumption. To optimize for energy, we let $y = 1.0$. A value of $y = 2.0$ emphasizes performance gains: the reward function then optimizes (in fact, maximizes the inverse of) the energy-delay product per instruction, given by $\text{Watt}/\text{IPS}^2$; letting $S = 1/\text{IPS}$, we have $\text{Watt}/\text{IPS}^2 = \text{Watt} \times S \times S = (\text{Watt} \times S) \times S = \text{Energy} \times \text{Delay}$. This aims to minimize both the energy and the amount of time required to execute thread instructions [5].
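A minimal sketch of this reward, assuming the metric $\text{IPS}^y/\text{Watt}$ as reconstructed above and a known checkpoint interval length `dt`; the function name and signature are hypothetical.

```c
#include <math.h>

/* Reward of Definition 3.7: `e` is the energy (Joules) and `p` the number of
 * instructions observed over a checkpoint interval of `dt` seconds; `y` is
 * the design parameter trading performance against energy. */
double reward(double e, double p, double dt, double y) {
    double ips  = p / dt;  /* CPU performance level (instructions/second) */
    double watt = e / dt;  /* energy consumption level (Joules/second)    */
    return pow(ips, y) / watt;
}
```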
Example 3.8. Continuing with Example 3.5, Figure 8 (a) shows the instrumentation of function main (Figure 2) to log program phases.
3.2.2 Learning
The learning phase uses the Q-learning algorithm. As illustrated in Figure 7, a key component in this process is a multi-layer Neural Network (NN) that receives inputs collected by the Monitor. The NN outputs the actions and their respective rewards to the Actuator, so that a new system adaptation can be carried out. Following common methodology, learning happens in two phases: back-propagation and feed-forwarding. During back-propagation we update the NN using the experience data given by the Actuator (Figure 7). Experience data is a triple: the current state, the action performed, and the reward thus obtained. The state consists of a hardware configuration $(H_{i-1})$, static features $(S_{i-1})$ and dynamic features $(D_{i-1})$ at check point $i-1$. The action performed at check point $i-1$ makes the system move from hardware configuration $H_{i-1}$ to $H_i$. The reward is given by $r_i$, received after the action is taken. The NN consists of a number of layers including computational nodes, i.e., neurons. The input layer uses one neuron to characterize each triple (state, action, reward). The output layer has one neuron per action/configuration available in the system. During the feed-forward phase, we perform predictions using the trained NN. Each node of the NN is responsible for accumulating the product of its associated weights and inputs. Given as input a state $(H_i, D_i, S_i)$ at check point $i$, the result of the feed-forward step is an array of pairs $A \times R$, where $A$ is an action and $R$ is its reward, as estimated by the NN. Actions determine configuration changes; rewards estimate the performance gain, in terms of energy and time, that we expect to obtain with the change. We use the method of gradient descent to minimize a loss function given by the difference between the reward predicted by the NN and the actual value found via hardware performance counters.
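The sketch below illustrates one such back-propagation step. The `nn_forward`/`nn_backward` interface, the three-element state encoding, and the experience struct are all assumptions of this sketch; the paper does not specify the network interface. Following the loss described above, the target is the observed reward itself; a full Q-learning variant would add a discounted max over the predictions for the next state.

```c
/* Hypothetical NN interface: `nn_forward` fills `out` with one predicted
 * reward per action for the encoded state; `nn_backward` runs one gradient
 * descent step on the output neuron of `action`, given the prediction error. */
void nn_forward(const double *state, double *out);
void nn_backward(const double *state, int action, double error);

#define N_ACTIONS 24

typedef struct {
    double state[3];  /* encoded (H, S, D) at checkpoint i-1 (assumed encoding) */
    int    action;    /* configuration adopted at checkpoint i-1 */
    double reward;    /* r_i, observed by the monitor after the action */
} experience_t;

/* One training step: predict, compare against the observed reward, and
 * back-propagate the difference. */
void train_step(const experience_t *ex) {
    double q[N_ACTIONS];
    nn_forward(ex->state, q);
    nn_backward(ex->state, ex->action, ex->reward - q[ex->action]);
}
```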
3.2.3 Adapting
At this phase, Astro takes an action. Together with states and rewards, actions are one of the three core notions in Q-learning, which we define below:
Definition 3.9 (Action). Action is the act of choosing the next hardware configuration $H$ to be adopted at a given checkpoint.
An action may change the current hardware configuration, thereby adapting the program according to the knowledge inferred by the Neural Network. Following Figure 7, we start this step by choosing, among the pairs $(A_1, R_1), \ldots, (A_n, R_n)$, the action $A_x$ associated with the maximal reward $R_x$. $A_x$ determines, uniquely, a hardware configuration $H'$. Once $H'$ is chosen, we proceed to adopt it. However, the adoption of a configuration is contingent on said configuration being available. Cores might not be available because they are running higher-privilege jobs, for instance. If the next configuration is accessible, Astro enables it; otherwise, the whole system remains in the configuration $H_i$ active at check point $i$. This choice is represented, in Figure 7, by the function $H_{i+1} = chg(H', H_i)$. Regardless of this outcome, we move on to the next check point, and to a new actuation round.
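A sketch of this selection-with-fallback logic, assuming a hypothetical `config_available` query against the operating system:

```c
int config_available(int config);  /* hypothetical OS availability query */

/* Pick the action with maximal predicted reward; if the corresponding
 * configuration cannot be adopted, keep the current one, mirroring
 * H_{i+1} = chg(H', H_i) in Figure 7. */
int choose_next_config(const double *rewards, int n_actions, int current) {
    int best = 0;
    for (int a = 1; a < n_actions; a++)
        if (rewards[a] > rewards[best])
            best = a;
    return config_available(best) ? best : current;
}
```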
3.3 Code Scheduling
After we have trained a program for a given architecture, we imprint this knowledge directly in that program’s code. In Figure 5, this step is named Final Code Generation. Code generation consists of inserting instrumentation into the target program. Instrumentation is inserted in the same regions modified to mark program phases (see Section 3.1.1): at the entry point of functions, and around particular library calls. Example 3.10 illustrates this instrumentation.
Example 3.10. Figure 8 shows the final actuation code for the program in Figure 2. Function determine_active_configuration tries to move the program to the configuration that has produced the largest rewards for that program phase. We consider two versions of instrumentation: static, as in Figure 8(b), and hybrid, as in Figure 8(c). The latter can read hardware status to improve the decision making process.
The static scheduling discussed in Example 3.10 always maps the same program region to the same hardware configuration. Hybrid scheduling might change decisions, given enough runtime information. As we show in Section 4, the static scheduling yields lower runtime overhead than Astro’s hybrid scheduling. However, this modus operandi is unable to adapt the program to its workload; and cannot recover from bad decisions. A striking example is the benchmark ParticleFilter (see Fig. 10 in Section 4.2). In this case, even with the runtime overhead, the flexibility of hybrid instrumentation paid off in terms of energy and speed.
4 Evaluation
This section presents an experimental evaluation of the Astro system over several parallel benchmarks running on a big.LITTLE system. In the process of evaluating Astro, we shall provide answers to the following research questions:
- **RQ1**: How close can Astro get to an optimal oracle?
- **RQ2**: How does Astro compare against fixed and immutable best configuration choices?
- **RQ3**: How does Astro compare against state-of-the-art schedulers?
- **RQ4**: How does Astro behave on an actual device?
- **RQ5**: How much does Astro increase code size?
**Experimental Setup.** We use two experimental setups: program traces, henceforth called simulation; and an actual device, the Odroid XU4. Experiments in Section 4.1 use simulation because they involve testing exhaustively every hardware configuration. Experiments in Section 4.2 run on an actual device: the Odroid XU4 development board with a big.LITTLE ARM processor (Samsung Exynos 5422) featuring 4 big cores (Cortex-A15, 2.0 GHz) and 4 LITTLE cores (Cortex-A7, 1.4 GHz), running Linux odroid 3.10.63, using the “performance” frequency governor, with cores at maximum speed. This device was also used to produce the simulation traces. We report CPU power consumption via PowMon [32]. Astro is implemented on LLVM 3.8.
**Benchmarks.** The simulation traces used in Section 4.1 were produced on Parsec’s FluidAnimate [4]. Experiments on Section 4.2 use eight benchmarks from Rodinia and Parsec. These are the only programs that we can currently instrument, as our LLVM module does not recognize mangled C++ routines yet (to discover program phases such as I/O density – Sec. 3.1.1). We used FluidAnimate to obtain the initial learning parameters; hence, we do not use it for validation.
4.1 Results in the Simulated Environment
In this section we report results that are hard to obtain on an actual device, because they involve exhaustive search over the universe of valid hardware configurations. We have approximated the exhaustive execution of configurations by generating traces for every hardware configuration. These traces let us simulate different behaviors by choosing, at each checkpoint, the reward offered by one of them. Different policies can guide this choice: optimal, best fixed, and random, for instance. Producing such traces is time consuming; thus, we have produced them only for FluidAnimate. It took between 410 and 7,000 seconds to produce each trace, depending on the hardware configuration. Figure 9 compares seven different scheduling strategies built on top of this simulator, applied on FluidAnimate.
**Figure 9.** Comparison between Astro and a system that chooses the next configuration randomly.
**RQ1: how close is Astro to an optimal oracle?** The data collected for every possible configuration lets us know, for each part of the program, which configuration consumes the least energy and has the best performance. We then combine these 24 traces into a single trace, choosing, at each checkpoint, a particular configuration. This "optimal" trace is what we call the Oracle. Our oracle is not an optimal global solution to SPha. Rather, it is a greedy approximation: given that at check-point i we are at configuration H_i, it picks the configuration that gives us the best reward at check-point i + 1. Figure 9 shows two oracles: (E) and (T). The former yields optimal energy consumption; the latter yields optimal execution time. Astro’s reward function prioritizes time over energy; hence, it leads to execution times close to T. If we schedule FluidAnimate with Astro, its final runtime is only 10% slower than T. However, it is more energy hungry: it uses 8% more energy than T, and 15% more energy than E.
**RQ2: How does Astro compare against immutable best configuration choices?** If we fix the hardware configuration, then 4b4L (4 big, 4 LITTLE cores) gives us the best runtime and the best energy consumption for the simulation
of FluidAnimate. This configuration is 45% slower than Astro, yet it is 4% more energy efficient. The fact that Astro, and the energy oracle, could beat 4b4L is surprising. We have found out that 4b4L tends to slow down programs at critical sections, due to an excess of conflicts between threads. Astro eventually learns to use configurations with fewer cores at these program phases, hence speeding up execution. Figure 9 also shows the configuration that yields the slowest and most power-hungry execution: 1b0L. It is almost 15 times slower than Astro, and spends 3.6x more energy.
**RQ3: How does Astro perform when compared with state-of-the-art program schedulers?** We tried to implement, on the simulator, two well-known schedulers for big.LITTLE architectures: Hipster [20] and Octopus-Man [22]. The implementation of Hipster used in Figure 9 differs slightly from the original description of Nishtala et al., although we have reused much of their code base. Hipster was originally conceived to deal with cloud workloads; hence, we had to customize its state and reward function for multithreaded programs. In this experiment, both Hipster and Astro use the same reward function. Octopus-Man is the profiling mechanism used in Hipster; hence, it does not use the notion of reward. Astro produces code that runs 17% faster than Hipster, and 15% faster than Octopus-Man. However, Astro uses 6% more energy than the former, and 4% more than the latter.
4.2 Results in an Actual Device
**RQ4: How does Astro behave on an actual device?** Figure 10 shows the runtime (5 samples) of three different solutions to SPha: Astro (purely static or hybrid), and Global Task Scheduling (GTS). GTS is a scheduling algorithm developed by ARM. This scheduler is aware of the different compute capabilities of big and LITTLE cores in the system. It uses historical data of the running tasks and active cores to determine where each individual thread will run. By tracking load information at runtime, GTS migrates tasks that are compute-intensive to big cores, and those that are less intensive to LITTLE cores. Load-balancing heuristics are periodically executed to avoid concentrating compute-intensive threads excessively on big cores while leaving LITTLE cores under-utilized. Numbers reported for Astro include all the overhead of monitoring and adapting the target application.
Astro, in its static or hybrid flavours, yields faster code than GTS in six benchmarks, and more energy-efficient code in five. We show two p-values next to each plot: S and H. The former is the probability that the static and purely dynamic (GTS) samples come from the same distribution. The latter relates the hybrid and purely dynamic distributions. The closer to zero, the more statistically significant are our results. We emphasize that GTS is a state-of-the-art approach, widely used in operating systems running on ARM hardware, and the fact that Astro can consistently outperform it testifies in favour of the benefits of syntax awareness when taking scheduling decisions. There is no clear winner between the hybrid and static versions of Astro. We observe that the former tends to be better in more regular (kernel-like) applications, such as CFD and sradv2. We also observe a strong correlation between runtime and energy consumption, except for Swaptions. In that case, the static version of Astro tends to avoid using the high-frequency cores, a fact that leads to slower runtime, but also to less power dissipation. In ParticleFilter the static version was penalized for a wrong scheduling decision: it stays in 1b2L, and the lack of runtime information prevents it from fixing this choice.
**RQ5: How much does Astro increase code size?** There are three different versions of instrumented programs: those used during Astro’s learning phase; the programs that use static instrumentation; and the programs that use hybrid instrumentation. The binary size of the last two is almost the same: it consists of code that collects data, plus the Astro library. The only difference between static and hybrid instrumentation is the code used to collect dynamic data in the latter version. This difference is very small; hence, in Figure 11 we include both types of binaries in the same bar: Instrumented. As the figure shows, most of the size overhead imposed by Astro is due to its dynamic library. This increase is constant across benchmarks. The amount of instrumentation in binaries grows linearly with the program size. This growth tends to be very small. As evidence of this, in the Learning phase, binaries do not use any dynamically linked library; thus, code size expansion is due to instrumentation only, and it is small, as seen in Figure 11.
5 Related Work
The problem of scheduling computations in heterogeneous architectures (Definition 3.1) has attracted much attention in recent years. Table 1 provides a taxonomy of previous solutions to this problem. We group them according to how they answer each of the following four questions:
- **Source**: is the program’s code modified?
- **Auto**: is user intervention required?
- **Runtime**: is runtime information exploited?
- **Learn**: is there any adaptation to runtime conditions?
Perhaps the most important difference among the several strategies proposed to solve SPha concerns the moment when they are used: at compilation time, at runtime, or both.
*Purely static* approaches work at compilation time. They might be applied by the compiler, either automatically, i.e., without user intervention [8, 12, 16, 24, 26, 29], or not. In the latter case, developers can use annotations [19], domain specific programming languages [16, 26] or library calls [1] to indicate where each program part should run. In Table 1, techniques implemented at either the compiler or library levels are purely static. *Purely dynamic* approaches take into account runtime information. They can be implemented at the architecture level [13, 17, 25, 30, 33], or at the virtual machine level.
Figure 10. Time (Top) and Energy (Bottom) comparison between Astro and GTS (G). “Static (S)” is the purely static version of Astro (Fig. 8b). “Hybrid (H)” is the version that uses runtime information to improve on the static decisions (Fig. 8c). Numbers in boxes are p-values for the Static and Hybrid approaches, compared to GTS. Grey triangles indicate winning strategies.
Figure 11. Code size increase. Y-axis shows code size (Kb).
Table 1. Comparison between different solutions to SPha. Level: at which level the technique is implemented: Architecture (A), Operating System (O), Compiler (C) or Library/Programming model (L). Code: “Yes” if the approach requires source code. Auto: “Yes” if it is performed automatically, without user intervention/annotation. Runtime: “Yes” if the technique considers runtime information. Learn: “Yes” if the technique adapts/learns a model from the target architecture.
None of these previous works uses any form of learning technique to adapt the program to runtime conditions, as Table 1 indicates in the column Learn. Once guards are created, they always behave in the same way. That is the main difference between these previous approaches and the Astro method.
6 Conclusion
This paper has presented Astro, a program scheduler for big.LITTLE architectures. Astro uses machine learning to adapt a program to runtime conditions. However, it departs from previous approaches, also based on machine learning, because it takes program characteristics into consideration. Astro relies on the compiler to identify program regions that contain similar syntactic features. We classify these features in sets called program phases, and track, at runtime, which program phase is currently valid. When combined with dynamic data, this information lets a neural network choose the hardware configuration that best fits the current program phase.
|
{"Source-Url": "https://hal-lirmm.ccsd.cnrs.fr/lirmm-02100287/document", "len_cl100k_base": 9760, "olmocr-version": "0.1.53", "pdf-total-pages": 13, "total-fallback-pages": 0, "total-input-tokens": 45935, "total-output-tokens": 10851, "length": "2e13", "weborganizer": {"__label__adult": 0.0005240440368652344, "__label__art_design": 0.0005917549133300781, "__label__crime_law": 0.0003986358642578125, "__label__education_jobs": 0.0007610321044921875, "__label__entertainment": 0.00012505054473876953, "__label__fashion_beauty": 0.0002574920654296875, "__label__finance_business": 0.00033593177795410156, "__label__food_dining": 0.00048613548278808594, "__label__games": 0.0011339187622070312, "__label__hardware": 0.007381439208984375, "__label__health": 0.0008082389831542969, "__label__history": 0.0004940032958984375, "__label__home_hobbies": 0.00022840499877929688, "__label__industrial": 0.0009870529174804688, "__label__literature": 0.0002853870391845703, "__label__politics": 0.000415802001953125, "__label__religion": 0.0008101463317871094, "__label__science_tech": 0.2078857421875, "__label__social_life": 9.948015213012697e-05, "__label__software": 0.0076446533203125, "__label__software_dev": 0.76611328125, "__label__sports_fitness": 0.0005092620849609375, "__label__transportation": 0.0012044906616210938, "__label__travel": 0.0003323554992675781}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 44769, 0.02826]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 44769, 0.44468]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 44769, 0.89413]], "google_gemma-3-12b-it_contains_pii": [[0, 1116, false], [1116, 5445, null], [5445, 9680, null], [9680, 14005, null], [14005, 15487, null], [15487, 20600, null], [20600, 24864, null], [24864, 30357, null], [30357, 35365, null], [35365, 41407, null], [41407, 42307, null], [42307, 44197, null], [44197, 44769, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1116, true], [1116, 5445, null], [5445, 9680, null], [9680, 14005, null], [14005, 15487, null], [15487, 20600, null], [20600, 24864, null], [24864, 30357, null], [30357, 35365, null], [35365, 41407, null], [41407, 42307, null], [42307, 44197, null], [44197, 44769, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 44769, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 44769, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 44769, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 44769, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 44769, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 44769, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 44769, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 44769, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 44769, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 44769, null]], "pdf_page_numbers": [[0, 1116, 1], [1116, 5445, 2], [5445, 9680, 3], [9680, 14005, 4], [14005, 15487, 5], [15487, 20600, 6], [20600, 24864, 7], [24864, 30357, 8], [30357, 35365, 9], [35365, 41407, 10], [41407, 42307, 11], [42307, 44197, 12], [44197, 44769, 13]], 
"pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 44769, 0.0]]}
|
olmocr_science_pdfs
|
2024-12-09
|
2024-12-09
|
f20e0e0e8eb9a71f32b353531e85837a525070ff
|
Abstract
A universal spatial automaton, called WAVE, for highly parallel processing in arbitrary distributed systems is described. The automaton is based on a virus principle where recursive programs, or waves, self-navigate in networks of data or processes in multiple cooperative parts while controlling and modifying the environment they exist in and move through. The layered general organisation of the automaton as well as its distributed implementation in computer networks have been discussed. As the automaton dynamically creates, modifies, activates and processes any knowledge networks arbitrarily distributed in computer networks, it can easily model any other paradigms for parallel and distributed computing. Comparison of WAVE with some known programming models and languages, and ideas of their possible integration have also been given.
1. Introduction
While traditional distributed programming models are based on stationary programs exchanging data by sending messages, i.e., on data flow or data mobility, WAVE represents the opposite paradigm [16, 17, 20]. It originates from a practical implementation of mobile control migrating in heterogeneous computer networks and organising intercomputer dialogue [15], which was much easier to implement than a global management system. WAVE is based on program mobility, while data forms a stationary world which these programs navigate. In WAVE, special recursive programs (or waves) spread themselves through other systems, which they consider as data. While moving, waves may be dynamically self-replicated, split into pieces, and modified. Waves propagate in a distributed network space, operating with variables which access only local data in current nodes, or data items which are transferred with the moving wave code.
The WAVE language, a core of this model, describes parallel propagation through a distributed knowledge network where both nodes and links may hold arbitrary information (which can include procedures to be executed). The whole process is described as a sequential-parallel composition of elementary actions, or moves, which can include hops through data links (permitting also broadcasting and multicasting), assignments to variables, and condition checking filters, all these alternatives having the same rank. During the navigation process, waves may create or modify the very network they move through.
WAVE has navigational, not message-passing, semantics. Explicit message passing is used only on the implementation level, where communicating interpreters execute the heads of moving waves while sending their tails and intermediate data, as messages, to other interpreters. This results in a drastic simplification of application programs (usually one to two orders of magnitude shorter than in traditional programming languages), as the main synchronisation and data communication routines are hidden inside the implementation of the language. From WAVE, communication with other systems resident on the same hosts is possible.
WAVE language has been successfully used for solving complex problems in distributed systems including theoretical graph and network problems, integration of distributed databases, distributed simulation of dynamic systems, modelling of collective behaviour of robots, intelligent management of computer, telecommunication and transport networks, design of intelligent infrastructures for distributed federations, as well as for distributed dynamic 3D virtual reality [1, 2, 6, 7, 8, 9, 12, 13, 19, 21, 23].
In this paper, the main features of the WAVE model, as well as the experience gained from its implementation and use in a variety of dynamic and distributed applications, are summarised by presenting WAVE as a new type of universal parallel computational automaton. This automaton is capable of solving any problem in arbitrary computer network topologies, in program-flow and pattern-matching modes of operation, without any central resources.
The rest of the paper is organised as follows. Chapter 2 provides examples of programming in
WAVE. In Chapter 3 a brief description of the WAVE language is given. Chapter 4 describes a layered organisation of the spatial WAVE automaton, functions performed by each layer, interactions between layers, as well as a general implementation architecture of the WAVE interpreter. In Chapter 5, a comparison of WAVE with some other models and languages is provided, with giving hints on their possible integration. Chapter 6 concludes the paper.
2. Mobile programming in WAVE
WAVE directly processes knowledge networks (KN) consisting of nodes and links (oriented as well as non oriented) connecting them, with any information associated with the both. Such networks may be arbitrarily distributed between processors, and the computer network topology may correspond to or be quite different from the knowledge network topology.
2.1. Elementary example: following a path
Let us consider an elementary program which, starting from node "a" in the network of Fig. 1, follows a route consisting of links named first "p" and then "q", and prints the name of the node reached. Its usual verbal description may be:
Start in "a"
Hop through "p"
Hop through "q"
Print name of the current node
A corresponding WAVE program is shown in Fig. 1. After being applied in "a", it moves through the network while being incrementally interpreted in nodes, shedding its already executed parts. In this program, "#" is a hop operator whose left operand names links to be passed and whose right operand identifies nodes these links should lead to (their absence makes any links or nodes allowable).
"@" means an associative (or "tunnel") link to a node from an outside of the system or between any two nodes, not necessarily neighbouring. C is an "environmental" variable always lifting a content (name) of a node in which wave currently resides. T means terminal (special read–write variable) accessible from a current node (may not be the same in different network nodes). Period delimits operations which should succeed one another while executed in a network.
This program, after node "d", splits into two copies, one along each "q" link, bringing the replicated operation "T=C" into "e" and "f", which print node names in parallel. To restrict this program to a single solution, say "e" as a destination, both the link and node operands must be present in the last hop: @#a.p#.q#e.T=C.
2.2. Collecting names of all network nodes
Let us consider another program which collects names of all network nodes into one list in parallel and prints this list in a starting node, let it again be "a". A usual description of a recursive breadth-first search & collection algorithm for this may look like:
Define MOVING_PROCEDURE as:
Hop to all neighbours
If node is not marked
put its name into NODAL_LIST
otherwise halt this branch
Do sequentially the steps
Step1: Apply MOVING_PROCEDURE
Step2: Copy NODAL_LIST into MOVING_LIST
Hop to predecessor
Append MOVING_LIST into NODAL_LIST
end MOVING_PROCEDURE
Start in "a"
Put node's name into NODAL_LIST
Do sequentially
Apply MOVING_PROCEDURE
Print NODAL_LIST
The corresponding WAVE program and its parallel development in the network are shown in Fig. 2, where the dynamically created breadth-first spanning tree is depicted in bold. This tree is subsequently used for collecting node names and merging partial lists (in a logarithmic total time) until the final list appears in the starting node "a", which is then printed.
The moving procedure is carried with the wave (as wave code), a mere naming of which (without any operation) causes an injection and immediate execution of the corresponding code as part of the wave. "&" is a list-append operation (with the result recorded in the left operand, which must be a variable). P is an environmental variable always giving the address of the predecessor node from which the current node has been reached.
N is a stationary, or "nodal", variable, copies of which are associated with different network nodes; "==" means "equal to" (a missing right operand means "nothing", i.e., undefined). Fr is another moving, frontal variable, and SQ is a "sequence" control rule activating two program branches (separated by a comma) sequentially, from the same node (where SQ is interpreted), with the second branch starting only after a complete termination of the (recursive) first branch.
Only movements of the resultant partial lists are shown in Fig. 2, with the copied SQ rule dynamically appearing in network nodes (symbolised by loops). As can be seen from this simple program, waves are creating cooperative mobile program societies which spread operations, control and local data in a distributed space behaving altogether as a fully controlled system.
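For readers more familiar with conventional languages, the sketch below gives a sequential C analogue of the same breadth-first name collection over an in-memory graph. It is intuition only, under assumed types and a central queue; the point of the WAVE version is precisely that no such central structure exists, since partial lists are merged up the distributed spanning tree.

```c
#include <string.h>

#define NNODES 6

typedef struct {
    const char *name;  /* node content, as lifted by C in WAVE */
    int nadj;
    int adj[NNODES];   /* indices of neighbour nodes */
} Node;

/* Collect names of all nodes reachable from `start` into `out`
 * (semicolon-separated, like WAVE vectors). Assumes `cap` is large
 * enough for all names. */
void collect_names(const Node *g, int start, char *out, size_t cap) {
    int queue[NNODES], head = 0, tail = 0;
    int marked[NNODES] = {0};
    queue[tail++] = start;
    marked[start] = 1;
    out[0] = '\0';
    while (head < tail) {
        int n = queue[head++];
        strncat(out, g[n].name, cap - strlen(out) - 1);
        strncat(out, ";", cap - strlen(out) - 1);
        for (int i = 0; i < g[n].nadj; i++) {
            int m = g[n].adj[i];
            if (!marked[m]) { marked[m] = 1; queue[tail++] = m; }
        }
    }
}
```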
2.3. Creating network topology
Arbitrary knowledge networks may not only be processed by mobile waves but also created within the same WAVE language syntax. Fig. 3.a shows the creation of a simple network, where the wave template is embraced by a CR ("create") rule, which makes it possible to create a distributed topology in "one breath", without repeating CR for each element. The CR rule is inherited as waves replicate and reduce while unwrapping in space. The language interpreter will automatically distribute this network during its creation between different processors, according to some general recommendations given to the system (for example, "put each new node on a new processor").
To make an exact distribution, each new creative hop should be preceded by moving to the corresponding processor by its address, with saving nodal addresses (environmental variable A), if needed, in frontal variables, like in Fig.3.b. Here, the interpreter will be recommended to place as many nodes as possible into the same processor (thus putting "a" and "b" together).
To create and distribute the more complex network from Fig.1, the following wave, based on a depth-first spanning tree template, will be sufficient (starting from "d" and using moving variable F for saving address of "d" for implementing cycles, and not replicating "d"):
CR@d.F=A.(p#a.((k#b,k#F),(m#c,n#F)),(q#e,r#f,q#F))
The CR rule influences only hops with both link and node names (not addresses) explicitly given in the wave it controls, whereas other operations remain unaffected and are performed as in a usual program.
The simple examples described above illustrate some peculiarities of mobile programming in WAVE. Syntactically, WAVE is a very simple language but, due to its recursive structure, it allows arbitrarily complex (parallel and fully distributed) algorithms to be expressed in a very dense form. Waves, being creative and navigative templates rather than traditional programs, spread like physical waves or viruses in distributed systems.
3. WAVE language
The WAVE language describes propagation (parallel and asynchronous) through a distributed, network-structured data continuum rather than traditional data processing. It has a concise syntax and rich navigational semantics and is a machine-level language intended to be suitable for physical movement and direct hardware interpretation in computer networks.
3.1. General organisation
The syntactic structure of waves is shown in Fig.4 where braces mean zero or more repetitions (with a given delimiter at the right, if more than one), square brackets denote an optional construct, and vertical bar separates alternatives. Period delimits sequential parts and comma separates independent or parallel parts of a wave, called moves.
\[
\text{wave} \rightarrow \{\{\text{move},\}.\}
\]
\[
\text{move} \rightarrow \text{unit}\ \{\text{act unit}\} \mid [\text{rule}]\,(\text{wave})
\]
Figure 4. Recursive syntax of WAVE.
Moves may be simple, consisting of one or more elementary operations, or acts (like assignments, hops, condition-checking filters, etc.) over information units, or may again be waves (in parentheses), optionally prefixed by control rules. The latter impose a variety of constraints on the distributed development of waves in the KN. Starting from some current node, a move brings the wave into a new set of current KN nodes, or Goal Set (GS), which may include the initial node; the tail of the wave is then applied to all of these nodes, waves thus being interpreted incrementally in the KN. In general, many waves within the same program (or different ones) may spread in the KN in an asynchronous wavefront mode.
Many self-evolving wave processes may start independently in the shared multiprocessor space and from different sources, as shown in Fig. 5, and may be independent or interact with each other. The spatial processes may cover any parts of the space in parallel. The shapes of the asynchronous wavefronts producing new goal sets \(GS_i\) may be arbitrary, as the waves may represent any algorithms.

**Figure 5. Spreading waves in a knowledge space.**
### 3.2. Basic information unit
The basic information unit of the language is vector—a dynamic sequence of arbitrary length values generally defined as strings (syntactically separated by a semicolon), concrete interpretation of which depends on the operations involved. This simple data structure with special operations on it, together with the recursive control syntax of the language, is sufficient for representing arbitrary network creation and processing algorithms in a distributed environment. No explicit type descriptions are used in the language: automatic type conversions are activated depending on the current operations involved.
### 3.3. Spatial variables
Information units in a WAVE program can also be expressed by **spatial variables** dynamically distributed throughout KN and being of the three types: **nodal** (prefixed by N) dynamically attached to KN nodes and shared by different moving waves, **frontal** (prefixed by F) moving with the language strings, and **environmental**, accessing currently available resources relating to KN nodes and links. The latter are named as: C - node content, A - node address, L - incoming link content, S - incoming link sign, P - predecessor address, and T - user terminal (or one of them if they are distributed throughout KN). There are also special environmental variables enabling to control efficiency on the implementation level: D - a list of addresses of direct computer neighbours, and V - a threshold of volume of data allowed to be stored in the current processor (in a number of nodes). Frontal variables accompany the mobile wave and carry out local information exchanges between different nodes of KN.
### 3.4. Acts
Basic acts are selective or broadcasting **hops** in KN, condition checking **filters** (halting if FALSE), **data processing** (arithmetic and string operations), explicit **halts** with a repertoire of echoing conditions, and an **external call** permitting an access and exchange of information with other systems distributed in networks. Hops (the "#" act) identify by the left operand the links to be passed, and by the right operand the nodes these links should lead to (nodes are identified by contents or addresses). Omitting the right operand makes any destination nodes acceptable with the given links. Omitting the left operand leads to a neglect of the link contents and broadcasting to certain (if names are provided) or to all neighbours. The special name "@" used as the link operand triggers direct (tunnel) jumps between any (including non-neighbouring) nodes, and makes broadcasting to all other nodes of KN if the right operand is empty. If more than one link named in the hop is associated with the node, all these may be passed in parallel.
Filters ("==" - equal, "/=" - not equal, "<" - less, "<=" - less or equal) allow the further wave propagation if their result is TRUE, and halt it if FALSE. Data processing includes arithmetic acts ("+", "-", "*", "/"), splitting a string into a vector ("|") and merging a vector into a string ("%") with given delimiters, appending vectors ("&"), finding or recording a content by an index ("."), and finding an index by a content or recording by a content ("::"). Act "?" accesses other systems on the host (via its operating system), and "!" is a programmed halt whose operands establish different halting conditions (the right operand) or switch off the track mechanism for launching uncontrolled waves with an established lifetime (the left operand).
Act "=" means a mere assignment of the result obtained on the right to the variable on the left. In its absence, the result of the data processing operations is assigned to the leftmost unit (which ought to be a variable). For example, N=N+F-1 is equivalent to N+F-1, thus making expressions more compact.
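As a rough illustration of these data acts (a sketch only; the exact corner-case semantics of the real interpreter may differ, and all function names below are ours), their effect on vectors can be mimicked in Python as follows.

```python
# Rough Python analogues of the WAVE data-processing acts; vectors are
# modelled as Python lists of strings. Illustrative only.

def split(s, delim):            # "|" : split a string into a vector
    return s.split(delim)

def merge(vec, delim):          # "%" : merge a vector into a string
    return delim.join(vec)

def append(left, right):        # "&" : append vectors
    return left + right

def at_index(vec, i):           # "." : find a content by a (1-based) index
    return vec[i - 1]

def index_of(vec, content):     # "::" : find an index by a content
    return vec.index(content) + 1 if content in vec else None  # None = "nothing"

v = split("a;b;c", ";")                     # ['a', 'b', 'c']
print(merge(append(v, ["d"]), ";"))         # a;b;c;d
print(at_index(v, 2), index_of(v, "c"))     # b 3
```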
### 3.5. Rules
Main rules and their abbreviations are: SeQuence (SQ), Or Sequential (OS), Or Parallel (OP), And Sequential (AS), And Parallel (AP), RePetition (RP), WaiTing (WT), InDivisible (ID), and CReate (CR). The
rules split waves into branches and coordinate their cooperative (parallel or sequential) development in the KN (SQ, OS, OP, AS, AP), provide distributed logical synchronisation (WT) and indivisible access to shared resources (ID), apply the wave repeatedly (RP), and enable the wave to create or modify the KN it moves through (CR). The control points triggered by rules dynamically appear in different KN nodes and coordinate the propagating waves in a distributed manner to the proper depth, using the tracks mechanism. These points cease to exist after termination of the controlled waves.
3.6. Dynamic code injection
It is possible to inject new strings into the moving wave as procedures (kept and processed as string contents of variables) which accompany the waves (in frontal variables) or are picked up in nodes of KN during navigation of the latter (nodal and environmental variables). This provides flexibility in creative and navigative network processes where the evolving spatial program may be additionally fed from the distributed environment it moves through. Syntactically this is expressed by a move consisting of a single unit (a variable), without any act, which causes injection of its content into the wave with immediate execution. Such dynamic code injection may be recursive.
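A toy analogue in Python (ours, purely illustrative of the idea of code carried as data, not of WAVE semantics) would be:

```python
# Toy analogue of dynamic code injection: a "frontal variable" carries
# program text, and naming it alone causes its execution in the current
# node's context. Purely illustrative.

frontal_F = "nodal['count'] = nodal.get('count', 0) + 1"

def visit(nodal):
    exec(frontal_F, {}, {"nodal": nodal})   # injection = immediate execution
    return nodal

node = {}
visit(node)
visit(node)
print(node)   # {'count': 2}
```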
4. The WAVE automaton
In this chapter we will give an informal presentation of the WAVE model as a spatial automaton processing arbitrary network topologies in a parallel mode. A layered structure of this automaton will be outlined with the main interactions between different layers discussed.
4.1. Layered organisation of the automaton
The WAVE automaton has a four-layer organisation depicted in Fig. 6 with the layers having the following meanings.
- Mobile waves layer
- Dynamic tracks layer
- Knowledge network layer
- Computer network layer
Figure 6. Layered organisation of WAVE.
1. Computer network layer. The lowest, or Computer Network (CN), layer may have any number of computers and an arbitrary interconnection topology. Each computer in the network must have a unique address. These addresses may be used for sending messages between any two computers by standard communication facilities (say, the Internet), which are assumed to be in regular service on the CN layer and are not specified in the automaton.
2. Abstract knowledge network layer. The knowledge network (KN) layer reflects the structuring of information within an application area and may have any topology. A KN consists of nodes and links and may be arbitrarily distributed between the computers of the CN, where each computer may have zero or more nodes allocated to it. Links of the KN may therefore connect nodes within the same computer or between different computers. Both nodes and links of the KN, being arbitrary strings of characters, may be associated with any information, as was already mentioned above. KN nodes have absolute addresses in space consisting of two parts: the physical address of the computer they reside in and an address within the memory of that computer.
3. Dynamic tracks layer. Tracks accompany and support the spreading and coordination of distributed processes in the KN. Starting in different nodes of the KN, they grow as trees. Track nodes dynamically match KN nodes (zero or more track nodes may be associated with the same KN node), while track links match the KN links through which the processes evolve. Track links may also reflect direct, or "tunnel", hops between non-neighbouring KN nodes. Different track trees, starting independently in the same or different KN nodes, may overlap in their structure. The track layer serves as a special dynamic control infrastructure for executing mobile algorithms in the KN. Tracks are used for generalising states (like success or failure) of multiple distributed processes by means of different echoes, which are propagated backwards via track links and merged in track nodes. Tracks also serve as channels for the further spreading of suspended mobile processes (wave tails) until proper conditions are met, and support the lifetime of temporary variables dynamically associated with KN nodes.
4. Mobile waves layer. Recursively organised mobile programs, or waves, occupy the top organisational level. They have navigational semantics, rather than the traditional reductional (for functional models) or message-passing (for communicating processes) semantics, and solve all problems in a KN by incrementally matching its topology while propagating through it. During propagation, waves may self-replicate, split into parallel branches and self-modify, shedding the parts already executed. They can carry local data with them while leaving other data in KN nodes to be shared with other waves, as has already been specified in the WAVE language description.
4.2. Interactions between layers
There are regular communications between any two
neighbouring layers, as well as a possibility of direct interactions between any non-adjacent layers. The main cases of this activity are as follows.
Waves layer — tracks layer. All activity of the WAVE automaton is initiated from the waves layer by injecting waves into one or a number of KN nodes, the waves subsequently self-propagating in the KN. Moving waves access and change only local information when staying in KN nodes. Any hop between KN nodes and any split of a wave into branches is accompanied by an extension of tracks. Rules coordinating the sequential or parallel invocation of branches become temporarily associated with certain track nodes. The rules use echoes coming back via tracks to assess the success or failure of a whole wave branch at its root. The rules forward the suspended wave tails to the fringe track nodes (associated with certain KN nodes), from which they develop further. All tracks are automatically removed when the wave program echoes complete termination via its track tree. Track branches may also be deleted at runtime when the corresponding waves fail or execute special halts, while other branches may continue.
Waves layer — KN layer. Initially, waves directly access the KN layer, which is accompanied by the creation of tracks as a service layer between suspended waves and the KN topology. These tracks subsequently serve as bridges for the further spreading of waves. Waves can make "surface" hops in a KN via its links, or "tunnel" hops (directly between any, including non-adjacent, nodes) by node addresses or contents. Waves can also stay in KN nodes an arbitrarily long time while performing any sequences of operations over local data.
Waves layer — CN layer. Waves may not only process existing KN topologies but may also create and modify them while navigating the computer network directly. Creation/modification of KN and computations on it may be done simultaneously by the same waves.
Track layer — KN layer. Tracks, evolving in a KN topology, match the latter, and their existence is generally tied to the existence of the corresponding elements of the KN. Tracks, however, have a certain degree of autonomy and may remain alive while the corresponding KN nodes and links are removed by other waves, thus preserving the continuity of distributed control in the KN.
KN layer — CN layer. Distribution of a KN in a computer network may be explicit, with waves first hopping to particular computers and then creating KN nodes in them and links to the predecessor nodes, or implicit, where KN nodes and links are created and spread between computers automatically. The latter uses a special threshold parameter (V), accessible from the waves layer, establishing the maximum number of KN nodes allowed to be allocated in different computers.
As can be seen, the waves layer, tracks layer and knowledge network layer have a flexible cooperative organisation which dynamically creates and supports distributed knowledge structures and the processes evolving on them within arbitrary computer networks. The latter may also be dynamic, with the number of computers and their interconnection topologies changing at runtime.
4.3. The WAVE interpreter architecture
The WAVE automaton has been implemented as a direct interpreter of waves operating in arbitrary computer networks [5, 18]. It dynamically supports all four layers and their interactions described above. A copy of this interpreter must be installed in each computer; the interpreters may communicate with neighbouring interpreters, thus forming a distributed machine driven by mobile waves.
The interpreter (see Fig. 7) consists of incoming and outgoing queues for exchanging waves and echoes with other interpreters, and the three main specialised functional units: parser, data processor, and control engine. The KN layer (part of KN allocated to the current computer) is kept within the data processor which also holds nodal variables dynamically attached to KN nodes. The tracks layer is supported by the control engine which implements all control rules of waves and suspends wave tails until the rules terminate, the tails being subsequently sent further via the created tracks.
Figure 7. The WAVE interpreter.
The parser decomposes waves into their heads (the first period-separated part on the top level) and tails (the rest of the wave) and sends the parsed heads, accompanied by the wave tail, to the data processor if the head identifies elementary operations (acts) on the top level. If the head splits on the top level into parts separated by commas (called sectors), the original wave is substituted in the waves queue by a set of waves formed from the sectors (as new heads), with the common tail appended to each. This decomposition process in the parser continues recursively (removing parentheses which become redundant) until elementary acts or rules are found in the heads of waves, after which either the data processor or the control engine becomes engaged.
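The head/tail and sector decomposition just described can be sketched in Python as follows (an illustrative reconstruction of the parsing step, not the actual interpreter code):

```python
# Split a wave into its head (first top-level period-separated part)
# and tail, and split a head into top-level comma-separated sectors,
# each of which inherits the common tail.

def split_top_level(wave, delim):
    parts, depth, cur = [], 0, ""
    for ch in wave:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
        if ch == delim and depth == 0:
            parts.append(cur)
            cur = ""
        else:
            cur += ch
    parts.append(cur)
    return parts

def decompose(wave):
    head, *rest = split_top_level(wave, ".")
    tail = ".".join(rest)
    return [s + ("." + tail if tail else "") for s in split_top_level(head, ",")]

print(decompose("p#a,q#e.N=1"))     # ['p#a.N=1', 'q#e.N=1']
print(decompose("(p#a,q#e).N=1"))   # ['(p#a,q#e).N=1'] -- parens removed later
```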
After the data processor, the wave tails are sent back to the parser or to the control engine. The latter happens when hops to other nodes are to be executed on the operands prepared in the data processor, as hops must be accompanied by new tracks. The control engine executes the control rules of waves and establishes links between tracks and the wave branches into which the rule-controlled waves are split, these branches being processed from the waves queue in the parser. Echoes are merged in the track nodes by the control engine, with the final results used to assess the success or failure of whole branches and to influence the invocation of other branches by the rules. The control engine also triggers garbage deletion in the data processor when waves terminate (which is associated with the deletion of tracks). An external call act allows for communication with other systems on the same computer by exchanging data with them via the data processor.
Parts of the KN and track forests located in different interpreters form together a seamless distributed and dynamic information & processing space where waves (accompanied by moving data variables) and echoes are propagating either within the memory of the same machines, or being automatically passed via incoming–outgoing queues to other interpreters on other machines.
A publicly available WAVE interpreter is currently written in C (with a graphical interface in Tcl) and operates over the Internet.
5. WAVE and other paradigms
In the same way as graphs and networks are widely used for description and analysis of systems of a different nature (say, biological, social, or technical), the WAVE paradigm, creating and processing arbitrary knowledge networks in a distributed environment, can be readily used for modelling any other programming techniques. The latter may especially include different models and languages for parallel and distributed processing.
5.1. Petri nets / dataflow
Petri nets, originally proposed by Petri [14], remain widely used as a model for the description of event-driven asynchronous systems of any nature. In Fig. 8, a WAVE program is shown which first creates a certain Petri net topology, setting up its initial marking in place p5 (places are represented by circles), and then puts into all transitions (drawn as bars) active functions which continuously check the presence of tokens in all their incoming places and, if present, remove tokens from these places while adding tokens to all outgoing places of the fired transition. The created net operates an arbitrarily long time, with transitions t2 and t3 firing in parallel. The WAVE program puts active functions only into the transition nodes, recognising them by checking whether their names contain the letter "t". (The name "a" is used for all links.)
\[
\begin{align*}
&\text{SQ(CR(@\#t1.F=A.+a\#p2.+a\#t3.+a\#p4.+a\#t4.} \\
&\quad \text{(-a\#p3.-a\#t2.-a\#t1.-a\#F),(+a\#p5.N=1,+a\#F)),} \\
&\quad \text{(@\#.C|::t/=.RP(AS((AP(-\#.N/=).N=1),(+\#.N+1)))))}
\end{align*}
\]
**Figure 8. Petri net creation & activation.**
Similar to the example above, any other graph-based models with interpreted nodes and moving tokens (the latter as arbitrary data structures), like dataflow, actors, neural networks, etc., may be efficiently represented in WAVE. Moreover, such WAVE-based networks may evolve in space and change their topology at runtime which many other existing models cannot do.
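The firing behaviour that such active functions implement can be sketched in ordinary Python as follows (our sequentialised illustration of the same firing rule; the concrete place/transition structure below is an assumed reading of Fig. 8, and in WAVE every transition checks and fires concurrently in its own node):

```python
import random

places = {"p1": 0, "p2": 0, "p3": 0, "p4": 0, "p5": 1}   # initial marking in p5
transitions = {                                           # (incoming, outgoing)
    "t1": (["p5"], ["p1", "p2"]),   # fork: enables t2 and t3 in parallel
    "t2": (["p1"], ["p3"]),
    "t3": (["p2"], ["p4"]),
    "t4": (["p3", "p4"], ["p5"]),   # join: restores the token in p5
}

def step():
    # a transition is enabled when all its incoming places hold a token
    enabled = [t for t, (ins, _) in transitions.items()
               if all(places[p] > 0 for p in ins)]
    if not enabled:
        return False
    ins, outs = transitions[random.choice(enabled)]
    for p in ins:                   # remove tokens from incoming places
        places[p] -= 1
    for p in outs:                  # add tokens to all outgoing places
        places[p] += 1
    return True

for _ in range(10):
    step()
print(places)
```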
5.2. Mobile agents
Mobile agents [3, 24] have become a hot topic of discussion in recent years. They allow autonomous programs to be launched into a network, where they may freely travel between computers and do jobs on behalf of a particular user, like business negotiation or booking tickets, letting programs move to data rather than the usual way round. Mobile agents often reduce traffic in networks and allow many problems to be solved in a much more flexible way. Mobile agent techniques are usually based on object technologies, with self-contained and rather large programs. In WAVE, as was shown above, the moving and cooperating parts of a spatial program may be of arbitrary size – from large programs to elementary operations. WAVE also contains powerful spatial control rules for coordination between different moving parts, which mobile agent systems usually lack.
Any existing mobile agent system can be easily modelled in WAVE. Let us consider (Fig. 9) the creation and activation of two agents, a1 and a2, where the first starts from node "b" and travels sequentially through links "k" and "q", the latter leading to node "e", and the second starts from "c" while passing links "n" and "q", the latter ought to lead to "f". Let a2 be allowed to start only after a1 terminates, and let this control be located in node "a". Starting in "a", the wave
in Fig. 9 creates the needed agents, whereby a2 is activated only after full completion of a1 which is echoed to the SQ rule via dynamically created tracks. Other spatial control rules of WAVE may be used too for direct subordination and coordination between agents.
**Figure 9. Creation & coordination of mobile agents.**
Agents may also cooperate indirectly, by sharing any information with each other in the nodes they pass, as in the case of agents a3 and a4 below, where a4 will be busy-waiting in node "d" until a3 passes and marks it.
\[
\begin{align*}
a3: & \quad (@#e.\#.N=1.k#) \\
a4: & \quad (@#f.\#.RP(N==).n#)
\end{align*}
\]
Both agents are sharing nodal variable N in "d". Integration of these two agents within a wave is simple:
\[
(@#e.\#.N=1.k#),(@#f.\#.RP(N==).n#).
\]
5.3. VRML
Languages for representing 3D virtual reality in computer networks are growing in popularity. The current VRML 1.0 specification details a text language for describing three-dimensional scenes [4] and has provision for hyperlinks. In an effort to add dynamics to VRML scenes, a number of extensions have been developed. All of these, however, base the description of a scene on a rigid structure, usually a tree known as a "scene graph", which cannot be effectively changed from within VRML programs, as it reflects the program text. WAVE may provide fully distributed and highly parallel multi-user processing of VRML scenes, parts of which (or the whole) could be easily modified (by dynamic restructuring of scene graphs in WAVE) and could migrate between computers. Working efficiently with knowledge (semantic) networks, WAVE may provide parallel processing and inference on the deepest levels of distributed knowledge representations. Languages like VRML, supporting visual representations of the modelled worlds and direct communication with users, may be on the surface of this semantic knowledge processing.
In Fig. 10, a representation in WAVE of a scene graph is shown, where the numbers at links set the order in which the corresponding program parts (subtrees) should be located in the VRML program text. A wave which adds to this graph a new transform node with parameters, allowing the whole set of n objects to rotate as a group, is also shown. As this new part should be placed before the other parts in the text (i.e. be reflected by the link named "1" in the graph), the names of all other links emanating from the top separator node must be incremented by 1.
**Figure 10. Scene graph modification in WAVE.**
This WAVE-based scene graph easily converts into a VRML text by the following highly parallel wave program, which is applied in the "start" node in Fig. 10:
\[
\begin{align*}
&F0 = Fc+1.AS(Fc\#.,N\&x.Fc\#.,F0). \\
&F = SQ(OS((Fc=.F0.F),(Fr=C\&'\{'\&N\&'\}'\%'\ '.\#P.N:Fr)),(F0=F\&'\}'\%'\ '.\#P.N:Fr)). \\
&@\#start.RP(SQ((N=x.F.1),T=N).5?sleep)
\end{align*}
\]
This program recursively navigates the (distributed) graph while regularly synthesising (here, every 5 seconds) the VRML text from its parts when echoing up the tree, whereas the graph itself may be changed at the same time by other waves. The resultant VRML text may be rendered using standard techniques, and will look like:
```
Separator {
  Transform { rotation 0 1 0 0 }
  Separator {
    Material { diffuseColor 1 0 0 }
    Transform { translation -4 4 }
    Transform { rotation 0 1 0 0 }
    Cube {}
  }
  ...
}
```
Instead of synthesising the VRML source code, it would be much more efficient to render directly the scene graphs produced and coded in WAVE.
Any other visualisation techniques may be easily used on top of the dynamic distributed semantic worlds expressed in WAVE. For example, a possible representation of a terrain may be a grid, each node containing data such as height, surface type, etc. Such a grid may be created in WAVE as a KN and dynamically distributed between any number of computers. Mobile wave societies may produce on these grids actively
changing shapes spread among computers (e.g. growing craters, flooding, or moving mountains). This process is fully open, i.e. any (multiple) agent activities in these worlds may be started in parallel at any time, by different users, and from different machines.
5.4. Java
The WAVE ideology of solving complex graph and network problems in distributed and dynamic spaces has been under development for a rather long time, and has now materialised into the extremely dense definition shown in Fig. 4, with only five syntactic categories. As the WAVE model was radically new throughout its development history, the author has often had to give answers like the following:
a) WAVE is not a Petri net, because it is dynamic, operation-moving, and self-evolving; b) it is not dataflow but quite the opposite - program flow; c) it is not a neural net, as it creates networks through which both operations and data may move, not only analog signals influencing thresholds; d) it is not actors [10], which represent only the evolution of processes, regardless of the data structures to be processed; e) it is not Telescript [24], which appeared much later, after the main WAVE definition was published [16], and actually represents only a small subset of the space navigation and coordination features of WAVE. And, to continue this:
**WAVE is not Java.** Java [11] allows for code movement, and it is distributed, interpreted and platform-independent, like WAVE. But this is only an external similarity. Java is based on a conventional programming philosophy where a program processes stored data and communicates with other programs in a client-server mode. The WAVE philosophy is based on self-navigation and pattern matching, with the mobility and activity of all constructs, and any control initiative, totally embedded in the mobile code. WAVE also makes it possible to create arbitrary data networks and recursive control dynamically in a distributed space, which Java cannot do directly. WAVE is not a programming language in the usual sense; it is rather a computational model, an automaton, integrating a universal set of novel space-control features within a program flow mode of computing not present in other paradigms.
An attempt to compare WAVE with Java has recently been made in [22], in favour of WAVE, though the comparison is rather artificial, as WAVE and Java belong to quite different classes (like, say, Lisp and Fortran). Nevertheless, an integration of WAVE with Java may be quite useful and is currently being analysed, including a re-implementation of WAVE in Java (instead of C), with the possibility of accessing any Java routines from WAVE. Moreover, the "flying" WAVE engine may also be made accessible from Java, supplying it with a universal spatial control that Java does not have. Java's multithreading and code mobility may be useful to support WAVE too.
Of course, everything can in principle be programmed in any language (or even in machine code) in computer networks, if there is a technical possibility of sending messages between machines. However, many network processing tasks for which WAVE is well suited, when written in other languages, will inevitably have to include explicitly a variety of special functions which are hidden in the wave interpreter and shared by many waves. Direct code in C, as well as in Java, as experiments show, will be 50 to 100 times longer and much more complex.
6. Conclusions
We have described a universal WAVE automaton capable of solving arbitrary problems in distributed and open computer networks without any central resources. The automaton is based on a program flow mode of operation, rather than on traditional dataflow, where cooperative mobile programs navigate distributed networks while self-replicating, splitting and self-modifying.
The main difference between WAVE and other mobile programming paradigms lies in the powerful recursive spatial control of WAVE, efficiently coordinating societies of mobile agents, and in the two dynamic service layers operating between mobile application programs (waves) and the computer networks they propagate through. These are the knowledge network layer, which allows abstract distributed worlds reflecting different domains to be created and then processed directly, and the tracks layer, which effectively supports distributed control and communication in these worlds. The use of these layers frees application programming from most of the routine work on synchronisation, message passing, routing, hierarchical control and garbage collection, which has to be managed explicitly in traditional distributed systems.
The WAVE automaton ideology efficiently supports distributed algorithms in open networks which:
—may not be known for the network in advance;
—may start from an arbitrary node, on the node's initiative;
—may work with part of the network, which may not be known in advance and may be outlined only at runtime;
—may use any (and all) network resources in parallel;
—behaving like highly organised viruses, may efficiently recover from complex failures, including failures and damage in the underlying system's software and hardware.
Different users may start their spatial mobile programs simultaneously from the same or different
nodes, these programs may overlap in the network and be completely independent from each other or cooperate in the network space while solving complex interactive problems.
The WAVE model, based on a dynamic activation of massively parallel processes in a data space, exhibits a general parallelism proportional to a number of nodes in a network it processes, thus being easily scaled to an arbitrary extent while acting without any central control or other centralised resources.
In spite of the successful direct use of WAVE in a variety of application projects, which have shown its potentially unlimited power for high-performance computing in open network topologies, it still needs (and can provide insight for) the design of higher-level, user-friendly languages for mobile programming, as well as efficient conversion into it from other paradigms.
WAVE, in its pure form, may also serve as a convenient model for studying fundamental features of other models and systems, especially those dealing with dynamics, openness and self-recoverability. It may also provide a basis for the design of radically new distributed algorithms for advanced communications, distributed simulation and control, self-organisation, evolution, and emergent functionality.
Acknowledgements
Special thanks are due to the anonymous referees whose frank comments and robust criticism helped much in improving and enriching this paper.
References
1 The Heavy Hitters Problem
1.1 Finding the Majority Element
Let’s begin with a problem that many of you have seen before. It’s a common question in technical interviews. You’re given as input an array $A$ of length $n$, with the promise that it has a majority element — a value that is repeated in strictly more than $n/2$ of the array’s entries. Your task is to find the majority element.
In algorithm design, the usual “holy grail” is a linear-time algorithm. For this problem, you may already know (e.g. from CSE 373 or CSE 421) a subroutine that gives a linear-time solution — just compute the median of $A$. (Note that a majority element will be the median element.) So let’s be more ambitious: can we compute the majority element with a single left-to-right pass through the array? If you haven’t seen it before, here’s the solution:
- Initialize counter := 0, current := NULL.
[current stores the frontrunner for the majority element]
- For $i = 1$ to $n$:
- If counter == 0:
[In this case, there is no frontrunner.]
* current := $A[i]$
* counter++
- else if $A[i] ==$ current:
[In this case, our confidence in the current frontrunner goes up.]
* counter++
- else
[In this case, our confidence in the current frontrunner goes down.]
* counter--
- Return current
For example, suppose the input is the array \{2, 1, 1\}. The first iteration of the algorithm makes “2” the current guess of the majority element, and sets the counter to 1. The next element decreases the counter back to 0 (since 1 ≠ 2). The final iteration resets the current guess to “1” (with counter value 1), which is indeed the majority element.
More generally, the algorithm correctly computes the majority element of any array that possesses one. We encourage you to formalize a proof of this statement (e.g., by induction on \(n\)). The intuition is that each entry of \(A\) that contains a non-majority-value can only “cancel out” one copy of the majority value. Since more than \(n/2\) of the entries of \(A\) contain the majority value, there is guaranteed to be a copy of it left standing at the end of the algorithm.
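Here is the same one-pass algorithm as runnable Python (a direct transcription of the pseudocode above, known in the literature as Boyer-Moore voting):

```python
# One-pass majority-element algorithm. Correct whenever the input
# array is promised to contain a majority element.

def majority_element(A):
    counter, current = 0, None
    for x in A:
        if counter == 0:        # no frontrunner: adopt x
            current, counter = x, 1
        elif x == current:      # confidence in the frontrunner goes up
            counter += 1
        else:                   # a different value cancels one vote
            counter -= 1
    return current

print(majority_element([2, 1, 1]))              # 1
print(majority_element([3, 3, 4, 3, 5, 3, 3]))  # 3
```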
1.2 The Heavy Hitters Problem
In the heavy hitters problem, the input is an array \(A\) of length \(n\), and also a parameter \(k\). You should think of \(n\) as very large (in the hundreds of millions, or billions), and \(k\) as modest (10, 100, or 1000). The goal is to compute the values that occur in the array at least \(n/k\) times.\footnote{A similar problem is the “top-k problem,” where the goal is to output the \(k\) values that occur with the highest frequencies. The algorithmic ideas introduced in this lecture are also relevant for the top-k problem.} Note that there can be at most \(k\) such values; and there might be none. The problem of computing the majority element corresponds to the heavy hitters problem with \(k = 2 - \delta\) for a small value \(\delta > 0\), and with the additional promise that a majority element exists.
The heavy hitters problem has lots of applications, as you can imagine. We’ll be more specific later when we discuss a concrete solution, but here are some high-level examples:\footnote{You wouldn’t expect there to be a majority element in any of these applications, but you might expect a non-empty set of heavy hitters when \(k\) is 100, 1000, or 10000.}
1. Computing popular products. For example, \(A\) could be all of the page views of products on \(\text{amazon.com}\) yesterday. The heavy hitters are then the most frequently viewed products.
2. Computing frequent search queries. For example, \(A\) could be all of the searches on Google yesterday. The heavy hitters are then searches made most often.
3. Identifying heavy TCP flows. Here, \(A\) is a list of data packets passing through a network switch, each annotated with a source-destination pair of IP addresses. The heavy hitters are then the flows that are sending the most traffic. This is useful for, among other things, identifying denial-of-service attacks.
4. Identifying volatile stocks. Here, \(A\) is a list of stock trades, and the heavy hitters are the most frequently traded stocks.
It’s easy to think of more. Clearly, it would be nice to have a good algorithm for the heavy hitters problem at your disposal for data analysis.
The problem is easy to solve efficiently if \(A\) is readily available in main memory — just sort the array and do a linear scan over the result, outputting a value if and only if it occurs
(consecutively) at least \( n/k \) times. After being spoiled by our slick solution for finding a majority element, we naturally want to do better. Can we solve the heavy hitters problem with a single pass over the array? This question isn’t posed quite correctly, since it allows us to cheat: we could make a single pass over the array, make a local copy of it in our working memory, and then apply the sorting-based solution to our local copy. Thus what we mean is: can we solve the Heavy Hitters problem with a single pass over the array, using only a small amount of auxiliary space?\(^{3}\)
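For reference, the in-memory exact solution just described is only a few lines of Python; the whole difficulty of this lecture is what to do once storing (or sorting) all of \(A\) is off the table.

```python
# Exact heavy hitters when the whole array fits in memory: sort, then
# scan for runs of length at least n/k.

def exact_heavy_hitters(A, k):
    n = len(A)
    hitters, run_start = [], 0
    B = sorted(A)
    for i in range(1, n + 1):
        if i == n or B[i] != B[run_start]:   # a run ends just before i
            if i - run_start >= n / k:
                hitters.append(B[run_start])
            run_start = i
    return hitters

print(exact_heavy_hitters([1, 2, 1, 3, 1, 2, 1], 3))   # [1]
```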
### 1.3 An Impossibility Result
The following fact might surprise you.
**Fact 1.1** There is no algorithm that solves the Heavy Hitters problem in one pass while using a sublinear amount of auxiliary space.
We next explain the intuition behind Fact 1.1. We encourage you to devise a formal proof, which follows the same lines as the intuition.
Set \( k = n/2 \), so that our responsibility is to output any values that occur at least twice in the input array \( A \). Suppose \( A \) has the form
\[
A \;=\; \underbrace{x_1 \mid x_2 \mid x_3 \mid \cdots \mid x_{n-1}}_{\text{a set } S \text{ of distinct elements}} \;\Big|\; y\,,
\]
where \( x_1, \ldots, x_{n-1} \) are an arbitrary set \( S \) of distinct elements (in \( \{1, 2, \ldots, n^2\} \), say) and the final entry \( y \) may or may not be in \( S \). By definition, we need to output \( y \) if and only if \( y \in S \). That is, answering membership queries reduces to solving the Heavy Hitters problem.\(^{4}\)
By the “membership problem,” we mean the task of preprocessing a set \( S \) to answer queries of the form “is \( y \in S \)?” (A hash table is the most common solution to this problem.) It is intuitive that you cannot correctly answer all membership queries for a set \( S \) without storing \( S \) (thereby using linear, rather than constant, space) — if you throw some of \( S \) out, you might get a query asking about the part you threw out, and you won’t know the answer. It’s not too hard to make this idea precise using the Pigeonhole Principle.\(^{5}\)
---
3 Rather than thinking of the array \( A \) as an input fully specified in advance, we can alternatively think of the elements of \( A \) as a “data stream,” which are fed to a “streaming algorithm” one element at a time. One-pass algorithms that use small auxiliary space translate to streaming algorithms that need only small working memory. One use case for streaming algorithms is when data arrives at such a fast rate that explicitly storing it is absurd. For example, this is often the reality in the motivating example of data packets traveling through a network switch. A second use case is when, even though data can be stored in its entirety and fully analyzed (perhaps as an overnight job), it’s still useful to perform lightweight analysis on the arriving data in real time. The first two applications (popular transactions or search queries) are examples of this.
4 A simple modification of this argument extends the impossibility result to all interesting values of \( k \) — can you figure it out?
5 Somewhat more detail: if you always use sublinear space to store the set \( S \), then you need to reuse exactly the same memory contents for two different sets \( S_1 \) and \( S_2 \). Your membership query answers will be the same in both cases, and in one of these cases some of your answers will be wrong.
1.4 The Approximate Heavy Hitters Problem
What should we make of Fact 1.1? Should we go home with our tail between our legs? Of course not — the applications that motivate the heavy hitters problem are not going away, and we still want to come up with non-trivial algorithms for them. In light of Fact 1.1, the best-case scenario would be to find a relaxation of the problem that remains relevant for the motivating applications and also admits a good solution.
In the $\epsilon$-approximate heavy hitters ($\epsilon$-HH) problem, the input is an array $A$ of length $n$ and user-defined parameters $k$ and $\epsilon$. The responsibility of an algorithm is to output a list of values such that:
1. Every value that occurs at least $\frac{n}{k}$ times in $A$ is in the list.
2. Every value in the list occurs at least $\frac{n}{k} - \epsilon n$ times in $A$.
What prevents us from taking $\epsilon = 0$ and solving the exact version of the problem? We allow the space used by a solution to grow as $\frac{1}{\epsilon}$, so as $\epsilon \downarrow 0$ the space blows up (as is necessary, by Fact 1.1).
For example, suppose we take $\epsilon = \frac{1}{2k}$. Then, the algorithm outputs every value with frequency count at least $\frac{n}{k}$, and only values with frequency count at least $\frac{n}{2k}$. Thinking back to the motivating examples in Section 1.2, such an approximate solution is essentially as useful as an exact solution. Space usage $O(\frac{1}{\epsilon}) = O(k)$ is also totally palatable; after all, the output of the heavy hitters or $\epsilon$-HH problem already might be as large as $k$ elements.
2 The Count-Min Sketch
2.1 Discussion
This section presents an elegant small-space data structure, the count-min sketch [5], that can be used to solve the $\epsilon$-HH problem. There are also several other good solutions to the problem, including some natural “counter-based” algorithms that extend the algorithm in Section 1.1 for computing a majority element [7, 6]. We focus on the count-min sketch for a number of reasons.
1. It has been implemented in real systems. For example, AT&T has used it in network switches to perform analyses on network traffic using limited memory [4]. At Google, a precursor of the count-min sketch (called the “count sketch” [3]) has been implemented on top of their MapReduce parallel processing infrastructure [8]. One of the original motivations for this primitive was log analysis (e.g., of source code check-ins), but presumably it is now used for lots of different analyses.
2. The data structure is based on hashing, and as such fits in well with the current course theme.
---
6There is a long tradition in the Internet of designing routers that are “fast and dumb,” and many of them have far less memory than a typical smartphone.
3. The data structure introduces a new theme, present in many of the next few lectures, of “lossy compression.” The goal here is to throw out as much of your data as possible while still being able to make accurate inferences about it. What you want to keep depends on the type of inference you want to support. For approximately preserving frequency counts, the count-min sketch shows that you can throw out almost all of your data!
We’ll only discuss how to use the count-min sketch to solve the approximate heavy hitters problem, but it is also useful for other related tasks (see [5] for a start). Another reason for its current popularity is that its computations parallelize easily — as we discuss its implementation, you might want to think about this point.
2.2 A Role Model: The Bloom Filter
This section briefly reviews the bloom filter data structure, which is a role model for the count-min sketch. No worries if you haven’t seen bloom filters before; our treatment of the count-min sketch below is self-contained. There are also review videos covering the details of bloom filters on the course Web site.
The raison d’être of a bloom filter is to solve the membership problem. The client can insert elements into the bloom filter and the data structure is responsible for remembering what’s been inserted. The bloom filter doesn’t do much, but what it does it does very well. Hash tables also offer a good solution to the membership problem, so why bother with a bloom filter? The primary motivation is to save space — a bloom filter compresses the stored set more than a hash table. In fact, the compression is so extreme that a bloom filter cannot possibly answer all membership queries correctly. That’s right, it’s a data structure that makes errors. Its errors are “one-sided,” with no false negatives (so if you inserted an element, the bloom filter will always confirm it) but with some false positives (so there are “phantom elements” that the data structure claims are present, even though they were never inserted). For instance, using 8 bits per stored element — well less than the space required for a pointer, for example — bloom filters can achieve a false positive probability less than 2%. More generally, bloom filters offer a smooth trade-off between the space used and the false positive probability. Both the insertion and lookup operations are super-fast (O(1) time) in a bloom filter, and what little work there is can also be parallelized easily.
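If it helps to have something concrete in mind, here is a minimal Python sketch of a bloom filter (illustrative only; production implementations choose the bit-array size and the hash family far more carefully):

```python
import hashlib

class BloomFilter:
    def __init__(self, m, ell):
        self.m, self.ell = m, ell
        self.bits = bytearray(m)          # one byte per bit, for simplicity

    def _positions(self, x):
        # derive ell hash values from a keyed cryptographic hash
        for i in range(self.ell):
            h = hashlib.sha256(f"{i}:{x}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def insert(self, x):
        for pos in self._positions(x):
            self.bits[pos] = 1

    def lookup(self, x):                  # no false negatives, some false positives
        return all(self.bits[pos] for pos in self._positions(x))

bf = BloomFilter(m=1000, ell=5)
bf.insert("apple")
print(bf.lookup("apple"), bf.lookup("pear"))   # True, (almost surely) False
```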
Bloom filters were invented in 1970 [1], back when space was at a premium for everything, even spellcheckers[7]. This century, bloom filters have gone viral in the computer networking community [2]. Saving space is still a big win in many networking applications, for example by making better use of the scarce main memory at a router or by reducing the amount of communication required to implement a network protocol.
Bloom filters serve as a role model for the count-min sketch in two senses. First, bloom filters offer a proof of concept that sacrificing a little correctness can yield significant space savings. Note this is exactly the trade-off we’re after: Fact 1.1 states that exactly solving the
---
[7] The proposal was to insert all correctly spelled words into a bloom filter. A false positive is then a misspelled word that the spellchecker doesn’t catch.
heavy hitters problem requires linear space, and we’re hoping that by relaxing correctness — i.e., solving the \(\epsilon\)-HH problem instead — we can use far less space. Second, at a technical level, if you remember how bloom filters are implemented, you’ll recognize the count-min sketch implementation as a bird of the same feather.
### 2.3 Count-Min Sketch: Implementation
The count-min-sketch supports two operations: \(\text{Inc}(x)\) and \(\text{Count}(x)\). The operation \(\text{Count}(x)\) is supposed to return the frequency count of \(x\), meaning the number of times that \(\text{Inc}(x)\) has been invoked in the past.
The count-min sketch has two parameters, the number of buckets \(b\) and the number of hash functions \(\ell\). We’ll figure out how to choose these parameters in Section 2.5, but for now you might want to think of \(b\) as in the thousands and of \(\ell\) as 5. The point of \(b\) is to compress the array \(A\) (since \(b \ll n\)). This compression leads to errors. The point of \(\ell\) is to implement a few “independent trials,” which allows us to reduce the error. What’s important, and kind of amazing, is that these parameters are independent of the length \(n\) of the array that we are processing (recall \(n\) might be in the billions, or even larger).
The data structure is just an \(\ell \times b\) 2-D array CMS of counters (initially all 0); see Figure 1.

**Figure 1**: Running \(\text{Inc}(x)\) on the CMS data structure. Each row corresponds to a hash function \(h_i\).

After choosing \(\ell\) hash functions \(h_1, \ldots, h_\ell\), each mapping the universe of objects to \(\{1,2,\ldots,b\}\), the code for \(\text{Inc}(x)\) is simply:
- for \(i = 1,2,\ldots,\ell\):
- increment \(\text{CMS}[i][h_i(x)]\)
Assuming that every hash function can be evaluated in constant time, the running time of the operation is clearly \(O(\ell)\).
To motivate the implementation of \(\text{Count}(x)\), fix a row \(i \in \{1,2,\ldots,\ell\}\). Every time \(\text{Inc}(x)\) is called, the same counter \(\text{CMS}[i][h_i(x)]\) in this row gets incremented. Since counters are never decremented, we certainly have
\[
\text{CMS}[i][h_i(x)] \geq f_x, \tag{1}
\]
where \(f_x\) denotes the frequency count of object \(x\). If we’re lucky, then equality holds in (1). In general, however, there will be collisions: objects \(y \neq x\) with \(h_i(y) = h_i(x)\). (Note with \(b \ll n\), there will be lots of collisions.) Whenever \(\text{Inc}(y)\) is called for an object \(y\) that collides with \(x\) in row \(i\), this will also increment the same counter \(\text{CMS}[i][h_i(x)]\). So while \(\text{CMS}[i][h_i(x)]\) cannot underestimate \(f_x\), it generally overestimates \(f_x\).
The \(\ell\) rows of the count-min sketch give \(\ell\) different estimates of \(f_x\). How should we aggregate these estimates? Later in the course, we’ll see scenarios where using the mean or the median is a good way to aggregate. Here, our estimates suffer only one-sided error — all of them can only be bigger than the number \(f_x\) we want to estimate, and so it’s a no-brainer which estimate we should pay attention to. The smallest of the estimates is clearly the best estimate. Thus, the code for \(\text{Count}(x)\) is simply:
- return \(\min_{i=1}^{\ell} \text{CMS}[i][h_i(x)]\)
The running time is again \(O(\ell)\). By (1), the data structure has one-sided error — it only returns overestimates of true frequency counts, never underestimates. The key question is obviously: how large are typical overestimates? The answer depends on how we set the parameters \(b\) and \(\ell\). As \(b\) gets bigger, we’ll have fewer collisions and hence less error. As \(\ell\) gets bigger, we’ll take the minimum over more independent estimates, resulting in tighter estimates. Thus the question is whether or not modest values of \(b\) and \(\ell\) are sufficient to guarantee that the overestimates are small. This is a quantitative question that can only be answered with mathematical analysis; we do this in the next section (and the answer is yes!).
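Putting the two operations together, a compact Python implementation might look like this (hash functions simulated here with a keyed cryptographic hash; a real deployment would use a faster universal family):

```python
import hashlib

class CountMinSketch:
    def __init__(self, b, ell):
        self.b, self.ell = b, ell
        self.cms = [[0] * b for _ in range(ell)]   # ell x b counters, all 0

    def _h(self, i, x):
        digest = hashlib.sha256(f"{i}:{x}".encode()).digest()
        return int.from_bytes(digest[:8], "big") % self.b

    def inc(self, x):
        for i in range(self.ell):
            self.cms[i][self._h(i, x)] += 1

    def count(self, x):                   # one-sided error: never underestimates
        return min(self.cms[i][self._h(i, x)] for i in range(self.ell))

cms = CountMinSketch(b=200, ell=5)        # b = 2/eps with eps = .01 (Section 2.4)
for _ in range(1000):
    cms.inc("popular")
cms.inc("rare")
print(cms.count("popular"), cms.count("rare"))   # >= 1000, >= 1
```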
**Remark 2.1 (Comparison to Bloom Filters)** The implementation details of the count-min sketch are very similar to those of a bloom filter. The latter structure only uses bits, rather than integer-valued counters. When an object is inserted into a bloom filter, \(\ell\) hash functions indicate \(\ell\) bits that should be set to 1 (whether or not they were previously 0 or 1). The count-min sketch, which is responsible for keeping counts rather than just tracking membership, instead increments \(\ell\) counters. Looking up an object in a bloom filter just involves checking the \(\ell\) bits corresponding to that object — if any of them are still 0, then
the object has not been previously inserted. Thus Lookup in a bloom filter can be thought of as taking the minimum of \( \ell \) bits, which exactly parallels the Count operation of a count-min-sketch. That the count-min sketch only overestimates frequency counts corresponds to the bloom filter’s property that it only suffers from false positives.
## 2.4 Count-Min Sketch: Heuristic Error Analysis
The goal of this section is to analyze how much a count-min sketch overestimates frequency counts, as a function of the parameters \( b \) and \( \ell \). Once we understand the relationship between the error and these parameters, we can set the parameters to guarantee simultaneously small space and low error.
Fix an object \( x \). Let’s first think about a single row \( i \) of the count-min sketch; we’ll worry about taking the minimum over rows later. After a bunch of Inc\((x)\) operations have been executed, what’s the final value of CMS\([i][h_i(x)]\), row \( i \)’s estimate for the frequency count of \( x \)?
If we’re lucky and no other objects collide with \( x \) in the \( i \)th row, then CMS\([i][h_i(x)]\) is just the true frequency count \( f_x \) of \( x \). If we’re unlucky and some object \( y \) collides with \( x \) in the \( i \)th row, then \( y \) contributes its own frequency count \( f_y \) to CMS\([i][h_i(x)]\). More generally, CMS\([i][h_i(x)]\) is the sum of the contributions to this counter by \( x \) and all other objects that collide with it:
\[
CMS[i][h_i(x)] = f_x + \sum_{y \in S} f_y, \tag{2}
\]
where \( S = \{y \neq x : h_i(y) = h_i(x)\} \) denotes the objects that collide with \( x \) in the \( i \)th row. In this equation, \( f_x \) and the \( f_y \)'s are fixed constants (independent of the choice of \( h_i \)), while the set \( S \) will be different for different choices of the hash function \( h_i \).
Recall that a good hash function spreads out a data set as well as if it were a random function. With \( b \) buckets and a good hash function \( h_i \), we expect \( x \) to collide with a roughly \( 1/b \) fraction of the other elements \( y \neq x \) under \( h_i \). Thus we expect
\[
CMS[i][h_i(x)] = f_x + \frac{1}{b} \sum_{y \neq x} f_y \leq f_x + \frac{n}{b}, \tag{3}
\]
where in the inequality we use that the sum of the frequency counts is exactly the total number \( n \) of increments (each increment adds 1 to exactly one frequency count). See Section 2.5 for a formal (non-heuristic) derivation of (3).
We should be pleased with (3). Recall the definition of the \( \epsilon \)-approximate heavy hitters problem (Section 1.4): the goal is to identify objects with frequency count at least \( \frac{n}{k} \), without being fooled by any objects with frequency count less than \( \frac{n}{k} - \epsilon n \). This means we just need to estimate the frequency count of an object up to an additive one-sided error of \( \epsilon n \). If we take the number of buckets \( b \) in the count-min sketch to be equal to \( \frac{1}{\epsilon} \), then (3) says the expected overestimate of a given object is at most \( \epsilon n \). Note that the value of \( b \), and hence the number of counters used by the data structure, is completely independent of \( n \)! If you think of \( \epsilon = .001 \) and \( n \) as in the billions, then this is pretty great.
So why aren’t we done? We’d like to say that, in addition to the expected overestimate of a frequency count being small, with very large probability the overestimate of a frequency count is small. (For a role model, recall that typical bloom filters guarantee a false positive probability of 1-2%). This requires translating our bound on an expectation to a bound on a probability.
Next, we observe that (3) implies that the probability that a row’s overestimate of $x$ is more than $\frac{2n}{b}$ is less than 50%. (If not, the expected overestimate would be greater than $\frac{1}{2} \cdot \frac{2n}{b} = \frac{n}{b}$, contradicting (3).) This argument is a special case of “Markov’s inequality;” see Section 2.5 for details.
A possibly confusing point in this heuristic analysis is: in the observation above, what is the probability over, exactly? I.e., where is the randomness? There are two morally equivalent interpretations of the analysis in this section. The first, which is carried out formally and in detail in Section 2.5, is to assume that the hash function $h_i$ is chosen uniformly at random from a universal family of hash functions. The second is to assume that the hash function $h_i$ is fixed and that the data is random. If $h_i$ is a well-crafted hash function, then your particular data set will almost always behave like random data.\(^{10}\)
Remember that everything we’ve done so far is just for a single row $i$ of the hash table. The output of $\text{Count}(x)$ exceeds $f_x$ by more than $\epsilon n$ only if every row’s estimate is too big. Assuming that the hash functions $h_i$ are independent,\(^{11}\) we have
\[
\Pr\left[ \min_{i=1}^\ell \text{CMS}[i][h_i(x)] > f_x + \frac{2n}{b} \right] = \prod_{i=1}^\ell \Pr\left[ \text{CMS}[i][h_i(x)] > f_x + \frac{2n}{b} \right] \leq \left( \frac{1}{2} \right)^\ell.
\]
To get an overestimate threshold of $\epsilon n$, we can set $b = \frac{2}{\epsilon}$ (so e.g., 200 when $\epsilon = .01$). To drive the error probability — that is, the probability of an overestimate larger than this threshold — down to the user-specified value $\delta$, we set
\[
\left( \frac{1}{2} \right)^\ell = \delta
\]
and solve to obtain $\ell = \log_2 \frac{1}{\delta}$. (This is between 6 and 7 when $\delta = .01$.) Thus the total number of counters required when $\delta = \epsilon = .01$ is barely over a thousand (no matter how long the array is!). See Section 2.6 for a detailed recap of all of the count-min sketch’s properties, and Section 2.5 for a rigorous and optimized version of the heuristic analysis in this section.
\(^{10}\)In an implementation that chooses $h_i$ deterministically as a well-crafted hash function, the error analysis above does not actually hold for an arbitrary data set. (Recall that for every fixed hash function there is a pathological data set where everything collides.) So instead we say that the analysis is “heuristic” in this case, meaning that while not literally true, we nevertheless expect reality to conform to its predictions (because we expect the data to be non-pathological). Whenever you do a heuristic analysis to predict the performance of an implementation, you should always measure the implementation’s performance to double-check that it’s working as expected. (Of course, you should do this even when you’ve proved performance bounds rigorously — there can always be unmodeled effects (cache performance, etc.) that cause reality to diverge from your theoretical predictions for it.)
\(^{11}\)Don’t forget that probabilities factor only for independent events. There are again two interpretations of this step: in the first, we assume that each $h_i$ is chosen independently and randomly from a universal family of hash functions; in the second, we assume that the $h_i$’s are sufficiently well crafted that they almost always behave as if they were independent on real data.
## 2.5 Count-Min Sketch: Rigorous Error Analysis
This section carries out a rigorous version of the heuristic error analysis in Section 2.4. Let $f_x$ denote the true frequency count of $x$, and $Z_i$ the (over)estimate $CMS[i][h_i(x)]$. $Z_i$ is a random variable over the state space equal to the set of all possible hash functions $h_i$. (I.e., given an $h_i$, $Z_i$ is fully determined.)
If we’re lucky and no other objects collide with $x$ in the $i$th row, then $Z_i = f_x$. If we’re unlucky and some object $y$ collides with $x$ in the $i$th row, then $y$ contributes its own frequency count $f_y$ to $Z_i$. As in (2), we can write
$$Z_i = f_x + \sum_{y \in S} f_y, \quad (4)$$
where $S = \{y \neq x : h_i(y) = h_i(x)\}$ denotes the objects that collide with $x$ in the $i$th row. In (4), $f_x$ and the $f_y$’s are fixed constants (independent of the choice of $h_i$), while the set $S$ is random (i.e., different for different choices of $h_i$).
To continue the error analysis, we make the following assumption:
(*) For every pair $x, y$ of distinct objects, $Pr[h_i(y) = h_i(x)] \leq \frac{1}{b}$.
Assumption (*) basically says that, after conditioning on the bucket to which $h_i$ assigns an object $x$, the bucket $h_i$ assigns to some other object $y$ is uniformly random. For example, the assumption would certainly be satisfied if $h_i$ is a completely random function. It is also satisfied if $h_i$ is chosen uniformly at random from a universal family — it is precisely the definition of such a family.
Before using assumption (*) to analyze (4), we recall linearity of expectation: for any real-valued random variables $X_1, \ldots, X_m$ defined on the same probability space,
$$\mathbb{E} \left[ \sum_{j=1}^{m} X_j \right] = \sum_{j=1}^{m} \mathbb{E}[X_j]. \quad (5)$$
That is, the expectation of a sum is just the sum of the expectations, even if the random variables are not independent.\footnote{Note the analogous statement for products is false if the $X_j$’s are not independent. For example, suppose $X_1 \in \{0, 1\}$ is uniform while $X_2 = 1 - X_1$. Then $E[X_1 \cdot X_2] = 0$ while $E[X_1] \cdot E[X_2] = \frac{1}{4}$.} The statement is trivial to prove — just expand the expectations and reverse the order of summation — and insanely useful.
To put the pieces together, we first rewrite (4) as
$$Z_i = f_x + \sum_{y \neq x} f_y 1_y, \quad (6)$$
where $1_y$ is the indicator random variable that indicates whether or not $y$ collides with $x$ under $h_i$:
$$1_y = \begin{cases}
1 & \text{if } h_i(y) = h_i(x) \\
0 & \text{otherwise.}
\end{cases}$$
Recalling that $f_x$ and the $f_y$’s are constants, we can apply linearity of expectation to (6) to obtain
$$\mathbb{E}[Z_i] = f_x + \sum_{y \neq x} f_y \cdot \mathbb{E}[1_y]. \quad (7)$$
As indicator random variables, the $1_y$’s have very simple expectations:
$$\mathbb{E}[1_y] = 1 \cdot \Pr[h_i(y) = h_i(x)] + 0 \cdot \Pr[1_y = 0] = \Pr[h_i(y) = h_i(x)] \leq \frac{1}{b}. \quad (8)$$
Combining (7) and (8) gives
$$\mathbb{E}[Z_i] \leq f_x + \frac{1}{b} \sum_{y \neq x} f_y \leq f_x + \frac{n}{b}. \quad (9)$$
Next we translate this bound on an expectation to a bound on a probability. A simple and standard way to do this is via Markov’s inequality.
**Proposition 2.2 (Markov’s Inequality)** If $X$ is a nonnegative random variable and $c > 1$ is a constant, then
$$\Pr[X > c \cdot \mathbb{E}[X]] \leq \frac{1}{c}.$$
The proof of Markov’s inequality is simple. For example, suppose you have a nonnegative random variable $X$ with expected value 10. How frequently could it take on a value greater than 100? (So $c = 10$.) In principle, it is possible that $X$ has value exactly 100 10% of the time (if it has value 0 the rest of the time). But it can’t have value strictly greater than 100 10% or more of the time — if it did, its expectation would be strictly greater than 10. An analogous argument applies to nonnegative random variables with any expectation and for any value of $c$.
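In symbols, the argument is one line: since $X \geq 0$ and $X$ strictly exceeds $c \cdot \mathbb{E}[X]$ on the event in question, the contribution of that event alone to the expectation gives

$$\mathbb{E}[X] \;\geq\; c \cdot \mathbb{E}[X] \cdot \Pr[X > c \cdot \mathbb{E}[X]],$$

and dividing both sides by $c \cdot \mathbb{E}[X]$ yields Proposition 2.2. (If $\mathbb{E}[X] = 0$, then $X = 0$ with probability 1 and the inequality holds trivially.)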
Let’s return to our error analysis, for a fixed object $x$ and row $i$. Define
$$X = Z_i - f_x \geq 0$$
as the amount by which the $i$th row of the count-min sketch overestimates $x$’s frequency count $f_x$. By (9), with $b = \frac{e}{\epsilon}$ buckets per row, the expected value of $X$ is at most $\frac{\epsilon n}{e}$. Since this overestimate is always nonnegative, we can apply Markov’s inequality (Proposition 2.2) with $\mathbb{E}[X] \leq \frac{\epsilon n}{e}$ and $c = e$ to obtain
$$\Pr\left[X > e \cdot \frac{\epsilon n}{e}\right] \leq \frac{1}{e},$$
and hence
\[ \Pr[Z_i > f_x + \epsilon n] \leq \frac{1}{e}. \]
Assuming that the hash functions are chosen independently, we have
\[ \Pr\left[ \min_{i=1}^{\ell} Z_i > f_x + \epsilon n \right] = \prod_{i=1}^{\ell} \Pr[Z_i > f_x + \epsilon n] \leq e^{-\ell}. \quad (10) \]
To achieve a target error probability of \( \delta \), we just solve for \( \ell \) in (10) and find that \( \ell \geq \ln \frac{1}{\delta} \) rows are sufficient. For \( \delta \) around 1\%, \( \ell = 5 \) is good enough.
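Why base $e$? Here is one way to see where it comes from (a short optimization, not spelled out above): applying Markov’s inequality with a general parameter $c > 1$ and setting $b = \frac{c}{\epsilon}$, each row errs with probability at most $\frac{1}{c}$, so $\ell = \log_c \frac{1}{\delta}$ rows suffice, for a total of

$$b \cdot \ell = \frac{c}{\epsilon} \cdot \frac{\ln \frac{1}{\delta}}{\ln c}$$

counters. The factor $\frac{c}{\ln c}$ is minimized at $c = e$ (its derivative $\frac{\ln c - 1}{\ln^2 c}$ vanishes there), which is exactly the choice made above and explains the constant $e$ in the space bound of Section 2.6.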
## 2.6 Count-Min Sketch: Final Report Card
- The space required is that for \( \frac{e}{\epsilon} \ln \frac{1}{\delta} \) counters. Recall from Section 1.4 that for the \( \epsilon \)-HH problem, \( \epsilon = \frac{1}{2k} \) is a sensible choice. For \( k = 100 \) and \( \delta = .01 \) this is in the low thousands. For larger values of \( k \), the number of counters needed scales linearly.\(^{14}\)
- In any case, the number of counters is totally independent of \( n \) (which could be in billions)! This is the magic of the count-min sketch — you can throw out almost all of your data set and still maintain approximate frequency counts. Contrast this with bloom filters, and pretty much every other data structure that you’ve seen, where the space grows linearly with the number of processed elements.\(^{15}\)
- Assuming the hash functions take constant time to evaluate, the Inc and Count operations run in \( O(\ln \frac{1}{\delta}) \) time.
- The count-min sketch guarantees 1-sided error: no matter how the hash functions \( h_1, \ldots, h_\ell \) are chosen, for every object \( x \) with frequency count \( f_x \), the count-min sketch returns an estimate \( \text{Count}(x) \) that is at least \( f_x \).
- Assuming that each hash function \( h_1, \ldots, h_\ell \) is chosen uniformly from a universal family, for every object \( x \) with frequency count \( f_x \), the probability that the estimate \( \text{Count}(x) \) output by the count-min sketch is greater than \( f_x + \epsilon n \) is at most \( \delta \). One would expect comparable performance for fixed well-crafted hash functions \( h_1, \ldots, h_\ell \) on pretty much any data set that you might encounter.
\(^{14}\)Note that the challenging case, and the case that often occurs in our motivating applications, is an array \( A \) that simultaneously has lots of different elements but also a few elements that occur many times. If there are few distinct elements, one can maintain the frequency counts exactly using one counter per distinct element. If no elements occur frequently, then there’s nothing to do.
\(^{15}\)How is this possible? Intuitively, with an error of \( \epsilon n \) allowed, only the elements with large (> \( \epsilon n \)) frequency counts matter, and there can be at most \( \frac{1}{\epsilon} \) such elements (why?). Thus it is plausible that space proportional to \( \frac{1}{\epsilon} \) might be enough. Of course, there’s still the issue of not knowing in advance which \( \approx \frac{1}{\epsilon} \) elements are the important ones!
## 2.7 Solving the $\epsilon$-Heavy Hitters Problem
The count-min sketch can be used to solve the $\epsilon$-HH problem from Section 1.4. If the total number $n$ of array elements is known in advance, this is easy: set $\epsilon = \frac{1}{2k}$, process the array elements using a count-min sketch in a single pass, and remember an element once its estimated frequency (according to the count-min sketch) is at least $\frac{n}{k}$.
When $n$ is not known a priori, here is one way to solve the problem. Assume that $\epsilon = \frac{1}{2k}$ and so the number of counters is $O(k \ln \frac{1}{\delta})$. In a single left-to-right pass over the array $A$, maintain the number $m$ of array entries processed thus far. We store potential heavy hitters in a heap data structure. When processing the next object $x$ of the array, we invoke $\text{Inc}(x)$ followed by $\text{Count}(x)$. If $\text{Count}(x) \geq \frac{m}{k}$, then we store $x$ in the heap, using the key $\text{Count}(x)$. (If $x$ was already in the heap, we delete it before re-inserting it with its new key value.) This requires one insertion and at most one deletion from the heap. Also, whenever $m$ grows to the point that some object $x$ stored in the heap has a key less than $\frac{m}{k}$ (checkable in $O(1)$ time via $\text{Find-Min}$), we delete $x$ from the heap (via $\text{Extract-Min}$). After finishing the pass, we output all of the objects in the heap.
Assume for simplicity that the count-min sketch makes no large errors, with $\text{Count}(x) \in [f_x, f_x + \epsilon n]$ for all $x$. Every object $x$ with $f_x \geq \frac{n}{k}$ is in the heap at the end of the pass. (To see this, consider what happens the last time that $x$ occurs.) The “no large errors” assumption implies an approximate converse: every object $x$ in the heap has true frequency count at least $\frac{n}{k} - \epsilon n = \frac{n}{2k}$ (other objects would be deleted from the heap by the end of the pass). These are exactly the two properties we ask of a solution to the $\epsilon$-HH problem. If the count-min sketch makes large errors on a few objects, then these objects might erroneously appear in the final output as well. Ignoring the objects with large errors, the heap contains at most $2k$ objects at all times (why?), so maintaining the heap requires an extra $O(\log k) = O(\log \frac{1}{\epsilon})$ amount of work per array entry.
Abstract
Beginners may find it difficult to relate the facts from the formal documentation on the BSD rc.d framework with the practical tasks of rc.d scripting. In this article, we consider a few typical cases of increasing complexity, show rc.d features suited for each case, and discuss how they work. Such an examination should provide reference points for further study of the design and efficient application of rc.d.
Table of Contents
1. Introduction
2. Outlining the task
3. A dummy script
4. A configurable dummy script
5. Startup and shutdown of a simple daemon
6. Startup and shutdown of an advanced daemon
7. Connecting a script to the rc.d framework
8. Giving more flexibility to an rc.d script
9. Further reading
1. Introduction
The historical BSD had a monolithic startup script, /etc/rc. It was invoked by init(8) at system boot time and performed all userland tasks required for multi-user operation: checking and mounting file systems, setting up the network, starting daemons, and so on. The precise list of tasks was not the same in every system; admins needed to customize it. With few exceptions, /etc/rc had to be modified, and true hackers liked it.
The real problem with the monolithic approach was that it provided no control over the individual components started from /etc/rc. For instance, /etc/rc could not restart a single daemon. The system admin had to find the daemon process by hand, kill it, wait until it actually exited, then browse through /etc/rc for the flags, and finally type the full command line to start the daemon again. The task would become even more difficult and prone to errors if the service to restart consisted of more than one daemon or demanded additional actions. In a few words, the single script failed to fulfil what scripts are for: to make the system admin’s life easier.
Later there was an attempt to split out some parts of /etc/rc for the sake of starting the most important subsystems separately. The notorious example was /etc/netstart to bring up networking. It did allow for accessing the network from single-user mode, but it did not integrate well into the automatic startup process because parts of its code needed to interleave with actions essentially
unrelated to networking. That was why /etc/netstart mutated into /etc/rc.network. The latter was no longer an ordinary script; it comprised large, tangled sh(1) functions called from /etc/rc at different stages of system startup. However, as the startup tasks grew diverse and sophisticated, the “quasi-modular” approach became even more of a drag than the monolithic /etc/rc had been.
Without a clean and well-designed framework, the startup scripts had to bend over backwards to satisfy the needs of rapidly developing BSD-based operating systems. It became obvious at last that more steps are necessary on the way to a fine-grained and extensible rc system. Thus BSD rc.d was born. Its acknowledged fathers were Luke Mewburn and the NetBSD community. Later it was imported into FreeBSD. Its name refers to the location of system scripts for individual services, which is in /etc/rc.d. Soon we will learn about more components of the rc.d system and see how the individual scripts are invoked.
The basic ideas behind BSD rc.d are fine modularity and code reuse. Fine modularity means that each basic “service” such as a system daemon or primitive startup task gets its own sh(1) script able to start the service, stop it, reload it, check its status. A particular action is chosen by the command-line argument to the script. The /etc/rc script still drives system startup, but now it merely invokes the smaller scripts one by one with the start argument. It is easy to perform shutdown tasks as well by running the same set of scripts with the stop argument, which is done by /etc/rc.shutdown. Note how closely this follows the Unix way of having a set of small specialized tools, each fulfilling its task as well as possible. Code reuse means that common operations are implemented as sh(1) functions and collected in /etc/rc.subr. Now a typical script can be just a few lines’ worth of sh(1) code. Finally, an important part of the rc.d framework is rcorder(8), which helps /etc/rc to run the small scripts orderly with respect to dependencies between them. It can help /etc/rc.shutdown, too, because the proper order for the shutdown sequence is opposite to that of startup.
The BSD rc.d design is described in the original article by Luke Mewburn, and the rc.d components are documented in great detail in the respective manual pages. However, it might not appear obvious to an rc.d newbie how to tie the numerous bits and pieces together to create a well-styled script for a particular task. Therefore this article will try a different approach to describe rc.d. It will show which features should be used in a number of typical cases, and why. Note that this is not a how-to document because our aim is not at giving ready-made recipes, but at showing a few easy entrances into the rc.d realm. Neither is this article a replacement for the relevant manual pages. Do not hesitate to refer to them for more formal and complete documentation while reading this article.
There are prerequisites to understanding this article. First of all, you should be familiar with the sh(1) scripting language to master rc.d. In addition, you should know how the system performs userland startup and shutdown tasks, which is described in rc(8).
This article focuses on the FreeBSD branch of rc.d. Nevertheless, it may be useful to NetBSD developers, too, because the two branches of BSD rc.d not only share the same design but also stay similar in their aspects visible to script authors.
2. Outlining the task
A little consideration before starting $EDITOR will not hurt. To write a well-tempered rc.d script for a system service, we should be able to answer the following questions first:
• Is the service mandatory or optional?
• Will the script serve a single program, e.g., a daemon, or perform more complex actions?
• Which other services will our service depend on, and vice versa?
From the examples that follow we will see why it is important to know the answers to these questions.
3. A dummy script
The following script just emits a message each time the system boots up:
```bash
#!/bin/sh

. /etc/rc.subr

name="dummy"
start_cmd="${name}_start"
stop_cmd=":"

dummy_start()
{
	echo "Nothing started."
}

load_rc_config $name
run_rc_command "$1"
```
Things to note are:
An interpreted script should begin with the magic "shebang" line. That line specifies the interpreter program for the script. Due to the shebang line, the script can be invoked exactly like a binary program provided that it has the execute bit set. (See `chmod(1)`.) For example, a system admin can run our script manually, from the command line:
```
# /etc/rc.d/dummy start
```
To be properly managed by the rc.d framework, its scripts need to be written in the `sh(1)` language. If you have a service or port that uses a binary control utility or a startup routine written in another language, install that element in `/usr/sbin` (for the system) or `/usr/local/sbin` (for ports) and call it from a `sh(1)` script in the appropriate rc.d directory.
If you would like to learn the details of why rc.d scripts must be written in the `sh(1)` language, see how `/etc/rc` invokes them by means of `run_rc_script`, then study the implementation of `run_rc_script` in `/etc/rc.subr`.
In /etc/rc.subr, a number of sh(1) functions are defined for an rc.d script to use. The functions are documented in rc.subr(8). While it is theoretically possible to write an rc.d script without ever using rc.subr(8), its functions prove extremely handy and make the job an order of magnitude easier. So it is no surprise that everybody resorts to rc.subr(8) in rc.d scripts. We are not going to be an exception.
An rc.d script must "source" /etc/rc.subr (include it using ".") before it calls rc.subr(8) functions so that sh(1) has an opportunity to learn the functions. The preferred style is to source /etc/rc.subr first of all.
Some useful functions related to networking are provided by another include file, /etc/network.subr.
The mandatory variable name specifies the name of our script. It is required by rc.subr(8). That is, each rc.d script must set name before it calls rc.subr(8) functions.
Now it is the right time to choose a unique name for our script once and for all. We will use it in a number of places while developing the script. For a start, let us give the same name to the script file, too.
The current style of rc.d scripting is to enclose values assigned to variables in double quotes. Keep in mind that it is just a style issue that may not always be applicable. You can safely omit quotes from around simple words without sh(1) metacharacters in them, while in certain cases you will need single quotes to prevent any interpretation of the value by sh(1). A programmer should be able to tell the language syntax from style conventions and use both of them wisely.
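As a small illustration of these points (the variable names and values are made up for the example):

```bash
name="dummy"               # double quotes: the prevailing rc.d style
mode=fast                  # a simple word may safely go unquoted
msg='keep $HOME as-is'     # single quotes: sh(1) performs no expansion
```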
The main idea behind rc.subr(8) is that an rc.d script provides handlers, or methods, for rc.subr(8) to invoke. In particular, start, stop, and other arguments to an rc.d script are handled this way. A method is a sh(1) expression stored in a variable named argument_cmd, where argument corresponds to what can be specified on the script's command line. We will see later how rc.subr(8) provides default methods for the standard arguments.
To make the code in rc.d more uniform, it is common to use ${name} wherever appropriate. Thus a number of lines can be just copied from one script to another.
We should keep in mind that rc.subr(8) provides default methods for the standard arguments. Consequently, we must override a standard method with a no-op sh(1) expression if we want it to do nothing.
The body of a sophisticated method can be implemented as a function. It is a good idea to make the function name meaningful.
It is strongly recommended to add the prefix ${name} to the names of all functions defined in our script so they never clash with the functions from rc.subr(8) or another common include file.
This call to rc.subr(8) loads rc.conf(5) variables. Our script makes no use of them yet, but it still is recommended to load rc.conf(5) because there can be rc.conf(5) variables controlling rc.subr(8) itself.

Usually this is the last command in an rc.d script. It invokes the rc.subr(8) machinery to perform the requested action using the variables and methods our script has provided.
4. A configurable dummy script
Now let us add some controls to our dummy script. As you may know, rc.d scripts are controlled with rc.conf(5). Fortunately, rc.subr(8) hides all the complications from us. The following script uses rc.conf(5) via rc.subr(8) to see whether it is enabled in the first place, and to fetch a message to show at boot time. These two tasks in fact are independent. On the one hand, an rc.d script can just support enabling and disabling its service. On the other hand, a mandatory rc.d script can have configuration variables. We will do both things in the same script though:
```bash
#!/bin/sh

. /etc/rc.subr

name=dummy
rcvar=dummy_enable ➊

start_cmd="${name}_start"
stop_cmd=":"

load_rc_config $name ➋
: ${dummy_enable:=no} ➌
: ${dummy_msg="Nothing started."} ➍

dummy_start()
{
	echo "$dummy_msg" ➎
}

run_rc_command "$1"
```
What changed in this example?
➊ The variable `rcvar` specifies the name of the ON/OFF knob variable.

➋ Now `load_rc_config` is invoked earlier in the script, before any rc.conf(5) variables are accessed.

While examining rc.d scripts, keep in mind that sh(1) defers the evaluation of expressions in a function until the latter is called. Therefore it is not an error to invoke `load_rc_config` as late as just before `run_rc_command` and still access rc.conf(5) variables from the method functions exported to `run_rc_command`. This is because the method functions are to be called by `run_rc_command`, which is invoked after `load_rc_config`.
➌ A warning will be emitted by `run_rc_command` if `rcvar` itself is set, but the indicated knob variable is unset. If your rc.d script is for the base system, you should add a default setting for the knob to `/etc/defaults/rc.conf` and document it in `rc.conf(5)`. Otherwise, it is your script that should provide a default setting for the knob. The canonical approach to the latter case is shown in the example.
You can make `rc.subr(8)` act as though the knob is set to ON, irrespective of its current setting, by prefixing the argument to the script with `one` or `force`, as in `onestart` or `forcestart`. Keep in mind though that `force` has other dangerous effects we will touch upon below, while `one` just overrides the ON/OFF knob. E.g., assume that `dummy_enable` is OFF. The following command will run the `start` method in spite of the setting:
```
# /etc/rc.d/dummy onestart
```
➍ Now the message to be shown at boot time is no longer hard-coded in the script. It is specified by an `rc.conf(5)` variable named `dummy_msg`. This is a trivial example of how `rc.conf(5)` variables can control an rc.d script.
The names of all `rc.conf(5)` variables used exclusively by our script must have the same prefix: `${name}_`. For example: `dummy_mode`, `dummy_state_file`, and so on.
While it is possible to use a shorter name internally, e.g., just `msg`, adding the unique prefix `${name}_` to all global names introduced by our script will save us from possible collisions with the `rc.subr(8)` namespace.
As a rule, rc.d scripts of the base system need not provide defaults for their `rc.conf(5)` variables because the defaults should be set in `/etc/defaults/rc.conf` instead. On the other hand, rc.d scripts for ports should provide the defaults as shown in the example.
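Putting the pieces together, a system admin could then control the script entirely from rc.conf(5); the particular values below are only an illustration:

```bash
# possible /etc/rc.conf lines for the configurable dummy script
dummy_enable="YES"
dummy_msg="Hello from rc.conf!"
```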
➎ Here we use `dummy_msg` to actually control our script, i.e., to emit a variable message. Use of a shell function is overkill here, since it only runs a single command; an equally valid alternative is:

```bash
start_cmd="echo \"$dummy_msg\""
```
5. Startup and shutdown of a simple daemon
We said earlier that `rc.subr(8)` could provide default methods. Obviously, such defaults cannot be too general. They are suited for the common case of starting and shutting down a simple daemon program. Let us assume now that we need to write an rc.d script for such a daemon called `mumbled`. Here it is:
```
#!/bin/sh

. /etc/rc.subr

name=mumbled
rcvar=mumbled_enable

command="/usr/sbin/${name}"

load_rc_config $name
run_rc_command "$1"
```
Pleasingly simple, isn’t it? Let us examine our little script. The only new thing to note is as follows:
The `command` variable is meaningful to `rc.subr(8)`. If it is set, `rc.subr(8)` will act according to the scenario of serving a conventional daemon. In particular, the default methods will be provided for such arguments: `start`, `stop`, `restart`, `poll`, and `status`.
The daemon will be started by running `$command` with command-line flags specified by `$mumbled_flags`. Thus all the input data for the default `start` method are available in the variables set by our script. Unlike `start`, other methods may require additional information about the process started. For instance, `stop` must know the PID of the process to terminate it. In the present case, `rc.subr(8)` will scan through the list of all processes, looking for a process with its name equal to `procname`. The latter is another variable of meaning to `rc.subr(8)`, and its value defaults to that of `command`. In other words, when we set `command`, `procname` is effectively set to the same value. This enables our script to kill the daemon and to check if it is running in the first place.
Some programs are in fact executable scripts. The system runs such a script by starting its interpreter and passing the name of the script to it as a command-line argument. This is reflected in the list of processes, which can confuse `rc.subr(8)`. You should additionally set `command_interpreter` to let `rc.subr(8)` know the actual name of the process if `$command` is a script.
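For instance, if the daemon were in fact a Perl script (the paths below are hypothetical), the relevant assignments might look as follows:

```bash
command="/usr/local/sbin/mumbled.pl"        # the executable script itself
command_interpreter="/usr/local/bin/perl"   # the name the process list will show
```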
For each `rc.d` script, there is an optional `rc.conf(5)` variable that takes precedence over `command`. Its name is constructed as follows: `${name}_program`, where `name` is the mandatory variable we discussed earlier. E.g., in this case it will be `mumbled_program`. It is `rc.subr(8)` that arranges `${name}_program` to override `command`.
Of course, `sh(1)` will permit you to set `${name}_program` from `rc.conf(5)` or the script itself even if `command` is unset. In that case, the special properties of `${name}_program` are lost, and it becomes an ordinary variable your script can use for its own purposes. However, the sole use of `${name}_program` is discouraged because using it together with `command` became an idiom of `rc.d` scripting.
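As a quick illustration of the override, a single rc.conf(5) line (with a made-up path) is enough to run an alternative binary without touching the script:

```bash
mumbled_program="/usr/local/sbin/mumbled-debug"  # takes precedence over $command
```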
For more detailed information on default methods, refer to `rc.subr(8)`.
6. Startup and shutdown of an advanced daemon
Let us add some meat onto the bones of the previous script and make it more complex and featureful. The default methods can do a good job for us, but we may need some of their aspects tweaked. Now we will learn how to tune the default methods to our needs.
```bash
#!/bin/sh

. /etc/rc.subr

name=mumbled
rcvar=mumbled_enable

command="/usr/sbin/${name}"
command_args="mock arguments > /dev/null 2>&1" ①

pidfile="/var/run/${name}.pid" ②

required_files="/etc/${name}.conf /usr/share/misc/${name}.rules" ③

sig_reload="USR1" ④

start_precmd="${name}_prestart" ⑤
stop_postcmd="echo Bye-bye" ⑥

extra_commands="reload plugh xyzzy" ⑦

plugh_cmd="mumbled_plugh" ⑧
xyzzy_cmd="echo 'Nothing happens.'"

mumbled_prestart()
{
	if checkyesno mumbled_smart; then ⑨
		rc_flags="-o smart ${rc_flags}" ⑩
	fi
	case "${mumbled_mode}" in
	foo)
		rc_flags="-frotz ${rc_flags}"
		;;
	bar)
		rc_flags="-baz ${rc_flags}"
		;;
	*)
		warn "Invalid value for mumbled_mode" ⑪
		return 1 ⑫
		;;
	esac
	run_rc_command xyzzy
	return 0
}

mumbled_plugh()
{
	echo 'A hollow voice says "plugh".'
}

load_rc_config $name
run_rc_command "$1"
```
Additional arguments to `$command` can be passed in `command_args`. They will be added to the command line after `$mumbled_flags`. Since the final command line is passed to `eval` for its actual execution, input and output redirections can be specified in `command_args`.

Never include dashed options, like `-X` or `--foo`, in `command_args`. The contents of `command_args` will appear at the end of the final command line, hence they are likely to follow arguments present in `${name}_flags`; but most commands will not recognize dashed options after ordinary arguments. A better way of passing additional options to `$command` is to add them to the beginning of `${name}_flags`. Another way is to modify `rc_flags` as shown later.
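To make the distinction concrete, here is a sketch (the flags and arguments themselves are invented):

```bash
mumbled_flags="-d -v"                      # dashed options go here, at the front
command_args="spool.dir > /dev/null 2>&1"  # ordinary arguments and redirections
```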
A good-mannered daemon should create a pidfile so that its process can be found more easily and reliably. The variable `pidfile`, if set, tells rc.subr(8) where it can find the pidfile for its default methods to use.

In fact, rc.subr(8) will also use the pidfile to see if the daemon is already running before starting it. This check can be skipped by using the `faststart` argument.
If the daemon cannot run unless certain files exist, just list them in `required_files`, and rc.subr(8) will check that those files do exist before starting the daemon. There also are `required_dirs` and `required_vars` for directories and environment variables, respectively. They all are described in detail in rc.subr(8).

The default method from rc.subr(8) can be forced to skip the prerequisite checks by using `forcestart` as the argument to the script.
We can customize signals to send to the daemon in case they differ from the well-known ones. In particular, `sig_reload` specifies the signal that makes the daemon reload its configuration; it is SIGHUP by default. Another signal is sent to stop the daemon process; the default is SIGTERM, but this can be changed by setting `sig_stop` appropriately.

The signal names should be specified to rc.subr(8) without the SIG prefix, as it is shown in the example. The FreeBSD version of kill(1) can recognize the SIG prefix, but the versions from other OS types may not.
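For example, if the daemon preferred SIGINT for a clean exit (an assumption made up for this sketch), one line would suffice:

```bash
sig_stop="INT"    # the stop method will send SIGINT instead of the default SIGTERM
```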
Performing additional tasks before or after the default methods is easy. For each command-argument supported by our script, we can define `argument_precmd` and `argument_postcmd`. These `sh(1)` commands are invoked before and after the respective method, as it is evident from their names.
Overriding a default method with a custom `argument_cmd` still does not prevent us from making use of `argument_precmd` or `argument_postcmd` if we need to. In particular, the former is good for checking custom, sophisticated conditions that should be met before performing the command itself. Using `argument_precmd` along with `argument_cmd` lets us logically separate the checks from the action.
Do not forget that you can cram any valid `sh(1)` expressions into the methods, pre-, and post-commands you define. Just invoking a function that makes the real job is a good style in most cases, but never let style limit your understanding of what is going on behind the curtain.
If we would like to implement custom arguments, which can also be thought of as commands to our script, we need to list them in `extra_commands` and provide methods to handle them.
The `reload` command is special. On the one hand, it has a preset method in `rc.subr(8)`. On the other hand, `reload` is not offered by default. The reason is that not all daemons use the same reload mechanism and some have nothing to reload at all. So we need to ask explicitly that the builtin functionality be provided. We can do so via `extra_commands`.
What do we get from the default method for `reload`? Quite often daemons reload their configuration upon reception of a signal - typically, SIGHUP. Therefore `rc.subr(8)` attempts to reload the daemon by sending a signal to it. The signal is preset to SIGHUP but can be customized via `sig_reload` if necessary.
Our script supports two non-standard commands, `plugh` and `xyzzy`. We saw them listed in `extra_commands`, and now it is time to provide methods for them. The method for `xyzzy` is just inlined while that for `plugh` is implemented as the `mumbled_plugh` function.
Non-standard commands are not invoked during startup or shutdown. Usually they are for the system admin's convenience. They can also be used from other subsystems, e.g., `devd(8)` if specified in `devd.conf(5)`.
The full list of available commands can be found in the usage line printed by `rc.subr(8)` when the script is invoked without arguments. For example, here is the usage line from the script under study:
```
# /etc/rc.d/mumbled
Usage: /etc/rc.d/mumbled [fast|force|one]
(start|stop|restart|rcvar|reload|plugh|xyzzy|status|poll)
```
A script can invoke its own standard or non-standard commands if needed. This may look similar to calling functions, but we know that commands and shell functions are not always the same thing. For instance, `xyzzy` is not implemented as a function here. In addition, there can be a pre-command and post-command, which should be invoked orderly. So the proper way for a script to run its own command is by means of `rc.subr(8)`, as shown in the example.
A handy function named `checkyesno` is provided by `rc.subr(8)`. It takes a variable name as its argument and returns a zero exit code if and only if the variable is set to `YES`, or `TRUE`, or `ON`, or `1`, case insensitive; a non-zero exit code is returned otherwise. In the latter case, the function tests the variable for being set to `NO`, `FALSE`, `OFF`, or `0`, case insensitive; it prints a warning message if the variable contains anything else, i.e., junk.
Keep in mind that for `sh(1)` a zero exit code means true and a non-zero exit code means false.
The `checkyesno` function takes a *variable name*. Do not pass the expanded *value* of a variable to it; it will not work as expected.
The following is the correct usage of `checkyesno`:
```bash
if checkyesno mumbled_enable; then
foo
fi
```
On the contrary, calling `checkyesno` as shown below will not work - at least not as expected:
```bash
if checkyesno "${mumbled_enable}"; then
foo
fi
```
We can affect the flags to be passed to `$command` by modifying `rc_flags` in `start_precmd`.
In certain cases we may need to emit an important message that should go to `syslog` as well. This can be done easily with the following `rc.subr(8)` functions: `debug`, `info`, `warn`, and `err`. The latter function also exits the script with the specified code.
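A minimal sketch of the helpers in action (the messages are invented):

```bash
warn "mumbled state file is stale; ignoring it"   # logged; the script continues
err 1 "mumbled configuration is unreadable"       # logged; the script exits with code 1
```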
The exit codes from methods and their pre-commands are not just ignored by default. If `argument_precmd` returns a non-zero exit code, the main method will not be performed. In turn, `argument_postcmd` will not be invoked unless the main method returns a zero exit code.
However, `rc.subr(8)` can be instructed from the command line to ignore those exit codes and invoke all commands anyway by prefixing an argument with `force`, as in `forcestart`.
7. Connecting a script to the rc.d framework
After a script has been written, it needs to be integrated into rc.d. The crucial step is to install the script in `/etc/rc.d` (for the base system) or `/usr/local/etc/rc.d` (for ports). Both `bsd.prog.mk` and `bsd.port.mk` provide convenient hooks for that, and usually you do not have to worry about the proper ownership and mode. System scripts should be installed from `src/libexec/rc/rc.d` through the
Makefile found there. Port scripts can be installed using `USE_RC_SUBR` as described in the Porter's Handbook.
However, we should consider beforehand the place of our script in the system startup sequence. The service handled by our script is likely to depend on other services. For instance, a network daemon cannot function without the network interfaces and routing up and running. Even if a service seems to demand nothing, it can hardly start before the basic filesystems have been checked and mounted.
We mentioned `rcorder(8)` already. Now it is time to have a close look at it. In a nutshell, `rcorder(8)` takes a set of files, examines their contents, and prints a dependency-ordered list of files from the set to `stdout`. The point is to keep dependency information inside the files so that each file can speak for itself only. A file can specify the following information:
- the names of the "conditions" (which means services to us) it provides;
- the names of the "conditions" it requires;
- the names of the "conditions" this file should run before;
- additional keywords that can be used to select a subset from the whole set of files (`rcorder(8)` can be instructed via options to include or omit the files having particular keywords listed.)
It is no surprise that `rcorder(8)` can handle only text files with a syntax close to that of `sh(1)`. That is, special lines understood by `rcorder(8)` look like `sh(1)` comments. The syntax of such special lines is rather rigid to simplify their processing. See `rcorder(8)` for details.
Besides using `rcorder(8)` special lines, a script can insist on its dependency upon another service by just starting it forcibly. This can be needed when the other service is optional and will not start by itself because the system admin has disabled it mistakenly in `rc.conf(5)`.
With this general knowledge in mind, let us consider the simple daemon script enhanced with dependency stuff:
```sh
#!/bin/sh

# PROVIDE: mumbled oldmumble ➊
# REQUIRE: DAEMON cleanvar frotz ➋
# BEFORE: LOGIN
# KEYWORD: nojail shutdown ➌

. /etc/rc.subr

name=mumbled
rcvar=mumbled_enable

command="/usr/sbin/${name}"
start_precmd="${name}_prestart"

mumbled_prestart()
{
	if ! checkyesno frotz_enable && \
	    ! /etc/rc.d/frotz forcestatus 1>/dev/null 2>&1; then
		force_depend frotz || return 1
	fi
	return 0
}

load_rc_config $name
run_rc_command "$1"
```
As before, detailed analysis follows:
➊ That line declares the names of "conditions" our script provides. Now other scripts can record a dependency on our script by those names.
Usually a script specifies a single condition provided. However, nothing prevents us from listing several conditions there, e.g., for compatibility reasons.
In any case, the name of the main, or the only, PROVIDE: condition should be the same as ${name}.
➋ So our script indicates which "conditions" provided by other scripts it depends on. According to the lines, our script asks rcorder(8) to put it after the script(s) providing DAEMON and cleanvar, but before that providing LOGIN.
The BEFORE: line should not be abused to work around an incomplete dependency list in the other script. The appropriate case for using BEFORE: is when the other script does not care about ours, but our script can do its task better if run before the other one. A typical real-life example is the network interfaces vs. the firewall: While the interfaces do not depend on the firewall in doing their job, the system security will benefit from the firewall being ready before there is any network traffic.
Besides conditions corresponding to a single service each, there are meta-conditions and their "placeholder" scripts used to ensure that certain groups of operations are performed before others. These are denoted by UPPERCASE names. Their list and purposes can be found in rc(8).
Keep in mind that putting a service name in the REQUIRE: line does not guarantee that the service will actually be running by the time our script starts. The required service may fail to start or just be disabled in rc.conf(5). Obviously, rcorder(8) cannot track such details, and rc(8) will not do that either. Consequently, the application started by our script should be able to cope with any required services being unavailable. In certain cases, we can help it as discussed below.
➌ As we remember from the above text, rcorder(8) keywords can be used to select or leave out some scripts. Namely, any rcorder(8) consumer can specify through -k and -s options which keywords are on the "keep list" and "skip list", respectively. From all the files to be dependency sorted, rcorder(8) will pick only those having a keyword from the keep list (unless empty) and not having a keyword from the skip list.
In FreeBSD, `rcorder(8)` is used by `/etc/rc` and `/etc/rc.shutdown`. These two scripts define the standard list of FreeBSD rc.d keywords and their meanings as follows:
**nojail**
The service is not for `jail(8)` environment. The automatic startup and shutdown procedures will ignore the script if inside a jail.
**nostart**
The service is to be started manually or not started at all. The automatic startup procedure will ignore the script. In conjunction with the shutdown keyword, this can be used to write scripts that do something only at system shutdown (a sketch follows this list).
**shutdown**
This keyword is to be listed *explicitly* if the service needs to be stopped before system shutdown.
When the system is going to shut down, `/etc/rc.shutdown` runs. It assumes that most rc.d scripts have nothing to do at that time. Therefore `/etc/rc.shutdown` selectively invokes rc.d scripts with the shutdown keyword, effectively ignoring the rest of the scripts. For even faster shutdown, `/etc/rc.shutdown` passes the faststop command to the scripts it runs so that they skip preliminary checks, e.g., the pidfile check. As dependent services should be stopped before their prerequisites, `/etc/rc.shutdown` runs the scripts in reverse dependency order. If writing a real rc.d script, you should consider whether it is relevant at system shutdown time. E.g., if your script does its work in response to the start command only, then you need not include this keyword. However, if your script manages a service, it is probably a good idea to stop it before the system proceeds to the final stage of its shutdown sequence described in `halt(8)`. In particular, a service should be stopped explicitly if it needs considerable time or special actions to shut down cleanly. A typical example of such a service is a database engine.
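As mentioned under nostart, the two keywords can be combined in a script that acts only at shutdown. Here is a minimal sketch; the service name and its task are invented for the example:

```sh
#!/bin/sh

# PROVIDE: flushcache
# REQUIRE: DAEMON
# KEYWORD: nostart shutdown

. /etc/rc.subr

name="flushcache"
start_cmd=":"                 # nothing to do at startup
stop_cmd="${name}_stop"

flushcache_stop()
{
	echo "Flushing application caches to disk."
}

load_rc_config $name
run_rc_command "$1"
```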
To begin with, `force_depend` should be used with much care. It is generally better to revise the hierarchy of configuration variables for your rc.d scripts if they are interdependent.
If you still cannot do without `force_depend`, the example offers an idiom of how to invoke it conditionally. In the example, our `mumbled` daemon requires that another one, `frotz`, be started in advance. However, `frotz` is optional, too; and `rcorder(8)` knows nothing about such details. Fortunately, our script has access to all `rc.conf(5)` variables. If `frotz_enable` is true, we hope for the best and rely on rc.d to have started `frotz`. Otherwise we forcibly check the status of `frotz`. Finally, we enforce our dependency on `frotz` if it is found to be not running. A warning message will be emitted by `force_depend` because it should be invoked only if a misconfiguration has been detected.
8. Giving more flexibility to an rc.d script
When invoked during startup or shutdown, an rc.d script is supposed to act on the entire subsystem it is responsible for. E.g., `/etc/rc.d/netif` should start or stop all network interfaces described by `rc.conf(5)`. Either task can be uniquely indicated by a single command argument such as `start` or `stop`. Between startup and shutdown, rc.d scripts help the admin to control the running system, and
it is when the need for more flexibility and precision arises. For instance, the admin may want to add the settings of a new network interface to `rc.conf(5)` and then to start it without interfering with the operation of the existing interfaces. Next time the admin may need to shut down a single network interface. In the spirit of the command line, the respective `rc.d` script calls for an extra argument, the interface name.
Fortunately, `rc.subr(8)` allows for passing any number of arguments to script's methods (within the system limits). Due to that, the changes in the script itself can be minimal.
How can `rc.subr(8)` gain access to the extra command-line arguments? Should it just grab them directly? Not by any means. Firstly, an `sh(1)` function has no access to the positional parameters of its caller, but `rc.subr(8)` is just a sack of such functions. Secondly, the good manner of `rc.d` dictates that it is for the main script to decide which arguments are to be passed to its methods.
So the approach adopted by `rc.subr(8)` is as follows: `run_rc_command` passes on all its arguments but the first one to the respective method verbatim. The first, omitted, argument is the name of the method itself: `start`, `stop`, etc. It will be shifted out by `run_rc_command`, so what is $2 in the original command line will be presented as $1 to the method, and so on.
To illustrate this opportunity, let us modify the primitive dummy script so that its messages depend on the additional arguments supplied. Here we go:
```bash
#!/bin/sh

. /etc/rc.subr

name="dummy"
start_cmd="${name}_start"
stop_cmd=":"
kiss_cmd="${name}_kiss"
extra_commands="kiss"

dummy_start()
{
	if [ $# -gt 0 ]; then
		echo "Greeting message: $*"
	else
		echo "Nothing started."
	fi
}

dummy_kiss()
{
	echo -n "A ghost gives you a kiss"
	if [ $# -gt 0 ]; then
		echo -n " and whispers: $*"
	fi
	case "$*" in
	*[.!?])
		echo
		;;
	*)
		echo .
		;;
	esac
}

load_rc_config $name
run_rc_command "$@"
```
What essential changes can we notice in the script?
- All arguments you type after `start` can end up as positional parameters to the respective method. We can use them in any way according to our task, skills, and fancy. In the current example, we just pass all of them to `echo(1)` as one string in the next line - note `$*` within the double quotes. Here is how the script can be invoked now:
```bash
# /etc/rc.d/dummy start
Nothing started.
# /etc/rc.d/dummy start Hello world!
Greeting message: Hello world!
```
- The same applies to any method our script provides, not only to a standard one. We have added a custom method named `kiss`, and it can take advantage of the extra arguments not less than `start` does. E.g.:
```bash
# /etc/rc.d/dummy kiss
A ghost gives you a kiss.
# /etc/rc.d/dummy kiss Once I was Etaoin Shrdlu...
A ghost gives you a kiss and whispers: Once I was Etaoin Shrdlu...
```
- If we want just to pass all extra arguments to any method, we can merely substitute "$@" for "$1" in the last line of our script, where we invoke `run_rc_command`.
> An `sh(1)` programmer ought to understand the subtle difference between `$*` and `$@` as the ways to designate all positional parameters. For its in-depth discussion, refer to a good handbook on `sh(1)` scripting. *Do not* use the expressions until you fully understand them because their misuse will result in buggy and insecure scripts.
> Currently `run_rc_command` may have a bug that prevents it from keeping the original boundaries between arguments. That is, arguments with embedded whitespace may not be processed correctly. The bug stems from `$*` misuse.
### 9. Further reading

The original article by Luke Mewburn offers a general overview of rc.d and a detailed rationale for its design decisions. It provides insight into the whole rc.d framework and its place in a modern BSD operating system.
The manual pages `rc(8)`, `rc.subr(8)`, and `rcorder(8)` document the rc.d components in great detail. You cannot fully use the rc.d power without studying the manual pages and referring to them while writing your own scripts.

The major source of working, real-life examples is `/etc/rc.d` in a live system. Its contents are easy and pleasant to read because most rough corners are hidden deep in `rc.subr(8)`. Keep in mind though that the `/etc/rc.d` scripts were not written by angels, so they might suffer from bugs and suboptimal design decisions. Now you can improve them!
Experience Report: Log Mining using Natural Language Processing and Application to Anomaly Detection
Christophe Bertero, Matthieu Roy, Carla Sauvanaud and Gilles Tredan
LAAS-CNRS, Université de Toulouse, CNRS, INSA, Toulouse, France
Email: firstname.name@laas.fr
Abstract—Event logging is a key source of information on a system's state. Reading logs provides insight into its activity, helps assess its correct state and allows problems to be diagnosed. However, reading does not scale: with the number of machines constantly rising and systems growing more complex, the task of auditing systems' health based on logfiles is becoming overwhelming for system administrators. This observation has led to many proposals automating the processing of logs. However, most of these proposals still require some human intervention, for instance by tagging logs, parsing the source files generating the logs, etc.
In this work, we target minimal human intervention for logfile processing and propose a new approach that considers logs as regular text (as opposed to related works that seek to exploit at best the little structure imposed by log formatting). This approach allows us to leverage modern techniques from natural language processing. More specifically, we first apply a word embedding technique based on Google's word2vec algorithm: logfiles' words are mapped to a high-dimensional metric space, which we then exploit as a feature space using standard classifiers. The resulting pipeline is very generic, computationally efficient, and requires very little intervention.
We validate our approach by seeking stress patterns on an experimental platform. Results show a strong predictive performance (≈ 90% accuracy) using three out-of-the-box classifiers.
Keywords—Anomaly detection, logfile, NLP, word2vec, machine learning, VNF
I. INTRODUCTION
Gathering feedback about computer systems states is a daunting task. To this aim, it is a common practice to have programs report on their internal state, for instance through journals and logfiles, that can be analyzed by system administrators.
However, as systems tend to grow in size, this traditional logging method does not scale well. Indeed, scattered software components and applications produce heterogeneous logfiles. For instance, logging methods such as the common syslog are extremely flexible in their syntax (see the RFC [7]). Also, different logfiles may gather distinct types of information. For instance, rule-based logging [4] traces the start and the termination of application functions, while syslog event logging collects system activity. Each of them tends to describe a partial view of the whole system. In particular, [3] shows that event logging, assertion checking, and rule-based logging are orthogonal sources for system monitoring. Moreover, each partial view of the system, even when using the same logging method (or protocol), may not use the same keywords to express normal or erroneous behaviors. This plethora of available logfiles burdens log summarization.
As a result, source code analysis and communication with application developers are necessary for troubleshooting or auditing systems [17]. Notwithstanding, such non-automatic processes are not acceptable in large computing systems, because troubleshooting for reconfiguration must be handled online. To address these challenges, a large number of studies proposed approaches to automate and scale up log analysis ([5], [8], [17], [23], [24]). However, most approaches require cumbersome log preprocessing, for instance manually tagging important events, or parsing the source code functions to assess the fixed and variable parts of log events.
The contribution of this paper is to propose a new approach departing from this research line and considering log mining as a natural language processing task.
This approach has two main consequences: i) we lose part of the context by under-exploiting the specific structure each log sentence follows according to a predefined pattern and, most importantly, ii) our approach is agnostic to the format of the logfiles. Thus, while considering sets of logfiles as languages, we gain the ability to use modern Natural Language Processing (NLP) methods. In other words, we trade accuracy for volume, preferring the ability to inaccurately process large volumes of logfiles over accurately processing some tediously preprocessed logs.
As such, the question we explore in this work is: “What can off-the-shelf Natural Language Processing algorithms bring to log mining?”. We more particularly focus on such questions as “is my system in state A or state B?”. The proposed approach is rather simple and brutal. Instead of precisely tracking the events related to a transition from A to B, we collect large amounts of log events related to systems in states A and B. We then transform the logs into multidimensional vectors of features (using NLP algorithms) and train a classifier on the resulting data. The resulting pipeline is a relatively standard big data application, where we target the realization of classifiers providing accurate information about the target system state. We believe this approach is specifically interesting due to the expensive expertise usually required to preprocess the logs.
We show in this paper, through a series of experiments, that with minimum setup effort and standard tools, it is possible
to automatically extract relevant information about a system state. More particularly, we use Google's word2vec algorithm [16] for log mining, an algorithm for learning high-quality vector representations of words. It has notably been used for NLP in some previous works, but not for the analysis of logfiles.
Through experiments, we illustrate the potential benefits of our approach, by providing answers to system administrators’ questions when data is massively available. As an illustrative example, we focus on the detection of stress related anomalies over a broad range of configurations. More specifically, we deployed on a virtual cloud environment a virtual network function running a panel of three applications, namely a proxy, a router, and a database, to which we applied a large variety of stress patterns by means of fault injection (high CPU and memory consumption, high number of disk accesses, increase of network latency and network packet losses). We show that by simply analyzing the results of NLP processed logfiles, it is possible to detect stressed behaviors with $\approx 90\%$ accuracy.
In the following, we first present in Section II the rationale of our log mining approach and describe our use of fault injection for validation purposes. Then, in Section III we define our case study, the experimental platform on which we deployed it, and the implementation of our approach on this platform. Section IV presents some promising experimental results. In Section V we discuss our results and analyze their threats to validity. Section VI describes related works regarding NLP and log mining for detection purposes. Finally, we conclude this paper in Section VII.
II. APPROACH
A. General approach overview
The approach proposed as the contribution of this paper is presented in Figure 1.
Consider a set of logfiles related to a given system. Each of these logfiles contains a varying amount of lines, each line consisting of one application of the system reporting an event. Each log event (line) is a list of words.
Since we consider logfiles as natural language, we analyze them using Natural Language Processing tools. As such, we first remove all non-alphanumeric characters (as required by word2vec) and replace them with spaces, namely with `sed 's/[^a-zA-Z0-9]/ /g'`.
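The paper performs this step with sed; as a rough Python equivalent (the function name and the sample log line below are illustrative, not taken from the paper's tooling):

```python
import re

def clean_log_line(line: str) -> str:
    # Replace every non-alphanumeric character with a space,
    # mirroring the sed command above.
    return re.sub(r"[^a-zA-Z0-9]", " ", line)

# Example on a hypothetical syslog line:
print(clean_log_line("Oct 3 12:00:01 bono sprout[812]: timeout"))
```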
Secondly, we use word2vec from [16], a popular embedding tool employed by Google to process natural language. In a nutshell, word2vec produces a mapping from the set of words of a text corpus (a set of logfiles in our case) to a Euclidean space, say $T$; in the case of a 20-dimensional space, $T \subset \mathbb{R}^{20}$. Thus, each word of an event gets assigned coordinates in a vector space. The enjoyable property of word2vec is its ability to produce meaningful embeddings, where similar words end up close, whereas unrelated words end up far away in the embedding space.
Once each word has been mapped to the embedding space $T$, we define the position of a log event as the barycenter of its words. Following a similar scheme, once all log events from a given logfile have been mapped to points, we define
the position of this logfile as the barycenter of the positions of its log events. Hence, at the end of the process, each logfile is mapped to a single point in $T$. This drastic compression has one major interest: it produces a compact and useful input for traditional classifiers. If $X$ denotes the set of all possible logfiles, the mapping can be represented as a function:
$$p : X \rightarrow T, \qquad x \mapsto p(x).$$
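A minimal sketch of the mapping $p$, assuming the gensim implementation of word2vec rather than the original Google tool used in the paper (the toy corpus and all names are illustrative):

```python
import numpy as np
from gensim.models import Word2Vec  # gensim >= 4 assumed

# Each log line is a list of cleaned, whitespace-split words.
corpus = [["cpu", "overload", "detected"],
          ["memory", "usage", "high"],
          ["cpu", "usage", "high"]]

# Embed words into a 20-dimensional space T (cbow mode: sg=0).
model = Word2Vec(corpus, vector_size=20, sg=0, min_count=1)

def line_position(words):
    # Position of a log event: barycenter of its words in T.
    vecs = [model.wv[w] for w in words if w in model.wv]
    return np.mean(vecs, axis=0)

def logfile_position(lines):
    # Position of a logfile: barycenter of its lines' positions.
    return np.mean([line_position(l) for l in lines], axis=0)

print(logfile_position(corpus).shape)  # (20,)
```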
Now, assume that one has access to a large set $X$ of observations (logfiles) on the system, corresponding to two states that we would like to characterize, say $A$ and $\bar{A}$. Let $X|_A$ and $X|_{\bar{A}}$ be the corresponding logfile sets. By the process described above, every observation $x \in X = X|_A \cup X|_{\bar{A}}$ can be assigned a coordinate $p(x) \in T$.
In a third step, we train a classifier, named $\hat{f}$ hereafter, on $\{p(x) \mid x \in X\}$. A typical such classifier $\hat{f}$ is an approximation of the ideal separation function:
$$f : T \rightarrow [0, 1], \qquad p(y) \mapsto P(A \mid y).$$
The training of a classifier requires an available set of labeled data; the labels may be, for instance, normal and anomalous. In case labeled data is not available, one can generate it by monitoring a system while it experiences normal and anomalous behaviors. Since anomalous behaviors are undesired events and, as such, usually not frequent in recent systems, they need to be synthesized using techniques such as fault injection. In this paper, we generate sets of normal and anomalous behaviors in a controlled manner, using fault injection techniques for all anomalous behaviors, as represented in Figure 1.
Once the training is finished, the resulting classifier is used to provide, for any new production logfile $x$, an inferred state (anomalous or not) $\hat{f}(p(x))$ that we claim is a good approximation of the actual stress status of the system, i.e., $P(A|x) \approx \hat{f}(p(x))$. The output is actually expressed as a probability, so we need to set a threshold above which a system is categorized as stressed, say $1/2$ as in Figure 1. If $x$ contains words never encountered during training, those are simply ignored.
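Sketched in Python, this inference step might look as follows, assuming a scikit-learn-style classifier `clf` and the logfile-to-point mapping `p` from above (both hypothetical names):

```python
def infer_state(logfile, clf, p, threshold=0.5):
    # Probability that the logfile comes from a stressed system;
    # class 1 is assumed to be the "stressed" label.
    proba = clf.predict_proba([p(logfile)])[0][1]
    return proba >= threshold  # categorize as stressed or not
```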
III. CASE STUDY AND EXPERIMENTAL PLATFORM
A. Case study
We hereby present our case study on a virtual network function (VNF) called Clearwater\(^1\), as well as the workload generator used during our experiments to simulate actual users of this target system. This case study was used in our previous work [19] for anomaly detection based on monitoring data.

It constitutes a meaningful case study in that it deploys several components with different roles (e.g., router, proxy and database). While we apply our approach with no specific configuration nor a priori knowledge of the implementation of each component, we consider that our approach has a good chance of generalizing to various case studies.

1) Description: The service is an open source VNF named Clearwater. It provides voice and video calls based on the Session Initiation Protocol (SIP), and messaging applications. Clearwater encompasses several software components, and we particularly focus our work on Bono, Sprout, and Homestead, shown in Figure 2.
Bono is the SIP proxy implementing the Proxy-Call/Session Control Functions. It handles users’ requests and routes them to Sprout. It also performs Network Address Translation traversal mechanisms.
Sprout is the IMS SIP router, receiving requests from Bono and routing them to the adequate endpoints. It implements some Serving-CSCF and Interrogating-CSCF functions and gets the required user profiles and authentication data from Homestead. Sprout can also call application servers, and actually contains itself a multimedia telephony (MMTel) application server, whose data is stored in another Clearwater component not presented in this work (when calls are configured to use its services).

Homestead is an HTTP RESTful server. It either stores and masters Home Subscriber Server (HSS) data (i.e., information about subscribed services and locations) in a Cassandra database, or pulls data from another IMS-compliant HSS.
Bono, Sprout, and Homestead work together to control the sessions initiated by users and handle the entire CSCF. Our case study encompasses these three components, each one being deployed on a dedicated virtual machine (VM) of our virtualized experimental platform (see Section III-B).
Fig. 2: Clearwater deployment.
2) Workload: IMS workloads can be emulated by means of the SIPP benchmark\(^2\). The benchmark contains a workload that can be configured with a number of calls per second to be sent to the IMS, and a scenario. The execution of a scenario corresponds to a call. A scenario is described in terms of SIP transactions in XML. A SIP transaction corresponds to a SIP message to be sent and an expected SIP response message. A call fails when a transaction fails. A transaction may fail for two reasons: either a message is not received within a fixed time window (i.e., the timeout), or an unexpected message is received. Unexpected messages are identified by the HTTP error codes 500 (Internal Server Error), 503 (Service Unavailable) and 403 (Forbidden).
The scenario run for our experiments simulates a standard call between two users and encompasses the standard SIP REGISTER, INVITE, UPDATE, and BYE messages. The scenario is available online\(^3\). Timeouts are set to 10 sec as in similar experimental campaigns [2].
3) Fault injection for training and validation: Fault injection is used in our study for collecting logfiles representing both normal and stressed behaviors of a target system, in order to provide them as inputs for the training and validation of the classifiers. We emulate errors by means of injection tools that stress the system. These tools were used in our previous work [19].
We call an experimental campaign the orchestration of several executions of the target system, with or without error emulation. In the following we present the errors that our injection tools emulate and describe the execution of an experimental campaign.
Error emulation. We emulate the following five types of errors, which we will be referring to as CPU, memory, disk, network packet loss, and network latency errors respectively:
1. high CPU consumption,
2. misuse of memory, i.e., an increase of memory consumption,
3. an abnormal number of disk accesses, i.e., a large increase of disk I/O accesses and synchronizations,
4. network packet loss,
5. network latency increase.
CPU errors. Abnormal CPU consumptions may arise from programs encountering impossible termination conditions leading to infinite loops, busy waits or deadlocks of competing actions, which are common issues in multiprocessing and distributed systems.
\(^{1}\)http://www.projectclearwater.org/about-clearwater/
\(^{2}\)http://sipp.sourceforge.net/index.html
\(^{3}\)https://homepages.laas.fr/csauvana/sipp_scenario/issre2016_sipp_scenario.xml
Memory errors. Abnormal memory usages are common and happen when allocated chunks of memory are not freed after their use. Accumulations of unfreed memory may lead to memory shortage and system failures.
Disk errors. A high number of disk accesses, or an increase of disk accesses over a short period of time, emulate disks whose accesses often fail and lead to an increase in disk access retries. It may also result from a program stuck in an infinite loop of data writing.
Network packet loss and latency errors. Such errors may arise from network interfaces of the target system or from the network interconnection of the virtualized infrastructure hosting the system. We emulate packet losses and latency increases. Packet losses may arise from undersized buffers, wrong routing policies or even firewall misconfigurations. Latency errors may originate from queuing or processing delays of packets on gateways or at the target system level.
From the definition of these error types, an important experimental parameter is the injection intensity, i.e., the expected impact magnitude of the different injections from the users' point of view. In our study, we present results for the detection of errors with high intensities. In other words, the experimental campaigns perform injections that strongly affect the target system's capability to answer user requests.
Table I presents the intensity levels that we calibrated for our Clearwater case study.
<table>
<thead>
<tr>
<th>Error type</th>
<th>Unit</th>
<th>Intensity level</th>
</tr>
</thead>
<tbody>
<tr>
<td>CPU</td>
<td>%</td>
<td>90</td>
</tr>
<tr>
<td>Memory</td>
<td>%</td>
<td>97</td>
</tr>
<tr>
<td>Disk</td>
<td>#process</td>
<td>50</td>
</tr>
<tr>
<td>Network packet loss</td>
<td>%</td>
<td>8.0</td>
</tr>
<tr>
<td>Network latency</td>
<td>ms</td>
<td>80</td>
</tr>
</tbody>
</table>
TABLE I: Injection intensity levels.
Regarding the memory, disk and CPU injections, the intensity values of errors are constrained by the capacity of the operating systems (OSs) on which the applications of our case study are deployed. In other words, the intensity levels correspond to the maximum resource consumption allowed by the OS before it kills the execution of the injection agent.
Considering the remaining types of injections, the corresponding intensity levels are set so as to lead to around 99% of requests being unsuccessfully answered when the injection is applied in at least one VM. The rate of unsuccessfully answered requests can be obtained from the workload logfiles.
Experimental campaigns. An experimental campaign is conducted using a customizable main script that launches either normal or stressed executions of the target system. An execution, be it normal or anomalous, produces one logfile for each VM of our target system.
We define a campaign to run as many normal executions as stressed executions. The number of stressed executions is configured so as to represent all combinations of injections (i.e., the injection of each error type, in each VM).
When running an anomalous execution, the configured injection starts after \( t \) seconds from the target system boot time, where \( t \) is randomly selected in a preconfigured interval. This process adds randomization to the set of collected logfiles, a prerequisite for the generalization of our results.
Additionally, consecutive executions of a campaign are separated by the reboot of all VMs of the target system and the workload in order to be sure to restart from a clean and unpolluted state.
As a result, the parameters of an experimental campaign are as follows: (i) the target VMs, listed in \( l_{\text{vm}} \); (ii) the error types, listed in \( l_{\text{type}} \); (iii) an injection duration, set in \( \text{inject\_duration} \); (iv) a clean run duration, set in \( \text{clean\_run\_duration} \); and (v) an interval of values, set in \( \text{interval} \), defining how long after a reboot an injection can start.
Moreover, a campaign is executed as follows. Each error type is injected in a first VM, then in a second VM, etc., with reboots of the target system and the workload before each new execution. The stressed executions are orchestrated as explained in Algorithm 1. Then the same number of normal executions is performed.
**Algorithm 1** Orchestration of stressed executions of the target system in an experimental campaign
```
Input: l_vm, l_type, inject_duration, interval, clean_run_duration

start_workload()                          # clean run
for vm in l_vm:                           # runs with injections
    for err in l_type:
        start_workload()
        rand_time = random_int(interval)
        sleep(rand_time)
        inject = Injection(err, inject_duration)
        inject_in_vm(vm, inject)
        stop_workload()
        reboot_vm()
```
B. Experimental platform
In the following, we first present the platform on which we run experiments. Then we describe the implementation required to carry out our experiments, namely the injection agents, the experimental campaign parameters, and the collection of logfiles.
1) Platform: We deployed our target system on a virtualized platform. The platform is composed of a cluster including two hypervisors and several VMs. Four VMs are deployed for our target system: one VM runs the workload and the other three respectively host the components Bono, Sprout and Homestead of Clearwater. The workload VM also has the means to control the experimental campaign launch. Two other VMs are respectively used to store logfiles collected from the target system and to analyze the stored logfiles. The deployment of the VMs is illustrated in Figure 3.
The platform is a VMware vSphere 5.1 private cloud composed of two Dell PowerEdge R620 servers with Intel Xeon E5-2660 CPUs at 2.20 GHz and 64 GB of memory. Each server has a VMFS storage. Each VM deployed for the target system implementation has 2 CPUs, 10 GB of memory and a 10 GB disk, and runs the Ubuntu OS. VMs are connected through a 100 Mbps network.
2) Fault injection: Injections in the target system are carried out by injection agents installed in these VMs. There is one injection agent for each error type in each VM of a target system. Agents are run and stopped through an SSH connection orchestrated by the campaign main script. They emulate errors presented in Section III-A3 by means of a software implementation.
CPU and disk errors are emulated using the stress test tool stress-ng. CPU injections run 2 processes (there are 2 cores in each VM) executing all the stress methods listed in the tool documentation. The CPU load percentage is set according to the intensity level of the injection.
Disk injections start several workers writing 50 MB and 50 workers continuously calling the sync command, with an ionice level of 0. The number of writing workers is set according to the intensity level of the injection.
Memory injections are run by means of a Python script reserving memory space while continuously checking that the amount of memory reserved by the script corresponds to the amount set by the intensity level of the injection.
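The paper does not publish that script, but a stress agent of this kind could be sketched as follows (a hypothetical re-implementation; the page size and check period are assumptions):

```python
import time

def memory_stress(target_bytes: int, check_period: float = 1.0):
    # Reserve the requested amount of memory and touch each page
    # so the reservation is actually backed by physical memory.
    chunk = bytearray(target_bytes)
    for i in range(0, target_bytes, 4096):
        chunk[i] = 1
    # Keep checking that the reservation still matches the target
    # set by the injection intensity level; runs until stopped.
    while True:
        assert len(chunk) == target_bytes
        time.sleep(check_period)
```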
Finally, we use the Linux kernel tools iptables and tc for the injection of network latencies on the POSTROUTING chain, and iptables on the INPUT chain for the injection of packet losses. All network protocols are targeted.
3) Experimental campaign parameters: An experimental campaign corresponds to the execution of a customizable main script that starts the workload of our target system and either performs clean runs of the target system or performs runs with injections in the target system VMs.
The parameters of the experimental campaigns we run are as follows. The injection duration is calibrated so as to affect several instances of workload executions (an execution lasts less than 1 sec). We calibrated the injection duration to be 10 min long in order to collect around 5000 lines of logfiles for each clean run and injection. Also, we calibrated the clean run duration to be 30 min. Finally, we calibrated the start of injections to be randomly selected in the interval from 1 to 10 min. This interval allows the VMs to stabilize after a reboot.

Our experimental campaign parameters are summarized in Table II.
<table>
<thead>
<tr>
<th>Campaign parameters</th>
</tr>
</thead>
<tbody>
<tr>
<td>l_vm = {Bono, Sprout, Homestead}</td>
</tr>
<tr>
<td>l_type = {CPU, memory, disk, latency, packet_loss}</td>
</tr>
<tr>
<td>injection_duration = 10 min</td>
</tr>
<tr>
<td>clean_run_duration = 10 min</td>
</tr>
<tr>
<td>interval = [1 : 10] min</td>
</tr>
</tbody>
</table>
TABLE II: Injection campaign parameters of the four experiments.
4) Logfiles collection: The logfiles that we use in this study are generated by the Linux-based Ubuntu OS using syslog, the standard tool for message logging. Events are logged with a predefined pattern containing, in that order, the date of the event, the hostname of the equipment delivering the event, the process delivering the event, a priority level, the id of the delivering process and, finally, the message containing free-formatted information. In particular, no performance metrics of the system are logged. An example of syslog events is provided in Figure 4.
Results of previous studies [3] show that syslog event logging is the most suitable method in this context, although a combination of several methods increases the failure coverage. The syslog facility has the advantage of gathering events from several applications.
During experimental campaigns, logfiles are collected by means of agents (they are represented by orange squares in Figure 3) and stored in a database for later analysis.
IV. RESULTS
In this section, we quantitatively study the effectiveness of the presented approach by presenting the analysis results over 660 logfiles. After briefly introducing the considered metrics, we will detail the obtained results.
The main research question we seek to answer is: using only syslog files as input, how accurately can our algorithm distinguish stressed from non-stressed systems? The secondary questions are i) how sensitive are the results to the parameters used to calibrate the models of our approach? and ii) how quickly can our approach issue a decision on a system state?
A. Materials and Metrics
Using the testbed presented in Section III-B we generate a set of 660 logfiles that will constitute the basis of our models training. Exactly half of these (330) originate from normal unstressed system executions. The other half captures systems with injected faults. More precisely, we ran 22 replications for each of the 5 injection campaigns over each of the 3 target VMs of our case study, for a total of \((22 \times 3 \times 5) = 330\) stressed logfiles.
Word2Vec training: To establish the word2vec training set, we use the concatenation of all 660 logfiles, from which we removed all non-alphanumeric characters.
word2vec, originally designed for NLP tasks, can be tuned with a number of different options. The most important parameter is the embedding space dimension \(\text{dim}(T)\); its impact is detailed in Section IV-B2. The other parameters mostly allow setting up filters in order to speed up the computation; we deactivated all of them to keep the maximum amount of information available to the classifier. Finally, of the two methods proposed in the implementation of word2vec, namely skip-gram and cbow (defining whether the source context words should be predicted from the target words or the opposite), we chose cbow because of its simplicity, in order to provide an "as-simple-as-possible" solution.
Given the relatively small size of our text corpus (compared to all the English texts available on the web, namely word2vec's original use case) and the well-known efficiency of the word2vec implementation, the overall computation is tractable on a standard computer (see Section IV-B3). The philosophy behind our implementation choices is therefore the following: keep it simple, and keep the maximum amount of information.
From word coordinates to logfile coordinates: The output of word2vec is a file containing the coordinates of the 293k distinct words of our training corpus in \(T\). To transform logfiles into coordinates in \(T\), we explored two standard strategies:
bary: In the barycenter approach, we first compute the position of each line of a logfile, defined as the average position of all the words it contains. Then, the position of the file is defined as the average over all its lines:
\[
p(f) = \frac{1}{|f|} \sum_{l \in f} \frac{1}{|l|} \sum_{w \in l} p(w).
\]
tfidf: Term frequency–inverse document frequency is a standard metric of information retrieval. Compared to the barycenter approach, words are weighted by their frequency in the document; that is, a frequent (common) word will proportionally have less weight than a rare word when computing the average position of a logfile. We relied on the standard scikit-learn\(^6\) implementation of the function.
The output of this step is a \(660 \times \text{dim}(T)\) matrix, each row decorated with its corresponding target label (stressed or unstressed system).
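A sketch of the tf-idf weighted variant, assuming scikit-learn's TfidfVectorizer and a gensim-style `word_vectors` mapping (all names are illustrative, not the authors' code):

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def tfidf_positions(docs, word_vectors, dim):
    # docs: one whitespace-joined string per logfile.
    vec = TfidfVectorizer()
    weights = vec.fit_transform(docs)        # sparse (n_docs, n_words)
    vocab = vec.get_feature_names_out()
    points = np.zeros((len(docs), dim))
    for d in range(len(docs)):
        row = weights.getrow(d).tocoo()
        total = 0.0
        for j, w in zip(row.col, row.data):
            if vocab[j] in word_vectors:
                points[d] += w * word_vectors[vocab[j]]
                total += w
        if total > 0:
            points[d] /= total               # tf-idf weighted barycenter
    return points
```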
\(^6\)http://scikit-learn.org/
Classifiers: Binary classifiers are amongst the most common and best understood classifiers in machine learning. We restricted our study to three simple, state-of-the-art approaches: Naive Bayes, Random Forests and Neural Networks, relying on the scikit-learn implementations RandomForestClassifier, MLPClassifier and GaussianNB. All these algorithms belong to the class of supervised algorithms; in other words, they require labeled training data, although we could have used unsupervised approaches such as the ones tested in [8], i.e., Principal Component Analysis and invariant mining.
Again, the philosophy of our approach is to refrain from fine-tuning those implementations and to assess the global strategy as a whole. We therefore used the default parameters of all these algorithms.
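With scikit-learn, instantiating the three classifiers with default parameters is a one-liner each:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

# Default parameters throughout, in line with the "no fine-tuning" philosophy.
classifiers = {
    "Naive Bayes": GaussianNB(),
    "Random Forest": RandomForestClassifier(),
    "Neural Network": MLPClassifier(),
}
```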
Classifier Assessment: To assess the classification accuracy, we used the standard 10-fold cross-validation approach. We first randomly divided the training set into 10 equal-sized chunks. Each possible group of 9 chunks was used to train our classifier, while the remaining chunk was used as a test set.
Let \(\{X_i\}_{1 \leq i \leq 10}\) be a partitioning of \(X\) into 10 chunks. Let \(X_j\) be the tested chunk, and let \(T_j\) (resp. \(F_j\)) be the subset of stressed (resp. unstressed) logs of \(X_j\). The set of true positives \(TP_j\) for \(X_j\) is defined as:
\[
TP_j = \{x \in X_j \text{ s.t. } \hat{f}_j(x) \geq 1/2 \land x \in T_j\}.
\]
Logs that belong to stressed machines and to which the classifier \(\hat{f}_j\) (trained using \(\cup_{i \neq j} X_i\)) assigned a probability greater than 1/2 of being stressed are true positives for \(X_j\). Similarly, the set of false positives \(FP_j\) for \(X_j\) (logs belonging to unstressed machines but detected as more likely stressed) is defined as:
\[
FP_j = \{x \in X_j \text{ s.t. } \hat{f}_j(x) \geq 1/2 \land x \in F_j\}.
\]
Notice that the true negative and false negative sets are symmetrically defined.
To get a closer look at \(\hat{f}_j\), one can use Receiver Operating Characteristics (ROC). That is, let \(s \in [0,1]\) be a "safety level" one wants to apply to \(\hat{f}\)-based decisions. Let \(X_j^s = \{x \in X_j, \hat{f}_j(x) \geq s\}\) be the subset of \(X_j\) containing only the logs detected as stressed with probability at least \(s\). For each value of \(s\), it is thus possible to define a true positive rate \(TPR_s = |X_j^s \cap T_j|/|T_j|\) and a false positive rate \(FPR_s = |X_j^s \cap F_j|/|F_j|\). The graphical representation of the obtained \((TPR_s, FPR_s)\) couples provides a precise visual description of \(f\)'s performance, as in Figure 5, presented shortly hereafter.
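The 10-fold assessment can be sketched with scikit-learn, where `X` is the 660 × dim(T) matrix of logfile positions and `y` the stressed/unstressed labels (both hypothetical names):

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

def ten_fold_auc(clf, X, y):
    aucs = []
    for train, test in StratifiedKFold(n_splits=10, shuffle=True).split(X, y):
        clf.fit(X[train], y[train])
        # Probability of the "stressed" class, used to trace the ROC.
        scores = clf.predict_proba(X[test])[:, 1]
        aucs.append(roc_auc_score(y[test], scores))
    return float(np.mean(aucs))
```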
B. Results analysis
In the following, after exploring the detailed results obtained using a typical trained classifier, we study the impact of the embedding host space dimension. We then study the runtime overhead of our approach.
1) Accuracy: Figure 5 presents the ROCs obtained on a typical configuration. More precisely, in this setup, we used \(\text{dim}(T) = 20\) and explored various aggregation/classifier configurations. The results are very good, with Neural Network
and Random Forest exhibiting strong classification accuracy (AUC > 95%). The aggregation technique (i.e., tf-idf or barycenter) has little impact. Naive Bayes performs considerably better than random (77% and 81% AUC for tf-idf and barycenter, respectively), but is visibly less precise than the other two classifiers. These very good results confirm the soundness of the approach.
One can have a more detailed look at the origin of misclassifications. Table III exhibits the confusion matrix of the Neural Network (using barycenter and \( \text{dim}(T) = 20 \)). Although around 90% of the targets get correctly categorized, one can see that the errors lean slightly towards false positives (that is, an unstressed system is wrongfully categorized as stressed). Although this is not the purpose of this study, it is possible to exploit this imbalance for an overall better classification accuracy (for instance by raising the 1/2 threshold over which a system is categorized as stressed). The stress patterns are not detected very homogeneously, with latency stress being detected 7 times more reliably than CPU stress. However, because of the accuracy of the considered classifier, these results only concern a small number of events, and therefore have a low statistical power. Table IV presents the misclassified entries by application: all three applications (namely Bono, Sprout and Homestead) yield similar classification accuracies.
**TABLE III:** Confusion matrix for the Neural Network classifier, using \( \text{dim}(T) = 20 \), and barycenter: detailed by stress type
<table>
<thead>
<tr>
<th>Stress Type</th>
<th>Detected as Stressed (True)</th>
<th>Detected as Unstressed (False)</th>
</tr>
</thead>
<tbody>
<tr>
<td>No Stress</td>
<td>0.115</td>
<td>0.885</td>
</tr>
<tr>
<td>Packet loss</td>
<td>0.939</td>
<td>0.061</td>
</tr>
<tr>
<td>Latency</td>
<td>0.985</td>
<td>0.015</td>
</tr>
<tr>
<td>Memory</td>
<td>0.989</td>
<td>0.014</td>
</tr>
<tr>
<td>Disk</td>
<td>0.970</td>
<td>0.030</td>
</tr>
<tr>
<td>CPU</td>
<td>0.893</td>
<td>0.106</td>
</tr>
</tbody>
</table>
2) Parameters sensitivity: We here focus on two choices of importance: the dimension of the embedding space \( \text{dim}(T) \), and the classifier algorithm. To compare our classifiers, we use the Area Under Curve (AUC) measure. In a nutshell, it measures the area under the ROC of a classifier: an AUC of 1 denotes a perfect classification, while an AUC of 0 denotes a worse-than-random prediction. It is also commonly presented, given a random positive (stressed) and a random negative (unstressed) example, as the probability for the classifier to rank the negative example below (that is, as less stressed than) the positive example. The AUC is known to summarize ROC curves well [1].
Figure 6 provides the AUC measures for our 3 considered classifiers for various embedding space dimensions. As expected, increasing the number of dimensions increases the classification accuracy: more information helps. This increase is however very limited: apart from the Neural Network, where increasing the dimension from 5 to 20 has a visible impact, classifier accuracies all stay stable for \( \text{dim}(T) > 20 \). This is good news, as such a parameter can be hard to tune a priori.
TABLE IV: Confusion matrix for the Neural Network classifier, using \( \text{dim}(T) = 20 \), and barycenter: detailed by application
<table>
<thead>
<tr>
<th>Target Machine</th>
<th>Requests</th>
<th>Number of misclassifications</th>
<th>Success Rate (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Bono</td>
<td>220</td>
<td>19</td>
<td>91.4</td>
</tr>
<tr>
<td>Sprout</td>
<td>220</td>
<td>17</td>
<td>92.3</td>
</tr>
<tr>
<td>Homestead</td>
<td>220</td>
<td>20</td>
<td>90.9</td>
</tr>
</tbody>
</table>
More generally, this figure confirms the previous observations: classification is very accurate, especially using Neural Network and Random Forest, with AUCs consistently scoring above 0.95.
3) Timing performance: When selecting a classifier, the expected classification accuracy is the most important criterion. However, in operational contexts, another crucial criterion is the computational complexity of both training and prediction.
To provide some insights, we recorded the wall-clock times of the training of the machine learning models (Figure 7) and of individual predictions by these models (Figure 8). These measurements were performed on a classic MacBook Pro with 16 GB of RAM and a quad-core Intel i7.
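Wall-clock measurements of this kind can be reproduced with a small helper like the following (a sketch, not the authors' harness; `clf`, `X_train`, `y_train` and `X_test` are assumed names):

```python
import time

def timed(fn, *args, **kwargs):
    # Return the function result together with its wall-clock duration.
    t0 = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - t0

# e.g. _, train_seconds = timed(clf.fit, X_train, y_train)
# e.g. _, predict_seconds = timed(clf.predict_proba, X_test)
```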
Interestingly, these figures provide a new perspective on our classifiers. Results confirm the reputation of each of those models: Naive Bayes is very simple, it is quickly trained and provides fast answers. Neural Network is a considerably more complex model whose training requires significantly more time. However, once trained it is able to answer reasonably fast. Contrariwise, Random Forest is quickly trained but requires considerably more time to issue predictions. Issuing a prediction requires on average 66 ms (resp. 5 ms and 11 ms) for Random Forest (resp. Naive Bayes and Neural Network).
Not surprisingly, increasing \( \text{dim}(T) \) comes with a computational cost (as it increases the number of features on which each model is trained), but since Section IV-B1 shows that \( \text{dim}(T) = 20 \) is already sufficient to obtain accurate results, we conclude that this approach is computationally tractable. The most prominent decision is the choice of the classifier: although the simplest possible classifier (Naive Bayes) provides cheap and reasonable answers, more efficient classifiers like Random Forest or Neural Network will cost a bit more, either at training time, or at prediction time.
To conclude, this results section explored the performance of three state-of-the-art classifiers exploiting the log positions. These classifiers exhibit strong performance at a reasonable cost. The most important parameter, the dimension of the host space \( \text{dim}(T) \), is not very sensitive: values ranging from 20 to 200 roughly deliver the same performance. Although many parameters could be precisely tuned to optimize the classifiers, we believe these good results, obtained using mostly default values of COTS tools, already validate the soundness of our approach. More precisely, they show the extremely powerful effect of the word2vec embedding applied to logs: it allows us to summarize each logfile as a single point in \( T \) while keeping enough information to allow an efficient classification.
V. DISCUSSION
Our approach leaves one common question of all machine learning approaches intact: how general are the learned models? In other words, are the classifiers built in this context able to provide accurate answers in different contexts, application environments, or under different injection campaigns? Although this question is definitely of interest, we argue its scope goes well beyond this paper. Philosophically, this study shows that it is easy to train efficient classifiers; but, informally, a classifier is only as good as its training data. The availability of labeled training data can clearly limit the applicability of our approach. The advantage of fault injection is to gather relevant labeled datasets in a short time period. Although it enables us to evaluate our approach in a straightforward manner, this implementation can be cumbersome. However, while we rely on fault injection to gather datasets, other sources exist: user-based feedback, crowdsourced datasets, and crash reports of large-scale deployments.
In our previous work [19] we analyzed monitoring counters, such as CPU consumption or the number of disk accesses, for anomaly detection. Results from counter-based detection showed a good predictive performance that is not fully aligned with the results of this study: for instance, latency errors were significantly harder to detect. In this study, we show that by solely mining syslog files we can detect anomalies with high accuracy for all types of anomalies. Consequently, we believe our approach is largely promising. As future work, we plan to study a hybrid approach leveraging both logging and counter-based data in order to further evaluate their potential complementarity.
Finally, the results presented in this paper show that our approach detects with the same accuracy the stresses injected in each type of application of our case study (i.e., proxy, router and database). In other words, the analysis of system-level logs such as syslog is an efficient way to summarize application behaviors for stress detection, regardless of the type of application. We believe, however, that syslog events are not enough to derive application dataflows that may allow detecting other types of anomalies or, more importantly for administrators, diagnosing the origin of an anomaly. Consequently, we need to explore in future work other types of logs, notably the ones generated by our case study application.
VI. RELATED WORK
In this study, we use a word2vec-based method for log mining, validated on the application of detecting stressed behaviors in computing systems. word2vec is a method for learning high-quality vector representations of words; it has been used for NLP in some previous works, but not for the analysis of logfiles. In comparison, our previous work [19] focuses on anomaly detection based on monitoring data collected by means of a specific software agent, deployed beforehand on target machines and providing numerical metrics on the system behavior. Here we exploit the default system-produced textual logs to predict stress. Besides the deep technical differences, our approach allows different use cases, like the post-mortem analysis of the behavior of the several processes executed on the targeted systems.
Consequently, in the following we present separately several works related to NLP and other works related to logfile analysis for detection purposes.
NLP applications. In the literature, most of the NLP algorithms are used for document processing [26] to isolate references of a given subject in a document and detect the sentiments of the writer, or to exploit tweets [11] to detect cyber-attacks such as distributed denial of service.
To the best of our knowledge, relatively few works exploit NLP for purposes other than document analysis. We provide here a quick summary of these non-traditional uses of NLP. In [15], the authors use an NLP technique called Latent Semantic Indexing to identify source code documents that match a user query expressed in natural language. They use the same technique in [14] to detect similar pieces of code (i.e., duplicated functions) in software systems code. In addition, Latent Dirichlet Allocation is used for a similar purpose in [20]. NLP is also applied on network packet payloads for network intrusion detection in [18]. In [10], customers' accesses to business URLs are analyzed using a word2vec-based method to propose better services to customers. Finally, NLP is also used to detect design and requirement debts [13] from the comments of ten open source projects.
Log mining for detection purposes. Although some works propose new methods to generate relevant log events, as in [4], logfiles still gather a wide range of events, and evaluating their information in the execution context or weighing their gravity remains intricate. For instance, the authors of [17] analyze a wide range of logs with engineers and compare events signaling failures to the engineers' feedback on actual failures. It turns out that the number of actual failures is lower than the number of failures reported by logs. They also point out that the syslog message severity level is of "dubious value", and that it is essential to take into account the operational context during which log events are collected. Nevertheless, logfile analysis for the detection of anomalies (e.g., crashes, faults, OS stress) in computing systems has been widely studied, and it is still an active research field, in particular considering the ever more complex recent computing systems.
Execution traces of streaming applications are analyzed in [9] in order to detect anomalies. The authors analyze traces by means of a merging-pattern mining method applied to patterns of events (i.e., lines of traces); they then build a graph representing the dataflow between the different computing units of the application. Likewise, in [21] the authors analyze the temporality of execution traces in order to derive system states from their estimated control flows. The authors of [25] also work on the ordered nature of logfiles: they exploit time series potentially hidden behind log events for failure symptom detection. They use probabilistic modeling with a mixture of Hidden Markov Models (HMMs) to represent different time windows (i.e., sessions) of log events, and propose a new online learning method for the HMM mixture.
Automatic techniques based on machine learning or statistical algorithms have been widely used for this matter, as in [6], where the authors propose a new approach for disk failure prediction. More precisely, they analyze, by means of a Support Vector Machine (SVM) model, sequences of syslog events based on syslog tag number sequences or key strings in events. In [22], the author proposes a new algorithm for the clustering of log events and implements a tool based on it named SLCT. Logfile parsing is exploited in [24]: the parsing uses log patterns identified from a static analysis of the source code; then two types of features are computed from the entire set of available logfiles and fed to a PCA-based anomaly detection algorithm for offline detection. A log extractor for anomaly detection is studied in [12]. The extractor uses log clustering based on the Levenshtein editing distance to evaluate the similarity amongst log event strings (i.e., two strings are close if a minimal number of edits changes one into the other). Templates are then extracted from the log clusters. Finally, a sequence of log events matching patterns is created and fed to a machine learning algorithm; Naive Bayes and Recurrent Neural Network classifiers are evaluated.
VII. CONCLUSIONS AND FUTURE WORK
In this paper, we tackled the problem of anomaly detection by mining logs produced by running systems. Differently from previous studies, we develop a linguistic approach by considering logs as regular plain-text documents. This enables us to exploit recent NLP techniques to extract information from the grammatical structure and context of log events. Logfiles are represented as a set of features that can be processed by standard machine learning algorithms. As such, this approach shifts the burden of log preprocessing toward the collection of representative datasets, a good trade when data is massively available, as in recent distributed systems.
Our experimental campaigns on the different components of a VNF rely on fault injection to synthesize anomalous behaviors and collect relevant datasets on demand. We more particularly focus on the case of stress detection and show that strong predictors (≈ 90% accuracy) are easily trained with no human intervention in the loop. Even though we focus on stress detection in this work, our approach is suited to the online detection of any type of anomaly by computing system administrators.
As for future work, we plan to explore unsupervised classifiers that would not restrict the scope of our approach to labeled training data and mostly known anomalies. Syslog files are used in this study; however, we plan to investigate what types of logfiles (e.g., dmesg, application logs, ...) enhance or weaken the efficiency of our approach. We also plan to extend our study to more precise online event troubleshooting, combining this detection approach with our previous work on counter-based detection [19].
REFERENCES
Report from Dagstuhl Seminar 14511
Programming Languages for Big Data (PlanBig)
Edited by
James Cheney¹, Torsten Grust², and Dimitrios Vytiniotis³
1 University of Edinburgh, GB, jcheney@inf.ed.ac.uk
2 Universität Tübingen, DE, torsten.grust@uni-tuebingen.de
3 Microsoft Research UK – Cambridge, GB, dimitris@microsoft.com
Abstract
This report documents the program and the outcomes of Dagstuhl Seminar 14511 “Programming Languages for Big Data (PlanBig)”. The seminar was motivated by recent developments in programming languages, databases, machine learning, and cloud computing, and particularly by the opportunities offered by research drawing on more than one of these areas. Participants included researchers working in each of these areas and several who have previously been involved in research in the intersection of databases and programming languages. The seminar included talks, demos and free time for discussion or collaboration. This report collects the abstracts of talks and other activities, a summary of the group discussions at the seminar, and a list of outcomes.
1998 ACM Subject Classification D.3.2 [Programming Languages]: Language Classifications – Applicative (functional) languages, H.2.3 [Database Management]: Languages – Query Languages, H.2.4 Systems – Distributed Databases, Query Processing; H.2.8 Database Applications – Data mining, Scientific databases
Keywords and phrases Programming languages, databases, data-centric computation, machine learning, cloud computing
Digital Object Identifier 10.4230/DagRep.4.12.48
Edited in cooperation with Alexander Ulrich
1 Executive Summary
James Cheney
Torsten Grust
Dimitrios Vytiniotis
License © Creative Commons BY 3.0 Unported license
© James Cheney, Torsten Grust, and Dimitrios Vytiniotis
Large-scale data-intensive computing, commonly referred to as “Big Data”, has been influenced by and can further benefit from programming languages ideas. The MapReduce programming model is an example of ideas from functional programming that has directly influenced the way distributed big data applications are written. As the volume of data has grown to require distributed processing potentially on heterogeneous hardware, there is need for effective programming models, compilation techniques or static analyses, and specialized language runtimes. The motivation for this seminar has been to bring together researchers working on foundational and applied research in programming languages but also data-intensive computing and databases, in order to identify research problems and opportunities for improving data-intensive computing.
To this extent, on the database side, the seminar included participants who work on databases, query languages and relational calculi, query compilation, execution engines, distributed processing systems and networks, and foundations of databases. On the programming languages side, the seminar included participants who work on language design, integrated query languages and meta-programming, compilation, as well as semantics. There was a mix of applied and foundational talks, and the participants included people from universities as well as industrial labs and incubation projects.
The work that has been presented can be grouped in the following broad categories:
- Programming models and domain-specific programming abstractions (Cheney, Alexandrov, Vitek, Ulrich). How can data processing and query languages be integrated in general purpose languages, in type-safe ways and in ways that enable traditional optimizations and compilation techniques from database research? How can functional programming ideas such as monads and comprehensions improve the programmability of big data systems? What are some language design issues for data-intensive computations for statistics?
- Interactive and live programming (Green, Vaz Salles, Stevenson, Binnig, Suciu). What are some challenges and techniques for interactive applications. How to improve the live programming experience of data scientists? Ways to offer data management and analytics as cloud services.
- Query compilation (Neumann, Henglein, Rompf, Ulrich). Compilation of data processing languages to finite state automata and efficient execution. Programming languages techniques, such as staging, for enabling implementors to concisely write novel compilation schemes.
- Data programming languages and semantics (Wisnesky, Vansummeren). Functorial semantics for data programming languages, but also foundations for languages for information extraction.
- Foundations of (parallel) query processing (Suciu, Neven, Hidders). Communication complexity results, program equivalence problems in relational calculi.
- Big data in/for science (Teubner, Stoyanovich, Ré). Challenges that arise in particle physics due to the volume of generated data. How we can use data to speed up new material discovery and engineering? How to use big data systems for scientific extraction and integration from many different data sources?
- Other topics: architecture and runtimes (Ahmad), coordination (Foster), language runtimes (Vytiniotis), weak consistency (Gotsman).
The seminar schedule involved three days of scheduled talks, followed by two days of free-form discussions, demos, and working groups. This report collects the abstracts of talks and demos, summaries of the group discussion sessions, and a list of outcomes resulting from the seminar.
# Table of Contents
## Executive Summary
*James Cheney, Torsten Grust, and Dimitrios Vytiniotis* (page 48)
## Overview of Talks
<table>
<thead>
<tr>
<th>Topic</th>
<th>Presenter</th>
<th>Page</th>
</tr>
</thead>
<tbody>
<tr>
<td>Self-Adjusting Computation for Dynamic and Large Data Sets</td>
<td>Umut A. Acar</td>
<td>52</td>
</tr>
<tr>
<td>Deconstructing Big Data Stacks</td>
<td>Yanif Ahmad</td>
<td>52</td>
</tr>
<tr>
<td>Data Analytics with Flink</td>
<td>Alexander Alexandrov</td>
<td>53</td>
</tr>
<tr>
<td>Interactive & Visual Data Exploration</td>
<td>Carsten Binnig</td>
<td>53</td>
</tr>
<tr>
<td>From LINQ to QDSLs</td>
<td>James Cheney</td>
<td>53</td>
</tr>
<tr>
<td>Demo: Normalization and Query Composition in LINQ</td>
<td>James Cheney</td>
<td>54</td>
</tr>
<tr>
<td>The Homeostasis Protocol: Avoiding Transaction Coordination Through Program Analysis</td>
<td>Nate Foster</td>
<td>54</td>
</tr>
<tr>
<td>Weak Consistency in Cloud Storage</td>
<td>Alexey Gotsman</td>
<td>55</td>
</tr>
<tr>
<td>Live Programming for Big Data</td>
<td>Todd J. Green</td>
<td>55</td>
</tr>
<tr>
<td>Towards Regular Expression Processing at 1 Gbps/core</td>
<td>Fritz Henglein</td>
<td>56</td>
</tr>
<tr>
<td>MapReduce Optimisation in the Nested Relational Calculus</td>
<td>Jan Hidders</td>
<td>56</td>
</tr>
<tr>
<td>Incremental Computation: The Database Approach</td>
<td>Christoph Koch</td>
<td>56</td>
</tr>
<tr>
<td>Compiling SQL Queries into Executable Code</td>
<td>Thomas Neumann</td>
<td>57</td>
</tr>
<tr>
<td>Parallel-Correctness and Transferability for Conjunctive Queries</td>
<td>Frank Neven</td>
<td>57</td>
</tr>
<tr>
<td>DeepDive: A Data System for Macroscopic Science</td>
<td>Christopher Ré</td>
<td>58</td>
</tr>
<tr>
<td>An Efficient SQL to C Compiler in 500 lines of Scala</td>
<td>Tiark Rompf</td>
<td>58</td>
</tr>
<tr>
<td>F#3.0 – Strongly-Typed Language Support for Internet-Scale Information Sources</td>
<td>Andrew Stevenson</td>
<td>59</td>
</tr>
<tr>
<td>(Big) Data Challenges in Materials Science and Engineering</td>
<td>Julia Stoyanovich</td>
<td>59</td>
</tr>
<tr>
<td>Big Data Management with the Myria Cloud Service</td>
<td>Dan Suciu</td>
<td>60</td>
</tr>
<tr>
<td>Communication Cost in Parallel Query Processing</td>
<td>Dan Suciu</td>
<td>60</td>
</tr>
<tr>
<td>Big Data Problems in Particle Physics</td>
<td>Jens Teubner</td>
<td>60</td>
</tr>
<tr>
<td>Query Compilation Based on the Flattening Transformation</td>
<td>Alexander Ulrich</td>
<td>61</td>
</tr>
<tr>
<td>Spanners: A Formal Framework for Information Extraction</td>
<td>Stijn Vansummeren</td>
<td>61</td>
</tr>
<tr>
<td>Challenges in Interactive Applications</td>
<td>Marcos Vaz Salles</td>
<td>62</td>
</tr>
<tr>
<td>The R Project and Language</td>
<td>Jan Vitek</td>
<td>62</td>
</tr>
<tr>
<td>Broom: Sweeping Out Garbage Collection from Big Data systems</td>
<td>Dimitrios Vytiniotis</td>
<td>63</td>
</tr>
<tr>
<td>The Functorial Data Model</td>
<td>Ryan Wisnesky</td>
<td>63</td>
</tr>
</tbody>
</table>
## Working Groups (page 63)
## Outcomes (page 66)
## Participants (page 67)
3 Overview of Talks
3.1 Self-Adjusting Computation for Dynamic and Large Data Sets
Umut A. Acar (Carnegie Mellon University – Pittsburgh, US)
License: Creative Commons BY 3.0 Unported license
© Umut A. Acar
Developing efficient and reliable software is a difficult task. Rapidly growing and dynamically changing data sets further increase complexity by making it more challenging to achieve efficiency and performance. We present practical and powerful abstractions for taming software complexity in this domain. Together with the algorithmic models and programming languages that embody them, these abstractions enable designing and developing efficient and reliable software by using high-level reasoning principles and programming techniques. As evidence for their effectiveness, we consider a broad range of benchmarks including sophisticated algorithms in geometry, machine-learning, and large data sets. On the theoretical side, we show asymptotically significant improvements in efficiency and present solutions to several open problems using the proposed techniques. On the practical side, we present programming languages, compilers, and related software systems that deliver significant improvements in performance, usually with little effort from the programmer. This talk is based on research done jointly with collaborators including A. Ahmed, G. Blelloch, M. Blume, Y. Chen, J. Dunfield, M. Fluet, M. Hammer, R. Harper, B. Hudson, R. Ley-Wild, O. Sumer, K. Tangwongsan, D. Turkoglu.
3.2 Deconstructing Big Data Stacks
Yanif Ahmad (Johns Hopkins University, US)
License: Creative Commons BY 3.0 Unported license
© Yanif Ahmad
Modern big data applications deployed in datacenter environments are complex layered software stacks that provide functionality ranging from the networking and storage hardware, to the high-level analytics logic required by the application. Today’s data systems play a central role in facilitating distributed data movement, query scheduling and fault tolerance for large-scale data processing. In this talk, we survey and deconstruct the design decisions made in the modern data systems architectures commonly found in a Big Data stack. This includes the storage services provided for input data as well as large intermediate results, support for both mid-query and inter-query fault tolerance, and the architectural impact of providing low-latency results, ideally without a long tail. The systems considered include HDFS, Hadoop, Spark, Impala, Storm and briefly NoSQL and NewSQL DBMS.
3.3 Data Analytics with Flink
Alexander Alexandrov (TU Berlin, DE)
In this demo session we give an overview of Apache Flink – an open-source system for scalable data analysis. We present Flink’s functional programming model and discuss some unique system features: (1) the approach of managing a JVM-based heap through aggressive object serialization on byte buffers, (2) the cost-based dataflow optimizer, and (3) the support for native incremental iterations and their resemblance with semi-naive Datalog evaluation.
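To make the connection to semi-naive Datalog evaluation concrete, the following self-contained Scala sketch (plain in-memory collections, not Flink's API; data and names are purely illustrative) computes a transitive closure by feeding only the facts derived in the previous round into the next one, which is exactly the working-set discipline behind delta iterations:

```scala
object SemiNaive {
  type Edge = (Int, Int)

  // Transitive closure of `edges`, evaluated semi-naively:
  // each round joins only the *delta* (newly derived pairs) with the base
  // relation, mirroring the shape of Flink's delta iterations.
  def transitiveClosure(edges: Set[Edge]): Set[Edge] = {
    var solution = edges   // facts known so far
    var delta    = edges   // facts derived in the last round
    while (delta.nonEmpty) {
      // join delta with the base edges: (a,b) in delta, (b,c) in edges => (a,c)
      val derived = for {
        (a, b)  <- delta
        (b2, c) <- edges if b == b2
      } yield (a, c)
      delta = derived -- solution      // keep only genuinely new facts
      solution = solution ++ delta
    }
    solution
  }

  def main(args: Array[String]): Unit = {
    val edges = Set((1, 2), (2, 3), (3, 4))
    println(transitiveClosure(edges))  // also contains (1,3), (1,4), (2,4)
  }
}
```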
3.4 Interactive & Visual Data Exploration
Carsten Binnig (DHBW – Mannheim, DE)
Data-centric applications in which data scientists of varying skill levels explore large data sets are becoming more and more relevant to make sense of the data, identify interesting patterns, and bring aspects of interest into focus for further analysis. Enabling these applications with ease of use and at “human speeds” is key to democratizing data science and maximizing human productivity. As a first step towards visual interactive data exploration, we implemented a visual index for computing histograms based on a \( B^+ \)-tree. The major differences to the traditional \( B^+ \)-tree are: (1) We annotate the index nodes with count values as discussed before. (2) We offer optimized index traversal strategies for all requested bins of a histogram. (3) We use typical bin definitions of a histogram as separators for the upper levels instead of using the normal balancing rules.
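The role of the count annotations can be illustrated with a deliberately simplified stand-in for the index described above: an ordered binary tree (rather than a B\(^+\)-tree) whose inner nodes cache the number of values below them, so a histogram bin can be counted without visiting every leaf. The Scala below is illustrative only, not the actual implementation:

```scala
object VisualIndexSketch {
  sealed trait Tree { def count: Int }
  case class Leaf(values: Vector[Double]) extends Tree { def count = values.length }
  case class Node(sep: Double, left: Tree, right: Tree) extends Tree {
    val count: Int = left.count + right.count   // annotation maintained at build time
  }

  def build(sorted: Vector[Double]): Tree =
    if (sorted.length <= 4) Leaf(sorted)
    else {
      val (l, r) = sorted.splitAt(sorted.length / 2)
      Node(r.head, build(l), build(r))
    }

  // Number of values strictly below x; whole left subtrees are accounted for
  // through their cached counts instead of being traversed.
  def countBelow(t: Tree, x: Double): Int = t match {
    case Leaf(vs)        => vs.count(_ < x)
    case Node(sep, l, r) => if (x <= sep) countBelow(l, x)
                            else l.count + countBelow(r, x)
  }

  // One histogram bin [lo, hi); a real system would answer all bins in one traversal.
  def binCount(t: Tree, lo: Double, hi: Double): Int =
    countBelow(t, hi) - countBelow(t, lo)

  def main(args: Array[String]): Unit = {
    val t = build(Vector.tabulate(1000)(_.toDouble))
    println((0 until 10).map(b => binCount(t, b * 100.0, (b + 1) * 100.0)))
  }
}
```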
3.5 From LINQ to QDSLs
James Cheney (University of Edinburgh, GB)
Language-integrated query techniques ease database programming by placing queries and ordinary program code on the same level, so that the language implementation can check and coordinate communication between the host language and database. Such techniques are based on foundations developed in the 90s including comprehension syntax, normalization results for nested relational calculus, and more recent work on generalizing normalization to a higher-order setting and embedding query languages in host languages using quotation (a technique we identify as Quotation-based Domain Specific Languages, or QDSLs). In this talk I give an overview of this prior work exemplifying interaction between database and programming language research, and illustrate its impact on LINQ for F#.
3.6 Demo: Normalization and Query Composition in LINQ
James Cheney (University of Edinburgh, GB)
License © Creative Commons BY 3.0 Unported license
© James Cheney
Joint work of Cheney, James; Lindley, Sam; Wadler, Philip
URL http://dx.doi.org/10.1145/2500365.2500586
In this demo I explained the underlying ideas of LINQ in F#, and application of recent work with Lindley and Wadler on normalization of query expressions. LINQ already performs some transformations to query expressions at run time using quotation and reflection capabilities of F#, but it has some gaps in support for queries that involve higher-order functions. Our work overcomes this limitation by providing a guarantee that all query expressions of a certain class normalize to a form that can be turned into SQL – even if the query expression makes use of lambda-abstraction and application. This has subtle implications, and allows writing efficient queries using lambda-abstraction that are not executed efficiently by the built-in F# LINQ library, and constructing queries at run time by recursion over in-memory data (illustrated by showing how XPath queries and their mapping to SQL can be defined in F# LINQ).
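The effect of normalization is easiest to see on a toy quoted-query language. The Scala sketch below uses an illustrative AST (not F# or the actual LINQ machinery; capture-avoiding substitution is elided) and beta-reduces a query that abstracts over its predicate into a flat comprehension of the kind that maps directly onto SQL:

```scala
object NormalizeSketch {
  sealed trait Expr
  case class Var(name: String)                     extends Expr
  case class Lam(param: String, body: Expr)        extends Expr
  case class App(fun: Expr, arg: Expr)             extends Expr
  case class Table(name: String)                   extends Expr
  case class For(v: String, src: Expr, body: Expr) extends Expr  // comprehension generator
  case class Where(cond: Expr, body: Expr)         extends Expr
  case class Field(rec: Expr, name: String)        extends Expr
  case class Gt(l: Expr, r: Expr)                  extends Expr
  case class IntLit(n: Int)                        extends Expr
  case class Yield(e: Expr)                        extends Expr

  // Naive substitution (no capture avoidance, enough for this example).
  def subst(e: Expr, name: String, v: Expr): Expr = e match {
    case Var(`name`)                   => v
    case Var(_) | Table(_) | IntLit(_) => e
    case Lam(p, b)    => if (p == name) e else Lam(p, subst(b, name, v))
    case App(f, a)    => App(subst(f, name, v), subst(a, name, v))
    case For(x, s, b) => For(x, subst(s, name, v), if (x == name) b else subst(b, name, v))
    case Where(c, b)  => Where(subst(c, name, v), subst(b, name, v))
    case Field(r, n)  => Field(subst(r, name, v), n)
    case Gt(l, r)     => Gt(subst(l, name, v), subst(r, name, v))
    case Yield(x)     => Yield(subst(x, name, v))
  }

  // Beta-reduce everywhere; a full normalizer would also hoist nested Fors
  // and push Wheres, as in the cited work.
  def normalize(e: Expr): Expr = e match {
    case App(f, a) =>
      normalize(f) match {
        case Lam(p, b) => normalize(subst(b, p, normalize(a)))
        case nf        => App(nf, normalize(a))
      }
    case Lam(p, b)    => Lam(p, normalize(b))
    case For(x, s, b) => For(x, normalize(s), normalize(b))
    case Where(c, b)  => Where(normalize(c), normalize(b))
    case Field(r, n)  => Field(normalize(r), n)
    case Gt(l, r)     => Gt(normalize(l), normalize(r))
    case Yield(x)     => Yield(normalize(x))
    case other        => other
  }

  def main(args: Array[String]): Unit = {
    // A higher-order query: a comprehension that abstracts over its predicate.
    val query =
      App(
        Lam("p", For("x", Table("people"),
                     Where(App(Var("p"), Var("x")), Yield(Var("x"))))),
        Lam("x", Gt(Field(Var("x"), "age"), IntLit(30))))
    // Normalizes to For(x, people, Where(x.age > 30, Yield(x))),
    // i.e. SELECT * FROM people AS x WHERE x.age > 30.
    println(normalize(query))
  }
}
```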
3.7 The Homeostasis Protocol: Avoiding Transaction Coordination Through Program Analysis
Nate Foster (Cornell University – Ithaca, US)
License © Creative Commons BY 3.0 Unported license
© Nate Foster
Joint work of Roy, Sudip; Bender, Gabriel; Kot, Lucja; Ding, Bailu; Foster, Nate; Gehrke, Johannes; Koch, Christoph
Many datastores rely on distribution and replication to achieve improved performance and fault-tolerance. But correctness of many applications depends on strong consistency properties – something that can impose substantial overheads, since it requires coordinating the behavior of multiple nodes. This work developed a new approach to achieving strong consistency in distributed systems while minimizing communication between nodes. The key insight was to allow the state of the system to be inconsistent during execution, as long as this inconsistency is bounded and does not affect transaction correctness. In contrast to previous work, our approach used program analysis to extract semantic information about permissible levels of inconsistency and is fully automated. We also developed a novel “homeostasis protocol” to allow sites to operate independently, without communicating, as long as any inconsistency is governed by appropriate treaties between the nodes. We designed mechanisms for optimizing treaties based on workload characteristics to minimize communication, built a prototype implementation, and conducted experiments to demonstrate the benefits of our approach on transactional benchmarks.
To appear in SIGMOD 2015.
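A much-simplified illustration of the treaty idea (not the actual homeostasis protocol) is sketched below in Scala: two sites sell from a shared stock, each may commit sales locally while its allotted share keeps the global invariant safe, and coordination is needed only when a local treaty would be violated. All names and numbers are made up:

```scala
object TreatySketch {
  // Two sites sell from a shared stock of 100 units; the global invariant is stock >= 0.
  // Each site holds an allotment and the local treaty "remaining >= 0": while the treaty
  // holds, a sale can commit locally with no communication between the sites.
  final class Site(val name: String, var remaining: Int) {
    def trySellLocally(qty: Int): Boolean =
      if (remaining - qty >= 0) { remaining -= qty; true }  // treaty preserved, commit locally
      else false                                            // would violate the treaty
  }

  def main(args: Array[String]): Unit = {
    val a = new Site("A", 60)   // allotments negotiated up front: 60 + 40 = 100
    val b = new Site("B", 40)

    println(a.trySellLocally(50))   // true: no coordination needed
    println(b.trySellLocally(45))   // false: B's treaty would break
    // A real protocol would now coordinate once, re-negotiating the allotments
    // (here: moving 10 unused units from A to B) before retrying the sale.
    a.remaining -= 10
    b.remaining += 10
    println(b.trySellLocally(45))   // true after re-negotiation
  }
}
```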
3.8 Weak Consistency in Cloud Storage
Alexey Gotsman (IMDEA Software Institute, ES)
License Ⓒ Creative Commons BY 3.0 Unported license
© Alexey Gotsman
Joint work of Bernardi, Giovanni; Cerone, Andrea; Burckhardt, Sebastian; Yang, Hongseok; Zawirski, Marek
Modern geo-replicated databases underlying large-scale Internet services guarantee immediate availability and tolerate network partitions at the expense of providing only weak forms of consistency, commonly dubbed *eventual consistency*. At the moment there is a lot of confusion about the semantics of eventual consistency, as different systems implement it with different sets of features and in subtly different forms, stated either informally or using disparate and low-level formalisms.
We address this problem by proposing a framework for formal and declarative specification of the semantics of eventually consistent systems using axioms. Our framework is fully customisable: by varying the set of axioms, we can rigorously define the semantics of systems that combine any subset of typical guarantees or features, including conflict resolution policies, session guarantees, causality guarantees, multiple consistency levels and transactions. We prove that our specifications are validated by an example abstract implementation, based on algorithms used in real-world systems. These results demonstrate that our framework provides system architects with a tool for exploring the design space, and lays the foundation for formal reasoning about eventually consistent systems.
3.9 Live Programming for Big Data
Todd J. Green (LogicBlox – Atlanta, US)
License Ⓒ Creative Commons BY 3.0 Unported license
© Todd J. Green
Joint work of Green, Todd J.; Olteanu, Dan; Washburn, Geoffrey
We observe that the emerging category of self-service enterprise applications motivates support for “live programming” in the database, where the user’s iterative exploration of the input data triggers changes to installed application code and its output in real time. This talk discusses the technical challenges in supporting live programming in the database and presents the solution implemented in version 4.1 of the LogicBlox commercial database system. The workhorse architectural component is a novel “meta-engine” that incrementally maintains metadata representing application code, guides compilation of input application code into its internal representation in the database kernel, and orchestrates maintenance of materialized views based on those changes. Our approach mirrors LogicBlox’s declarative programming model and describes the maintenance of application code using declarative meta-rules; the meta-engine is essentially a “bootstrap” version of the database engine proper. Beyond live programming, the meta-engine turns out effective for a range of static analysis and optimization tasks, which we discuss. Outside of the database systems context, we speculate that our design may even provide a novel means of building incremental compilers for general-purpose programming languages.
3.10 Towards Regular Expression Processing at 1 Gbps/core
Fritz Henglein (University of Copenhagen, DK)
License © Creative Commons BY 3.0 Unported license © Fritz Henglein
Joint work of Bjørn Grathwohl; Henglein, Fritz; Ulrik Rasmussen
URL http://dx.doi.org/10.1007/978-3-319-10882-7_14
URL http://www.diku.dk/kmc
We describe how type theory, prefix codes, nondeterministic automata, streaming and determinization to register automata yield a worst-case linear-time regular expression parser for fixed regular expressions. Early tests indicate that it operates at a sustained 100+ Mbps rate on complex regular expressions and large data sets; this seems to be significantly faster than existing tools, which operate at 2 to 20 Mbps (commodity PC). We sketch how we believe an expressive regular expression processor executing at 1 Gbps per 64-bit core can be designed and implemented, without employing machine-specific or hardware oriented tricks.
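The register-automata pipeline itself is beyond a short example, but the flavor of automaton-style, one-symbol-at-a-time regular expression processing can be shown with a compact Scala matcher based on Brzozowski derivatives (a different and much simpler technique than the one described above, shown purely for illustration):

```scala
object DerivativeRegex {
  sealed trait Re
  case object Empty            extends Re   // matches nothing
  case object Eps              extends Re   // matches the empty string
  case class Chr(c: Char)      extends Re
  case class Alt(a: Re, b: Re) extends Re
  case class Seq(a: Re, b: Re) extends Re
  case class Star(a: Re)       extends Re

  // Does `r` accept the empty string?
  def nullable(r: Re): Boolean = r match {
    case Empty     => false
    case Eps       => true
    case Chr(_)    => false
    case Alt(a, b) => nullable(a) || nullable(b)
    case Seq(a, b) => nullable(a) && nullable(b)
    case Star(_)   => true
  }

  // Brzozowski derivative: the language of words w such that c.w is in L(r).
  def deriv(r: Re, c: Char): Re = r match {
    case Empty | Eps => Empty
    case Chr(d)      => if (c == d) Eps else Empty
    case Alt(a, b)   => Alt(deriv(a, c), deriv(b, c))
    case Seq(a, b)   =>
      val d = Seq(deriv(a, c), b)
      if (nullable(a)) Alt(d, deriv(b, c)) else d
    case Star(a)     => Seq(deriv(a, c), Star(a))
  }

  // Streaming match: one derivative step per input symbol.
  def matches(r: Re, s: String): Boolean = nullable(s.foldLeft(r)(deriv))

  def main(args: Array[String]): Unit = {
    val r = Seq(Star(Alt(Chr('a'), Chr('b'))), Chr('c'))   // (a|b)*c
    println(matches(r, "ababc"))  // true
    println(matches(r, "abca"))   // false
  }
}
```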
3.11 MapReduce Optimisation in the Nested Relational Calculus
Jan Hidders (TU Delft, NL)
License © Creative Commons BY 3.0 Unported license © Jan Hidders
Joint work of Grabowski, Marek; Hidders, Jan; Sroka, Jacek; Vansummeren, Stijn
URL http://dx.doi.org/10.1007/978-3-642-39467-6_17
We introduced sNRC, a variant of the Nested Relational Calculus over bags which allows heterogeneous bags and has two special operations: basic value equality and a duplicate elimination operator that selects only basic values. In this language we can readily represent a MapReduce operator, and so reasoning about equivalence of expressions in the language becomes equivalent to reasoning over MapReduce workflows over nested data. It is discussed how it might be possible to axiomatise equivalence of expressions with relatively simple equations. We also show some conjectures about the decidability of this problem for the presented fragment, and how this relates to existing results and open problems.
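How a MapReduce step reads as a nested-collection expression can be seen in executable form. The following plain Scala sketch (ordinary in-memory collections, no distribution) expresses the map phase, the shuffle, and the reduce phase with flatMap and groupBy, which is essentially the comprehension structure that sNRC-style reasoning works over:

```scala
object MapReduceAsNRC {
  // A MapReduce job as one nested-collection expression:
  //   mapper:  input record => bag of (key, value) pairs
  //   reducer: (key, bag of values) => output values
  def mapReduce[A, K, V, B](input: Seq[A])
                           (mapper: A => Seq[(K, V)])
                           (reducer: (K, Seq[V]) => Seq[B]): Seq[B] =
    input
      .flatMap(mapper)                 // "map" phase
      .groupBy(_._1)                   // shuffle: nest values under their key
      .toSeq
      .flatMap { case (k, kvs) => reducer(k, kvs.map(_._2)) }  // "reduce" phase

  def main(args: Array[String]): Unit = {
    val lines = Seq("to be or not to be", "to do is to be")
    val counts = mapReduce(lines)(
      line => line.split(" ").toSeq.map(w => (w, 1)))(
      (word, ones) => Seq((word, ones.sum)))
    println(counts.sortBy(_._1))       // word-count result
  }
}
```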
3.12 Incremental Computation: The Database Approach
Christoph Koch (EPFL – Lausanne, CH)
License © Creative Commons BY 3.0 Unported license © Christoph Koch
URL http://dx.doi.org/10.1007/s00778-013-0348-4
In this talk, I presented the database approach to incremental computation – incremental view maintenance by compile-time query transformation. I first presented the classical approach to incremental view maintenance using delta queries and then presented the DBToaster approach – recursive or higher-order incremental view maintenance. I also gave a demo of the DBToaster system, available at www.dbtoaster.org. Finally, I presented our recent work on higher-order incremental view maintenance for nested relational queries and the simply-typed lambda calculus, available as a preprint [1].
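The classical starting point can be stated compactly. For a join view and insertion deltas \(\Delta R\) and \(\Delta S\), the standard first-order delta rule (a textbook identity, not specific to DBToaster) is

\[
\Delta(R \bowtie S) \;=\; (\Delta R \bowtie S) \,\cup\, (R \bowtie \Delta S) \,\cup\, (\Delta R \bowtie \Delta S),
\]

and higher-order view maintenance applies the same idea recursively, maintaining the delta queries themselves incrementally.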
References
3.13 Compiling SQL Queries into Executable Code
Thomas Neumann (TU München, DE)
License © Creative Commons BY 3.0 Unported license © Thomas Neumann
Joint work of Neumann, Thomas; Leis, Viktor
URL http://sites.computer.org/debull/A14mar/p3.pdf
On modern servers the working set of database management systems becomes more and more main memory resident. Slow disk accesses are largely avoided, and thus the in-memory processing speed of databases becomes an important factor. One very attractive approach for fast query processing is just-in-time compilation of incoming queries. By producing machine code at runtime we avoid the overhead of traditional interpretation systems, and by carefully organizing the code around register usage we minimize memory traffic and get excellent performance. In this talk we show how queries can be brought into a form suitable for efficient translation, and how the underlying code generation can be orchestrated. By carefully abstracting away the necessary plumbing infrastructure we can build a query compiler that is both maintainable and efficient. The effectiveness of the approach is demonstrated by the HyPer system that uses query compilation as its execution strategy and achieves excellent performance.
3.14 Parallel-Correctness and Transferability for Conjunctive Queries
Frank Neven (Hasselt University – Diepenbeek, BE)
License © Creative Commons BY 3.0 Unported license © Frank Neven
Joint work of Ameloot, Tom; Geck, Gaetano; Ketsman, Bas; Neven, Frank; Schwentick, Thomas
A dominant cost for query evaluation in modern massively distributed systems is the number of communication rounds. For this reason, there is a growing interest in single-round multiway join algorithms where data is first reshuffled over many servers and then evaluated in a parallel but communication-free way. The reshuffling itself is specified as a distribution policy. We introduce a correctness condition, called parallel-correctness, for the evaluation of queries with respect to a distribution policy. We study the complexity of parallel-correctness for conjunctive queries as well as transferability of parallel-correctness between queries. We also investigate the complexity of transferability for certain families of distribution policies, including, for example, the Hypercube distribution.
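As a concrete instance of a distribution policy, the Scala sketch below shows the HyperCube (Shares-style) reshuffling for the triangle query Q(x,y,z) :- R(x,y), S(y,z), T(z,x) on a grid of p = px·py·pz servers; the grid sizes, hash functions, and server numbering are illustrative choices. Each tuple is replicated along the dimensions of the variables it does not mention, so every potential output is fully available at one server after a single communication round:

```scala
object HyperCubeSketch {
  // Logical server grid: p = px * py * pz servers, addressed by coordinates.
  val (px, py, pz) = (2, 2, 2)
  def server(i: Int, j: Int, k: Int): Int = (i * py + j) * pz + k
  def hx(v: Int): Int = Math.floorMod(v, px)
  def hy(v: Int): Int = Math.floorMod(v, py)
  def hz(v: Int): Int = Math.floorMod(v, pz)

  // Destinations for the triangle query Q(x,y,z) :- R(x,y), S(y,z), T(z,x).
  // A tuple fixes the coordinates of the variables it contains and is
  // replicated along the remaining dimension.
  def destR(x: Int, y: Int): Seq[Int] = for (k <- 0 until pz) yield server(hx(x), hy(y), k)
  def destS(y: Int, z: Int): Seq[Int] = for (i <- 0 until px) yield server(i, hy(y), hz(z))
  def destT(z: Int, x: Int): Seq[Int] = for (j <- 0 until py) yield server(hx(x), j, hz(z))

  def main(args: Array[String]): Unit = {
    // The triangle R(1,2), S(2,3), T(3,1) meets at server (hx(1), hy(2), hz(3)),
    // so the join can be evaluated there without further rounds.
    println(destR(1, 2))
    println(destS(2, 3))
    println(destT(3, 1))
    println(destR(1, 2).intersect(destS(2, 3)).intersect(destT(3, 1)))  // non-empty
  }
}
```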
3.15 DeepDive: A Data System for Macroscopic Science
Christopher Ré (Stanford University, US)
Many pressing questions in science are macroscopic in that these questions require that a scientist integrate information from many data sources. Often, these data sources are documents that contain natural language text, tables, and figures. Such documents contain valuable information, but they are difficult for machines to understand unambiguously. This talk describes DeepDive, a statistical extraction and integration system to extract information from such documents. For tasks in paleobiology, DeepDive-based systems are surpassing human volunteers in data quantity, recall, and precision. This talk describes recent applications of DeepDive and DeepDive’s technical core. One of those core technical issues is efficient statistical inference. In particular, we describe our recent Hogwild! and DimmWitted engines that explore a fundamental tension between statistical efficiency (steps until convergence) and hardware efficiency (efficiency of each of those steps). In addition, we offer thoughts about how domain specific languages can help.
3.16 An Efficient SQL to C Compiler in 500 lines of Scala
Tiark Rompf (Purdue University, US)
For hard-core systems level programming, low-level C code is still the industry standard. The drawbacks are well known: buggy systems, security vulnerabilities, poor programmer productivity, etc. Generative programming is an alternative; writing expressive high-level programs that generate fast low-level code at runtime. While many languages come with basic code generation facilities, generative programming has remained somewhat of a black art. Recent developments, however, promise to make generative programming much more accessible. This talk will provide a step-by-step introduction to the open-source LMS (Lightweight Modular Staging) framework, which brings runtime code generation and compilation to Scala programs. We will build a SQL query engine that outperforms commercial and open source database systems and consists of just about 500 lines of high-level Scala code. Along the way, we will discuss concepts such as mixed-stage data structures that contain both static and dynamic parts (e.g. static schema and dynamic values for data records) and staged interpreters which can be mechanically turned into compilers (e.g. for SQL queries or regular expressions).
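LMS itself cannot be reproduced in a few lines, but the basic move, writing what looks like an interpreter over a query plan while actually emitting code, can be sketched without any framework. In the illustrative Scala below the "generated code" is just source text assembled as a string, and the plan and schema are made up; the point is that the emitted program is one fused loop with no operator boundaries left at run time:

```scala
object StagedQuerySketch {
  // A tiny physical plan language.
  sealed trait Plan
  case class Scan(table: String)                       extends Plan
  case class Filter(pred: String => String, in: Plan)  extends Plan  // builds a predicate over a row variable
  case class Project(fields: Seq[String], in: Plan)    extends Plan

  // "Staged interpreter": structurally it walks the plan like an interpreter,
  // but each case emits per-row code instead of processing data.
  def compile(p: Plan, emitRow: String => String): String = p match {
    case Scan(table) =>
      s"""for (row <- $table) {
         |${emitRow("row")}
         |}""".stripMargin
    case Filter(pred, in) =>
      compile(in, row => s"if (${pred(row)}) {\n${emitRow(row)}\n}")
    case Project(fields, in) =>
      compile(in, row => emitRow(fields.map(f => s"$row(\"$f\")").mkString("Vector(", ", ", ")")))
  }

  def main(args: Array[String]): Unit = {
    val plan = Project(Seq("name"),
                 Filter(row => s"""$row("age").toInt > 30""",
                   Scan("people")))
    // Prints a single fused loop over `people` with the filter and projection inlined.
    println(compile(plan, row => s"println($row)"))
  }
}
```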
3.17 F#3.0 – Strongly-Typed Language Support for Internet-Scale Information Sources
Andrew Stevenson (Queen’s University – Kingston, CA)
License © Creative Commons BY 3.0 Unported license
Joint work of Syme, Don; Battocchi, Keith; Takeda, Kenji; Malayeri, Donna; Fisher, Jomo; Hu, Jack; Liu, Tao; McNamara, Brian; Quirk, Daniel; Taveggia, Matteo; Chae, Wonseok; Matsveyeu, Uladzimir; Petricek, Tomas
URL http://research.microsoft.com/apps/pubs/?id=173076
A growing trend in both the theory and practice of programming is the interaction between programming and rich information spaces. From databases to web services to the semantic web to cloud-based data, the need to integrate programming with heterogeneous, connected, richly structured, streaming and evolving information sources is ever-increasing. Most modern applications incorporate one or more external information sources as integral components. Providing strongly typed access to these sources is a key consideration for strongly-typed programming languages, to insure low impedance mismatch in information access. At this scale, information integration strategies based on library design and code generation are manual, clumsy, and do not handle the internet-scale information sources now encountered in enterprise, web and cloud environments. In this report we describe the design and implementation of the type provider mechanism in F# 3.0 and its applications to typed programming with web ontologies, web-services, systems management information, database mappings, data markets, content management systems, economic data and hosted scripting. Type soundness becomes relative to the soundness of the type providers and the schema change in information sources, but the role of types in information-rich programming tasks is massively expanded, especially through tooling that benefits from rich types in explorative programming.
3.18 (Big) Data Challenges in Materials Science and Engineering
Julia Stoyanovich (Drexel University – Philadelphia, US)
License © Creative Commons BY 3.0 Unported license
© Julia Stoyanovich
Materials Science and Engineering (MSE) is focused on the process of engineering matter into new and useful forms. It is a vast field that seeks to understand the properties of materials, to create materials appropriate for particular tasks, and to predict material behavior. Like many other disciplines, MSE is looking for ways to leverage data-driven approaches to make the process of scientific discovery and engineering more efficient. In this talk I present two interesting MSE use cases, outline ongoing efforts towards making MSE a data-intensive domain, and discuss ingredients of an MSE cyberinfrastructure.
3.19 Big Data Management with the Myria Cloud Service
Dan Suciu (University of Washington – Seattle, US)
License © Creative Commons BY 3.0 Unported license
© Dan Suciu
Joint work of Halperin, Daniel; de Almeida, Victor Teixeira; Choo, Lee Lee; Chu, Shumo; Koutris, Paraschos; Moritz, Dominik; Ortiz, Jennifer; Ruamviboonsuk, Vaspol; Wang, Jingjing; Whitaker, Andrew; Xu, Shengliang; Balazinska, Magdalena; Howe, Bill; Suciu, Dan
URL http://dx.doi.org/10.1145/2588555.2594530
URL http://myria.cs.washington.edu/
Myria is a novel cloud service for big data management and analytics designed to improve productivity. Myria’s goal is for users to simply upload their data and for the system to help them be self-sufficient data science experts on their data – self-serve analytics. From a web browser, Myria users can upload data, author efficient queries to process and explore the data, and debug correctness and performance issues. Myria queries are executed on a scalable, parallel cluster that uses both state-of-the-art and novel methods for distributed query processing.
3.20 Communication Cost in Parallel Query Processing
Dan Suciu (University of Washington – Seattle, US)
License © Creative Commons BY 3.0 Unported license
© Dan Suciu
Joint work of Beame, Paul; Koutris, Paraschos; Suciu, Dan
URL http://dx.doi.org/10.1145/2594538.2594558
We study the problem of computing a conjunctive query $q$ in parallel using $p$ servers on a large database. We consider algorithms with one round of communication, and study the complexity of the communication. We prove matching upper and lower bounds based on the fractional edge packing of the query.
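To give the flavor of how the bound is parameterized (the precise statement is in the paper), consider the triangle query \(q(x,y,z) = R(x,y), S(y,z), T(z,x)\) over relations of size \(m\): its maximum fractional edge packing puts weight \(1/2\) on each atom, so \(\tau^* = 3/2\), and the per-server load of any one-round algorithm is roughly

\[
L \;=\; \Omega\!\left(\frac{m}{p^{1/\tau^*}}\right) \;=\; \Omega\!\left(\frac{m}{p^{2/3}}\right),
\]

which is matched by the HyperCube reshuffling mentioned in the previous talks.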
3.21 Big Data Problems in Particle Physics
Jens Teubner (TU Dortmund, DE)
License © Creative Commons BY 3.0 Unported license
© Jens Teubner
Joint work of Teubner, Jens; Spaan, Bernhard
The Large Hadron Collider at CERN is often cited as a source of extremely large data volumes, or “Big Data”. The talk gives a brief intuition of the type of experiments that are being ran at CERN (specifically the LHCb sub-project) and I will show what types of data are being produced and how they are being accessed by physical analyses. I will sketch my vision on how database-oriented techniques could be used to allow for more efficient data analysis and – as a consequence – to improve the insights that can be gained from the experimental data.
3.22 Query Compilation Based on the Flattening Transformation
Alexander Ulrich (Universität Tübingen, DE)
We tackle the problem of supporting an expressive, fully compositional list-based query language that allows nested results efficiently on off-the-shelf relational query engines. Query formulation is centered around comprehensions and a rich set of order-aware combinators including grouping, aggregation and sorting. This query language provides a basis for the construction of language-integrated query systems that seamlessly embed querying capabilities into functional programming languages. In this talk, we sketch the internals of a query compiler centered around the flattening transformation, a program transformation originally conceived to support nested data parallelism on vector processors. Adapted to query compilation, the flattening-based approach shreds nested queries into a small, statically determined number of efficient relational queries. In contrast to previous work, flattening-based query compilation (a) consists of a composition of simple steps that build on previous work and are easy to reason about (b) supports ordered collections and operations like aggregation, grouping and sorting and (c) produces efficient code.
In addition, we demonstrate Database-Supported Haskell (DSH), an implementation of flattening-based query shredding. DSH is an embedded query DSL that allows complex queries to be formulated in idiomatic Haskell style. DSH queries are constructed from (higher-order) combinators and comprehensions, support abstraction over sub-queries and are subject to the same static typing discipline as other parts of a Haskell program. DSH compiles such queries with nested results into a bundle of efficient flat queries for off-the-shelf relational query engines.
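The effect of shredding is easiest to see on a tiny nested query. The Scala sketch below (plain in-memory collections, not DSH or its compiler) computes a nested result directly, and then reassembles the same result from two flat, SQL-shaped queries linked by a key, which is the essence of what a flattening-based compiler emits:

```scala
object ShreddingSketch {
  case class Dept(id: Int, name: String)
  case class Emp(dept: Int, name: String, salary: Int)

  val depts = List(Dept(1, "Research"), Dept(2, "Sales"))
  val emps  = List(Emp(1, "ada", 90), Emp(1, "bob", 70), Emp(2, "eve", 80))

  def main(args: Array[String]): Unit = {
    // Nested query, written directly: each department with the names of its employees.
    val nested =
      for (d <- depts)
        yield (d.name, for (e <- emps if e.dept == d.id) yield e.name)

    // Shredded form: two *flat* queries. q1 yields the outer rows (with a key),
    // q2 yields the inner rows tagged by that key; the nesting is rebuilt afterwards.
    val q1 = for (d <- depts) yield (d.id, d.name)   // ~ SELECT id, name FROM depts
    val q2 = for (e <- emps)  yield (e.dept, e.name) // ~ SELECT dept, name FROM emps
    val reassembled =
      for ((id, name) <- q1)
        yield (name, q2.collect { case (`id`, emp) => emp })

    println(nested)
    println(reassembled)
    println(nested == reassembled)  // true
  }
}
```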
3.23 Spanners: A Formal Framework for Information Extraction
Stijn Vansummeren (University of Brussels, BE)
An intrinsic part of information extraction is the creation and manipulation of relations extracted from text. In this talk, we present a foundational framework where the central construct is what we call a spanner. A spanner maps an input string into relations over the spans (intervals specified by bounding indices) of the string. The focus of this presentation is on the representation of spanners. Conceptually, there are two kinds of such representations. Spanners defined in a primitive representation extract relations directly from the input string; those defined in an algebra apply algebraic operations to the primitive represented spanners. This framework is driven by SystemT, an IBM commercial product for text analysis, where the primitive representation is that of regular expressions with capture variables. We define additional types of primitive spanner representations by means of two kinds of automata that assign spans to variables. We prove that the first kind has the same expressive power as regular expressions with capture variables; the second kind expresses precisely the algebra of the regular spanners – the closure of the first kind under standard relational operators.
The core spanners extend the regular ones by string-equality selection (an extension used in SystemT). We give some fundamental results on the expressiveness of regular and core spanners.
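A primitive spanner in the regex-with-capture-variables style can be approximated with ordinary regex machinery. The Scala sketch below extracts a binary relation over spans (character intervals for a name and a phone-like token) from an input string; it only illustrates the "relations over spans" view, not SystemT or the spanner algebra, and the rule and data are made up:

```scala
object SpannerSketch {
  // A span is an interval [start, end) of positions in the input string.
  case class Span(start: Int, end: Int) {
    def text(in: String): String = in.substring(start, end)
  }

  def main(args: Array[String]): Unit = {
    val input = "call Alice on 555-1234 or Bob on 555-9876"
    // One capture variable per column of the extracted relation.
    val rule = """(\p{Upper}\p{Lower}+) on (\d{3}-\d{4})""".r

    // The spanner maps the input string to a relation over spans.
    val relation: List[(Span, Span)] =
      rule.findAllMatchIn(input).map { m =>
        (Span(m.start(1), m.end(1)), Span(m.start(2), m.end(2)))
      }.toList

    for ((name, phone) <- relation)
      println(s"${name.text(input)} [${name.start},${name.end})  ${phone.text(input)} [${phone.start},${phone.end})")
  }
}
```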
3.24 Challenges in Interactive Applications
Marcos Vaz Salles (University of Copenhagen, DK)
Interactive applications, such as data visualizations and maps, computer games and simulations, or in-memory transactional and analytics systems, are becoming ever more pervasive and important to our society. In this talk, we describe lessons learned and challenges emerging from our research with these applications. First, we explore the challenge of declarative pre-computation of complex data transformations in these applications, discussing an example of selecting data for zoomable maps [1]. Second, we discuss the challenge of performance visibility in programming models for online computations, suggesting a way to revisit the transaction model for this goal [2].
References
3.25 The R Project and Language
Jan Vitek (Northeastern University – Boston, US)
Jan introduced the seminar attendees to the R project for statistical computing and the associated R scripting language. Through a series of live examples, from simple and obvious to quirky and outright surprising, Jan demonstrated relevant bits of the R language semantics. The discussion with the audience had a particular focus on R’s family of collection data types (vectors, matrices, arrays, lists, factors, and data frames). Issues of R’s interpreted execution model and the possibility of compiling R code were brought up later in the seminar.
Jan maintains his collection AllR of R-related implementation projects on GitHub: https://github.com/allr/.
3.26 Broom: Sweeping Out Garbage Collection from Big Data systems
Dimitrios Vytiniotis (Microsoft Research UK – Cambridge, GB)
License Creative Commons BY 3.0 Unported license
© Dimitrios Vytiniotis
Many popular systems for processing “big data” are implemented in high-level programming languages with automatic memory management via garbage collection (GC). However, high object churn and large heap sizes put severe strain on the garbage collector. As a result, applications underperform significantly: GC increases the runtime of typical data processing tasks by up to 40%. We propose to use region-based memory management instead of GC in distributed data processing systems. In these systems, many objects have clearly defined lifetimes. It is natural to allocate these objects in fate-sharing regions, obviating the need to scan a large heap. Regions can be memory-safe and could be inferred automatically. Our initial results show that region-based memory management reduces emulated Naiad vertex runtime by 34% for typical data analytics jobs.
3.27 The Functorial Data Model
Ryan Wisnesky (MIT – Cambridge, US)
License Creative Commons BY 3.0 Unported license
© Ryan Wisnesky
Joint work of Wisnesky, Ryan; Spivak, David
We study the data transformation capabilities associated with schemas that are presented by directed multi-graphs and path equations. Unlike most approaches which treat graph-based schemas as abbreviations for relational schemas, we treat graph-based schemas as categories. A schema \( S \) is a finitely-presented category, and the collection of all \( S \)-instances forms a category, \( S \)-inst. A functor \( F \) between schemas \( S \) and \( T \), which can be generated from a visual mapping between graphs, induces three adjoint data migration functors, \( \Sigma_F : S\text{-inst} \rightarrow T\text{-inst} \), \( \Pi_F : S\text{-inst} \rightarrow T\text{-inst} \), and \( \Delta_F : T\text{-inst} \rightarrow S\text{-inst} \). We present an algebraic query language FQL based on these functors, prove that FQL is closed under composition, prove that FQL can be implemented with the select-project-product-union relational algebra (SPCU) extended with a key-generation operation, and prove that SPCU can be implemented with FQL.
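For readers unfamiliar with the notation, \(\Delta_F\) is simply precomposition with \(F\), and the three migration functors form an adjoint triple, with \(\Sigma_F\) and \(\Pi_F\) arising as Kan extensions:

\[
\Delta_F(J) \;=\; J \circ F \quad \text{for } J \in T\text{-inst}, \qquad \Sigma_F \;\dashv\; \Delta_F \;\dashv\; \Pi_F .
\]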
4 Working Groups
The participants expressed a clear preference to avoid splitting into smaller groups to have discussions; instead, on Thursday and Friday there were plenary discussions in the main seminar room.
There are now lots of “programming languages for big data”, exhibiting signs of convergent evolution, with similar primitives (usually starting with some variation on map and reduce operations). Nevertheless, most such languages seem not to be well-informed by principles of programming language design, or at least, these appear to be afterthoughts. One discussion session considered the question whether these efforts are now stable enough that there is a case for a community-led “standard” – drawing inspiration from the lazy functional programming community, which consolidated its effort behind a single language design (Haskell) after a number of exploratory language designs gained momentum in the 1980s and early 1990s.
There was an extensive discussion of what this would mean, with different participants taking different views of what a “standard” would mean and what its benefits would be. One question raised was the community that such a standard would serve – would it serve PL researchers (as a “standard calculus” for language-based work on big data / data-centric computation)? Would it serve system developers (as an API) or users (as a standard surface language)? Another concern raised was that industry tends to view academic work as irrelevant due to limited scale – would this limit the value of a standard language model?
One participant mentioned recent experience with eventual consistency: after an initial burst of enthusiasm, industry seems to be reverting to stronger consistency models and tested higher-level abstractions such as transactions. Thus, it may be premature to try to consolidate effort on language designs/calculi for dealing with big data, as work in this area may still be at an experimental stage and may be at risk of abandonment if its value is not realized soon.
At a more concrete level, participants discussed what kind of standard would be of value for their research. The lambda-calculus was cited as a (widely successful) example of a “standard” formalism that programming languages researchers use as a starting point for understanding and formalizing language design ideas, abstracting away somewhat from the full complexity of implemented languages. By analogy, a calculus that plays a similar role for cloud computing, MapReduce systems, or multicore CPU or GPU code could be valuable (it should be noted that there are already some such proposals). It might be a good idea to take experience from the OpenFlow standard in software-defined networking into account; OpenFlow was established by an industry consortium but has enabled programming languages and systems researchers to work to a common interface. Likewise, formalisms such as the relational calculus/algebra (and formal standards such as SQL) have played a similar role in the database community for decades.
An interesting issue for a proposed “standard model” is that of cost modeling: a calculus or language that attempts to abstract away from the implementation details risks abstracting away the computational costs as well, so there is a tension between abstraction/portability and performance transparency/scalability. A standard model that is operationally transparent would be valuable for parallel or distributed computing (but there was no clear consensus on what this would mean). It would be desirable for such a model to give an explicit account of physical properties or distances between components in the system to avoid cost-opacity. Cellular automata models were mentioned as an example of how to do this but it was argued that they are too low-level. The Delite system was also mentioned as an example providing a set of high-level operators that can be mapped to different execution architectures; it is higher-level than real hardware or systems and needs to be mapped to abstract machines that model the underlying hardware well. A standard formalism might need to handle multiple layers of abstraction (by analogy with relational query optimization with its logical, physical and run-time layers). Something that is “good enough” for typical uses and portable might
be the best tradeoff (analogously to C which is not perfect but represents a workable tradeoff between abstraction and performance).
In addition, there was a short side-discussion about the desirability of benchmarking and diversity clusters for the evaluation of “big data” systems (and language techniques for them). This would aid performance tuning and portability. The Stabilizer system from the University of Massachusetts was mentioned as an example of this. The general topic of reproducibility for computer science/systems research was also mentioned (and it was pointed out that this is currently receiving attention from several quarters).
Community-building
Another topic that was discussed was the need for, and options for, building a community to improve communication among and interaction between communities relevant to the topics of the seminar. There seemed to be consensus that it would be beneficial to encourage community-building in this area. Some participants expressed concern that existing workshops seem to be diminishing in popularity and value, while it is at least possible (sometimes with greater effort) to publish work with (for example) a significant DB component in PL venues or vice-versa. Others expressed the opinion that workshops are no longer as worthwhile and a lighter-weight approach such as Dagstuhl-like events every 2 years or so is preferable. This approach, however, has the disadvantage that it limits participation to those whom the organizers can identify well in advance of the event, so may limit diversity and community growth.
One concrete option that was discussed was the possibility of organizing a new conference (rather than workshop) on “data-centric computing” to encourage work and cross-fertilization between PL and systems/databases/machine learning. The pros and cons of this strategy were discussed. On one hand, it was recognized that this would require buy-in from “big names” / thought leaders (beyond the participants in the Dagstuhl seminar). Another potential challenge was the need to encourage significant industry participation, which could impose constraints on logistics or venues. On the other hand, participants cited recent experience with successful new venues on hot topics, such as the USENIX HotCloud and HotSDN workshops and the ACM Symposium on Cloud Computing, which has grown rapidly into a stand-alone event since its inception in 2010.
Overall, it was recognized that a new venue might be feasible but a strong scientific case (going beyond identifying the shortcomings of existing venues) needs to be made, in terms of increased benefit to participants and better science. One participant (Umut Acar) volunteered to coordinate subsequent discussion of the idea of a new “data-centric computation” conference. Establishing such a new conference may be difficult and so experience with DBPL 2015 may help build the case for this.
DBPL
The final morning of the seminar saw a discussion of the future of DBPL, the International Symposium on Database Programming Languages, which has been running biennially since 1987. Recent occurrences of DBPL in 2011 and 2013 had seen a decrease in submissions and participation compared to previous years. Members of both events' PC chair teams were present, and as of the week of the seminar the status of DBPL in 2015 was unclear. There was some
feeling that DBPL may have run its course, but also that it would be a shame for the series to end when events such as this Dagstuhl seminar showcase such a range of relevant activity. It was felt that this question was largely orthogonal to the question of developing a new conference venue (though a strong showing for DBPL in 2015 might contribute to a case for the “data-centric computation” conference idea).
DBPL had been co-located with VLDB (a major database conference, which seminar participants from the DB community would typically attend) until 2013, and since 2009 took place as a one-day workshop. In 2015, VLDB takes place the same week as ICFP, a major PL conference (and one which a number of seminar participants would normally attend). This clash highlighted a problem with DBPL’s recent role as a “VLDB workshop”: even in years when there is no clash with other events, participants from outside the DB community may find it difficult to justify the time/expense of attending another conference (or of just attending one day of an event they would otherwise not attend).
A number of alternatives were discussed, including the possibility of co-locating DBPL with ICFP in 2015, holding it as a stand-alone event (close in time/space to VLDB or ICFP but not formally affiliated with either), or seeking another co-location option. The possibility of co-locating with SPLASH 2015 (an umbrella PL conference including OOPSLA and several other events) was also raised, but did not seem to generate much enthusiasm at the seminar. An alternative proposal was considered, which attracted considerable support: to try to hold DBPL at both venues, with a video link connecting speakers and audience members at VLDB (in Hawaii) and ICFP (in Vancouver). Although this arrangement was recognized to have disadvantages (e.g. the inability to talk to speakers or other participants informally outside the conference room), participants felt that it offered the most promising route if it could be done. Of approximately 20 participants present in the discussion, a clear majority indicated willingness to either help organize or participate in/submit to DBPL if it were held in 2015.
5 Outcomes
- Umut Acar agreed to coordinate a discussion of the possibility of starting a “data-centric computation” conference.
- James Cheney started a “data-centric programming languages” mailing list, invited Dagstuhl participants to join and subsequently advertised it on relevant mailing lists such as TYPES and DBworld. The list currently has over 120 members.
- Fritz Henglein and Torsten Grust agreed to investigate the possibility of DBPL taking place “virtually” at two locations, with VLDB in Hawaii and ICFP in Vancouver connected by a video link. This turned out to be infeasible due to the high up-front cost of the link.
- Based on a straw poll conducted with Dagstuhl participants it was decided to approach the SPLASH 2015 organizers to see if DBPL could be co-located there. The SPLASH organizers were willing to approve this without going through the formal workshop application process. The two co-chairs are James Cheney and Thomas Neumann and 6 of the 10 PC members were participants in the Dagstuhl seminar.
Participants
- Umut A. Acar
Carnegie Mellon University – Pittsburgh, US
- Yanif Ahmad
Johns Hopkins University – Baltimore, US
- Alexander Alexandrov
TU Berlin, DE
- Carsten Binnig
DHBW – Mannheim, DE
- Giuseppe Castagna
University Paris-Diderot, FR
- James Cheney
University of Edinburgh, GB
- Laurent Daynès
Oracle Corporation, FR
- Nate Foster
Cornell University – Ithaca, US
- Pierre Geneves
INRIA – Grenoble, FR
- Alexey Gotsman
IMDEA Software – Madrid, ES
- Todd J. Green
LogicBlox – Atlanta, US
- Torsten Grust
Universität Tübingen, DE
- Fritz Henglein
University of Copenhagen, DK
- Jan Hidders
TU Delft, NL
- Christoph Koch
EPFL – Lausanne, CH
- Tim Kraska
Brown University, US
- Sam Lindley
University of Edinburgh, GB
- Todd Mytkowicz
Microsoft Corp. – Redmond, US
- Thomas Neumann
TU München, DE
- Frank Neven
Hasselt Univ. – Diepenbeek, BE
- Ryan R. Newton
Indiana University – Bloomington, US
- Kim Nguyen
University Paris-Sud – Gif sur Yvette, FR
- Klaus Ostermann
Universität Tübingen, DE
- Christopher Ré
Stanford University, US
- Tiark Rompf
Purdue University, US
- Andrew Stevenson
Queen’s Univ. – Kingston, CA
- Julia Stoyanovich
Drexel Univ. – Philadelphia, US
- Dan Suciu
University of Washington – Seattle, US
- Jens Teubner
TU Dortmund, DE
- Alexander Ulrich
Universität Tübingen, DE
- Jan Van den Bussche
Hasselt Univ. – Diepenbeek, BE
- Stijn Vansummeren
Université Libre de Bruxelles, BE
- Marcos Vaz Salles
University of Copenhagen, DK
- Jan Vitek
Northeastern University – Boston, US
- Dimitrios Vytiniotis
Microsoft Research UK – Cambridge, GB
- Ryan Wisnesky
MIT – Cambridge, US
Body LayARs
A Toolkit for Body-Based Augmented Reality
Pohl, Henning; Dalsgaard, Tor-Salve; Krasniqi, Vesa; Hornbæk, Kasper
Published in:
VRST ’20: 26th ACM Symposium on Virtual Reality Software and Technology
DOI:
10.1145/3385956.3418946
Publication date:
2020
Document version
Peer reviewed version
ABSTRACT
Technological advances are enabling a new class of augmented reality (AR) applications that use bodies as substrates for input and output. In contrast to sensing and augmenting objects, body-based AR applications track people around the user and layer information on them. However, prototyping such applications is complex, time-consuming, and cumbersome, due to a lack of easily accessible tooling and infrastructure. We present Body LayARs, a toolkit for fast development of body-based AR prototypes. Instead of directly programming for a device, Body LayARs provides an extensible graphical programming environment with a device-independent runtime abstraction. We focus on face-based experiences for headset AR, and show how Body LayARs makes a range of body-based AR applications fast and easy to prototype.
KEYWORDS
Augmented reality, toolkit, body-based augmentation
With Body LayARs, we present an open-source toolkit that facilitates the creation of body-based AR prototypes. Users of the toolkit get access to tracking information on nearby people and can link it with outputs to create applications. The web-based visual programming environment enables quick iteration and experimentation, as well as easy collaboration. We provide a large set of built-in capabilities, but users can also extend the toolkit to add functionality or target new devices. Development in Body LayARs is device independent and allows for quick execution on any connected device that implements the Body LayARs runtime.
Figure 1 shows an example of Body LayARs in use. The application here is meant to support users who have trouble recognizing facial expressions of emotion (often impaired in persons with Alzheimer’s disease [17]). With Body LayARs, developers can easily track faces, get data on the corresponding emotions, and surface this to the user in a convenient way. In this example, emoji sprites are used to adorn each tracked person and provide additional emotion cues to the user.
In summary, our contributions are:
- a description of body-based AR
- a toolkit for easy prototyping of body-based AR apps
- a runtime for the Microsoft HoloLens to run these apps
- a demonstration of the utility and expressiveness of the toolkit through a set of example applications.
2 DESCRIBING BODY-BASED AR
Historically, AR has been focused on object-based interactions, such as in gaming, assembly, or training. In contrast, the body has always played a role in more performative AR experiences¹. Julie Martin's *Dancing in Cyberspace* [24] is an early example, where acrobats interacted on stage with virtual objects. Another example is *DanceSpace* [47], where music and graphics are generated based on dancers' movements. Yet, such experiences have mostly been restricted to instrumented rooms (e.g., with multi-camera coverage for marker tracking). Mobile AR experiences have not generally had information on bodies. Today, with the development of more powerful computer vision methods, bodies are becoming fully available as material to use in AR. In this section, we describe the technologies that body-based AR is building on and provide examples of use.
2.1 Recent Advances in Body Tracking
Tracking is a core requirement for AR; in body-based AR, this means tracking of people. Classic techniques like marker-based tracking [25] are not suitable for this and thus a different set of methods is required. In addition to positional tracking, body-based AR will often also require information on the state, actions, and identity of others as well as users themselves.
Image-based techniques for detecting faces are mature and have been extended to full bodies. This spans from detecting the presence of a single body to full pose tracking of multiple bodies. For example, the OpenPose project enables pose tracking of any number of people [7]. Other examples are HoloPose [15], DensePose [4], VNect [32], PoseNet [35, 36], or SMPL-X [37]. They differ in the fidelity of the tracked skeleton, whether they provide a 2D or 3D skeleton, or a full body mesh, and whether they work with one or more people. Most also provide face tracking.
2.1.1 State. Once bodies are detected, additional properties of them can be inferred. For example, facial expressions can be derived from tracked faces. An AR application might detect if a person in front of the user is smiling, winking, or frowning. Tracking of facial expressions can also be used to infer the emotional state of others. A recent survey by Mehta et al. [31] detailed many techniques for this and demonstrated emotion detection with a HoloLens camera. Similar to identification, emotion can also be recognized based on audio features of speech [6].
Another set of properties can be derived from pose data. For example, such data can be used to determine whether a teacher has a ‘confident’ stance [43]. Some other possibilities are slouching or sitting detection, whether somebody is within an area of interest, or whether two people are sitting close to each other.
2.1.2 Action. Many properties of tracked bodies are time related. For example, dynamic gestures and movements, such as waving, nodding, shaking, and their properties, such as walking speed. Instead of just detecting movements, recent work also has shown the prediction of short-term future motions [21, 51].
There is also active research in detection of higher-level actions from video. For example, HMDB51 [23] and UCF101 [46]—two common video datasets used for this research—contain actions such as eating, brushing teeth, or shaking hands. These actions can already be recognized well from video [26] and, given further improvements, at some point likely will also work in realtime on AR devices.
2.1.3 Identity. With tracked bodies, their identity can be important additional information. An established way to do this is by using facial data. Face detection and tracking comes built into many platforms and is fast enough for realtime processing of camera data. However, people are also identifiable through properties such as their gait [20], voice [42], and overall look [2].
2.2 Body-Based Output
In body-based AR, bodies should not just be tracked, but it should also be possible to have output relative to or added to a body. With pose information, rendering content around a body is straightforward. For example, instead of rendering a model on top of a tracking marker, the model could be rendered above a person’s head.
However, body-based AR also brings in new ways to visualize information in AR. Popular body-based visualizations are face filters (Instagram) and lenses (Snapchat). Both layer visual content on top of and around faces, such as adding animal ears, fake glasses, or artificial makeup. These effects can be dynamic and, for example, also react to head pose and facial expressions. The term 'filter' is also used for image-based effects. For example, the view can be altered to appear like a film noir, or color graded to be more vivid. With machine learning methods, more elaborate manipulations of people's faces in images have become feasible. Examples of this are generated artificial hair colors [38] and makeup [19]. A common goal behind such methods is beautification [9, 27].
¹See http://www.augmentedperformance.com/ for a sample selection.
2.3 Examples of Existing Body-Based AR
There are a number of existing systems within the space of body-based AR. A common use of body-based AR is overlays of anatomical and medical data, such as in physiotherapy education [11, 18]. Similarly, the *AnatOnMe* system demonstrated how such visualization could be used to improve doctor-patient communication [34]. *Labella* is designed to augment and to promote self-discovery of the user’s vagina [3].
Instead of anatomical data, the *LightGuide* system overlays directional information on the user’s body in order to guide them through movements [45]. In *LumiWatch* a graphical user interface is projected on the user’s arm [49]. Visual output that is linked to the body is also enabled by the *MultiFi* system, which extends the screen space of a user’s wearables [14].
Saquib et al. built a presentation system that allows for flexible coupling of AR content to bodies and movement [44]. This allows users to, for example, attach icons to their hand and then gesture to switch between states. Performances, such as guitar playing, can also be augmented with spatially coherent visual effects.
Body-based AR has particular promise where it augments human-human interaction. For example, the *LittleHelper* system supports users with autism during job interviews [50]. One component of this system tracks the face of the interviewer and guides the user back to it, should they be looking away. *Superpower Glass* [8] and *Brain Power System* [28] also aim to support people with autism. Both systems are designed for therapy support and share modes aimed at training emotion recognition. Here the systems detect the emotional state of a person the user is interacting with. This information is then either surfaced to the user or used to quiz them—in either case in order to help them train their own emotion recognition abilities.
2.4 Developing Body-Based AR Applications
There are many tools available to prototype and develop AR applications. On top of the general challenges in prototyping AR, this section shows that support for body-based AR is scarce.
As noted by Ashtari et al. in a recent paper, the entry barriers for AR development are high [5]. Domain experts do not commonly have the skillset to develop AR projects "from scratch." This can be especially daunting in the more technical parts of an AR application. Ashtari et al. cited a participant who remarked that they had no idea how computer vision works and considered those parts a "black box." Unfortunately, this exacerbates the challenges when developing body-based AR, as face and body tracking are commonly not available as out-of-the-box components.
There are several AR prototyping tools aimed at non-developer audiences. For example, the *DART* toolkit was built for designers [29]. Non-programmers were also targeted by Güven and Feiner with MARS [16]. Recently, there has also been more work on phone-based AR prototyping. For example, *ProtoAR* enables creation of prototypes from sketches and captured clay models [33]. However, none of these systems offer the capabilities required for body-based AR.
Table 1 shows an overview of common toolkits for AR development as well as of libraries relevant for body-based AR. While many options exist, none are suitable for fast body-based AR prototyping. For example, many toolkits, such as *ARToolKit* [25], only handle tracking of the camera (e.g., with visual markers). While face tracking is now also a commonly available component, it is regularly restricted to the front camera. This allows for selfie apps, but not for creation of applications that work with the faces of others.
Table 1: Overview of common toolkits for AR development and of libraries relevant for body-based AR.

| Toolkit/API | Availability | Development | Body Tracking | Body-Based Augmentation |
|---|---|---|---|---|
| ARCore | free | compiled code | 1 face† | n/a |
| ARKit | free | compiled code | 1–3 faces† + 1 person | n/a |
| Vuforia | § | compiled code | n/a | n/a |
| Maxst | § | compiled code | n/a | n/a |
| EasyAR | § | compiled code | n/a | n/a |
| ARToolKit | free | compiled code | n/a | n/a |
| Wikitude | § | compiled code + GUI | n/a | n/a |
| Torch | § | GUI | n/a | n/a |
| ZapWorks | § | GUI | 1 face† | face paint |
| HoloJS | free | scripting | n/a | n/a |
| MagicScript | free | scripting | n/a | n/a |
| buildwagon | § | scripting | n/a | n/a |
| DeepAR | § | compiled code + GUI | 1 face† | "face filters, lenses, and masks" |
| Lens Studio | free* | GUI | multiple faces | face lenses |
| Xing | § | compiled code | faces + emotions | n/a |
| Spark AR | free* | GUI | multiple faces + hands | face masks, filters, and "people effects" |
| Face AR | § | compiled code | multiple faces | filters, makeup, lenses, beautification |
| SentiMask | § | compiled code | faces + attributes (e.g., age, beard) | n/a |
| visage | § | compiled code | faces + attributes (gender, age, emotion) | n/a |
| Makeup AR-tist | § | compiled code | faces | virtual makeup |

\* For use in Instagram/Snapchat; does not allow development of stand-alone applications. † Only works with the front camera and hence cannot be used to track others. § We only consider "out-of-the-box" support; developers are commonly able to render custom content.
For body-based AR, an additional third-party face tracking library thus would need to be included—something that is difficult for non-developers. We also see that support for body augmentation is mostly limited to external libraries as well as the filter editors from Instagram and Snapchat. Yet, while Instagram's filter development tool, Spark AR, for example, does allow for visual programming (as well as scripting) of face and body effects, these can only be used in their phone apps. Working with external libraries that bring in advanced face tracking and augmentation features also comes at a price. Instead of fast prototyping in a graphical environment, tying in these libraries requires writing code and working with the system on a comparably low level. Hence, there is a gap in the AR development ecosystem, where no solution allows for fast and easy prototyping of body-based experiences.
2.5 Motivation for Body LayARs
With Body LayARs, we address this gap and present an environment that brings together easy visual programming and body tracking as well as augmentation. By enabling prototyping with a visual programming approach, we cater to domain experts and other people not trained in software development, as there is evidence that visual programming can improve the performance of such users [41]. They can quickly assemble systems from a set of building blocks, yet Body LayARs also allows for extensive scripting and customization. Hence, developers with more advanced expertise are able to leverage it as well. Applications aimed at augmenting interpersonal interactions benefit especially from headset AR. Hence, prototypes developed with Body LayARs can be executed on the Microsoft HoloLens (yet are fundamentally device agnostic). Furthermore, Body LayARs makes information on people available to built-in components, requiring no inclusion of additional libraries and thus allowing easy access to these features.
In the remainder of the paper, we describe the design and capabilities of Body LayARs in detail. We also show a range of examples of simple prototypes that would be complicated or impossible to build with existing prototyping solutions.
3 THE BODY LAYARS TOOLKIT
As we have described earlier, existing environments for AR development do not adequately address the requirements for body-based AR prototyping. Our Body LayARs toolkit is specifically designed to address these shortcomings. Specifically, we designed the toolkit around five goals:
- **Low barrier of use** to enable people without expert knowledge in computer graphics, computer vision, networking, or machine learning to prototype body-based AR experiences. MacIntyre et al. pointed out that "non-technologists" find it hard to build prototypes, "due to both their lack of expertise in areas such as tracking, and to the complicated software development that would be required" [29]. While their software and that of others has made this easier for AR in general, body-based AR still faces similar issues.
- **Fast iteration** to encourage experimentation with minimal delay between changing a project and seeing that change in a running application. In addition to compile time costs, AR prototyping commonly requires an additional deployment step to the target device.
- **Device independence** to enable project development compatible with several different AR devices. The AR landscape is changing rapidly which, as Ashtari et al. noted, “can make end-user developers feel especially left behind and struggle to keep up” [5]. An abstraction from specific hardware can help reduce the complexity users have to deal with.
- **Collaboration** to allow multiple people to work on a prototype at the same time. Collaborative coding tools, such as Collabode [13], have shown the potential of this approach.
- **Extensibility** to allow users to add functionality and share it with others. This is a common goal shared with many other toolkits, and AR prototyping—with a diverse device landscape—can particularly benefit from this.
3.1 Considerations
To achieve fast iteration times and low entry barriers, we opted for a web-based solution where projects are deployed to a host application already running on a target device. Similarly to HoloJS or buildwagon, this eliminates the need for users to have a compiler toolchain installed for the target device. Furthermore, deployment of projects to the target device becomes faster as no restart of the application is needed if projects are executed inside a host application. A browser-based editor also enables easier collaboration with project and code sharing, as well as simultaneous editing.
To further make development device-agnostic, we decided to develop most of Body LayARs in JavaScript and only have a small API to actual devices. Devices then only need to implement a small set of functions (e.g., rendering a model, returning the current user position, finding faces in the scene) to be able to run Body LayARs applications. Using JavaScript for the majority of the code also makes it easier to customize and extend Body LayARs.
3.2 Overview
Users work with the Body LayARs toolkit (see Figure 2 for a conceptual overview) via a web application. The application server holds all project files and enables project management, versioning, editing, and sharing. Prototype development primarily happens within a visual programming editor. In addition to the visual flow-based editing, users can also write JavaScript inside scripting nodes. Assets (such as 3D models or audio files) can be uploaded and then used within a project.
While development happens inside the web application, projects run on AR devices or in a standalone desktop application. In either case, after starting the Body LayARs application on a device, it automatically registers with the webserver. Users can see currently connected devices and their status at the bottom of the project editor. Once ready to test a project, users only need to click on one of the available devices to run their project on it.
To run a project, the webservice transforms the flow-based representation of the project into a JavaScript application package. Each node is translated into a corresponding JavaScript object and the resulting node graph linearized. Assets are stored on the server, and then referenced from the application package so applications
can fetch them later. When starting a project, the host application on the selected device receives this package and runs the contained project. During execution, host devices are asked to call an update function of the packaged application every time a frame is rendered. While an application is executing, users can still make changes in the editor and update the application state. For example, they can change the color of a label that is being shown at runtime.
Because of the differences between AR devices, each device currently requires its own implementation of the host application. For example, there are different SDKs for the Microsoft HoloLens and the Magic Leap. While environments like Unity or the Unreal Engine provide some abstraction, there remain some fundamental architectural differences (such as the HoloLens only running UWP applications). We envision that the future OpenXR standard will soon enable more device-agnostic development.
We built host applications for the Microsoft HoloLens as well as for the Windows desktop. The latter allows for convenient prototyping but is limited in its capabilities due to running on a desktop. However, both implement the full Body LayARs JavaScript API and thus can run the same application bundles.
All parts of Body LayARs are open source.² We hope that this will result in further development of and interest in body-based AR. We are especially keen on widening access to experimentation with body-based AR from AR experts to a broader audience.
### 3.3 Project Server and Editor
The server contains all projects and offers management, editing, and deployment capabilities. This approach allows users to work on their body-based AR projects without setting up a development environment on their own machine. Instead of requiring a powerful development setup, they can work on any device with a browser.
Figure 3 shows the project editor in action. Users can instantiate nodes from a categorized drawer on the left. They can freely move nodes on the main canvas and drag between node attributes to connect them. The canvas can be panned and zoomed, which enables working with larger node layouts than fit on one screen.


²Available at https://github.com/henningpohl/body-based-ar-tk.
³https://socket.io/
General flow control is available through nodes like filter, conditional, or loop. For example, the filter node can be used to reduce a bundle of all detected faces to only the closest one. The conditional node works similarly, but also allows for branching to, for example, show different outputs depending on a currently visible face. For more complex logic or data handling, we provide the script node, which allows users to embed arbitrary JavaScript code into their application. We describe this node in the next section on customizing and extending Body LayARs.
Users can use a set of output nodes to show the results of their applications. A basic example is the sound node, which plays back a sample when triggered. With a label node, floating text can be shown anchored within the scene, while the sprite and model nodes do the same but with a sprite and a full model, respectively. While the anchor used can be static, it can also be dynamically tied to a tracked scene feature, such as a person's head. For display of movement data, we provide a path node, while a bargraph node can be used to put a corresponding visualization into the world.
Finally, we also provide two kinds of debugging nodes. The app debug node enables textual output to an overlay inside a device. The graph debug node, in contrast, only surfaces debug information inside the project editor.
### 3.3.2 Customizing, Extending, and Sharing
Users can modify Body LayARs in multiple ways. First, the script node can be used for operations not supported by the visual programming environment. For example, users could use it to keep a face history (e.g., to trigger output based on when a person was last seen). In a script node, any standard JavaScript language feature and type can be used. We also provide a few custom types specific to AR, such as color, vector, matrix, face, and pose. Additionally, user scripts can make calls to the underlying Body LayARs JavaScript API (see below).
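As an illustration of the face-history idea, the following is a minimal sketch of what a script-node body could look like. The input name `faces`, the `face.id` property, and the `emit` output function are assumptions for illustration, not the toolkit's documented API.

```javascript
// Hypothetical script-node body: keep a per-person history of when each
// face was last seen, and trigger output only when someone reappears.
const lastSeen = {};  // face id -> timestamp, persisted between updates

function onUpdate(faces, now) {
  for (const face of faces) {
    const last = lastSeen[face.id];
    // Trigger output only when a person reappears after 10+ seconds away.
    if (last !== undefined && now - last > 10000) {
      emit({ face: face, label: "Welcome back!" });
    }
    lastSeen[face.id] = now;
  }
}
```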
Second, node groups in a project can be shared by saving them as a named building block. These blocks are available to all other users in an extra menu at the bottom of the node drawer. This makes it easy to share common node combinations but also to share custom logic in script nodes.
Third, all nodes are editable on the server. Nodes consist of at least an interface definition in JSON format and runtime JavaScript code. By editing a node’s interface, users can add, change, or remove connection points available on that node. For example, they might want to add an input to the face recognizer node to be able to activate or deactivate it at runtime. Changes to the runtime code get deployed to devices and can substantially alter the behavior of a node.
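To make the structure concrete, here is a hypothetical sketch of such a node definition; the schema and field names are assumptions for illustration, and the actual format is defined by the toolkit itself.

```javascript
// Hypothetical node definition: a JSON-style interface plus runtime code.
const distanceFilterNode = {
  // Interface definition: the connection points shown in the editor.
  interface: {
    name: "distance filter",
    inputs: [
      { name: "faces", type: "face[]" },
      { name: "maxDistance", type: "number" }
    ],
    outputs: [{ name: "nearby", type: "face[]" }]
  },
  // Runtime code: executed on the device for every update.
  run(inputs) {
    return { nearby: inputs.faces.filter(f => f.distance <= inputs.maxDistance) };
  }
};
```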
More complex nodes also have custom styling and code for the editor. For example, the color node contains a color picker that shows up when the color value is selected. This is also editable by users so they can make improvements to existing widgets as well as add new ones.
Finally, users are able to apply the node editing capabilities to create entirely new nodes from scratch or based on existing nodes. In this way large changes to Body LayARs are possible. Users might want to create a new node from a script they have used or to make a common design easier to build. As with built-in nodes, how these new nodes show up and behave in the editor is also fully customizable.
Nodes on a project server are shared with all users. Changes made by one automatically manifest in everybody's projects. Similarly, if one user adds a new node or saves a building block, this is also available to all. We opted for this open design as we assume users collaborating on prototypes. For this scenario, we value flexibility and collaboration higher than stability.
### 3.4 Body LayARs JavaScript API
While all application logic is handled via nodes and scripts, these have no way of reading inputs or effecting any outputs on their own. To interface the application logic with actual devices, we provide an API layer. The API is designed to be stateful and to mostly work asynchronously. Sounds, models, and labels are identified by handles that are passed to the API. This does not expose the actual objects to the runtime and allows devices to implement the API in a way that suits them best. Any input is received by callback functions that are registered for events, such as user movement or face tracking. In addition to device capabilities, it also provides access to state, such as the current time or frame number.
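A minimal sketch of this handle-and-callback style follows; the function names (loadModel, registerFaceCallback, placeModel) are illustrative assumptions, not the documented Body LayARs API.

```javascript
// Handles identify resources; callbacks deliver tracking events.
loadModel("emoji.glb", function (modelHandle) {
  // The handle is opaque; the device resolves it to its internal object.
  registerFaceCallback(function (face) {
    placeModel(modelHandle, face.position);  // place the model at the face
  });
});
```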
In addition to the Body LayARs API, host applications are also required to provide two extensions to the JavaScript runtime: (1) logging, and (2) networking. For logging, a console.log function needs to be available. This is especially useful when debugging applications. For networking, we require implementations to provide the XMLHttpRequest API. This allows users to move networking code from the browser directly to Body LayARs. Prototypes can use this functionality to make requests to servers to, for example, fetch additional resources at runtime or to save tracking data online.
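For instance, a prototype could log tracking data to a server through the required XMLHttpRequest implementation. A sketch, where the endpoint URL and payload shape are placeholders:

```javascript
// Post a tracking sample to a (hypothetical) logging endpoint.
function logSample(sample) {
  const xhr = new XMLHttpRequest();
  xhr.open("POST", "https://example.com/tracking-log");  // placeholder URL
  xhr.setRequestHeader("Content-Type", "application/json");
  xhr.onload = () => console.log("logged with status " + xhr.status);
  xhr.send(JSON.stringify(sample));
}

logSample({ time: Date.now(), visibleFaces: 2 });
```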
### 3.5 Host Applications
Applications written with Body LayARs are not directly executable on a device. Instead, they require a host application, written for each device, that can run the packaged JavaScript bundle. An important requirement is hence that such applications need to be able to interface with JavaScript code. However, JavaScript engines are available on all relevant platforms for integration into applications. Hence, enabling a device to run Body LayARs applications boils down to implementing the roughly 20 functions that form the JavaScript API.
We built an example host application for the Microsoft HoloLens, which we describe in this section. We chose to focus on AR headsets, in particular the HoloLens, as body-based AR experiences especially benefit from hands-free and see-through kinds of AR. Having to hold a phone in front of them would make testing of many scenarios (e.g., augmenting conversation) awkward and unnatural. However, note that the HoloLens also is not ideal for this purpose and does inhibit eye contact between participants.
We also compile a variant of the application that runs in a local window instead of on the HoloLens (enabling faster local testing). It shares all Body LayARs-relevant code with the HoloLens version, and hence we will not describe it separately.
#### 3.5.1 Microsoft HoloLens
We built our host application for the HoloLens around the UrhoSharp engine. This provides a graphics layer abstraction, asset loading, an audio system, as well as integration of the HoloLens tracking. To run Body LayARs applications,
we embedded the ChakraCore JavaScript engine. The application itself is small and primarily translates calls to the Body LayARs JavaScript API into UrhoSharp calls. For example, when a model is loaded, the resource is fetched from the project server and just added to the engine’s resource cache.
While the HoloLens is fast enough to run all logic and rendering locally, some operations require additional processing capabilities. For example, while we run face tracking on the HoloLens directly, this was not feasible for the more advanced person-centered detection and tracking. We hence offload this work to an external server. Correspondingly, these parts of Body LayARs are device-agnostic and other devices could make use of this.
3.5.2 Remote Services. Our external server provides services for face recognition, emotion classification, and pose tracking. As we run face detection locally, we only need to involve this server if (1) people are present, and (2) face or emotion recognition are actually required. For pose recognition we always need to send whole video frames to the server. We use PoseNet [48] for pose recognition, the Face Recognition library [12] for face recognition, and FER [30] for facial expression recognition, which we use to classify emotional state.
More advanced detection, tracking, and recognition is available; however, we chose a set of models that still allowed for close-to-realtime execution (we do not run the models on every frame from the HoloLens camera). Furthermore, a limitation of the models we used is that they only provide 2D results. Hence, while we estimate the depth of recognized faces and joints, this is less accurate than full 3D model fitting. We will return to this and other limitations below.
3.6 Comparison to Other AR Tools
As we have described earlier, current AR development environments (see Table 1) do not adequately support prototyping of body-based AR. While there are some toolkits that allow for easy prototyping with an editor, such as Wikitude Studio or Torch, these do not track bodies. Google’s ARCore and Apple’s ARKit both support some tracking of faces and poses. However, this only works with the front facing camera of phones and tablets, prohibiting prototyping of applications that augment interaction with others—a core aspect of body-based AR—as well as immersive experiences. Similarly, while Spark AR enables building of some body-based experiences, it can only be used for filters running inside of the Instagram phone app.
Body LayARs is similar to Microsoft's HoloJS and Magic Leap's MagicScript, in that all three are built on top of a JavaScript stack, which enables faster prototyping. Our visual programming environment is a further abstraction on top of this. Furthermore, HoloJS and MagicScript both are comparatively low-level and, for example, require developers to program in WebGL for graphics. Like Body LayARs applications, HoloJS applications can also be deployed quickly, via Spin, to an app running on a target device.
Table 1 also showed several libraries that can be used to add similar functionality as in Body LayARs to applications. However, all these are costly and require expertise in software development.
4 EXAMPLE PROJECTS
To show how Body LayARs enables easy development of body-based AR prototypes, we have created a set of example applications. We have aimed to (1) cover a range of use cases, and (2) demonstrate the development capabilities provided by Body LayARs. For each of these examples, we show the corresponding Body LayARs application as well as captures of these applications running on the HoloLens. Note that, to make the figures more readable, we show the Body LayARs nodes at reduced fidelity.
4.1 Placing Nametags on Recognized People
The first example application demonstrates a basic use of Body LayARs. To help people remember names, this application attaches nametags to them. As shown in Figure 4, this application only requires a few nodes, primarily: (1) a face tracker node to find faces in front of the user, (2) a face recognizer node to associate a name with each face, and (3) a label node that places the name next to each face.
This application could be extended in many ways. For example, additional information for each recognized person could be retrieved from a web service with a script node. The label could then show a combination of name and position, or name and last time that person was met.
4.2 Tracking Student Responses in Classrooms
Our second example prototypes an application for teachers and instructors working in classrooms. When one-on-one engagement is not feasible, they commonly just put problems before the students and ask them to vote on potential answers. While there is tooling support for this activity, it usually requires students to vote on a website using their laptop or mobile, instead of directly responding to the instructor. Traditionally, students could also just raise their hand to show their agreement with an answer.

Figure 5: The project shown here tracks people’s poses and determines how many raised their hand and how many did not. This is combined with some color coding and labels to create a bar chart visualizing the result of this show of hands.
4.3 Emotion Annotation
Our third example shows how body-based AR can help users to better notice the emotional state of people around them. For people that have trouble reading faces, this can be a conversational aid, but it could also be seen as a form of expression. In the project shown in Figure 8, a face tracker works in combination with an emotion detector to infer emotional state from faces in view of the user. Here, we are only interested in sad faces, which we detect with a string comparison against the dominant detected emotion. If a person is found to be sad, a model is added to hover just above their face. In this case, the project includes the model of a cartoon-like cloud that is colored in a dark gray.
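The underlying check is simple; expressed as script-node code it might look like the following sketch, where the property and function names are illustrative assumptions rather than the toolkit's actual node outputs.

```javascript
function onFaceUpdate(face) {
  // Compare the dominant detected emotion against the one of interest.
  if (face.dominantEmotion === "sad") {
    // Hover a dark-gray cloud model just above the tracked face.
    showModel("cloud", { anchor: face.head, offset: [0, 0.3, 0] });
  }
}
```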
4.4 Visualizing Self-Tracking Data
The earlier examples all demonstrate prototypes that work with other people. To show that Body LayARs can also be used to work with one’s own body and movements we included this fourth example (shown in Figure 6). This application aggregates the user’s movement through space and visualizes it using a path.
This example shows the most reduced instance of self-tracking, and we can envision multiple ways in which it could be extended to prototype more intricate applications. Instead of showing movement as a path, the aggregating function could instead discretize location data and build a spatial heatmap for a room.
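For example, a script node could bin positions into grid cells. A sketch, in which the 0.5 m cell size and the data shapes are assumptions:

```javascript
const CELL = 0.5;    // cell edge length in meters
const heatmap = {};  // "ix,iz" -> visit count

function onUserMoved(position) {  // position: { x, y, z } in meters
  const key = Math.floor(position.x / CELL) + "," + Math.floor(position.z / CELL);
  heatmap[key] = (heatmap[key] || 0) + 1;
}
```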
4.5 Sonification
Finally, with our fifth example we demonstrate that Body LayARs can also be used to build non-visual applications. The project shown in Figure 7 again tracks people around the user, but this time uses the positional information to anchor sounds in the scene. Every two seconds, the sound effects are triggered.
Instead of playing sound samples, this could be extended by recognizing people and then speaking out their names with a speak text node. With an additional script, it could be detected when a person first appears and instead of continuously sounding out their names, they would only be announced once. While emotion can also be gleaned from voice, additional sonification of visual information could aid blind people in conversations.
Figure 6: This project shows how a script node can be used to aggregate incoming data into more complex structures. Here, a path is assembled from the movement data of the user and shown in the world.
Figure 7: In this project, every two seconds a sound is played at the location of every person around the user.
However, keeping track of students and getting a decent tally of the room can be challenging, especially in large rooms or when differences in voting behavior are small. Yet, while it is hard for people to keep track of a large number of people, this is not the case for computers. Figure 5 shows a Body LayARs prototype that works with a small group of people. Their poses are tracked, and the instances where hands are raised or lowered are counted up respectively. These counts are then visualized in a bar graph—color coded and labeled.
5 LIMITATIONS
Body LayARs has a number of limitations on the editor and runtime side as well as in the HoloLens application. While the editor and runtime allow for extension and customization, the set of nodes available out of the box is still limited. We believe that the set we provide allows for an interesting range of initial explorations, but it will likely need to be extended for more complex kinds of prototypes.
Use of JavaScript allows for faster iteration (nothing needs to be compiled) and broad accessibility. However, the lack of static typing also means that runtime behavior of prototypes can be more fragile. Once it has become clearer what kind of functionality is required and which features are superfluous for body-based AR, it would be sensible to put more constraints on the development.
We have already mentioned above that the tracking capabilities on the device are currently limited. This is to strike a balance between the desire for realtime performance and the fidelity of tracking. For example, more advanced face tracking on the HoloLens is possible (as demonstrated by the HoloFace project [22]), yet comes at additional computational cost. As we do not have full 3D data available on tracked faces and bodies, some augmentation can be expressed in Body LayARs, yet not rendered by the HoloLens application. For example, the API allows users to specify that they would like to add an eye shadow to a tracked face, yet this is not currently rendered. A host application for future, more powerful, AR headsets could then enable this kind of augmentation.
While Body LayARs supports playback of spatial audio, the audio functionality in general is comparably limited. For example, in addition to tracking people around oneself using cameras, it should also be possible to do the same with microphones. Yet, while some headsets come with microphone arrays built-in, we have found access to these to be too restricted. Hence, Body LayARs currently does not include any capability for using audio as an input.
In Body LayARs we currently also do not address the privacy issues that body-based AR is fraught with. As shown by Acquisti et al., face recognition in an AR context can easily be abused [1]. Hence, while body-based AR can be beneficial for users (e.g., by aiding them in social situations), the cost of it can be felt more by other people. Social acceptability will depend on negotiating a balance between privacy and utility. As Body LayARs is a prototyping tool, we took a non-restrictive approach. However, we do limit all recognition to people who are explicitly added to the system.
Figure 8: In this example people’s emotions are inferred and a dark cloud is rendered above people found to be sad.
6 CONCLUSION
Advances in computer vision are enabling a new kind of AR experience: body-based AR, where the augmentation is focused on adding to interaction with people. Prototyping this kind of AR experience, however, is currently complicated, effectively limiting who can explore the space of body-based AR. To alleviate this issue, we have presented the open source Body LayARs toolkit, which enables users to rapidly prototype body-based AR experiences. Body LayARs provides a graphical flow-based programming environment, but also allows users to deeply customize and extend the toolkit. Where the graphical programming is not sufficient, users can integrate chunks of JavaScript directly or use JavaScript to write entirely new components. With a set of example applications, we have shown that Body LayARs enables easy augmentation of interactions based on identity, emotional state, and movement of oneself and others. We have kept these simple but have outlined throughout the paper how the capabilities of Body LayARs can support more complex developments. For example, as networking is available, users can tie body-based AR prototypes to more powerful web backends. And as a full JavaScript engine forms the underlying execution environment, users are free to pull in any of the plethora of JavaScript modules available from others.
We believe that body-based AR offers exciting possibilities. With Body LayARs many experiences can be prototyped today. But ongoing development in the underlying technologies will open up more possibilities in the future. In particular, we can expect full 3D face and pose tracking to mature and AR devices to incorporate hardware for faster execution of neural network models. A benefit of developing with Body LayARs is that, because these applications build on higher level abstractions, improvements in the underlying tracking will directly benefit existing projects. Furthermore, these technological advances will enable more complex applications, such as visually augmenting others' bodies in realtime or tracking larger groups (such as in classrooms).
ACKNOWLEDGMENTS
This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (grant agreement 648785).
REFERENCES
|
{"Source-Url": "https://static-curis.ku.dk/portal/files/251025719/AR_Toolkit.pdf", "len_cl100k_base": 9328, "olmocr-version": "0.1.41", "pdf-total-pages": 12, "total-fallback-pages": 0, "total-input-tokens": 38207, "total-output-tokens": 13057, "length": "2e13", "weborganizer": {"__label__adult": 0.0007648468017578125, "__label__art_design": 0.002689361572265625, "__label__crime_law": 0.00051116943359375, "__label__education_jobs": 0.001819610595703125, "__label__entertainment": 0.0003066062927246094, "__label__fashion_beauty": 0.0004897117614746094, "__label__finance_business": 0.0002906322479248047, "__label__food_dining": 0.0005049705505371094, "__label__games": 0.0020160675048828125, "__label__hardware": 0.00421905517578125, "__label__health": 0.002162933349609375, "__label__history": 0.0006685256958007812, "__label__home_hobbies": 0.00015974044799804688, "__label__industrial": 0.0005841255187988281, "__label__literature": 0.0004558563232421875, "__label__politics": 0.00032138824462890625, "__label__religion": 0.0008029937744140625, "__label__science_tech": 0.30126953125, "__label__social_life": 0.00017917156219482422, "__label__software": 0.0171966552734375, "__label__software_dev": 0.66064453125, "__label__sports_fitness": 0.00055694580078125, "__label__transportation": 0.000743865966796875, "__label__travel": 0.00029659271240234375}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 53719, 0.03879]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 53719, 0.40417]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 53719, 0.90247]], "google_gemma-3-12b-it_contains_pii": [[0, 621, false], [621, 1507, null], [1507, 7942, null], [7942, 14054, null], [14054, 20445, null], [20445, 23389, null], [23389, 30149, null], [30149, 35339, null], [35339, 38576, null], [38576, 43623, null], [43623, 51045, null], [51045, 53719, null]], "google_gemma-3-12b-it_is_public_document": [[0, 621, true], [621, 1507, null], [1507, 7942, null], [7942, 14054, null], [14054, 20445, null], [20445, 23389, null], [23389, 30149, null], [30149, 35339, null], [35339, 38576, null], [38576, 43623, null], [43623, 51045, null], [51045, 53719, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 53719, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 53719, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 53719, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 53719, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 53719, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 53719, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 53719, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 53719, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 53719, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 53719, null]], "pdf_page_numbers": [[0, 621, 1], [621, 1507, 2], [1507, 7942, 3], [7942, 14054, 4], [14054, 20445, 5], [20445, 23389, 6], [23389, 30149, 7], [30149, 35339, 8], [35339, 38576, 9], [38576, 43623, 10], [43623, 51045, 11], [51045, 53719, 12]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 53719, 0.11]]}
|
olmocr_science_pdfs
|
2024-11-21
|
2024-11-21
|
36779f8b2ecc540723b5ebf410ef807b9f5fd22b
|
Chapter 2
Using R
This chapter covers the essentials for using R to explore data interactively. Section 2.1 covers basic access to an R session. Users interact with R through a single language for both data analysis and programming (Section 2.3, page 19). The key concepts are function calls in the language and the objects created and used by those calls (2.4, 24), two concepts that recur throughout the book. The huge body of available software is organized around packages that can be attached to the session, once they are installed (2.5, 25). The system itself can be downloaded and installed from repositories on the Web (2.6, 29); there are also a number of resources on the Web for information about R (2.7, 31).
Lastly, we examine aspects of R that may raise difficulties for some new users (2.8, 34).
2.1 Starting R
R runs on the commonly used platforms for personal computing: Windows®, Mac OS X®, Linux, and some versions of UNIX®. In the usual desktop environments for these platforms, users will typically start R as they would most applications, by clicking on the R icon or on the R file in a folder of applications.
An application will then appear looking much like other applications on the platform: for example, a window and associated toolbar. In the
standard version, at least on most platforms, the application is called the "R Console".
The application has a number of drop-down menus; some are typical of most applications ("File", "Edit", and "Help"). Others such as "Packages" are special to R. The real action in running R, however, is not with the menus but in the console window itself. Here the user is expected to type input to R in the form of expressions; the program underlying the application responds by doing some computation and if appropriate by displaying a version of the results for the user to look at (printed results normally in the same console window, graphics typically in another window).
This interaction between user and system continues, and constitutes an R session. The session is the fundamental user interface to R. The following section describes the logic behind it. A session has a simple model for user interaction, but one that is fundamentally different from users’ most common experience with personal computers (in applications such as word processors, Web browsers, or audio/video systems). First-time users may feel abandoned, left to flounder on their own with little guidance about what to do and even less help when they do something wrong. More guidance is available than may be obvious, but such users are not entirely wrong in their
reaction. After intervening sections present the essential concepts involved in using R, Section 2.8, page 34 revisits this question.
2.2 An Interactive Session
Everything that you do interactively with R happens in a session. A session starts when you start up R, typically as described above. A session can also be started from other special interfaces or from a command shell (the original design), without changing the fundamental concept and with the basic appearance remaining as shown in this section and in the rest of the book. Some other interfaces arise in customizing the session, on page 17.
During an R session, you (the user) provide expressions for evaluation by R, for the purpose of doing any sort of computation, displaying results, and creating objects for further use. The session ends when you decide to quit from R.
All the expressions evaluated in the session are just that: general expressions in R's version of the S language. Documentation may mention "commands" in R, but the term just refers to a complete expression that you type interactively or otherwise hand to R for evaluation. There's only one language, used for either interactive data analysis or for programming, and described in Section 2.3. Later sections in the book come back to examine it in more detail, especially in Chapter 3.
The R evaluator displays a prompt, and the user responds by typing a line of text. Printed output from the evaluation and other messages appear following the input line.
Examples in the book will be displayed in this form, with the default prompts preceding the user’s input:
```r
> quantile(Declination)
0% 25% 50% 75% 100%
-27.98 -11.25 8.56 17.46 27.30
```
The "> " at the beginning of the example is the (default) prompt string. In this example the user responded with
```r
quantile(Declination)
```
The evaluator will keep prompting until the input can be interpreted as a complete expression; if the user had left off the closing ")", the evaluator would have prompted for more input. Since the input here is a complete expression, the system evaluated it. To be pedantic, it parsed the input text
and evaluated the resulting object. The evaluation in this case amounts to calling a function named quantile.
The printed output may suggest a table, and that’s intentional. But in fact nothing special happened; the standard action by the evaluator is to print the object that is the value of the expression. All evaluated expressions are objects; the printed output corresponds to the object; specifically, the form of printed output is determined by the kind of object, by its class (technically, through a method selected for that class). The call to quantile() returned a numeric vector, that is, an object of class "numeric". A method was selected based on this class, and the method was called to print the result shown. The quantile() function expects a vector of numbers as its argument; with just this one argument it returns a numeric vector containing the minimum, maximum, median and quartiles.
The method for printing numeric vectors prints the values in the vector, five of them in this case. Numeric objects can optionally have a names attribute; if they do, the method prints the names as labels above the numbers. So the "0%" and so on are part of the object. The designer of the quantile() function helpfully chose a names attribute for the result that makes it easier to interpret when printed.
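To see that the labels really are part of the object, one can construct a similar named vector by hand; this is a small sketch reusing the numbers printed above.

```r
> x <- c(-27.98, -11.25, 8.56, 17.46, 27.30)
> names(x) <- c("0%", "25%", "50%", "75%", "100%")
> x
    0%    25%    50%    75%   100%
-27.98 -11.25   8.56  17.46  27.30
```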
All these details are unimportant if you’re just calling quantile() to summarize some data, but the important general concept is this: Objects are the center of computations in R, along with the function calls that create and use those objects. The duality of objects and function calls will recur in many of our discussions.
Computing with existing software hinges largely on using and creating objects, via the large number of available functions. Programming, that is, creating new software, starts with the simple creation of function objects. More ambitious projects often use a paradigm of creating new classes of objects, along with new or modified functions and methods that link the functions and classes. In all the details of programming, the fundamental duality of objects and functions remains an underlying concept.
Essentially all expressions are evaluated as function calls, but the language includes some forms that don’t look like function calls. Included are the usual operators, such as arithmetic, discussed on page 21. Another useful operator is `?`, which looks up R help for the topic that follows the question mark. To learn about the function quantile():
```
> ?quantile
```
In standard GUI interfaces, the documentation will appear in a separate window, and can be generated from a pull-down menu as well as from the
`?` operator.
Graphical displays provide some of the most powerful techniques in data analysis, and functions for data visualization and other graphics are an essential part of R:
```r
> plot(Date, Declination)
```
Here the user typed another expression, `plot(Date, Declination)`, in this case producing a scatter plot as a side effect, but no printed output. The graphics during an interactive session typically appear in one or more separate windows created by the GUI, in this example a window using the native `quartz()` graphics device for Mac OS X. Graphic output can also be produced in a form suitable for inclusion in a document, such as output in a general file format (PDF or postscript, for example). Computations for graphics are discussed in more detail in Chapter 7.
The sequence of expression and evaluation shown in the examples is essentially all there is to an interactive session. The user supplies expressions and the system evaluates them, one after another. Expressions that produce simple summaries or plots are usually done to see something, either graphics or printed output. Aside from such immediate gratification, most expressions are there in order to assign objects, which can then be used in later computations:
```r
> fitK <- gam(Kyphosis ~ s(Age, 4) + Number, family = binomial)
```
Evaluating this expression calls the function `gam()` and assigns the value of the call, associating that object with the name `fitK`. For the rest of the
session, unless some other assignment to this name is carried out, `fitK` can be used in any expression to refer to that object; for example, `coef(fitK)` would call a function to extract some coefficients from `fitK` (which is in this example a fitted model).
Assignments are a powerful and interesting part of the language. The basic idea is all we need for now, and is in any case the key concept: Assignment associates an object with a name. The term "associates" has a specific meaning here. Whenever any expression is evaluated, the context of the evaluation includes a local *environment*, and it is into this environment that the object is assigned, under the corresponding name. The object and name are associated in the environment, by the assignment operation. From then on, the name can be used as a *reference* to the object in the environment. When the assignment takes place at the "top level" (in an input expression in the session), the environment involved is the *global* environment. The global environment is part of the current session, and all objects assigned there remain available for further computations in the session.
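A short transcript makes the association concrete (the name `x` here is arbitrary):

```r
> x <- 1:5                    # assign: associate the object with "x"
> exists("x")                 # the name is now bound in the global environment
[1] TRUE
> environmentName(globalenv())
[1] "R_GlobalEnv"
```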
Environments are an important part of programming with R. They are also tricky to deal with, because they behave differently from other objects. Discussion of environments continues in Section 2.4, page 24.
A session ends when the user quits from R, either by evaluating the expression \texttt{q()} or by some other mechanism provided by the user interface. Before ending the session, the system offers the user a chance to save all the objects in the global environment at the end of the session:
\begin{verbatim}
> q()
Save workspace image? [y/n/c]: y
\end{verbatim}
If the user answers yes, then when a new session is started in the same working directory, the global environment will be restored. Technically, the environment is restored, not the session. Some actions you took in the session, such as attaching packages or using \texttt{options()}, may not be restored, if they don’t correspond to objects in the global environment.
Unfortunately, your session may end involuntarily: the evaluator may be forced to terminate the session or some outside event may kill the process. R tries to save the workspace even when fatal errors occur in low-level C or Fortran computations, and such disasters should be rare in the core R computations and in well-tested packages. But to be truly safe, you should explicitly back up important results to a file if they will be difficult to recreate. See documentation for functions \texttt{save()} and \texttt{dump()} for suitable techniques.
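As a hedged sketch of that advice, the fitted model from the earlier example could be backed up in either binary or source form (the file names here are only illustrative):

```r
## Back up an important result explicitly (fitK is the fitted model assigned
## earlier; the file names are illustrative).
save(fitK, file = "fitK.rda")   # binary image of the object; restore with load("fitK.rda")
dump("fitK", file = "fitK.R")   # text (R source) form; restore with source("fitK.R")
```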
Customizing the R session
As you become a more involved user of R, you may want to customize your interaction with it to suit your personal preferences or the goals motivating your applications. The nature of the system lends itself to a great variety of options from the most general to trivial details.
At the most general is the choice of user interface. So far, we have assumed you will start R as you would start other applications on your computer, say by clicking on the R icon.
A second approach, available on any system providing both R and a command shell, is to invoke R as a shell command. In its early history, S in all its forms was typically started as a program from an interactive shell. Before multi-window user interfaces, the shell would be running on an interactive terminal of some sort, or even on the machine’s main console. Nowadays, shells or terminal applications run in their own windows, either supported directly by the platform or indirectly through a client window system, such as those based on X11. Invoking R from a shell allows some flexibility that may not be provided directly by the application (such as running with a C-level debugger). Online documentation from a shell command is printed text by default, which is not as convenient as a browser interface. To initiate a browser interface to the help facility, see the documentation for help.start().
A third approach, somewhat in between the first two, is to use a GUI based on another application or language, potentially one that runs on multiple platforms. The most actively supported example of this approach is ESS, a general set of interface tools in the emacs editor. ESS stands for Emacs Speaks Statistics, and the project supports other statistical systems as well as R; see ess.r-project.org. For those who love emacs as a general computational environment, ESS provides a variety of GUI-like features, plus a user-interface programmability characteristic of emacs. The use of a GUI based on a platform-independent user interface has advantages for those who need to work regularly on more than one operating system.
Finally, an R session can be run in a non-interactive form, usually invoked in a batch mode from a command shell, with its input taken from a file or other source. R can also be invoked from within another application, as part of an inter-system interface.
In all these situations, the logic of the R session remains essentially the same as shown earlier (the major exception being a few computations in R that behave differently in a non-interactive session).
Encoding of text
A major advance in R’s world view came with the adoption of multiple locales, using information available to the R session that defines the user’s preferred encoding of text and other options related to the human language and geographic location. R follows some evolving standards in this area. Many of those standards apply to C software, and therefore they fit fairly smoothly into R.
Normally, default locales will have been set when R was installed that reflect local language and other conventions in your area. See Section 8.1, page 293, and ?locales for some concepts and techniques related to locales. The specifications use standard but somewhat unintuitive terminology; unless you have a particular need to alter behavior for parsing text, sorting character data, or other specialized computations, caution suggests sticking with the default behavior.
Options during evaluation
R offers mechanisms to control aspects of evaluation in the session. The function options() is used to share general-purpose values among functions. Typical options include the width of printed output, the prompt string shown by the parser, and the default device for graphics. The options() mechanism maintains a named list of values that persist through the session; functions use those values, by extracting the relevant option via getOption():
```r
> getOption("digits")
[1] 7
```
In this case, the value is meant to be used to control the number of digits in printing numerical data. A user, or in fact any function, can change this value, by using the same name as an argument to options():
```r
> 1.234567890
[1] 1.234568
> options(digits = 4)
> 1.234567890
[1] 1.235
```
For the standard options, see ?options; however, a call to options() can be used by any computation to set values that are then used by any other computation. Any argument name is legal and will cause the corresponding option to be communicated among functions.
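As an illustration of that flexibility, here is a sketch using a made-up option name; nothing about the name is special, it simply becomes another entry in the options list:

```r
## Any name can be used as an option, so a package or a user can stash a
## shared value this way (the option name "myPkg.digits" is hypothetical).
options(myPkg.digits = 3)
getOption("myPkg.digits")          # 3, available to any later computation
getOption("nonexistent.option")    # NULL when an option has never been set
```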
Options can be set from the beginning of the session; see \texttt{Startup}. However, saving a workspace image does not cause the options in effect to be saved and restored. Although the \texttt{options()} mechanism does use an \texttt{R} object, \texttt{.Options}, the internal C code implementing \texttt{options()} takes the object from the base package, not from the usual way of finding objects. The code also enforces some constraints on what's legal for particular options; for example, "digits" is interpreted as a single integer, which is not allowed to be too small or too large, according to values compiled into \texttt{R}.
The use of \texttt{options()} is convenient and even necessary for the evaluator to behave intelligently and to allow user customization of a session. Writing functions that depend on options, however, reduces our ability to understand these functions' behavior, because they now depend on external, changeable values. The behavior of code that depends on an option may be altered by any other function called at any earlier time during the session, if the other function calls \texttt{options()}. Most \texttt{R} programming should be \textit{functional programming}, in the sense that each function call performs a well-defined computation depending only on the arguments to that call. The \texttt{options()} mechanism, and other dependencies on external data that can change during the session, compromise functional programming. It may be worth the danger, but think carefully about it. See page 47 for more on the programming implications, and for an example of the dangers.
\section*{2.3 The Language}
This section and the next describe the interactive language as you need to use it during a session. But as noted on page 13, there is no interactive language, only the one language used for interaction and for programming. To use \texttt{R} interactively, you basically need to understand two things: functions and objects. That same duality, functions and objects, runs through everything in \texttt{R} from an interactive session to designing large-scale software. For interaction, the key concepts are function calls and assignments of objects, dealt with in this section and in section 2.4 respectively. The language also has facilities for iteration and testing (page 22), but you can often avoid interactive use of these, largely because \texttt{R} function calls operate on, and return, whole objects.
\subsection*{Function Calls}
As noted in Section 2.2, the essential computation in \texttt{R} is the evaluation of a call to a function. Function calls in their ordinary form consist of
the function's name followed by a parenthesized argument list; that is, a sequence of arguments separated by commas.
```
plot(Date, Declination)
glm(Survived ~ .)
```
Arguments in function calls can be any expression. Each function has a set of formal arguments, to which the actual arguments in the call are matched. As far as the language itself is concerned, a call can supply any subset of the complete argument list. For this purpose, argument expressions can optionally be named, to associate them with a particular argument of the function:
```
jitter(y, amount = .1 * rse)
```
The second argument in the call above is explicitly matched to the formal argument named `amount`. To find the argument names and other information about the function, request the online documentation. A user interface to R or a Web browser gives the most convenient access to documentation, with documentation listed by package and within package by topic, including individual functions by name. Documentation can also be requested in the language, for example:
```
> ?jitter
```
This will produce some display of documentation for the topic "jitter", including in the case of a function an outline of the calling sequence and a discussion of individual arguments. If there is no documentation, or you don't quite believe it, you can find the formal argument names from the function object itself:
```
> formalArgs(jitter)
[1] "x" "factor" "amount"
```
Behind this, and behind most techniques involving functions, is the simple fact that `jitter` and all functions are objects in R. The function name is a reference to the corresponding object. So to see what a function does, just type its name with no argument list following.
```
> jitter
function (x, factor = 1, amount = NULL)
{
if (length(x) == 0)
return(x)
if (!is.numeric(x))
stop("'x' must be numeric")
etc.
```
The printed version is another R expression, meaning that you can input such an expression to define a function. At which point, you are programming in R. See Chapter 3. The first section of that chapter should get you started.
In principle, the function preceding the parenthesized arguments can be specified by any expression that returns a function object, but in practice functions are nearly always specified by name.
**Operators**
Function calls can also appear as operator expressions in the usual scientific notation.
```
y - mean(y)
weight > 0
x < 100 | is.na(date)
```
The usual operators are defined for arithmetic, comparisons, and logical operations (see Chapter 6). But operators in R are not built-in; in fact, they are just special syntax for certain function calls. The first line in the example above computes the same result as:
```
`-`(y, mean(y))
```
The notation `` `-` `` is an example of what are called backtick quotes in R. These quotes make the evaluator treat an arbitrary string of characters as if it were a name in the language. The evaluator responds to the names `y` or `mean` by looking for an object of that name in the current environment. Similarly, `` `-` `` causes the evaluator to look for an object named `-`. Whenever we refer to operators in this book, we use backtick quotes to emphasize that this is the name of a function object, not treated as intrinsically different from the name `mean`.
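A small consequence, useful to keep in mind: because an operator is an ordinary function object, it can be called directly or passed to other functions (a minimal illustration):

```r
## An operator is just a function object, so it can be called by name
## or supplied as an argument to another function.
`+`(1, 2)                  # 3, the same as 1 + 2
sapply(1:3, `+`, 10)       # 11 12 13: the operator passed to sapply()
```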
Functions to extract components or slots from objects are also provided in operator form:
```
mars$Date
classDef@package
```
And the expressions for extracting subsets or elements from objects are also actually just specialized function calls. The expression
```
y[i]
```
is recognized in the language and evaluated as a call to the function `` `[` ``, which extracts a subset of the object in its first argument, with the subset defined by the remaining arguments. The expression `y[i]` is equivalent to:
```
`[`(y, i)
```
You could enter the second form perfectly legally. Similarly, the function `` `[[` `` extracts a single element from an object, and is normally presented as an operator expression:
```
mars[["Date"]]
```
You will encounter a few other operators in the language. Frequently useful for elementary data manipulation is the `` `:` `` operator, which produces a sequence of integers between its two arguments:
```
1:length(x)
```
Other operators include `` `~` ``, used in specifying models, `%%` for modulus, `%*%` for matrix multiplication, and a number of others.
New operators can be created and recognized as infix operators by the parser. The last two operators mentioned above are examples of the general convention in the language that interprets
```
%text%
```
as the name of an operator, for any *text* string. If it suits the style of computation, you can define any function of two arguments and give it, say, the name `%d%`. Then an expression such as
```
x %d% y
```
will be evaluated as the call:
```
`%d%`(x, y)
```
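A minimal sketch of such a definition (the name `%d%` and its meaning, absolute difference, are chosen purely for illustration):

```r
## A user-defined infix operator: any two-argument function whose name
## has the form %text% can be used in infix position.
`%d%` <- function(x, y) abs(x - y)
3 %d% 10        # 7
10 %d% 3        # 7, evaluated as `%d%`(10, 3)
```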
**Iteration: A quick introduction**
The language used by R has the iteration and conditional expressions typical of a C-style language, but for the most part you can avoid typing all but the simplest versions interactively. The following is a brief guide to using and avoiding iterative expressions.
The workhorse of iteration is the \texttt{for} loop. It has the form:
\texttt{for( var in seq) expr}
where \textit{var} is a name and \textit{seq} is a vector of values. The loop assigns each element of \textit{seq} to \textit{var} in sequence and then evaluates the arbitrary expression \textit{expr} each time. When you use the loop interactively, you need to either show something each time (printed or graphics) or else assign the result somewhere; otherwise, you won’t get any benefit from the computation. For example, the function \texttt{plot()} has several “types” of x-y plots (points, lines, both, etc.). To repeat a plot with different types, one can use a \texttt{for()} loop over the codes for the types:
\begin{verbatim}
> par(ask=TRUE)
> for(what in c("p","l","b")) plot(Date, Declination, type = what)
\end{verbatim}
The call to \texttt{par()} caused the graphics to pause between plots, so we get to see each plot, rather than having the first two flash by. The variables \texttt{Date} and \texttt{Declination} come from some data on the planet Mars, in a data frame object, \texttt{mars} (see Section 6.5, page 176). If we wanted to see the class of each of the 17 variables in that data frame, another \texttt{for()} loop would do it:
\begin{verbatim}
for(j in names(mars)) print(class(mars[,j]))
\end{verbatim}
But this will just print 17 lines of output, which we’ll need to relate to the variable names. Not much use.
Here’s where an alternative to iteration is usually better. The workhorse of these is the function \texttt{sapply()}. It applies a function to each element of the object it gets as its first argument, so:
\begin{verbatim}
> sapply(mars,class)
Year X Year.1 Month
"integer" "logical" "integer" "integer"
Day Day..adj. Hour Min
\end{verbatim}
etc.
The function tries to simplify the result, and is intelligent enough to include the names as an attribute. See \texttt{?sapply} for more details, and the “See Also” section of that documentation for other similar functions.
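The first argument passed as the function can equally be an anonymous function; for example, a sketch counting the missing values in each variable of `mars`:

```r
## Apply an anonymous function to each variable of the mars data frame
## (assumes the mars object from the earlier examples).
sapply(mars, function(x) sum(is.na(x)))
```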
The language has other iteration operators (\texttt{while()} and \texttt{repeat}), and the usual conditional operators (\texttt{if ... else}). These are all useful in programming and discussed in Chapter 3. By the time you need to use them in a non-trivial way interactively, in fact, you should consider turning your computation into a function, so Chapter 3 is indeed the place to look; see Section 3.4, page 58, in particular, for more detail about the language.
2.4 Objects and Names
A motto in discussion of the S language has for many years been: everything is an object. You will have a potentially very large number of objects available in your R session, including functions, datasets, and many other classes of objects. In ordinary computations you will create new objects or modify existing ones.
As in any computing language, the ability to construct and modify objects relies on a way to refer to the objects. In R, the fundamental reference to an object is a name. This is an essential concept for programming with R that arises throughout the book and in nearly any serious programming project.
The basic concept is once again the key thing to keep in mind: references to objects are a way for different computations in the language to refer to the same object; in particular, to make changes to that object. In the S language, references to ordinary objects are only through names. And not just names in an abstract, global sense. An object reference must be a name in a particular R environment. Typically, the reference is established initially either by an assignment or as an argument in a function call.
Assignment is the obvious case, as in the example on page 15:
> fitK <- gam(Kyphosis ~ s(Age, 4) + Number, family = binomial)
Assignment creates a reference, the name "fitK", to some object. That reference is in some environment. For now, just think of environments as tables that R maintains, in which objects can be assigned names. When an assignment takes place in the top-level of the R session, the current environment is what’s called the global environment. That environment is maintained throughout the current session, and optionally can be saved and restored between sessions.
Assignments appear inside function definitions as well. These assignments take place during a call to the function. They do not use the global environment, fortunately. If they did, every assignment to the name "x" would overwrite the same reference. Instead, assignments during function calls use an environment specially created for that call. So another reason that functions are so central to programming with R is that they protect users from accidentally overwriting objects in the middle of a computation.
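A small demonstration of that protection (not from the text, but using only what has been introduced): an assignment inside a function call leaves an object of the same name in the global environment untouched.

```r
## The assignment to x inside f() uses the environment of that call only,
## so the global x is not overwritten.
x <- 100
f <- function() {
    x <- 1      # local to this call
    x + 1
}
f()             # 2
x               # still 100
```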
The objects available during an interactive R session depend on what packages are attached; technically, they depend on the nested environments through which the evaluator searches, when given a name, to find a corresponding object. See Section 5.3, page 121, for the details of the search.
2.5 Functions and Packages
In addition to the software that comes with any copy of R, there are many thousands of functions available to be used in an R session, along with a correspondingly large amount of other related software. Nearly all of the important R software comes in the form of packages that make the software easily available and usable. This section discusses the implications of using different packages in your R session. For much more detail, see Chapter 4, but that is written more from the view of writing or extending a package. You will get there, I hope, as your own programming efforts take shape. The topic here, though, is how best to use other people's efforts that have been incorporated in packages.
The process leading from needing some computational tool to having it available in your R session has three stages: finding the software, typically in a package; installing the package; and attaching the package to the session.
The last step is the one you will do most often, so let's begin by assuming that you know which package you need and that the required package has been installed with your local copy of R. See Section 2.5, page 26, for finding and installing the relevant package.
You can tell whether the package is attached by looking for it in the printed result of `search()`; alternatively, you can look for a particular object with the function `find()`, which returns the names of all the attached packages that contain the object. Suppose we want to call the function `dotplot()`, for example.
```r
> find("dotplot")
character(0)
```
No attached package has an object of this name. If we happen to know that the function is in the package named `lattice`, we can make that package available for the current session. A call to the function `library()` requests this:
```r
library(lattice)
```
The function is `library()` rather than `package()` only because the original S software called them libraries. Notice also that the package name was given without quotes. The `library()` function, and a similar function `require()`, do some nonstandard evaluation that takes unquoted names. That's another historical quirk that saves users from typing a couple of quote characters.
If a package of the name "lattice" has been installed for this version of R, the call will attach the package to the session, making its functions and other objects available:
> library(lattice)
> find("dotplot")
[1] "package:lattice"
By "available", we mean that the evaluator will find an object belonging to the package when an expression uses the corresponding name. If the user types dotplot(Declination) now, the evaluator will normally find the appropriate function. To see why the quibbling "normally" was added, we need to say more precisely what happens to find a function object.
The evaluator looks first in the global environment for a function of this name, then in each of the attached packages, in the order shown by search(). The evaluator will generally stop searching when it finds an object of the desired name, dotplot, Declination, or whatever. If two attached packages have functions of the same name, one of them will "mask" the object in the other (the evaluator will warn of such conflicts, usually, when a package is attached with conflicting names). In this case, the result returned by find() would show two or more packages.
For example, the function gam() exists in two packages, gam and mgcv. If both were attached:
> find("gam")
[1] "package:gam" "package:mgcv"
A simple call to gam() will get the version in package gam; the version in package mgcv is now masked.
R has some mechanisms designed to get around such conflicts, at least as far as possible. The language has an operator, `::`, to specify that an object should come from a particular package. So mgcv::gam and gam::gam refer unambiguously to the versions in the two packages. The masked version of gam() could be called by:
> fitK <- mgcv::gam(Kyphosis ~ s(Age, 4) + etc.
Clearly one doesn’t want to type such expressions very often, and they only help if one is aware of the ambiguity. For the details and for other approaches, particularly when you’re programming your own packages, see Section 5.3, page 121.
Finding and installing packages
Finding the right software is usually the hardest part. There are thousands of packages and smaller collections of R software in the world. Section 2.7, page 31, discusses ways to search for information; as a start, CRAN, the
central repository for R software, has a large collection of packages itself, plus further links to other sources for R software. Extended browsing is recommended, to develop a general feel for what’s available. CRAN supports searching with the Google search engine, as do some of the other major collections.
Use the search engine on the Web site to look for relevant terms. This may take some iteration, particularly if you don’t have a good guess for the actual name of the function. Browse through the search output, looking for a relevant entry, and figure out the name of the package that contains the relevant function or other software.
Finding something which is not in these collections may take more ingenuity. General Web search techniques often help: combine the term "R" with whatever words describe your needs in a search query. The e-mail lists associated with R will usually show up in such a search, but you can also browse or search explicitly in the archives of the lists. Start from the R home page, r-project.org, and follow the link for "Mailing Lists".
On page 15, we showed a computation using the function `gam()`, which fits a generalized additive model to data. This function is not part of the basic R software. Before being able to do this computation, we need to find and install some software. The search engine at the CRAN site will help out, if given either the function name "gam" or the term "generalized additive models". The search engine on the site tends to give either many hits or no relevant hits; in this case, it turns out there are many hits and in fact two packages with a `gam()` function. As an example, suppose we decide to install the gam package.
There are two choices at this point, in order to get and install the package(s) in question: a binary or a source copy of the package. Usually, installing from binary is the easy approach, assuming a binary version is available from the repository. Binary versions are currently available from CRAN only for Windows and Mac OS X platforms, and may or may not be available from other sources. Otherwise, or if you prefer to install from source, the procedure is to download a copy of the source archive for the package and apply the "INSTALL" command. From an R session, the function `install.packages()` can do part or all of the process, again depending on the package, the repository, and your particular platform. The R GUI may also have a menu-driven equivalent for these procedures: Look for an item in the tool bar about installing packages.
First, here is the function `install.packages()`, as applied on a Mac OS X platform. To obtain the gam package, for example:
install.packages("gam")
The function will then invoke software to access a CRAN site, download the packages requested, and attempt to install them on the same R system you are currently using. The actual download is an archive file whose name concatenates the name of the package and its current version; in our example, "gam_0.98.tgz".
Installing from inside a session has the advantage of implicitly specifying some of the information that you might otherwise need to provide, such as the version of R and the platform. Optional arguments control where to put the installed packages, whether to use source or binary and other details.
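As a sketch of those optional arguments (the repository URL and library path below are only illustrative):

```r
## Install into a private library from an explicitly chosen mirror;
## the URL and path are illustrative, not prescribed by the text.
install.packages("gam",
                 repos = "https://cloud.r-project.org",
                 lib   = "~/R/library")
```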
As another alternative, you can obtain the download file from a Web browser, and run the installation process from the command shell. If you aren't already at the CRAN Web site, select that item in the navigation frame, choose a mirror site near you, and go there.
Select "Packages" from the CRAN Web page, and scroll or search in the list of packages to reach a package you want (it's a very long list, so searching for the exact name of the package may be required). Selecting the relevant package takes you to a page with a brief description of the package. For the package gam at the time this is written:
```
gam: Generalized Additive Models
Functions for fitting and working with generalized additive models, as described in chapter 7 of "Statistical Models in S" (Chambers and Hastie (eds), 1991), and "Generalized Additive Models" (Hastie and Tibshirani, 1990).
Version: 0.98
Depends: R (>= 2.0), stats, splines
Suggests: akima
Date: 2006-07-11
Author: Trevor Hastie
Maintainer: Trevor Hastie
License: GPL2.0
Downloads:
Package source: gam_0.98.tar.gz
MacOS X binary: gam_0.98.tgz
Windows binary: gam_0.98.zip
Index of contents: gam.INDEX
Reference manual: gam.pdf
```
At this stage, you can access the documentation or download one of the proffered versions of the package. Or, after studying the information, you could revert to the previous approach and use `install.packages()`. If you do work from one of the source or binary archives, you need to apply the shell-style command to install the package. Having downloaded the source archive for package gam, the command would be:
R CMD INSTALL gam_0.98.tar.gz
The `INSTALL` utility is used to install packages that we write ourselves as well, so detailed discussion appears in Chapter 4.
**The package for this book**
In order to follow the examples and suggested computations in the book, you should install the `SoDA` package. It is available from CRAN by any of the mechanisms shown above. In addition to the many references to this package in the book itself, it will be a likely source for new ideas, enhancements, and corrections related to the book.
### 2.6 Getting R
R is an open-source system, in particular a system licensed under the *GNU Public license*. That license requires that the source code for the system be freely available. The current source implementing R can be obtained over the Web. This open definition of the system is a key support when we are concerned with trustworthy software, as is the case with all similar open-source systems.
Relatively simple use of R, and first steps in programming with R, on the other hand, don’t require all the resources that would be needed to create your local version of the system starting from the source. You may already have a version of R on your computer or network. If not, or if you want a more recent version, binary copies of R can be obtained for the commonly used platforms, from the same repository. It’s easier to start with binary, although as your own programming becomes more advanced you may need more of the source-related resources anyway.
The starting point for obtaining the software is the central R Web site, r-project.org. You can go there to get the essential information about R. Treat that as the up-to-date authority, not only for the software itself but also for detailed information about R (more on that on page 31).
The main Web site points you to a variety of pages and other sites for various purposes. To obtain R, one goes to the CRAN repository, and from there to either "R Binaries" or "R Sources". Downloading software may involve large transfers over the Web, so you are encouraged to spread the load. In particular, you should select from a list of mirror sites, preferably picking one geographically near your own location. When we talk about the
CRAN site from now on, we mean whichever one of the mirror sites you have chosen.
R is actively maintained for three platforms: Windows, Mac OS X, and Linux. For these platforms, current versions of the system can be obtained from CRAN in a form that can be directly installed, usually by a standard installation process for that platform. For Windows, one obtains an executable setup program (a ".exe" file); for Mac OS X, a disk image (a ".dmg" file) containing the installer for the application. The Linux situation is a little less straightforward, because the different flavors of Linux differ in details when installing R. The Linux branch of "R Binaries" branches again according to the flavors of Linux supported, and sometimes again within these branches according to the version of this flavor. The strategy is to keep drilling down through the directories, selecting at each stage the directory that corresponds to your setup, until you finally arrive at a directory that contains appropriate files (usually ".rpm" files) for the supported versions of R.
Note that for at least one flavor of Linux (Debian), R has been made a part of the platform. You can obtain R directly from the Debian Web site. Look for Debian packages named "r-base", and other names starting with "r-". If you’re adept at loading packages into Debian, working from this direction may be the simplest approach. However, if the version of Debian is older than the latest stable version of R, you may miss out on some later improvements and bug fixes unless you get R from CRAN.
For any platform, you will eventually download a file (".exe", "dmg", ".rpm", or other), and then install that file according to the suitable ritual for this platform. Installation may require you to have some administration privileges on the machine, as would be true for most software installations. (If installing software at all is a new experience for you, it may be time to seek out a more experienced friend.) Depending on the platform, you may have a choice of versions of R, but it’s unlikely you want anything other than the most recent stable version, the one with the highest version number. The platform’s operating system will also have versions, and you generally need to download a file asserted to work with the version of the operating system you are running. (There may not be any such file if you have an old version of the operating system, or else you may have to settle for a comparably ancient version of R.) And just to add further choices, on some platforms you need to choose from different hardware (for example, 32-bit versus 64-bit architecture). If you don’t know which choice applies, that may be another indication that you should seek expert advice.
Once the binary distribution has been downloaded and installed, you should have direct access to R in the appropriate mechanism for your plat-
form.
Installing from source
Should you? For most users of R, not if they can avoid it, because they will likely learn more about programming than they need to or want to. For readers of this book, on the other hand, many of these details will be relevant when you start to seriously create or modify software. Getting the source, even if you choose not to install it, may help you to study and understand key computations.
The instructions for getting and for installing R from source are contained in the online manual, *R Installation and Administration*, available from the Documentation link at the r-project.org Web site.
2.7 Online Information About R
Information for users is in various ways both a strength and a problem with open-source, cooperative enterprises like R. At the bottom, there is always the source, the software itself. By definition, no software that is not open to study of all the source code can be as available for deep study. In this sense, only open-source software can hope to fully satisfy the *Prime Directive* by offering unlimited examination of what is actually being computed.
But on a more mundane level, some open-source systems have a reputation for favoring technical discussions aimed at the insider over user-oriented documentation. Fortunately, as the R community has grown, an increasing effort has gone into producing and organizing information. Users who have puzzled out answers to practical questions have increasingly fed back the results into publicly available information sources.
Most of the important information sources can be tracked down starting at the main R Web page, r-project.org. Go there for the latest pointers. Here is a list of some of the key resources, followed by some comments about them.
**Manuals:** The R distribution comes with a set of manuals, also available at the Web site. There are currently six manuals: *An Introduction to R*, *Writing R Extensions*, *R Data Import/Export*, *The R Language Definition*, *R Installation and Administration*, and *R Internals*. Each is available in several formats, notably as Web-browsable HTML documents.
Help files: R itself comes with files that document all the functions and other objects intended for public use, as well as documentation files on other topics (for example, `?Startup`, discussing how an R session starts).
All contributed packages should likewise come with files documenting their publicly usable functions. The quality control tools in R largely enforce this for packages on CRAN.
Help files form the database used to respond to the help requests from an R session, either in response to the Help menu item or through the `?` operator or `help()` function typed by the user.
The direct requests in these forms only access terms explicitly labeling the help files; typically, the names of the functions and a few other general terms for documentation (these are called aliases in discussions of R documentation). For example, to get help on a function in this way, you must know the name of the function exactly. See the next item for alternatives.
Searching: R has a search mechanism for its help files that generalizes the terms available beyond the aliases somewhat and introduces some additional searching flexibility. See `?help.search` for details.
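For example, a search by topic rather than by exact name might look like this (a simple sketch):

```r
## Search help-file metadata by topic rather than exact alias.
help.search("additive models")   # lists documentation whose metadata matches
??"additive models"              # the usual shorthand for help.search()
```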
The `r-project.org` site has a pointer to a general search of the files on the central site, currently using the Google search engine. This produces much more general searches. Documentation files are typically displayed in their raw, \LaTeX-like form, but once you learn a bit about this, you can usually figure out which topic in which package you need to look at.
And, beyond the official site itself, you can always apply your favorite Web search to files generally. Using "R" as a term in the search pattern will usually generate appropriate entries, but it may be difficult to avoid plenty of inappropriate ones as well.
The Wiki: Another potentially useful source of information about R is the site `wiki.r-project.org`, where users can contribute documentation. As with other open Wiki sites, this comes with no guarantee of accuracy and is only as good as the contributions the community provides. But it has the key advantage of openness, meaning that in some “statistical” sense it reflects what R users understand, or at least that subset of the users sufficiently vocal and opinionated to submit to the Wiki.
The strength of this information source is that it may include material that users find relevant but that developers ignore for whatever reason (too trivial, something users would never do, etc.). Some Wiki sites have sufficient support from their user community that they can function as the main information source on their topic. As of this writing, the R Wiki has not reached that stage, so it should be used as a supplement to other information sources, and not the primary source, but it’s a valuable resource nevertheless.
The mailing lists: There are a number of e-mail lists associated officially with the R project (officially in the sense of having a pointer from the R Web page, r-project.org, and being monitored by members of R core). The two most frequently relevant lists for programming with R are r-help, which deals with general user questions, and r-devel, which deals generally with more “advanced” questions, including future directions for R and programming issues.
As well as a way to ask specific questions, the mailing lists are valuable archives for past discussions. See the various search mechanisms pointed to from the mailing list Web page, itself accessible as the Mailing lists pointer on the r-project.org site. As usual with technical mailing lists, you may need patience to wade through some long tirades and you should also be careful not to believe all the assertions made by contributors, but often the lists will provide a variety of views and possible approaches.
Journals: The electronic journal R News is the newsletter of the R Foundation, and a good source for specific tutorial help on topics related to R, among other R-related information. See the Newsletter pointer on the cran.r-project.org Web site.
The Journal of Statistical Software is also an electronic journal; its coverage is more general as its name suggests, but many of the articles are relevant to programming with R. See the Web site jstatsoft.org.
A number of print journals also have occasional articles of direct or indirect relevance, for example, Journal of Computational and Graphical Statistics and Computational Statistics and Data Analysis.
2.8 What’s Hard About Using R?
This chapter has outlined the computations involved in using R. An R session consists of expressions provided by the user, typically typed into an R console window. The system evaluates these expressions, usually either showing the user results (printed or graphic output) or assigning the result as an object. Most expressions take the form of calls to functions, of which there are many thousands available, most of them in R packages available on the Web.
This style of computing combines features found in various other languages and systems, including command shells and programming languages. The combination of a functional style with user-level interaction—expecting the user to supply functional expressions interactively—is less common. Beginning users react in many ways, influenced by their previous experience, their expectations, and the tasks they need to carry out. Most readers of this book have selected themselves for more than a first encounter with the software, and so will mostly not have had an extremely negative reaction. Examining some of the complaints may be useful, however, to understand how the software we create might respond (and the extent to which we can respond). Our mission of supporting effective exploration of data obliges us to try.
The computational style of an R session is extremely general, and other aspects of the system reinforce that generality, as illustrated by many of the topics in this book (the general treatment of objects and the facilities for interacting with other systems, for example). In response to this generality, thousands of functions have been written for many techniques. This diversity has been cited as a strength of the system, as indeed it is. But for some users exactly this computational style and diversity present barriers to using the system.
Requiring the user to compose expressions is very different from the mode of interaction users have with typical applications in current computing. Applications such as searching the Web, viewing documents, or playing audio and video files all present interfaces emphasizing selection-and-response rather than composing by the user. The user selects each step in the computation, usually from a menu, and then responds to the options presented by the software as a result. When the user does have to compose (that is, to type) it is typically to fill in specific information such as a Web site, file or optional feature desired. The eventual action taken, which might be operationally equivalent to evaluating an expression in R, is effectively defined by the user’s interactive path through menus, forms and other specialized tools in the interface. Based on the principles espoused
in this book, particularly the need for trustworthy software, we might object to a selection-and-response approach to serious analysis, because the ability to justify or reproduce the analysis is much reduced. However, most non-technical computing is done by selection and response.
Even for more technical applications, such as producing documents or using a database system, the user’s input tends to be relatively free form. Modern document-generating systems typically format text according to selected styles chosen by the user, rather than requiring the user to express controls explicitly. These differences are accentuated when the expressions required of the R user take the form of a functional, algebraic language rather than free-form input.
This mismatch between requirements for using R and the user’s experience with other systems contributes to some common complaints. How does one start, with only a general feeling of the statistical goals or the “results” wanted? The system itself seems quite unhelpful at this stage. Failures are likely, and the response to them also seems unhelpful (being told of a syntax error or some detailed error in a specific function doesn’t suggest what to do next). Worse yet, computations that don’t fail may not produce any directly useful results, and how can one decide whether this was the “right” computation?
Such disjunctions between user expectations and the way R works become more likely as the use of R spreads. From the most general view, there is no “solution”. Computing is being viewed differently by two groups of people, prospective users on one hand, and the people who created the S language, R and the statistical software extending R on the other hand.
The S language was designed by research statisticians, initially to be used primarily by themselves and their colleagues for statistical research and data analysis. (See the Appendix, page 475.) A language suited for this group to communicate their ideas (that is, to “program”) is certain to be pitched at a level of abstraction and generality that omits much detail necessary for users with less mathematical backgrounds. The increased use of R and the growth in software written using it bring it to the notice of such potential users far more than was the case in the early history of S.
In addition to questions of expressing the analysis, simply choosing an analysis is often part of the difficulty. Statistical data analysis is far from a routine exercise, and software still does not encapsulate all the expertise needed to choose an appropriate analysis. Creating such expert software has been a recurring goal, pursued most actively perhaps in the 1980s, but it must be said that the goal remains far off.
So to a considerable extent the response to such user difficulties must
include the admission that the software implemented in R is not directly suited to all possible users. That said, information resources such as those described earlier in this chapter are making much progress in easing the user’s path. And, those who have come far enough into the R world to be reading this book can make substantial contributions to bringing good data analysis tools to such users.
1. Specialized selection-and-response interfaces can be designed when the data analysis techniques can be captured with the limited input provided by menus and forms.
2. Interfaces to R from a system already supporting the application is another way to provide a limited access expressed in a form familiar to the user of that system. We don’t describe such interfaces explicitly in this book, but see Chapter 12 for some related discussion.
3. Both educational efforts and better software tools can make the use of R seem more friendly. More assistance is available than users may realize; see for example the suggestions in Section 3.5. And there is room for improvement: providing more information in a readable format for the beginning user would be a valuable contribution.
4. Last but far from least in potential value, those who have reached a certain level of skill in applying data analysis to particular application areas can ease their colleagues’ task by documentation and by providing specialized software, usually in the form of an R package. Reading a description in familiar terminology and organized in a natural structure for the application greatly eases the first steps. A number of such packages exist on CRAN and elsewhere.
XtremeRAT - When Unicode Breaks
Harri Sylvander
XtremeRAT – WHEN UNICODE BREAKS
GIAC GREM Gold Certification
Author: Harri Sylvander, harri@sylvander.net
Advisor: Richard Carbone
Accepted: March XX, 2015
Abstract
XtremeRAT is a commonly abused remote administration tool that is prevalent in the Middle East; prevalent to the degree that it is not uncommon to find at least one active RAT in a network on any given incident response engagement. The tool is readily available to anyone with a desire to build one on their own. Availability means that the RAT is being employed for nefarious purposes by adversaries ranging from those who do not fully comprehend the consequences of their actions, to advanced threat actors that care less about legal aspects and more about the objectives of their respective missions. One of the tools provided by XtremeRAT to aid in achieving these goals is a built-in Unicode keylogging capability; however, there are situations when the logging fails, resulting in incomprehensible keylogs. The data, or parts thereof, that are captured in these logs can still be recovered, and it is vital to the defender to understand what data has potentially been stolen. The objective of this paper is to shed light on the challenges posed in extracting useful information from the logs when non-Latin character sets, specifically Arabic, are used, and to publish an author-developed tool that can aid in decoding the broken parts of extracted keylogs.
1. An Introduction to the RAT’s Nest
The past few years have been turbulent in the Middle East and North Africa. This turbulence manifested itself as the “Arab Spring”, which began in December 2010 in Tunisia, and spread through many of the nations in the region. The ensuing conflicts have played a part in many cyber realm attacks as well.
Regional conflicts, whether Arab-Israeli, Shi’a-Sunni sectarian violence, or other ideological or political differences (e.g. Western, and specifically US influence in the region), are at the heart of many attacks in this region. The largest portion, 45%, of the attacks observed in Middle East and North Africa (MENA) are hacktivism related, followed by cybercrime at 40%, and a significant 15% is associated with cyber-warfare type attacks including espionage. (Hamid, T., 2013)
Specially crafted malware, used by both sides in the ongoing high-tech, invisible war, is the exception to the norm. When these kinds of highly specialized tools are employed, and ultimately uncovered, they tend to become high-profile events that occupy headlines. Such events are not created by the less skilled regional attackers; those attackers are, however, quick to exploit the newsworthiness of the events in social engineering attacks, tricking unsuspecting users into opening exploit-laden documents or executing malicious programs that purport to contain information relevant to the current event.
A successful social engineering attack of the kind described above will often result in one of the region's most prevalent RATs (DarkComet, njRAT, or XtremeRAT) being installed on the victim's system. (FireEye, 2014) All of the aforementioned RATs are publicly available and customizable by a potential attacker.
XtremeRAT has been used by various groups and against diverse targets in the Middle East and abroad. (Villeneuve, N., Moran, N. & Haq, T., 2013) (Villeneuve, N., 2012) While some of these targets have been diverse enough to make it difficult to establish, at least with any level of certainty, the intent and goals of the attackers, there is
also evidence that some campaigns have specifically targeted Syrian anti-government groups. (Hyppönen, M., 2012)
RATs tend to be associated with some level of targeted activity due to the additional level of effort required by the attacker to control individual systems. However, in early 2014, FireEye released an article indicating that the majority of XtremeRAT activity is related to traditional cybercrime. There, it is used to distribute spam, which ultimately leads to the download of a more traditional banking Trojan such as Zeus. (Villeneuve, N. & Bennet, J. T., 2014)
Whether in the hands of hacktivists, cybercriminals, or threat groups with political motives, there are plenty of tools to choose from. Using commodity tools may offer a level of protection for more advanced attackers by allowing them to blend in with other actors.
1.1. Arabic Localization
Basic Arabic consists of 28 letters. In everyday use, Arabic is written omitting short vowels, and only long vowels are explicitly written. This removes the need for typing diacritics above or below the preceding consonant in a syllable; however, subtle changes in vowels can change the meaning of the word, which means the reader must have a fair understanding of the language. While this may sound confusing and may render the language incomprehensible, consider the oft-used abbreviations of words in messaging using mobile devices: “txt”, “msg”, and others. The concept is more or less the same, and while a single word may be ambiguous, context often removes that ambiguity. (Smart, J. & Altorfer, F., 2010)
In addition, there is no distinction between upper and lower case characters, so if one wishes, all required diacritics can be generated by applying modifier keys. This means that standard keyboards are more than capable of accommodating the required character set for producing standard Arabic text.
There are differences in layouts that need to be taken into account when considering mappings of keys to the resulting character being represented. Depending on the locale and the exact physical layout of the keyboard, mappings can vary fairly significantly, e.g. Microsoft has three defined Arabic keyboard layouts: Arabic (101), Arabic (102), and Arabic (102) AZERTY. (Microsoft Developer Network [MSDN], n.d.-a) The proof of concept code included in “Appendix A: xtrat_log_fixer.rb - Fixing Broken Keylogs” is created for one such layout that is commonly used in Arab countries.
To establish which languages and keyboard layouts are available for a user profile on a system, an analyst can look for configured Input Locales in the registry. An Input Locale defines the input language and how that language is being entered into the system. All available keyboard layouts on a system are defined by the registry key “HKLM\SYSTEM\ControlSet001\Control\Keyboard Layouts”, but for the purpose of identifying potential languages configured by users, focus needs to be shifted to user registry hives. The registry key, “HKCU\Keyboard Layout\Preload”, shown in Figure 2 defines available locales.
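For readers who want to inspect an acquired hive outside of a Windows environment, the short sketch below enumerates the “Preload” values with the third-party python-registry library. The hive filename and the library choice are illustrative assumptions, not part of the original tooling.

```python
# Sketch: list configured Input Locales from an offline NTUSER.DAT hive.
# Assumes the third-party python-registry package (pip install python-registry).
from Registry import Registry

hive = Registry.Registry("NTUSER.DAT")            # path to the extracted user hive
preload = hive.open("Keyboard Layout\\Preload")

for value in preload.values():
    # Each value's data is a string such as "00000401" (Arabic 101).
    print(value.name(), value.value())
```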
If a configured locale does not have its own unique keyboard layout or the system is configured to use a keyboard layout other than the default one, a mapping of the Locale ID (LCID) to keyboard layout is stored in “HKCU\Keyboard Layout\Substitutes”. This is the case for all Arabic locales except for Arabic_Saudi_Arabia (LCID “0x0401”), which defaults to the Arabic 101 keyboard layout, represented by the hexadecimal value “0x00000401”. For example, if Arabic_UAE (LCID “0x3801”) is configured, the data “0x00003801” is stored in one of “Preload” key’s values to represent this fact. Since there is no unique keyboard layout for the locale – it uses the same Arabic 101 layout as Arabic_Saudi_Arabia – the configured keyboard layout will be mapped in the “Substitutes” key, as shown in Figure 3. The “Substitutes” key contains the value “0x00003801”, and the data, “0x00000401”, of that value represents the configured keyboard layout.
The data contained in the values of the “Preload” key are hexadecimal values that need to be interpreted to reveal the actual configured languages. The interpretation is possible by mapping LCIDs to their respective Locales, e.g. Arabic_Saudi_Arabia, using a table available on MSDN (MSDN, n.d.-b). Table 1 below is an excerpt from that MSDN table containing the Arabic LCID:Input Locale combinations.
<table>
<thead>
<tr>
<th>Locale</th>
<th>LCIDHex</th>
<th>Valid Locale ID:InputLocale combinations</th>
<th>Language Collection</th>
</tr>
</thead>
<tbody>
<tr>
<td>Arabic_Saudi_Arabia</td>
<td>0401</td>
<td>0409:00000409, 0401:00000401</td>
<td>Complex Script</td>
</tr>
<tr>
<td>Arabic_Iraq</td>
<td>0801</td>
<td>0409:00000409, 0801:00000401</td>
<td>Complex Script</td>
</tr>
<tr>
<td>Arabic_Egypt</td>
<td>0c01</td>
<td>0409:00000409, 0c01:00000401</td>
<td>Complex Script</td>
</tr>
<tr>
<td>Arabic_Libya</td>
<td>1001</td>
<td>040c:0000040c, 1001:00020401</td>
<td>Complex Script</td>
</tr>
<tr>
<td>Arabic_Algeria</td>
<td>1401</td>
<td>040c:0000040c, 1401:00020401</td>
<td>Complex Script</td>
</tr>
<tr>
<td>Arabic_Morocco</td>
<td>1801</td>
<td>040c:0000040c, 1801:00020401</td>
<td>Complex Script</td>
</tr>
<tr>
<td>Arabic_Tunisia</td>
<td>1c01</td>
<td>040c:0000040c, 1c01:00020401</td>
<td>Complex Script</td>
</tr>
<tr>
<td>Arabic_Oman</td>
<td>2001</td>
<td>0409:00000409, 2001:00000401</td>
<td>Complex Script</td>
</tr>
<tr>
<td>Arabic_Yemen</td>
<td>2401</td>
<td>0409:00000409, 2401:00000401</td>
<td>Complex Script</td>
</tr>
<tr>
<td>Arabic_Syria</td>
<td>2801</td>
<td>0409:00000409, 2801:00000401</td>
<td>Complex Script</td>
</tr>
<tr>
<td>Arabic_Jordan</td>
<td>2c01</td>
<td>0409:00000409, 2c01:00000401</td>
<td>Complex Script</td>
</tr>
<tr>
<td>Arabic_Lebanon</td>
<td>3001</td>
<td>0409:00000409, 3001:00000401</td>
<td>Complex Script</td>
</tr>
<tr>
<td>Arabic_Kuwait</td>
<td>3401</td>
<td>0409:00000409, 3401:00000401</td>
<td>Complex Script</td>
</tr>
<tr>
<td>Arabic_UAE</td>
<td>3801</td>
<td>0409:00000409, 3801:00000401</td>
<td>Complex Script</td>
</tr>
<tr>
<td>Arabic_Bahrain</td>
<td>3c01</td>
<td>0409:00000409, 3c01:00000401</td>
<td>Complex Script</td>
</tr>
<tr>
<td>Arabic_Qatar</td>
<td>4001</td>
<td>0409:00000409, 4001:00000401</td>
<td>Complex Script</td>
</tr>
</tbody>
</table>
“0409:00000409” is the Locale ID:Input Locale representation for US English, which is used as a default Input Locale in non-English locales.
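Translating the extracted values into something human readable is then a simple lookup. The sketch below hard-codes a handful of the LCIDs from Table 1 purely for illustration; a complete tool would carry the full MSDN table.

```python
# Minimal LCID-to-locale lookup built from a few rows of Table 1 (MSDN).
LCID_NAMES = {
    0x0401: "Arabic_Saudi_Arabia",
    0x0801: "Arabic_Iraq",
    0x0C01: "Arabic_Egypt",
    0x2801: "Arabic_Syria",
    0x3801: "Arabic_UAE",
    0x0409: "English_United_States",
}

def locale_from_preload(data: str) -> str:
    """Interpret a Preload value such as '00003801' -> LCID 0x3801."""
    lcid = int(data, 16) & 0xFFFF      # the low word holds the Locale ID
    return LCID_NAMES.get(lcid, f"unknown LCID {lcid:#06x}")

print(locale_from_preload("00003801"))   # Arabic_UAE
print(locale_from_preload("00000409"))   # English_United_States
```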
By extracting the LCIDs, an analyst can determine what input languages were available on a system at the time of acquisition. One quick and convenient way to do this is using Metasploit’s Rex Module. Rex::Registry removes any dependencies on the Windows API, which makes processing registry hives using Ruby feasible across multiple platforms. (Perry, B., 2012)
Having the LCID information available is essential to properly decode some of the data logged by XtremeRAT in specific circumstances; these specifics will be discussed in the next section.
2. Dissecting the RAT
2.1. XtremeRAT’s capabilities
XtremeRAT is a versatile piece of code. The versions that were analyzed for this document are 3.6 and 3.7. The former version was included due to the availability of its source code, and the latter to make sure that the latest functionality, at the time of writing, was covered.
As with many remote administration tools (RAT), XtremeRAT provides basic capabilities such as executing programs and uploading and downloading files; however, these are far from a complete list, as can be seen in the screenshots shown below in Figure 4 and Figure 5.
Most of the functions and server options are self-explanatory and do not warrant an in-depth analysis. However, the one component that is of special interest, even if it is not unique to XtremeRAT in any way, is the keylogger.
As can be seen in the drop-down menu screenshots presented above, the client portion of the RAT provides the capability of searching keylogs for specific keywords, downloading keylogger logs, and browsing keylogger files on the server. Keylogging capabilities can be configured in the RAT during the build phase, as shown in Figure 6.
2.2. XtremeRAT’s keylogger
2.2.1. Decoding and analyzing
XtremeRAT stores the keylog data in a trivially decodable format. Nart Villeneuve and James Bennett have documented some of the deficiencies in the encryption employed by XtremeRAT on FireEye’s blog. (Villeneuve, N. & Bennet, J. T., 2014) Furthermore, Bennett has released tools to decrypt known variants’ configuration files as well as keylogger logs. The tools are available for download from GitHub (https://github.com/fireeye/tools/tree/master/malware/Xtreme%20RAT).
Examining a keylog file created by XtremeRAT in a hex editor or some other viewer that outputs the hexadecimal representation of the data contained therein, it becomes clear that the encryption scheme used is not very complex, as seen in Figure 7.
The null-bytes, i.e. hexadecimal value “0x00”, that can be seen repeating for every second byte in the keylog file presented above, indicate the possibility of Unicode data being logged and possibly a one or two byte XOR scheme being used.
Any character that can be represented with a single byte will have a null-byte as the second byte in its Unicode representation. Performing a XOR operation on this null-byte using a key X will result in the same value, X, immediately exposing the key. Clearly this has not been done, or the second byte of each Unicode character is more or less static. And, the XOR key used resulted in that character being represented as “0x00” – not a very likely scenario.
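That property is easy to demonstrate. The snippet below uses a single-byte key of 0x55, the value that also appears in the source code discussed next, purely as an example:

```python
# Every ASCII-range character encodes to <byte> 0x00 in UTF-16LE, so a
# single-byte XOR key would leak itself in every second output byte.
key = 0x55                               # example key only
plain = "password".encode("utf-16-le")   # b'p\x00a\x00s\x00...'
cipher = bytes(b ^ key for b in plain)

# Every second ciphertext byte equals the key, exposing it immediately.
print(all(c == key for c in cipher[1::2]))   # True
```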
The source code of XtremeRAT 3.6 reveals that the author specifically excludes null-bytes, carriage returns, and newlines from the XOR encoding. In Pascal, these characters can be represented by “#0”, “#10”, and “#13”, respectively, as shown in the following figure:
```
procedure DecryptionKeylogger(pKey: WideString; KeyLength: int64);
var
i: integer;
c: widechar;
begin
for i := 0 to KeyLength do begin
c := WideChar(ord(pKey[i]) xor $55);
if (pKey[i] <> #13) and
(pKey[i] <> #10) and
(pKey[i] <> #8) and
(c <> #13) and
(c <> #10) and
(c <> #0) then
pKey[i] := c;
end;
end;
```
Figure 8: Source code of XtremeRAT 3.6 showing exclusion of characters from XOR encoding.
This matches what was seen in IDA Pro during the analysis of the downloaded version, as depicted in Figure 9 below.
Many samples in the wild share XOR-keys used for encoding keylog data. James Bennett’s `xtrat_decrypt_keylog.py`, available from the GitHub repository referred to earlier in this section, includes some of the commonly encountered XOR-keys. If an attacker changes the XOR-key, the tool will no longer properly decode keylog data. Fortunately, discovering the new key, and including it in the tool is not an overly complex procedure. The article, “Tools for Examining XOR Obfuscation for Malware Analysis”, hosted on SANS Digital Forensics and Incident Response blog, provides
multiple tools for analyzing XOR encoded data and finding possible keys for decoding. (Zeltser, L., 2013)
One thing to note is that any tool that searches for a known ASCII string will fail, since the stored data is Unicode. A quick workaround is stripping out the null-bytes from the file, but keep in mind that this will break any wide characters that may be included in the data. In addition, the output may become a mix of unprintable binary characters and ASCII.
On most UNIX systems, stripping out null-bytes can be accomplished by issuing the following command:
```
$ LC_ALL=C tr -d '\0' < unicode_keylog.dat > keylog.dat
```
Without the “LC_ALL=C”, the reader may encounter “tr: Illegal byte sequence” errors, as “tr” is expecting non-binary data. The resulting file will be stripped of null-bytes and will be susceptible to XOR-analysis using one of the tools referred to earlier.
The recommended way to ensure nothing is lost in the decoding process is adding a bit of logic to verify whether the second byte is a null byte or the character being decoded is a wide character. This is what xtrat_decrypt_keylog.py does; however, it will still be necessary to find the key that decodes the wide characters. The problem can be reduced to finding the second half of a two-byte key, if the first byte is found using the destructive method described above.
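To illustrate that logic, the sketch below decodes UTF-16LE keylog data with a two-byte XOR key, applying the second key byte only when the code unit is actually a wide character (i.e. when its high byte in the log is not null, as observed earlier). The key values are placeholders for illustration, not the keys used by any particular sample, and the snippet is not a substitute for xtrat_decrypt_keylog.py.

```python
# Sketch: decode keylog data stored as UTF-16LE code units with a two-byte
# XOR key, leaving the high byte untouched when it is null. KEY_LO/KEY_HI
# are placeholders; real samples each carry their own values.
KEY_LO, KEY_HI = 0x55, 0x8A

def decode_keylog(data: bytes) -> str:
    out = bytearray()
    for lo, hi in zip(data[0::2], data[1::2]):
        if hi == 0x00:
            # ASCII-range character: only the low byte was encoded.
            out += bytes((lo ^ KEY_LO, 0x00))
        else:
            # Wide character: both halves of the code unit were encoded.
            out += bytes((lo ^ KEY_LO, hi ^ KEY_HI))
    return out.decode("utf-16-le", errors="replace")
```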
One quick way to find the first byte is to strip the null-bytes and then look for artifacts that are expected in the keylog. XtremeRAT identifies so-called “deadkeys” that have been pressed by denoting the name of the key in brackets, e.g. “[Backspace]”, “[Delete]”, and “[Right Alt]”. Alternatively, XtremeRAT logs the title of the active window in order to give context to logged data, which means the logs are bound to have entries that contain words such as “Internet”, “Explorer”, “Firefox”, “Word”, “Excel”,
and any others that may be appropriate for the computing environment from which the keylog was extracted.
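Searching for such artifacts can be automated as a simple crib-based brute force over the null-stripped data. The candidate strings below are taken from the examples above, and the approach is a sketch rather than a replacement for the XOR-analysis tools already referenced.

```python
# Sketch: brute-force a single-byte XOR key by looking for strings that are
# expected to occur in an XtremeRAT keylog (deadkey markers, window titles).
CRIBS = [b"[Backspace]", b"[Delete]", b"Internet", b"Explorer", b"Firefox"]

def candidate_keys(raw: bytes):
    stripped = raw.replace(b"\x00", b"")          # destructive, as described above
    for key in range(256):
        decoded = bytes(b ^ key for b in stripped)
        if any(crib in decoded for crib in CRIBS):
            yield key

# Example usage against a captured log file:
# for k in candidate_keys(open("keylog.dat", "rb").read()):
#     print(f"possible key: {k:#04x}")
```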
Some versions of XtremeRAT prepend a few bytes to the keylog data file that can be used to identify potential keylog files. The analyzed XtremeRAT source code reveals this clearly, as can be seen in the figure below:
```
if PrimeiraVez = True then
begin
TempStr := '#13';
WriteFile(KeyloggerFile, TempStr[1], Length(TempStr) + 2, c, nil);
WriteFile(KeyloggerFile, TempStr[1], Length(TempStr) + 2, c, nil);
TempStr := ' --- ';
WriteFile(KeyloggerFile, TempStr[1], Length(TempStr) + 2, c, nil);
ShowTime(Hora);
WriteFile(KeyloggerFile, Hora, StrLen(Hora) + 2, c, nil);
TempStr := '#13#10';
WriteFile(KeyloggerFile, TempStr[1], Length(TempStr) + 2, c, nil);
end;
```
*Figure 10: XtremeRAT 3.6 keylogger source code showing 'magic bytes' of a keylog file.*
“PrimeiraVez”, seen at the top of the code in Figure 10, is Portuguese for “first time”, which suggests that the code will be run when the file is initially created. In Pascal, prefixing an integer with a “#” denotes the character with that ordinal value, e.g. “#13” is equivalent to carriage return, and “#10” is equivalent to newline (i.e. “#13#10” is what is often represented as “\r\n” in other languages).
However, this bit of code does not explain how the “0xAA 0xFE”, seen at offset 0 in the keylog file shown in Figure 7, gets there; in fact, the source code in Figure 11 clearly shows that the “header” will consist of “0xFF 0xFE”. These two bytes get written to a file defined by the variable “KeyloggerFile”.
The analyzed source code does shed some light on how these bytes are generated, but no definitive explanation for the difference observed in the code and the actual keylog data was discovered. It is worth noting that the source code (see Figure 11 below) and binary versions differed, which may be the reason behind the discrepancy.
XtremeRAT samples recovered from the wild have used different “magic bytes”, and XOR-keys used for encoding have varied between analyzed samples. Anyone that has access to the XtremeRAT source code, which is available on the Internet, can modify these, making such changes an expected occurrence.
### 2.2.2. Deficiencies in the keylogger
Anyone that has analyzed XtremeRAT’s keylogs extracted from an environment where multi-byte Unicode character sets are in use may have discovered that even after “successful” decoding, portions of the logs contain seemingly random strings. This “random data” stems from the fact that the keylogger fails to properly map captured scan codes to the correct character representation under specific circumstances.
A scan code is a device-independent identifier assigned to each key on the keyboard. When the user presses a key, a scan code is generated. This scan code needs to be converted into something meaningful, like the character that the person typing on the
keyboard intended to create by pressing that given key. To achieve this, the scan code is interpreted by the keyboard driver and mapped to a virtual-key code using what is known as a VK_MAP – virtual-key map. The VK_MAP defines the aforementioned intent, giving the keypress its actual purpose. Once the translation has been done, a message with the scan code, mapped virtual-key code, and any other relevant information gets placed in the system message queue. (MSDN, n.d.-e)
To illustrate the above, assume the input language of a Windows system where the XtremeRAT server is running is set to Arabic. The scan code generated when pressing the “A”-key on a standard QWERTY-keyboard is “0x1E” (MSDN, n.d.-f). This should in turn normally generate the character “ش” as defined by the VK_MAP in use at the time of the key being pressed; however, on occasion, this will be logged as “a” by XtremeRAT. The issue was originally discovered when analyzing keylog data submitted via form fields using recent versions of Internet Explorer.
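A toy illustration of this layout dependence, with a deliberately tiny mapping table standing in for the real VK_MAPs:

```python
# Illustration only: the same physical key (scan code 0x1E, the "A" key on a
# QWERTY board) yields different characters under different input locales.
SCANCODE_A = 0x1E

LAYOUTS = {
    "English (US)": {SCANCODE_A: "a"},
    "Arabic (101)": {SCANCODE_A: "\u0634"},   # ش
}

for layout, vk_map in LAYOUTS.items():
    print(f"{layout}: scan code {SCANCODE_A:#04x} -> {vk_map[SCANCODE_A]}")
```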
Further analysis of the anomalous behavior described above suggested that this occurs only in the context of a few applications. Any time keylog data manifested itself as the Latin character set representation of the sequence of keys pressed, instead of the expected Arabic words, the title of the window logged suggested a relationship to Internet Explorer, or a component thereof. For instance, using the example above, the letter “a” was written to the log file, not “ش” as was expected.
The components that make up much of Internet Explorer’s functionality can be easily reused, due to its Component Object Model (COM) based architecture. ShDocVw.dll provides required functionality of a browser, e.g. navigation and history; while MSHTML.dll - commonly referred to by its codename “Trident” – is responsible for parsing and rendering HTML and CSS, without the added browser capabilities. These two commonly reused components allow developers to extend their applications with functionality present in a modern browser, negating the need to re-implement everything. This modularity and component reuse was also the likely source of the observed anomalous behavior.
The image below shows the various components that make up Microsoft Internet Explorer, including ShDocVw.dll and MSHTML.dll mentioned earlier. Each rectangle represents a coherent, modular entity that provides a subset of the browser’s functionality. Since the anomalous behavior seemed to be application specific and related to component reuse, any component imported into all of the misbehaving applications also defined the probable scope of the problem being analyzed.
Internet Explorer 6 on Windows XP SP3 32-bit behaved “correctly” in the sense that it logged Arabic when Arabic was input into text fields. On Windows 7 SP1 systems, both 32-bit and 64-bit versions, using Internet Explorer 8 and 10, the data was incorrectly logged as Latin characters. Other native applications that were tested seemed to log data as expected, i.e. Arabic in Arabic, English in English.
Tests with Chrome and Firefox resulted in the expected behavior for both Windows XP SP3 32-bit; tests on Windows 7 SP1 32-bit and 64-bit further narrowed down the issue to Internet Explorer, or a component thereof. Furthermore, in Internet Explorer, the issue only seemed to present itself in cases where data was typed into a HTML form field, and not for example when inputting text in the browser’s URL field. This suggested that the problem stemmed from the “Trident” rendering engine component of Internet Explorer.
Wikipedia provides a list of some software that uses the “Trident” engine for rendering HTML (http://en.wikipedia.org/wiki/Trident_%28layout_engine%29). Two browsers from this list, Avant and Sleipnir, were selected for further testing in order to verify if the behavior was consistent with what was observed while analyzing keylog data captured in Internet Explorer form fields. Tests confirmed that both of these browsers had the same issue, i.e. different characters were captured in the keylogs than what was actually being typed into and represented in the form fields.
Below, screenshots of testing clearly demonstrate the differences in the logged and expected data for the various browsers. Before showing the incorrect behavior, an example, generated using IE6 on Windows XP SP3 32-bit (WinXPSP3x32) and Firefox 21 on Windows 7 SP1 32-bit (Win7SP1x32), of the expected behavior should be reviewed.
Note that the HTML text area form field, with Arabic at the bottom of the page, does not contain proper Arabic. It does read “Arabic” in proper Arabic, but the following script is a sequence of keypresses that would result in “textarea” on a standard QWERTY-keyboard. The screenshots are intended to highlight the issue with the data that was logged from specific applications by XtremeRAT.
Looking at the data typed into the form fields above and comparing it to the keylog data below, we see that XtremeRAT logged the active window title, timestamp, and whatever the expected representation of each pressed key was, as defined by the active Input Locale. The Input Locale was switched between English and Arabic using a mouse after each form field was filled out. The logged “[Tab]” entries were due to focus being shifted from one field to the next as data was typed into the form elements. This behavior was repeated in each subsequent test.
It is worth noting that the keylog viewer built into the client does not properly distinguish between left-to-right (LTR) and right-to-left (RTL) requirements of the characters being logged. Thus, any Arabic seen in the “Keylogger” window depicted in...
Figure 14 will need to be read from LTR, rendering the script in a manner different than expected since characters cannot be joined properly in this direction.
One might attribute the differences in how key presses are logged to the underlying operating system version, were it not for the fact that the second and third tests (and others not included in this document) showed differing behaviors on one specific OS version. This fact reinforced the understanding that the difference must have been tied to the application itself.
The following screenshot is of Firefox 21 (FF21), running on a Windows 7 system, showing data being inserted in multiple languages and with multiple character sets. Immediately below Figure 15, another screenshot shows the captured keystrokes, which were correctly interpreted, though the LTR caveat discussed above still applies.

Figure 16 below is a screenshot of the XtremeRAT client’s keylog viewer displaying captured keylog data of the form being filled out in FF21; the captured data
matches the entered data. The analysis of such data should pose no problems for anyone fluent in the language that has been logged.
Figure 16: XtremeRAT 3.7 client showing live keylog data collected from FF21 on Win7SP1x32.
Figure 17 shows the same form rendered on Internet Explorer 10 (IE10) running on Windows 7SP1 32-bit. The form was filled out with exactly the same data as the form of the FF21 browser used in the previous example. Comparing the keylog data collected from the FF21 and the IE10 forms, depicted in Figure 16 and Figure 18 respectively, there is a clear difference; one that indicates keylogging of FF21 forms worked as expected and that the behavior broke in newer versions of Internet Explorer.
Figure 17: Test form rendered using IE10 on Win7SP1x32.
The keylog data from IE10, displayed below in Figure 18, shows no Arabic characters, only Latin ones. Where Arabic is expected, the text has been replaced by the ASCII representation of each key that would have resulted in the rendering of the appropriate Arabic character, e.g. “a” instead of “ش”. The string “hguvfdm” thus represents the sequence of keys that needs to be pressed on a QWERTY keyboard, when Arabic is selected as the input language on a Windows system, to write the word “Arabic” in Arabic.
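Reversing that substitution only requires the QWERTY-to-Arabic (101) mapping. The minimal sketch below covers just the keys needed for the example string and reflects the author’s reading of the Arabic (101) layout rather than code taken verbatim from the appendix.

```python
# Minimal sketch: map the Latin characters that were logged back to the
# Arabic (101) characters the user actually produced. Only the keys needed
# for the example string "hguvfdm" are included here.
QWERTY_TO_ARABIC101 = {
    "h": "\u0627",  # ا
    "g": "\u0644",  # ل
    "u": "\u0639",  # ع
    "v": "\u0631",  # ر
    "f": "\u0628",  # ب
    "d": "\u064a",  # ي
    "m": "\u0629",  # ة
}

def fix_keylog_string(s: str) -> str:
    return "".join(QWERTY_TO_ARABIC101.get(ch, ch) for ch in s)

print(fix_keylog_string("hguvfdm"))   # العربية ("Arabic" in Arabic)
```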
Screenshots of the other tested browsers, Avant and Sleipnir, that reuse core components of Internet Explorer are not included here, but the results were identical to the Internet Explorer 10 test.
2.2.3. Proposed solution
Not being able to analyze and understand data that has been captured in keylogs poses a problem for both parties – the victim of the keylogger as well as the attacker. The attacker is obviously trying to steal information, but the behavior exhibited by various browsers using the “Trident” engine renders some of the captured data illegible. Conversely, the victim should be interested in trying to identify what data an attacker may have successfully stolen.
Since both the act of converting wrongly captured keylogs back into the original representation of the data and identifying the nature of the “random data” contained in the keylogs are far from complicated, releasing a tool to do the conversion seemed pertinent. As such, the author has provided a tool for doing just this. The code presented in
“Appendix A: xtrat_log_fixer.rb - Fixing Broken Keylogs” will manually parse extracted strings and convert them into Arabic. Do note that running the script on some systems, where the terminal fails to show RTL scripts inline properly, will yield isolated Arabic characters written left-to-right (LTR). The quick fix is to copy the generated output and paste the information into a text editor that has proper support for RTL text.
The code will remove some “deadkeys”, most importantly any “Delete” and “Backspace” actions, along with the characters that they were meant to delete; however, there are cases where the user can unknowingly edit the text in a manner that will render it unfit for parsing using this script. An example of such an action is a multi-line or multi-character selection using a combination of the “Shift” key and “arrow” keys. Selecting and then overwriting, will replace multiple characters with the first key pressed after selection, but the script will not understand such a scenario. More complex scenarios, such as the one described above, will have to be manually analyzed and distilled down into what the final sequence of keypresses is meant to be, and then that is parsed with the script.
The screenshot below (see Figure 19) shows how the author-provided tool could be used to decode portions of illegible data contained within captured keylogs. The string being processed, “hguvfdm” is extracted from the data displayed above in Figure 18.

The decoded output in the above screenshot suffers from the LTR-versus-RTL issue previously discussed. Copying and pasting the above string into a text editor that renders Arabic properly yields the correct text as shown in Figure 20. The data in this
final, corrected form matches the original input that was entered into the forms, as depicted in the browser screenshots presented earlier.

A keylog file is an extremely powerful artifact, as it gives an immediate understanding of the type of data that an attacker may have been able to steal from an environment, and it can help less technical people see the gravity of the issue that is being tackled. When arguing for resources to respond to a compromise, having a file that contains legible text, instead of random character strings can be a deciding factor. To this end, the author-provided proof-of-concept tool should suffice in shedding light on data that would have been obscured from analysts’ and management’s eyes in the past.
3. Conclusion
The author-provided solution is not perfect – it does not work for all character sets, and it does not cater for all edge cases – but it should help anyone analyzing keylog data extracted from an incident involving XtremeRAT, provided the language in question uses a fairly limited Unicode character set.
The exact reason why the keylogging fails when “Trident”-based browsers are used was not discovered during this exercise, but it does seem that no other applications are subject to the same issues. Discussions on extracting the correct Unicode characters when using various methods of capturing keystrokes from applications are abundant on the Internet, which would indicate that this problem is not one that is limited to XtremeRAT.
In the end, establishing the exact nature of the technical issue that causes the keystrokes to be wrongly logged is of academic interest while understanding the artifacts present in the data collected from compromised systems may well be a necessity for compromised organizations. This analysis and the referenced tools should provide the means to help fulfill that requirement.
4. References
5. Appendix A: xtrat_log_fixer.rb - Fixing Broken Keylogs
The source code shown below is available for download at:
http://sylvander.net/projects/xtrat/
5.1. xtrat_keylog_fixer.rb
```ruby
#!/opt/local/bin/ruby1.9
# encoding: UTF-8
#
# xtrat_keylog_fixer.rb v0.1
# Harri Sylvander - harri@sylvander.net
#
# This script can be used to decode parts of keylogs generated by
# XtremeRAT that have been erroneously logged as Latin characters
# when the proper representation would have been Unicode characters.
#
# The script reads a defined file's contents to decode and strips
# XtremeRAT's presentation of some special characters. If those
# special characters actually modify the data, e.g. [Delete] or
# [Backspace], the content will be modified accordingly.
#
# There are cases where this simplistic assumption will break,
# e.g. if someone uses [Shift]+[Arrows] to select data, and then
# overwrites or deletes data. For now, these cases are not taken
# into account and any such cases will require manual modification
# of the keylog data prior to decoding.

require_relative 'keymaps/xtrat_keymap.rb'
require 'optparse'
require 'rex/registry'

# Right-to-left and left-to-right Unicode marks
rtl = "\u200f"
ltr = "\u200e"

#-----------------------------------------[ remove_special_chars ]---#
# Removes [Delete] and [Backspace] markers along with the character
# that each of them would have removed, recursing until none remain.
def remove_special_chars(line)
  if (m = /(?<lhs>.*?)\[(?<key>Delete|Backspace)\](?<rhs>.*)/m.match(line))
    lhs = m[:lhs]
    rhs = m[:rhs]
    if m[:key] == 'Delete'
      # Delete => remove the character following the marker
      rhs = rhs[1, rhs.length - 1].to_s
    else
      # Backspace => remove the character preceding the marker
      lhs = lhs[0, lhs.length - 1].to_s
    end
    return remove_special_chars("#{lhs}#{rhs}")
  end
  line
end

#-----------------------------------------[ decode_keylog_string ]---#
def decode_keylog_string(keylog_string, keymap)
  raise "ERROR! Must define keymap to convert to. Exiting." if keymap.nil?
  decoded_string = ""
  keylog_string.each_char do |keylog_char|
    if keymap[keylog_char].nil?
      decoded_string << keylog_char
    else
      decoded_string << keymap[keylog_char]
    end
  end
  decoded_string
end

#-----------------------------------------[ get_kbd_layouts ]---#
def get_kbd_layouts(regfile)
  kbd_layouts = []
  hive = Rex::Registry::Hive.new(regfile)
  nodekey = hive.relative_query('\Keyboard Layout\Preload')
  # The value data may come back as UTF-16LE; strip the null bytes so the
  # layouts can be matched against plain strings such as "00000401".
  nodekey.value_list.values.each { |k| kbd_layouts << k.value.data.to_s.delete("\x00") }
  kbd_layouts
end

#-----------------------------------------[ print_decoded_keylog_string ]---#
def print_decoded_keylog_string(keylog_string, decoded_keylog_string)
  puts "Converting original keylog data:"
  puts keylog_string
  puts "\nDecoded output:"
  puts decoded_keylog_string
end

#-----------------------------------------[ Main Program ]---#
# Required input
infile = nil
# Optional input
regfile = nil
selected_keymap = nil
# Don't parse data, only list available keymaps in specified NTUSER.DAT
list_layouts_only = false

opts = OptionParser.new do |opts|
  opts.banner = "Usage: ./xtrat_keylog_fixer.rb -i INFILE [-r REGISTRYFILE [-l]] [-k KEYMAP]"
  opts.on("-i", "--infile INFILE", String,
          "Input file containing 'broken' strings from decoded keylogs") do |f|
    infile = f
  end
  opts.on("-r", "--regfile [OPT]", String,
          "Registry file (NTUSER.DAT) containing 'Keyboard Layout\\Preload' values") do |f|
    regfile = f
  end
  opts.on("-l", "--list-kbd-layouts [OPT]",
          "List keyboard layouts defined for current user profile") do |f|
    list_layouts_only = true
  end
  opts.on("-k", "--keymap [OPT]", String,
          "InputLocale to use when parsing keylog data") do |f|
    selected_keymap = f
  end
  opts.on("-h", "--help", "Show this message") do
    puts opts
    exit
  end
end

begin
  ARGV << "-h" if ARGV.empty?
  opts.parse!(ARGV)
rescue OptionParser::ParseError => e
  STDERR.puts e.message, "\n", opts
  exit(-1)
end

if infile.nil?
  raise "Must define infile to operate on. Exiting..."
end

kbd_layouts = []
keylog_string = ""
File.foreach(infile) do |line|
  keylog_string << remove_special_chars(line)
end

# If an NTUSER.DAT was passed for parsing, extract all possible
# keymaps that might've been used. See if a mapping has been
# created for the keymap and try converting for each. If the user
# defines the '-l' parameter, only list the keymaps, but do not
# parse and process the INFILE.
if regfile
  kbd_layouts = get_kbd_layouts(regfile)
end

# Can't proceed if neither a registry file (NTUSER.DAT) with
# keyboard layouts nor a specific InputLocale is provided.
unless (kbd_layouts.size > 0 or selected_keymap)
  puts "Must specify a registry file (NTUSER.DAT) with valid keyboard layouts " \
       "or provide a target InputLocale. Exiting..."
  exit(1)
end

if list_layouts_only
  kbd_layouts.each { |kbd_layout| puts " - #{kbd_layout}" }
else
  # Use the user defined InputLocale if one was passed as an argument,
  # otherwise try each layout extracted from the registry hive.
  kbd_layouts = [selected_keymap] if selected_keymap
  kbd_layouts.each do |kbd_layout|
    keymap = XtremeRATKeymap.new(kbd_layout).keymap
    decoded_keylog_string = decode_keylog_string(keylog_string, keymap)
    print_decoded_keylog_string(keylog_string, decoded_keylog_string)
  end
end
```
5.2. xtrat_keymap.rb
```ruby
# encoding: UTF-8
#
# For now, this script assumes that the keyboard layout in use is a
# standard QWERTY, Latin character set keyboard. The more correct way
# to parse this would be to map from the logged character to a possible
# scancode based on available layouts and then map back to the expected
# character.
class XtremeRATKeymap
  attr_reader :keymap

  def initialize(kbd_layout)
    # Initialize keymap hash
    @keymap = {}
    # Special characters
    @keymap[" "] = ' ' # Space

    # Only the Arabic use case is defined. Add keymaps as necessary,
    # using the InputLocale value, as defined by Microsoft.
    # See "Table 1" of "XtremeRAT: When Unicode Breaks" for
    # examples of Arabic InputLocales.
    case kbd_layout
    when "00000401"
      # Arabic (101) - 00000401, used in Arab nations with the exception
      # of the French speaking countries of North Africa.
      # Top row
      @keymap["`"] = 'ذ'
      @keymap["1"] = '١'
      @keymap["2"] = '٢'
      @keymap["3"] = '٣'
      @keymap["4"] = '٤'
      @keymap["5"] = '٥'
      @keymap["6"] = '٦'
      @keymap["7"] = '٧'
      @keymap["8"] = '٨'
      @keymap["9"] = '٩'
      @keymap["0"] = '٠'
      # QWERTY row
      @keymap["q"] = 'ض'
      @keymap["w"] = 'ص'
      @keymap["e"] = 'ث'
      @keymap["r"] = 'ق'
      @keymap["t"] = 'ف'
      @keymap["y"] = 'غ'
      @keymap["u"] = 'ع'
      @keymap["i"] = 'ه'
      @keymap["o"] = 'خ'
      @keymap["p"] = 'ح'
      @keymap["["] = 'ج'
      @keymap["]"] = 'د'
      # ASDFG row
      @keymap["a"] = 'ش'
      @keymap["s"] = 'س'
      @keymap["d"] = 'ي'
      @keymap["f"] = 'ب'
      @keymap["g"] = 'ل'
      @keymap["h"] = 'ا'
      @keymap["j"] = 'ت'
      @keymap["k"] = 'ن'
      @keymap["l"] = 'م'
      @keymap[";"] = 'ك'
      @keymap["'"] = 'ط'
      # ZXCVB row
      @keymap["z"] = 'ئ'
      @keymap["x"] = 'ء'
      @keymap["c"] = 'ؤ'
      @keymap["v"] = 'ر'
      @keymap["b"] = 'لا' # laam-alif
      @keymap["n"] = 'ى'
      @keymap["m"] = 'ة'
      @keymap[","] = 'و'
      @keymap["."] = 'ز'
      @keymap["/"] = 'ظ'
    else
      raise "No keymap found"
    end
  end
end
```
This unit covers algorithms for one of the most basic computations we could hope to perform: multiplication. We have already seen from the previous unit how multiplying huge numbers forms the backbone of RSA cryptography. We’re going to look at how that actually works now, along with some new and exciting algorithmic ideas.
1 Representation of Numbers
We all know about single-precision integers, namely ints, which are stored in a single machine word of memory, as a sequence of bits corresponding to the base-2 (i.e., binary) representation of the integer.
Multiple-precision integers are those that are larger than $2^{32}$ (or perhaps $2^{64}$) and therefore cannot be represented as a single int. These are instead stored as a list of digits, usually in an array indexed from the least-significant digit, where each digit is a single-precision integer.
(Notice that we are specifically assuming the integers we want to deal with are all greater than or equal to zero. If we want negative integers too, it’s easy: just add a boolean flag to indicate negative-ness. For most of this unit, we’ll quietly ignore this case because it doesn’t really change the algorithms.)
From now on, we will say that each digit $d$ satisfies $0 \leq d < B$, where the number $B$ is the base and must be at least 2. In practice, $B$ will usually correspond to the machine word size, like $2^{32}$ or $2^{64}$.
For example, given the integer 4391354067575026 we could represent it in base $B = 10$ by the list
\[ [6, 2, 0, 5, 7, 5, 7, 6, 0, 4, 5, 3, 1, 9, 3, 4] \]
or in base $B = 2^8 = 256$ by the list
\[ [242, 224, 71, 203, 233, 153, 15] \]
Generally, if a number has $n$ digits $[d_0, d_1, d_2, \ldots, d_{n-1}]$, in base $B$ representation, then that number equals
$$d_0 + d_1B + d_2B^2 + d_3B^3 + \cdots + d_{n-1}B^{n-1},$$
which is always between 0 and $B^n - 1$.
Does the choice of base matter? In practice, yes; the difference between say 1000 and 2000 words of memory can be huge. But as far as algorithms are concerned, asymptotically, it doesn’t really matter. The difference in array lengths for any two bases will just be some constant factor (this is because of a basic property of logarithms). So we can always say the size of an integer $x$ is $\Theta(\log x)$, no matter what base is used.
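As a concrete illustration (not part of the notes’ pseudocode), the conversions between a built-in Python integer and its little-endian digit list follow directly from the formula above:

```python
def to_digits(x, B):
    """Return the base-B digits of x, least-significant first."""
    digits = []
    while x > 0:
        x, d = divmod(x, B)
        digits.append(d)
    return digits or [0]

def from_digits(digits, B):
    """Inverse of to_digits: d0 + d1*B + d2*B^2 + ..."""
    value = 0
    for d in reversed(digits):
        value = value * B + d
    return value

print(to_digits(4391354067575026, 10))    # [6, 2, 0, 5, 7, 5, 7, 6, 0, 4, 5, 3, 1, 9, 3, 4]
print(to_digits(4391354067575026, 256))   # the same number's digits in base 256
```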
2 Basic Arithmetic with Integers
We’ve already been using big-integer arithmetic in the RSA algorithm. Now let’s look at how it actually works. We’ll start with the so-called “school” algorithms — like the ones you learned in grade school.
In presenting these algorithms, two important simplifications will be used throughout:
The base $B$ will always be 10. This is so that each digit in the way we normally think about numbers corresponds to one array entry in the computer representation. As discussed above, this doesn’t make any asymptotic difference in the cost of any algorithm.
We will always assume both integers to be added or multiplied have the same size. Why can we assume this? Well, if one integer is initially shorter than the other one, just add 0 digits to the end of the array until they have the same length. This process is called zero padding and obviously won’t change the answer, but it does simplify the presentation. And it also doesn’t change the asymptotic cost, since the size of the input will only change by (at most) a constant factor of two.
In practice, the base will be more like the machine word size, probably a power of two, and there will be special algorithm tweaks when the input integers are of different sizes. But the same general algorithmic principles will still apply.
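Zero padding in this representation is a one-line operation; a small helper sketch:

```python
def zero_pad(X, Y):
    """Pad the shorter digit list with zeros at the most-significant end."""
    n = max(len(X), len(Y))
    return X + [0] * (n - len(X)), Y + [0] * (n - len(Y))

print(zero_pad([7], [7, 0, 4, 7]))   # ([7, 0, 0, 0], [7, 0, 4, 7])
```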
2.1 Addition
Let’s start with an example: add 40694 to 73593. Your first instinct is probably to stack these on top of each other, right-aligned, and start adding digits from right to left:
\[
\begin{array}{cccccc}
  & 4 & 0 & 6 & 9 & 4 \\
+ & 7 & 3 & 5 & 9 & 3 \\
\hline
  & 11 & 3 & 11 & 18 & 7 \\
\end{array}
\]
This isn’t right because we ended up with individual digits in the sum that are larger than $B$. Of course you know that we just have to carry the overflowed 1’s across to the next digit. But doing that might produce another carry, and another carry — will this go on forever?
Obviously not. We will start at the right hand side and at each step add together the two digits in that position, plus (possibly) the carry digit from the previous step. This produces the digit in that position of the sum, plus (possibly) a carry digit for the next step.
Next we need to ask, how big can the carry digit be? For base 10 at least, you can do some examples and notice that the carry is never more than 1. This is because, in the worst case, both digits in some position are equal to 9, and the carry from the previous position is 1, and $9 + 9 + 1 = 19$, which means the sum digit in that position is 9 and the carry is 1.
Base 10 is a simplification we made, but it turns out the carry can never be more than 1 in any base. In general, if we are in base $B$, the most any digit can be is $B - 1$, so we add $(B - 1) + (B - 1) + 1 = 2B - 1$. This is less than $2B$, so the carry digit (which corresponds to the quotient when dividing by $B$) is never greater than 1. Knowing this bound on how big the carry can be is important because it’s how we know this method will finish and always produce a number that has at most $n + 1$ digits.
This leads us to the following algorithm. Certainly all of you knew this addition algorithm from grade school. But could you write it out as I have below? One thing to remember is that the right-hand side of the number when we write it down (the least-significant digit) is the digit that appears at the beginning of the array in the representation. Being able to transform our intuitive thoughts about a computation into an algorithm is an important skill! Don’t worry, you’ll get plenty of practice in this class...
Add
Input: Integers $x$ and $y$, stored in arrays $X$ and $Y$ of exactly $n$ digits each. Each digit is less than $B$.
Output: The integer $x + y$, stored in an array $A$ of length at most $n + 1$
```
def add(X, Y, B):
    n = len(X)
    carry = 0
    A = [0] * (n + 1)        # zero-filled array of length n + 1
    for i in range(0, n):
        carry, A[i] = divmod(X[i] + Y[i] + carry, B)
    A[n] = carry
    return A
```
(Note: `divmod` is a special built-in operation in Python that takes two numbers and returns both the quotient and the remainder of the division.)
Now from this it should be obvious that the worst-case cost of our algorithm is $\Theta(n)$, where $n$ is the size of the input arrays. This is as good as it can get! Here $n$ corresponds to the size of the input integers, which if you remember from the last unit is proportional to the logarithm of their values. And it’s really linear time because each of the operations is on small integers that are at most $2B$.
Why is linear time the best possible? This might seem obvious, but at least in this case it’s true that we have to look at the entire input to get the right answer. If even one digit in either input integer is changed, it changes the answer. So since it takes at least $\Omega(n)$ time to read through the input, this is a lower bound on any algorithm for this problem. Therefore the algorithm for integer addition that you learned in grade school is asymptotically optimal! Now go give your first grade teacher a big old kiss and thank them.
By the way, it should be mentioned that we can do subtraction in exactly the same way. With subtraction, however, the carries are negative, and we should be careful to only subtract bigger numbers from smaller ones, lest the whole result should be negative and we need a data structure more sophisticated than an array of digits to store it. (Not that much more sophisticated; we just need a flag to say if the integer is less than zero. But it’s a complication not worth dealing with here.)
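A subtraction routine mirrors add almost line for line. The sketch below assumes, as discussed, that we only subtract smaller numbers from bigger ones, so the final borrow is always zero:

```python
def sub(X, Y, B):
    """Digit-wise subtraction of y from x, assuming x >= y (no final borrow)."""
    n = len(X)
    A = [0] * n
    borrow = 0
    for i in range(n):
        d = X[i] - Y[i] - borrow
        if d < 0:
            d += B
            borrow = 1
        else:
            borrow = 0
        A[i] = d
    return A

print(sub([3, 9, 5, 3, 7], [4, 9, 6, 0, 4], 10))   # 73593 - 40694 = 32899 -> [9, 9, 8, 2, 3]
```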
### 2.2 Multiplication
Certainly you remember long integer multiplication from elementary school. If you went to school in the United States, then the algorithm you were taught probably looks something like this:
\[
\begin{array}{ccccccccc}
  &   &   &   &        & 7 & 4 & 0 & 7 \\
  &   &   &   & \times & 2 & 9 & 1 & 5 \\
\hline
  &   &   &   & 3 & 7 & 0 & 3 & 5 \\
  &   &   &   & 7 & 4 & 0 & 7 &   \\
  &   & 6 & 6 & 6 & 6 & 3 &   &   \\
+ & 1 & 4 & 8 & 1 & 4 &   &   &   \\
\hline
  & 2 & 1 & 5 & 9 & 1 & 4 & 0 & 5 \\
\end{array}
\]
To multiply $x$ times $y$, we multiply $x$ times each digit in $y$ separately and then add up all these products, shifted appropriately by powers of the base $B$. Our algorithm to do this by computer will be almost the same, except that the sum at the bottom will be computed as we go along, rather than all at once at the end.
The multiplication of the first argument $x$ by each digit of the second argument will be an inner loop in the algorithm. It will be accomplished similarly to the `add` algorithm above, with a carry digit to keep track of overflow as we go along. But once again, we should figure out how large this carry will be.
If you do some examples in base 10, you may notice that the carry is never more than 8. This is true even if we try a worst-case example like 99999 times 9. And once again, this generalizes to any base: the carry digit in multiplying by a single digit in base $B$ is always at most $B - 2$. To prove this we just have to work out the worst case:
\[
(B - 1) \cdot (B - 1) + (B - 2) = B^2 - 2B + 1 + B - 2 = (B - 2)B + (B - 1)
\]
Since the end of this equation looks like division with remainder by $B$, the quotient (which corresponds to the carry) is always at most $B - 2$. This is nice because it means we can always store the carry in a single digit.
Here's the algorithm that we get:
```
standardMul
Input: Integers x and y, stored in arrays X and Y of exactly n digits each. Each digit is less than B.
Output: The integer xy, stored in an array A of length at most 2n

def smul(X, Y, B):
    n = len(X)
    A = [0] * (2*n)          # zero-filled array of length 2n
    T = [0] * (n + 1)        # zero-filled array of length n + 1
    for i in range(0, n):
        # set T = X * Y[i]
        carry = 0
        for j in range(0, n):
            carry, T[j] = divmod(X[j] * Y[i] + carry, B)
        T[n] = carry
        # add T to A, the running sum
        A[i : i+n+2] = add(A[i : i+n+1], T, B)
    return A[0 : 2*n]
```
Analyzing this algorithm by now should be a piece of cake! The outer for loop runs exactly n times, and inside it we have another for loop that runs n times as well as a call to the add algorithm which has cost Θ(n). So the total cost of standardMul is Θ(n²).
This leaves us with the question: can we do better? In the case of addition, because the standard algorithm is already linear-time, there’s not much (any) room for improvement other than tweaks. But for multiplication, we have a quadratic-time algorithm, leaving lots of room to improve. But is this really possible? How could we avoid multiplying every digit of the first number by every digit of the second one?
### 3 Divide and Conquer Multiplication
The multiplication algorithm above is written iteratively, so we can easily analyze and implement it. But thinking recursively will be more helpful in reasoning about the algorithm and ultimately improving on it.
The way standard multiplication of x times y works is basically to write \( y = y_1 B + y_0 \), where \( y_0 \) is the least significant digit of \( y \), and then we multiply \( x \) times \( y_0 \) (the inner for loop) and add this to the result of the recursive call \( x \) times \( y_1 \). Writing out this recursion would give us the same running time as above, \( \Theta(n^2) \).
But this reveals how we might improve on the algorithm. Since we know that MergeSort improves on the quadratic-time sorting algorithm by first dividing the input in half, why not try a similar idea for multiplication? The main difference is that here we have two arrays to split in half instead of just one.
Say we have input integers \( x \) and \( y \) to multiply, and their digits are stored in arrays \( X \) and \( Y \), each of length \( n \). Write \( m = \lfloor n/2 \rfloor \), and then split the numbers in “half” as follows: Let \( X_0 = X[0..m−1] \) and \( X_1 = X[m..n−1] \). These are two arrays of digits, but they also represent (big) integers, so write \( x_0 \) and \( x_1 \) for the two integers that they represent. We can also define \( Y_0 \), \( Y_1 \), \( y_0 \), and \( y_1 \) in the same way. The mathematical relationship between the integers is:
- \( x = x_0 + B^m x_1 \)
- \( y = y_0 + B^m y_1 \)
So you see that although the arrays are split in half, the numbers aren’t really divided by 2, but by \( B^m \). This means for example that each of \( x_0 \) and \( x_1 \) are much closer to \( \sqrt{x} \) than they are to \( x/2 \).
All these formulas are making me dizzy. Let’s look at a concrete example. If we want to multiply 7407 by 2915, like in the example before, the splitting gives us all of the following:
- \( x_0 = 7 \) and \( x_1 = 74 \)
- \( y_0 = 15 \) and \( y_1 = 29 \)
```python
def dcmul(X, Y, B):
    n = len(X)
    m = n // 2
    X0, X1 = X[0:m], X[m:]
    Y0, Y1 = Y[0:m], Y[m:]
    # ... four recursive calls: X0*Y0, X0*Y1, X1*Y0, X1*Y1 ...
```
After the recursive calls, we reassemble the result by shifting each product by the appropriate power of \( B \) and adding everything up:
```python
    # A = X0*Y0 + (X0*Y1 + X1*Y0) shifted by m digits
    #          + X1*Y1 shifted by 2m digits
    A = [0] * (2 * n)
    # ... additions as in smul, using add() on the shifted slices ...
    return A
```
So the question is, how can we multiply 7407 by 2915 using some recursive calls on the integers 7, 74, 15, and 29? It helps to write it out:
\[ 7407 \cdot 2915 = (7 + 74 \cdot 100)(15 + 29 \cdot 100) = 7 \cdot 15 + 7 \cdot 29 \cdot 100 + 74 \cdot 15 \cdot 100 + 74 \cdot 29 \cdot 10000 \]
Now of course multiplying by any power of 10 (since we are using base \( B = 10 \)) is easy (linear time). So all we really need to do is compute the four products
- \( 7 \cdot 15 = 105 \)
- \( 7 \cdot 29 = 203 \)
- \( 74 \cdot 15 = 1110 \)
- \( 74 \cdot 29 = 2146 \)
Now we add these up, shifted as appropriate by powers of 10:
\[
\begin{array}{ccccccccc}
  &   &   &   &   &   & 1 & 0 & 5 \\
  &   &   &   & 2 & 0 & 3 &   &   \\
  &   &   & 1 & 1 & 1 & 0 &   &   \\
+ & 2 & 1 & 4 & 6 &   &   &   &   \\
\hline
  & 2 & 1 & 5 & 9 & 1 & 4 & 0 & 5 \\
\end{array}
\]
The same answer as before. Great! But it sure seems like a lot of work. Do we really save any time?
Well, more generally, the approach here is to multiply the four products \( x_0y_0 \), \( x_0y_1 \), \( x_1y_0 \), and \( x_1y_1 \) (four recursive calls), and then add them up (linear time). We can describe this with a recurrence:
- \( T(n) = 1 \) if \( n = 1 \)
- \( T(n) = n + 4T(n/2) \) if \( n \geq 2 \)
We haven’t seen a recurrence quite like this before, but let’s try solving it with our standard technique:
\[ T(n) = n + 4T(n/2) = 3n + 16T(n/4) = 7n + 64T(n/8) = \cdots \]
See the pattern? It’s a little trickier, but we just have to recognize the three sequences 1,3,7,15,... and 4,16,64,256,... and 2,4,8,16,... Think about it a little and we get the general pattern:
\( T(n) = (2^i - 1)n + 4^i T(n/2^i) \)
Figuring out the base case is nothing new; we solve \( n/2^i = 1 \) for \( i \) to get \( i = \lg n \). Now plug this back in and we have \( T(n) = (n - 1)n + 4^{\lg n} \). The second part of this looks a little odd, but we know that \( 4 = 2^2 \), so we can rewrite \( 4^{\lg n} = (2^{\lg n})^2 = n^2 \). Therefore \( T(n) = (n-1)n + n^2 = n(2n - 1) \in \Theta(n^2) \).
Well this is a little upsetting. We went through all this effort with dividing the numbers in “half” and then making this cool recursive method and... we get exactly the same asymptotic cost as before! Are you ready to give up and declare that quadratic-time is the best possible?
### 3.1 Karatsuba Algorithm
Luckily someone wasn’t ready to give up. And his name was... Gauss. Carl Friedrich Gauss, that is. Pretty famous for a mathematician — the Germans even put him on one of their bills! Gauss’s observation was actually about multiplying complex numbers, but it’s exactly what we need for this problem. It just amounts to basic algebra:
\((x_0 + x_1B^m)(y_0 + y_1B^m) = x_0y_0 + x_1y_1B^{2m} + ((x_0 + x_1)(y_0 + y_1) - x_0y_0 - x_1y_1)B^m\)
Around 1960, a Russian guy named Karatsuba realized that this bunch of symbols is actually really useful: it means that we can multiply numbers faster!
Not buying it? Well, it’s a three step process:
1. Compute two sums: \(u = x_0 + x_1\) and \(v = y_0 + y_1\).
2. Compute three \(m\)-digit products: \(x_0y_0\), \(x_1y_1\), and \(uv\).
3. Sum them up and multiply by powers of \(B\) to get the answer: \(xy = x_0y_0 + x_1y_1B^{2m} + (uv - x_0y_0 - x_1y_1)B^m\)
Let’s go back to our favorite example of 7407 times 2915 to see how that works:
\[ x = 7407 = 7 + 74 \cdot 100 \]
\[ y = 2915 = 15 + 29 \cdot 100 \]
\[ u = x_0 + x_1 = 7 + 74 = 81 \]
\[ v = y_0 + y_1 = 15 + 29 = 44 \]
\[ x_0 y_0 = 7 \cdot 15 = 105 \]
\[ x_1 y_1 = 74 \cdot 29 = 2146 \]
\[ uv = 81 \cdot 44 = 3564 \]
\[ xy = 105 + 2146 \cdot 10000 + (3564 - 105 - 2146) \cdot 100 = 21591405 \]
The same answer again! Can you see why this is going to be a better algorithm? Let’s write it out formally.
```
karatsubaMul(X, Y, B)
Input: Integers x and y, stored in arrays X and Y of exactly n digits each. Each digit is less than B.
Output: The integer xy, stored in array A of length at most 2n

n = len(X)
if n <= 3:
    return smul(X, Y, B)
else:
    m = n // 2
    A = zero-filled array of length (2*n + 1)
    X0, X1 = X[0 : m], X[m : n]
    Y0, Y1 = Y[0 : m], Y[m : n]
    U = add(X1, X0, B)
    V = add(Y1, Y0, B)
    P0 = karatsubaMul(X0, Y0, B)
    P1 = karatsubaMul(X1, Y1, B)
    P2 = karatsubaMul(U, V, B)
    A[0 : 2*m] = P0
    A[2*m : 2*n] = P1
    A[m : 2*n+1] = add(A[m : 2*n], P2, B)
    A[m : 2*n+1] = sub(A[m : 2*n+1], P0, B)
    A[m : 2*n+1] = sub(A[m : 2*n+1], P1, B)
    return A[0 : 2*n]
```
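Here’s the same idea as runnable Python, operating on ordinary integers rather than digit arrays (a sketch: the function name and the base-case cutoff of 1000 are arbitrary choices of ours):

```python
def karatsuba(x, y):
    # Base case: small operands are cheap to multiply directly.
    if x < 1000 or y < 1000:
        return x * y
    # Split at half the digit count of the larger operand.
    m = max(len(str(x)), len(str(y))) // 2
    B = 10 ** m
    x0, x1 = x % B, x // B   # x = x0 + x1 * B
    y0, y1 = y % B, y // B   # y = y0 + y1 * B
    p0 = karatsuba(x0, y0)
    p1 = karatsuba(x1, y1)
    p2 = karatsuba(x0 + x1, y0 + y1)   # only three recursive products
    return p0 + (p2 - p0 - p1) * B + p1 * B * B

print(karatsuba(7407, 2915))   # 21591405
```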
This looks like a pretty long and complicated algorithm. But it’s not too bad; really it’s just the three steps that we had before, written out in their entirety.
For the analysis, look at the recursive case and think about the cost of each line. Other than the recursive calls, every line has running time $\Theta(n)$. And the three recursive calls are each on integers with about $n/2$ digits. So, with simplifications, the recurrence we need to solve looks like
- $T(n) = 1$ if $n = 1$
- $T(n) = n + 3T(n/2)$ if $n \geq 2$
Very similar to the one before, except crucially the coefficient of the $T(n/2)$ term is now 3 instead of 4. Think back to MergeSort; the recurrence for that had a very similar form, except the coefficient was 2 instead of 3. So when the coefficient was 4 we got $\Theta(n^2)$, and when it was 2 we got $\Theta(n \log n)$. What’s left in between for the coefficient of 3 that we have here? Let’s investigate:
$$T(n) = n + 3T(n/2) = 5n/2 + 9T(n/4) = 19n/4 + 27T(n/8) = 65n/8 + 81T(n/16)$$
The coefficients of $n$ are starting to get pretty messy. There are a few different ways we could figure out the general pattern, including typing the sequence of numerators into the Online Encyclopedia of Integer Sequences and seeing what pops out.
Instead, I’ll tell you the trick: take $2n$ out every time. This gives
$$T(n) = 3n - 2n + 3T(n/2) = 9n/2 - 2n + 9T(n/4) = 27n/4 - 2n + 27T(n/8)$$
And voilà! The pattern pops right out. It has to do with powers of 2 and powers of 3:
$$T(n) = \frac{3^i n}{2^{i-1}} - 2n + 3^i T(n/2^i)$$
This is a little hairy looking, but it’s nothing we can’t handle. As usual, the first thing to do is figure out how many recursive calls to get to the base case. This looks like what we’re used to: $n/2^i \leq 1$, so $i \leq \log n$.
Plugging this in is going to give us some $3^{\log n}$ coefficients popping up. (Remember before that we saw $4^{\log n}$? It’s not a coincidence!) We can simplify this as follows, using what we know about exponents and logarithms:
$$3^{\log n} = (2^{\log 3})^{\log n} = (2^{\log n})^{\log 3} = n^{\log 3}$$
Why is this simpler? Well, $\log 3$ is just a constant! We are used to seeing integers in the exponent of $n$, corresponding to linear time, quadratic time, cubic, etc., and $\log 3$ is just another constant wedged in between 1 and 2. If you must know, it’s approximately 1.585. So it’s still not quite as simple as we might want, but it looks a bit more sane. Now we can plug the $i = \log n$ in everywhere to get
$$T(n) = \frac{3^{\log n} n}{2^{\log n - 1}} - 2n + 3^{\log n} T(1) = 2n^{\log 3} + n^{\log 3} - 2n = 3n^{\log 3} - 2n \in \Theta(n^{\log 3})$$
In case looking at “$\log 3$” makes your head spin, we can also say that $T(n)$ is $O(n^{1.59})$, which is what you’ll often see written.
This is exciting! We have a faster algorithm for multiplication, and its running time looks like nothing else we’ve seen, but fits somewhere between $\Theta(n \log n)$ and $\Theta(n^2)$. Karatsuba came up with this algorithm around 1960, and as it slowly trickled out from the Soviet press, computer scientists around the world were shocked that multiplication better than quadratic time was possible. Did it shock you too?
### 3.2 Even Better than Karatsuba
A running time like $\Theta(n^{\log 3})$ should be unsatisfying. It’s not a “clean” function like $n$ or $n^2$ or even $n \log n$. So it’s hard to believe that Karatsuba’s algorithm is really as good as we could possibly do.
In fact, it’s not. A few years after Karatsuba shattered the quadratic barrier, another Russian named Toom came up with an algorithm along similar lines but that divides each input integer into three equal parts instead of two. In fact, he generalized to splitting into any number $k$ of equal parts. Then a Ph.D. student at Harvard named Cook figured out how to analyze this crazy set of algorithms.
This is now referred to as the Toom-Cook algorithm and the running times for any \( k \) look like \( \Theta(n^c) \) where \( c \) is some number between 1 and 2. In fact, as \( k \) increases (splitting into more parts), \( c \) decreases down towards 1 (but it never quite gets there!). In practice, the “overhead” of the algorithm as \( k \) gets larger makes it unusable except when \( k=3 \).
So is Toom-Cook the best we can do? Again, there is something better! There is an algorithm that you might have heard of called the Fast Fourier Transform, or FFT, which turns out to be one of the most important numerical algorithms ever. Its worst-case runtime is just like MergeSort’s: \( \Theta(n \log n) \).
Less than a decade after Toom published his algorithm, two Germans named Schoenhage and Strassen came up with a way to use the FFT to do integer multiplication! Unfortunately, it’s not quite as fast as the FFT itself, but it’s really close: the worst case is \( \Theta(n(\log n)(\log \log n)) \). Now \( \log \log n \) is not a function we’ve seen before, but as you can imagine it grows really, really slowly. In fact, for all practical purposes in this algorithm the \( \log \log n \) contributes a factor of about 3 to the cost of the algorithm.
The really cool thing is that all of these algorithms actually get used to multiply big integers! You already know one (really important) application of multiple-precision arithmetic, namely the RSA algorithm. There are of course lots of other uses of big integers as well, and when they grow to more than a few hundred or thousand bits, algorithms like Karatsuba’s start to become faster than the standard method. Actually, all three of the algorithms mentioned (Karatsuba, Toom-Cook, Schoenhage-Strassen) get used in some range of sizes for big integer multiplications.
There’s even a faster one (in theory), developed a few years ago by Martin Furer, with worst-case cost \( n \log n \cdot 2^{O(\log^* n)} \); it involves something called the iterated logarithm, which we won’t talk about. This is actually slightly better asymptotically than Schoenhage-Strassen’s algorithm, but the hidden constant is too large to make it useful in practice.
Still no one has found an integer multiplication algorithm that costs \( n \log n \), although they are really really close. The main thing I want you to take away from this discussion is there’s a lot out there in algorithm development! The world of sorting seems simple and solved, but most problems get messy and have lots of interesting opportunities for improvements, both in theory and in practice.
### 4 Master Method for Solving Recurrences
We’ve seen and solved a lot of recurrence relations by now. For all of them the base case is the same: \( T(1) = 1 \). The difference is of course in the recursive case. Let’s see what kind of recurrences we have solved so far:
<table>
<thead>
<tr>
<th>Algorithm</th>
<th>Recurrence</th>
<th>Asymptotic big-\( \Theta \)</th>
</tr>
</thead>
<tbody>
<tr>
<td>BinarySearch</td>
<td>\( 1 + T(n/2) \)</td>
<td>\( \log n \)</td>
</tr>
<tr>
<td>LinearSearch</td>
<td>\( 1 + T(n-1) \)</td>
<td>\( n \)</td>
</tr>
<tr>
<td>MergeSort (space)</td>
<td>\( n + T(n/2) \)</td>
<td>\( n \)</td>
</tr>
<tr>
<td>MergeSort (time)</td>
<td>\( n + 2T(n/2) \)</td>
<td>\( n \log n \)</td>
</tr>
<tr>
<td>KaratsubaMul</td>
<td>\( n + 3T(n/2) \)</td>
<td>\( n^{\log_2 3} \)</td>
</tr>
<tr>
<td>SelectionSort</td>
<td>\( n + T(n-1) \)</td>
<td>\( n^2 \)</td>
</tr>
<tr>
<td>StandardMul</td>
<td>\( n + 4T(n/2) \)</td>
<td>\( n^2 \)</td>
</tr>
</tbody>
</table>
Our standard method to solve recurrences by writing them out, noticing the pattern, etc., is pretty useful, and it reveals something about the structure of the algorithm too. But after seeing the same patterns show up time and time again, this starts to get tedious. Could you generalize any of the patterns?
Well, the good news is that you don’t have to; someone else has done it for you. These so-called “master methods” are now available for your use because you have mastered the standard way of solving recurrences. (Okay, that’s not really why they’re called master methods, but just go with it.)
The first one is a simplified version of what you will find in your book and online, and the second one is specially created by your generous instructor.
For both of these, we write the non-recursive part of the recurrence — that is, the part that doesn’t involve \( T(\ldots) \) — as \( f(n) = n^c(\log n)^d \) for some non-negative constants \( c \) and \( d \). Observe that in every case above, \( c \) is either 0 or 1, and \( d \) is always 0. But of course more complicated situations will arise in the future, and it’s nice to be prepared.
**Master Method A**
Suppose \( f(n) = n^c (\log n)^d \) for non-negative constants \( c \) and \( d \), and \( T(n) = aT(n/b) + f(n) \) for all \( n \geq 2 \), where \( a \) is a positive integer and \( b > 1 \).
Write \( e = \log_b a = (\log a) / (\log b) \), which must be at least 0.
Then there are three cases to consider:
1) \( c = e \). Then \( T(n) \in \Theta(f(n) \log n) = \Theta(n^c(\log n)^{d+1}) \).
2) \( c < e \). Then \( T(n) \in \Theta(n^e) = \Theta(n^{\log_b a}) \).
3) \( c > e \). Then \( T(n) \in \Theta(f(n)) = \Theta(n^c(\log n)^d) \).
We won’t prove why this works, but the basic idea is that, looking at the recursion tree, either every level has the same cost (case 1, where we multiply by the number of levels, \( \log n \)), or the number of leaves at the bottom level dominates the cost (case 2), or the very first call dominates the whole cost (case 3).
This basically covers all the “divide-and-conquer” type algorithms that we have seen, and most of the ones that we ever will see. But it doesn’t cover some of the other recursive algorithms, like SelectionSort above. For that we need the following, for recurrences that subtract from \( n \) rather than divide it in the recursive term:
**Master Method B**
Suppose \( f(n) = n^c (\log n)^d \) for non-negative constants \( c \) and \( d \), and \( T(n) = aT(n - b) + f(n) \) for all \( n \geq 2 \), where \( a \) and \( b \) are both positive integers.
Then there are two cases to consider:
1) \( a = 1 \). Then \( T(n) \in \Theta(n \cdot f(n)) = \Theta(n^{c+1}(\log n)^d) \).
2) \( a > 1 \). Then \( T(n) \in \Theta(e^n) \), where \( e \) is the positive constant \( a^{1/b} \), which must be greater than 1.
Things to notice here are, first, that the value of \( b \) doesn’t matter in the asymptotic analysis, and second, that if there is more than one recursive call (i.e., \( a > 1 \)), we end up with an exponential-time algorithm.
This should greatly simplify the process of going from a recurrence to an asymptotic growth rate. The process no longer requires any great intuition or “magic” steps. All we have to do is look at the recursive case, and try to match it to one of the cases of one of the master methods.
It is worth going through the examples in the table at the beginning of this section and seeing how to prove the big-\( \Theta \) bound of each one using a master method.
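In fact, the case analysis of Master Method A is mechanical enough to put into a few lines of Python. Here’s a small helper of our own devising (not from the book), with a few rows of the table above as test cases:

```python
from math import log

def master_method_a(a, b, c, d):
    """Asymptotic class of T(n) = a*T(n/b) + n^c * (log n)^d, as a string."""
    e = log(a) / log(b)
    if abs(c - e) < 1e-9:      # case 1: every level of the tree costs the same
        return f"n^{c:g} (log n)^{d+1:g}"
    elif c < e:                # case 2: the leaves dominate
        return f"n^{e:g}"
    else:                      # case 3: the root call dominates
        return f"n^{c:g} (log n)^{d:g}" if d else f"n^{c:g}"

print(master_method_a(2, 2, 1, 0))   # MergeSort (time): n^1 (log n)^1
print(master_method_a(3, 2, 1, 0))   # KaratsubaMul:     n^1.58496
print(master_method_a(4, 2, 1, 0))   # StandardMul:      n^2
```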
### 5 Matrix Multiplication
Matrices are extremely important in computation. Every time you see a 3-D object move around on your screen, or Netflix recommends a movie for you to watch, the backbone of the operation is computation with (big) matrices.
Now it turns out that one of the most basic things we want to do with matrices is multiply them together. Let’s review how this works.
The **dimensions** of a matrix are the number of rows and columns. So a 5x7 matrix has 5 rows and 7 columns, for a total of 35 entries. You should also remember that the **dot product** of two vectors (which must have the same length) is the sum of the products of their corresponding entries. We can use these to define the product of two matrices, which consists of the dot products of each row in the first matrix with each column in the second matrix.
Here’s an example. Say we want to multiply the following 4x3 and 3x2 matrices:
\[
A = \begin{bmatrix} 7 & 1 & 2 \\ 6 & 2 & 8 \\ 9 & 6 & 3 \\ 1 & 1 & 4 \end{bmatrix}, \qquad
B = \begin{bmatrix} 2 & 0 \\ 6 & 3 \\ 4 & 3 \end{bmatrix}
\]
This matrix product will be a 4x2 matrix containing the 8 dot products of every row in \( A \) with every column in \( B \):
\[
\begin{bmatrix}
7 * 2 + 1 * 6 + 2 * 4 & 7 * 0 + 1 * 3 + 2 * 3 \\
6 * 2 + 2 * 6 + 8 * 4 & 6 * 0 + 2 * 3 + 8 * 3 \\
9 * 2 + 6 * 6 + 3 * 4 & 9 * 0 + 6 * 3 + 3 * 3 \\
1 * 2 + 1 * 6 + 4 * 4 & 1 * 0 + 1 * 3 + 4 * 3
\end{bmatrix} = \begin{bmatrix}
28 & 9 \\
56 & 30 \\
66 & 27 \\
24 & 15
\end{bmatrix}
\]
In general, we might be multiplying an \( m \times k \) matrix by a \( k \times n \) matrix, to produce an \( m \times n \) matrix product. Things to notice:
- The middle dimensions (3 in the example, \( k \) in general) must match up exactly, so that the dot products have matching lengths.
- Each entry in the product matrix requires \( \Theta(k) \) operations, and exactly \( k \) multiplications in particular.
We could write out this algorithm formally, but I don’t think we really need to. Since there are exactly \( mn \) entries in the product matrix, and each of them costs \( \Theta(k) \) operations to compute, the total cost is \( \Theta(mkn) \).
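Written out anyway as a short Python sketch (plain lists of rows; the function name is ours), the triple loop makes that count easy to see:

```python
def mat_mul(A, B):
    """Standard product of an m-by-k matrix A and a k-by-n matrix B."""
    m, k, n = len(A), len(B), len(B[0])
    C = [[0] * n for _ in range(m)]
    for i in range(m):           # one output row at a time...
        for j in range(n):       # ...one output column at a time...
            for l in range(k):   # ...and Theta(k) work per entry
                C[i][j] += A[i][l] * B[l][j]
    return C

# The 4x3 times 3x2 example from above:
A = [[7, 1, 2], [6, 2, 8], [9, 6, 3], [1, 1, 4]]
B = [[2, 0], [6, 3], [4, 3]]
print(mat_mul(A, B))   # [[28, 9], [56, 30], [66, 27], [24, 15]]
```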
Here of course we are implicitly assuming that all the entries of both matrices are single-precision integers. If they are not, then the cost of the matrix product will just be multiplied by the cost of whatever integer multiplication algorithm we use. But the two don’t really affect each other at all, so while we’re concentrating on the matrix operations it’s fine to assume single-precision entries.
Notice that \( mkn \) is also the exact number of multiplications required to compute the matrix product. This will be important later. Also notice that, if the dimensions are all \( n \), i.e. we are multiplying square matrices, then the total cost is \( n^3 \) multiplications.
### 5.1 Strassen Algorithm
For a long time no one thought you could do better than \( n^3 \) operations for multiplying matrices. The multiplications seem to be involved in the very definition of the problem! But then Divide and Conquer strikes again!
Dividing a matrix “in half” is a little more involved than dividing an array or a polynomial, because matrices are two-dimensional objects. If we divide a matrix in half in one dimension only, then the resulting sub-problems won’t have the same shape as the original one.
The solution is two-fold: First, we only multiply matrices that are square, meaning that the row and column dimensions are the same. And second, we assume both dimensions are even, so the matrices can be evenly divided in quarters. So we are just talking about the product of two \( n \times n \) matrices, where \( n \) is evenly divisible by 2. Just like with integers, we can “pad” the matrices with zeroes to get the right dimensions.
\[
\begin{bmatrix} S & T \\ U & V \end{bmatrix} \begin{bmatrix} W & X \\ Y & Z \end{bmatrix} = \begin{bmatrix} SW + TY & SX + TZ \\ UW + VY & UX + VZ \end{bmatrix}
\]
Now remember that each of these submatrices \( S, T, \ldots, Z \) is just a matrix with dimensions \( \frac{n}{2} \times \frac{n}{2} \). So doing the product this way, you can count up to confirm that we will have to perform 8 multiplications and 4 additions of \( \frac{n}{2} \times \frac{n}{2} \) matrices. Each matrix addition requires \( n^2/4 = \Theta(n^2) \) time, so the total cost is given by the recurrence:
- \( T(n) = 1 \) when \( n = 1 \)
- \( T(n) = n^2 + 8T(n/2) \) when \( n \geq 2 \)
Now we can plug this into Master Method A to conclude that \( T(n) \in \Theta(n^3) \). Darn it! Once again, the straightforward divide-and-conquer approach didn’t gain us anything!
But just like with Karatsuba’s algorithm, there is a very clever way to add up some of the blocks before multiplying and then add and subtract the multiples so that the number of recursive calls is reduced by one — in this case from 8 to 7.
If you’re interested in this stuff, I recommend reading the description in CLRS, Section 4.2, where they try to explain how Strassen came up with this algorithm. (Really, the answer is “Trying lots of stuff, seeing what works, learning from past mistakes, and being persistent”. This is how nearly every difficult problem gets solved!)
The way Strassen’s algorithm works is to first compute the seven $\frac{n}{2} \times \frac{n}{2}$ matrix products:
\[
P_1 = S(X - Z) \\
P_2 = (S + T)Z \\
P_3 = (U + V)W \\
P_4 = V(Y - W) \\
P_5 = (S + V)(W + Z) \\
P_6 = (T - V)(Y + Z) \\
P_7 = (S - U)(W + X)
\]
After computing these, the four blocks of the matrix product just require some additions and subtractions:
\[
\begin{bmatrix}
S & T \\
U & V
\end{bmatrix}
\begin{bmatrix}
W & X \\
Y & Z
\end{bmatrix}
= \begin{bmatrix}
P_5 + P_4 - P_2 + P_6 & P_1 + P_2 \\
P_3 + P_4 & P_5 + P_1 - P_3 - P_7
\end{bmatrix}
\]
I’m not going to write out this algorithm. But hopefully you can see that it would work recursively, making each of the seven computations of the $P_i$’s into recursive calls. All in all, each recursive call requires 10 additions/subtractions, then 7 (recursive) multiplications, and then 8 more additions/subtractions, all on $\frac{n}{2} \times \frac{n}{2}$ matrices. Since each addition/subtraction costs $\Theta(n^2)$, the total cost is given by the recurrence
\[
T(n) = \begin{cases}
1, & n = 1 \\
n^2 + 7T(n/2), & n \geq 2
\end{cases}
\]
Applying the Master theorem here tells us that $T(n) \in \Theta(n^{\lg 7})$, which is $O(n^{2.81})$. Interestingly, when this algorithm was first presented, no one ever thought it would be used, because the “hidden constant” in the computation is so much larger than for the standard $\Theta(n^3)$ algorithm. But alas, the power of asymptotic analysis is that, as time goes on and computers get more powerful, problems get bigger. And when problems get bigger, eventually the better asymptotic algorithm wins. Today Strassen’s algorithm is used in practice for very large matrix computations (think more than a million entries in each matrix).
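If you’d like to see the seven products in action, here is a minimal recursive sketch using NumPy, assuming $n$ is a power of 2 (the base-case cutoff and the function name are our own choices; a real implementation would switch to the standard algorithm below some threshold):

```python
import numpy as np

def strassen(A, B):
    n = A.shape[0]
    if n == 1:
        return A * B   # 1x1 base case
    m = n // 2
    # The four blocks of each operand, named as in the text.
    S, T, U, V = A[:m, :m], A[:m, m:], A[m:, :m], A[m:, m:]
    W, X, Y, Z = B[:m, :m], B[:m, m:], B[m:, :m], B[m:, m:]
    P1 = strassen(S, X - Z)
    P2 = strassen(S + T, Z)
    P3 = strassen(U + V, W)
    P4 = strassen(V, Y - W)
    P5 = strassen(S + V, W + Z)
    P6 = strassen(T - V, Y + Z)
    P7 = strassen(S - U, W + X)
    return np.vstack([
        np.hstack([P5 + P4 - P2 + P6, P1 + P2]),
        np.hstack([P3 + P4, P5 + P1 - P3 - P7]),
    ])
```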
But although the importance of this discovery — both in theory and in practice — cannot be overstated, it is again somewhat “unsatisfying” and doesn’t seem to be the best possible algorithm.
In fact, asymptotically faster algorithms have been developed, and the fastest among them was invented by Coppersmith and Winograd a few decades ago; it has worst-case running time $O(n^{2.38})$. Actually, the running time of this algorithm has gone down twice in the last two years — even though there haven’t really been any new algorithmic ideas! Instead, the analysis gets so difficult that people have been using computer programs to do the analysis for them, and coming up with slightly improved exponents on the worst-case cost. Yes, there are actually algorithms to do the analysis of other algorithms!
(And no, none of the asymptotic improvements to Strassen’s algorithm get used in practice, yet.)
### 6 Computing Fibonacci Numbers
Remember our old pal the Fibonacci sequence? It’s a little off-topic for this unit on “multiplication”, but it introduces some techniques that will come in handy for an interesting problem on matrix multiplication. Remember that the Fibonacci numbers are defined by the recurrence relation:
- $f_i = i$ when $0 \leq i \leq 1$
- $f_i = f_{i-1} + f_{i-2}$ when $i \geq 2$
Now here’s a simple recursive algorithm to compute any Fibonacci number:
Fibonacci numbers, simple recursive version
Input: Non-negative integer \( n \)
Output: \( f_n \)
```python
def fib(n):
    if n <= 1:
        return n
    else:
        return fib(n-1) + fib(n-2)
```
Seems simple enough. Now let’s analyze it. The running time is given by the recurrence
\[
T(n) = \begin{cases}
1, & n \leq 1 \\
1 + T(n - 1) + T(n - 2), & n \geq 2
\end{cases}
\]
Does this look familiar? It’s just like the actual Fibonacci recurrence except for that pesky “+ 1” part. Typing the first few values into the Online Encyclopedia of Integer Sequences will give you the general formula, which you can confirm with a proof by induction: \( T(n) = 2f_{n+1} - 1 \).
And this is bad! We already showed before that \( f_n < 2^n \) for any value of \( n \). And you can follow along the same steps of that proof to show that \( f_n \geq 2^{n/2} - 1 \) as well. Therefore the cost of this algorithm is actually exponential in the *value* of the input \( n \).
If you coded this algorithm up, you probably wouldn’t be able to get as far as computing \( f_{50} \), even if you had the most powerful computer in the world! To see what the problem is, let’s look at the tree of recursive calls for computing \( \text{fib}(6) \):
### 6.1 Memoizing Fibonacci
Remember that a key tool for improving algorithms is to look for repeated or unnecessary computations. There is certainly a lot of repetition here: just to compute \( \text{fib}(6) \), for example, there are 5 separate calls to \( \text{fib}(2) \), each of which *separately* calls \( \text{fib}(1) \) and \( \text{fib}(0) \).
The idea of *memoization* is just to remember the return value each time the function is called, storing the values in a big table. The “table” could be a simple dynamic-sized array (like the vector class), or a hash table, or a red-black tree... you get the idea.
The memoized version of the Fibonacci function would look like the following, where the table `fib_table` is *globally defined*.
Fibonacci numbers, memoized version: \( \text{fibmemo}(n) \)
Input: Non-negative integer \( n \)
Output: \( f_n \)
```python
fib_table = {}  # an empty hash table, globally defined

def fib_memo(n):
    if n not in fib_table:
        if n <= 1:
            fib_table[n] = n
        else:
            fib_table[n] = fib_memo(n-1) + fib_memo(n-2)
    return fib_table[n]
```
Now look back and compare this to the non-memoized version. Notice that the inner if/else corresponds *exactly* to the original recursive function, and the rest is just about doing the table lookup. So we can apply memoization to any recursive function!
Figure 1: Recursion tree for fib(6)
Figure 2: Recursion tree for fibmemo(6)
And at least for Fibonacci, this technique greatly reduces the running time of the function. Here is the tree of recursive calls for fibmemo(6):
The analysis here is a little different from most we’ve seen, but it’s not too hard to follow. First, assume that table lookups take constant time. Then notice two facts:
1) The worst-case cost of fibmemo(n) is just a constant number of primitive operations, plus a constant number of recursive calls.
2) The only recursive calls required to compute fibmemo(n) are fibmemo(n−1), fibmemo(n−2), down to fibmemo(0).
Therefore, since each recursive call is only fully evaluated at most once, the total worst-case cost is Θ(n) calls each with cost Θ(1), for a total of Θ(n) primitive operations. Much improved from the original version!
Actually, there’s a much simpler way to calculate Fibonacci numbers using a simple loop and keeping track of the previous two values in the sequence at every step, which also achieves Θ(n) time.
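For instance, a sketch of that loop:

```python
def fib_iter(n):
    a, b = 0, 1          # invariant: a = f_i and b = f_(i+1)
    for _ in range(n):
        a, b = b, a + b  # slide the window forward one step
    return a

print(fib_iter(6))   # 8
```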
But the power of memoization is its generality. It can be applied to a wide variety of problems, not just this particular one. For any problem where we see the same recursive calls showing up again and again, memoization is an easy way to speed it up, often by a considerable margin. In fact, some programming languages even have memoization built in! (Not C++ or Java, unfortunately. Can you think of why it would be a bad idea to memoize every function call?)
### 7 Matrix Chain Multiplication
Now let’s return to our theme of multiplication. We want to look at matrix multiplication, but this time suppose we have a whole list of matrices and want to multiply all n of them together:
\[ A_1 \cdot A_2 \cdot A_3 \cdots A_n. \]
All we know so far is how to compute a product of two matrices. And, at least for now, we’ll just use the standard algorithm to do it (not Strassen’s). So the only thing to do really is to figure out how to break this big product of n matrices into n−1 products of 2 matrices each. This ordering of multiplications could be specified for example by putting parentheses everywhere.
Now there are two things to know about matrix multiplication. The first is that it’s associative. So whether we do \((A_1A_2)A_3\) or \(A_1(A_2A_3)\), the answer will be the same. Great!
But matrix multiplication is definitely not commutative. For example, \(A_1A_2A_3\) will not (in general) be equal to \(A_2A_1A_3\). This has to do with the rules of multiplication, that the “inner” dimensions must match up. Mixing up the order in this way would mix up the dimensions, and wouldn’t make any sense at all. So when we say “ordering” of the products, we’re just talking about which multiplications to do first, second, and so on (which we can mess around with safely), and not about which matrix comes first in the products (which we definitely can’t mess around with).
To see what we’re talking about, consider for example if we wanted to multiply \(XYZ\) where
- \(X\) is a 10x2 matrix.
- \(Y\) is a 2x8 matrix.
- \(Z\) is an 8x3 matrix.
Therefore the whole product is a 10x3 matrix. But we have two choices for the parenthesization:
1) **Compute X times Y first.** This corresponds to the parenthesization \((XY)Z\). The number of multiplications to compute \(X\) times \(Y\) is \(10 \times 2 \times 8 = 160\), and the result \(XY\) will be a 10x8 matrix. Then the number of mults to compute \(XY\) times \(Z\) is \(10 \times 8 \times 3 = 240\). So the total number of mults in this parenthesization is \(160 + 240 = 400\).
2) **Compute Y times Z first.** This corresponds to the parenthesization \( X(YZ) \). The number of multiplications to compute \( Y \) times \( Z \) is \( 2 \times 8 \times 3 = 48 \), and the result \( YZ \) will be a \( 2 \times 3 \) matrix. Then the number of mults to compute \( X \) times \( YZ \) is \( 10 \times 2 \times 3 = 60 \). So the total number of mults in this parenthesization is \( 48 + 60 = 108 \). (A quick check of this arithmetic appears below.)
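Here’s that check in Python, using the rows-times-inner-times-columns count for a single product (the helper name is ours):

```python
def cost(p, q, r):   # mults for a (p x q) times (q x r) product
    return p * q * r

print(cost(10, 2, 8) + cost(10, 8, 3))   # (XY)Z: 160 + 240 = 400
print(cost(2, 8, 3) + cost(10, 2, 3))    # X(YZ):  48 +  60 = 108
```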
What a difference! When we generalize this out to the product of \( n \) matrices, the difference in the cost between different parenthesizations can actually be exponential in \( n \) — so it could mean the difference between being able to do the computation, ever, on any computer, or being able to do it on your cell phone.
### 7.1 Computing the minimal mults
As a first step towards computing the best parenthesization, let’s tackle the (hopefully) easier problem of computing the least number of multiplications needed to find the product of \( n \) matrices \( A_1 \) through \( A_n \). What you should realize is that the contents of these matrices don’t actually matter; all we need are their dimensions.
We’ll say the dimensions are stored in an array \( D \) of size \( n + 1 \). (We only need \( n + 1 \) because all the “inner dimensions” must match up.) For each \( i \) from 1 to \( n \), \( A_i \) will be a \( D[i-1] \times D[i] \) matrix. In particular, the whole product is a \( D[0] \times D[n] \) matrix.
There’s an easy way to figure out the minimal mults if we use recursion: we just need to figure out what is the last multiplication that should be performed (corresponding to the outermost parentheses), and then let recursion handle the rest. Here’s the algorithm:
**Minimal mults, version 1:** \( \text{mm}(D) \)
**Input:** Dimensions array \( D \) of length \( n + 1 \)
**Output:** The least number of element multiplications to compute the matrix chain product
```python
def mm(D):
    n = len(D) - 1
    if n == 1:
        return 0
    else:
        fewest = float('inf')
        for i in range(1, n):
            t = mm(D[0 : i+1]) + D[0]*D[i]*D[n] + mm(D[i : n+1])
            if t < fewest:
                fewest = t
        return fewest
```
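With the `float('inf')` sentinel above, this runs as ordinary Python; for instance, on the \( XYZ \) example from earlier:

```python
print(mm([10, 2, 8, 3]))   # 108, the cost of the X(YZ) ordering
```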
Now let’s analyze it. It’s pretty easy to see that the cost is always \( \Theta(n) \) plus the cost of the recursive calls made inside the loop. So we just have to write down some summations for these recursive calls to get a recurrence for the whole thing:
\[
T(n) = \begin{cases}
1, & n = 1 \\
n + \sum_{i=1}^{n-1}(T(i) + T(n-i)), & n \geq 2
\end{cases}
\]
This obviously doesn’t look like anything we’ve seen before. But let’s try simplifying the recursive case:
\[
T(n) = n + \sum_{i=1}^{n-1}\big(T(i) + T(n-i)\big) = n + 2 \sum_{i=1}^{n-1} T(i)
\]
This works because the second terms \( T(n-i) \) run over exactly the same values as the first terms \( T(i) \), just in reverse order. Now let’s extract the \( T(n-1) \) parts out of the above:
\[
T(n) = n + 2 \sum_{i=1}^{n-1} T(i) = n + 2 \sum_{i=1}^{n-2} T(i) + 2T(n-1)
\]
Now here’s the clever part: From the last simplification, we know that \( T(n-1) = n - 1 + 2 \sum_{i=1}^{n-2} T(i) \). So now we can do
\[
T(n) = n + 2 \sum_{i=1}^{n-2} T(i) + 2T(n-1) = T(n-1) + 1 + 2T(n-1) = 1 + 3T(n-1).
\]
Now that looks pretty nice. In fact, it’s so nice that we can apply the Master Method B to it and conclude that the running time of this algorithm is $\Theta(3^n)$.
That’s really, really slow! Just like with the original Fibonacci function, if we drew out the recursion tree, we would notice the same recursive calls getting computed over and over again. So you shouldn’t be surprised at the first idea for an improvement:
### 7.2 Memoized Minimal Mults
Whenever we notice the same recursive calls being computed again and again, memoization is a general technique that can always be used to avoid some unnecessary computations. Let’s try it for the matrix chain multiplication problem:
**Minimal mults, memoized version**: $\text{mmm}(D)$
**Input**: Dimensions array $D$ of length $n + 1$
**Output**: The least number of element multiplications to compute the matrix chain product
```python
mm_table = {}  # global table; keys are tuples of dimensions

def mmm(D):
    n = len(D) - 1
    key = tuple(D)  # lists aren't hashable, so use a tuple as the key
    if key not in mm_table:
        if n == 1:
            mm_table[key] = 0
        else:
            fewest = float('inf')
            for i in range(1, n):
                t = mmm(D[0 : i+1]) + D[0]*D[i]*D[n] + mmm(D[i : n+1])
                if t < fewest:
                    fewest = t
            mm_table[key] = fewest
    return mm_table[key]
```
What’s important to recognize is that memoizing this function is exactly the same as memoizing the Fibonacci function (or any other one, for that matter). We just have to create a table to hold all the saved values, add an “if” statement around the original code, and change all the original “return” statements to set the table entry.
The only real tricky part in implementing this is getting the table right. Notice that the keys for looking up values in the table are now (tuples of) dimension arrays rather than single integers. This is going to affect the choice of data structure for the table (a hash table would still work great), as well as the cost of looking things up in the table (which should now be $\Theta(n)$).
Now for the analysis. Like with memoized Fibonacci, we want to ask what the worst-case cost of any single call is, not counting the recursive calls, and then ask what the total number of recursive calls could be.
Here the worst-case cost of a single call (not counting recursion) isn’t too difficult to figure out. There is a for loop with $n - 1$ steps, and two table lookups per step that each cost $\Theta(n)$. So the total cost is $\Theta(n^2)$, plus the recursive calls.
But how many recursive calls will there be in total? Notice that every recursive call is on a contiguous subarray, or “chunk”, of the original array $D$. Counting these up, we have
$D[0..1], D[0..2], D[0..3], \ldots, D[0..n], D[1..2], D[1..3], \ldots, D[n-1..n]$,
which is the familiar arithmetic sequence $n + (n - 1) + (n - 2) + \cdots + 1$. Hopefully you remember (or can figure out) that this sums to $n(n+1)/2$. So there are $O(n^2)$ recursive calls in total.
Putting all this together, we have $O(n^2)$ recursive calls, each costing $O(n)$ for a total cost at most $O(n^3)$. Despite the cosmetic similarity, this is quite an improvement on the original $\Theta(3^n)$ cost!
### 7.3 Dynamic Programming Solution
Memoization is a fantastic solution to this problem, and it is a sufficiently general solution that it applies to many others as well. However, it has at least three drawbacks:
1) The choice of data structure for the table is going to have a big effect on performance. The best we can really do in general is to say “try a hash table and hope it works”, but this requires a lot of trust and assumptions.
2) Our analysis was a bit tricky! There’s some sophisticated stuff going on, and we have to unwind it a bit to do the analysis. It would be nice to have a clearer picture of the cost just by looking at the algorithm.
3) Clearly we have to use some memory to avoid the exponential running time of the original algorithm, but memoization can use too much extra memory, especially when there is a single global table and a lot of calls to the original function.
Dynamic programming solves these issues by being more explicit about what the table looks like and how it gets filled in. It is more of a “bottom-up” approach to solving the problem, rather than the “top-down” approach of the memoized version. Generally speaking, dynamic programming solutions are harder to come up with than the memoized version, but they run faster and are easier to analyze.
From our analysis of the memoized version, notice that the recursive calls we need to compute $mm(D)$ all look like $mm(D[i..j])$ for some integers $i$ and $j$ that satisfy $0 \leq i < j \leq n$. This leads to the idea of storing the saved values in a two-dimensional array $A$, where the entry in the $i$’th row and $j$’th column of $A$ will specify the return value of $mm(D[i..j])$.
Because of the way things are defined, $A[i,j]$ will be undefined anywhere that $i \geq j$. This means that $A$ will be an $(n+1) \times (n+1)$ matrix where only the entries in the top-right half are filled in.
So far we are just formalizing the data structure for the memoized version. The tricky part for dynamic programming is getting the order right. Because we are working from the bottom-up, without using recursion, we have to be careful that we have the results of all the necessary subproblems “filled in” in the table before moving on.
To figure this out, consider what we need to fill in $A[i,j]$. This corresponds to $mm(D[i..j])$, which will make further recursive calls to $mm(D[i..k])$ and $mm(D[k..j])$ for every $k$ between $i$ and $j$. These correspond to the table entries $A[i,i+1]$ up to $A[i,j-1]$, and $A[i+1,j]$ up to $A[j-1,j]$. Labelling the diagonals with numbers starting with the main diagonal of the table, this corresponds to the following picture:
For any particular entry in the table, we need all the ones below it and to the left filled in already. More generally, for any entry in the $i$’th diagonal, we need every entry in every diagonal from 1 to $i-1$ filled in. This gives the ordering we’ll use: fill in each diagonal, starting with number 1, until the whole table is filled in. Here’s the algorithm:
**Minimal mults, dynamic programming version**
Input: Dimensions array $D$ of length $n+1$
Output: The least number of element multiplications to compute the matrix chain product
```python
def dmm(D):
    n = len(D) - 1
    A = [[0] * (n+1) for _ in range(n+1)]  # (n+1) by (n+1) table
    for diag in range(1, n+1):
        for row in range(0, n-diag+1):
            col = diag + row
            if diag == 1:
                A[row][col] = 0
            else:
                A[row][col] = float('inf')
                for i in range(row+1, col):
                    t = A[row][i] + D[row]*D[i]*D[col] + A[i][col]
                    if t < A[row][col]:
                        A[row][col] = t
    return A[0][n]
```
Figure 3: Dynamic programming for minimal mults
It’s worth running through an example to see how this works. Try finding the minimal number of multiplications to compute $WXYZ$, where
- $W$ is a 8x5 matrix
- $X$ is a 5x3 matrix
- $Y$ is a 3x4 matrix
- $Z$ is a 4x1 matrix
Therefore $D = [8, 5, 3, 4, 1]$ is the array of dimensions, and the table $A$ will be a 5x5 matrix holding all the intermediate values. See if you can follow the algorithm to fill it in. You should get:
<table>
<thead>
<tr>
<th></th>
<th>0</th>
<th>1</th>
<th>2</th>
<th>3</th>
<th>4</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td></td>
<td>0</td>
<td>120</td>
<td>216</td>
<td>67</td>
</tr>
<tr>
<td>1</td>
<td></td>
<td></td>
<td>0</td>
<td>60</td>
<td>27</td>
</tr>
<tr>
<td>2</td>
<td></td>
<td></td>
<td></td>
<td>0</td>
<td>12</td>
</tr>
<tr>
<td>3</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>0</td>
</tr>
<tr>
<td>4</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
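If you run the `dmm` sketch from above on this example, you can confirm the corner entry directly:

```python
print(dmm([8, 5, 3, 4, 1]))   # 67
```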
This tells us that the minimal number of multiplications is 67, which (by unwinding the computation) corresponds to the parenthesization $W(X(YZ))$. In practice, the information needed to “unwind” the computation and recover the actual ordering of the multiplications would be stored in the table alongside the minimal multiplication counts. This just means saving the value of $i$ in the inner for loop at the moment the minimal value is set.
Now let’s compare the memoized and dynamic programming versions of this problem. The dynamic programming solution is more difficult to formulate, in large part because it requires us to specify the data structure more explicitly, and to choose the ordering carefully to fill in the table. But the advantages to this are that we get a more compact data structure, generally resulting in faster code, and it is easier to see how the algorithm actually works. For example, it is now just a familiar exercise in counting nested loops to see that the cost of this algorithm is $\Theta(n^3)$.
Real-time Crowd Control of Existing Interfaces
Walter S. Lasecki\textsuperscript{1}, Kyle I. Murray\textsuperscript{1}, Samuel White\textsuperscript{1}, Robert C. Miller\textsuperscript{2}, and Jeffrey P. Bigham\textsuperscript{1}
University of Rochester, Computer Science\textsuperscript{1}
Rochester, NY 14627 USA
\{wlasecki,jbigham\}@cs.rochester.edu
\{kyle.murray,samuel.white\}@rochester.edu
MIT CSAIL\textsuperscript{2}
Cambridge, MA 02139 USA
rcm@mit.edu
ABSTRACT
Crowdsourcing has been shown to be an effective approach for solving difficult problems, but current crowdsourcing systems suffer two main limitations: (i) tasks must be repackaged for proper display to crowd workers, which generally requires substantial one-off programming effort and support infrastructure, and (ii) crowd workers generally lack a tight feedback loop with their task. In this paper, we introduce Legion, a system that allows end users to easily capture existing GUIs and outsource them for collaborative, real-time control by the crowd. We present mediation strategies for integrating the input of multiple crowd workers in real-time, evaluate these mediation strategies across several applications, and further validate Legion by exploring the space of novel applications that it enables.
ACM Classification: H5.2 [Information interfaces and presentation]: User Interfaces. - Graphical user interfaces.
General terms: Human Factors, Experimentation
Keywords: real-time crowd control, real-time human computation, crowdsourcing, remote control
INTRODUCTION
Crowdsourcing has been shown to be effective at solving problems that are beyond the capabilities of current automated approaches [2, 3]. However, current crowdsourcing systems suffer from two main limitations: (i) tasks must first be repackaged for proper display to crowd workers, which generally requires substantial one-off programming effort and corresponding support infrastructure; and (ii) crowds generally participate asynchronously, without a tight feedback loop between workers and their task. This paper considers a new approach to crowd computing that surpasses both limitations by using existing graphical user interfaces and putting the crowd in control of the mouse and keyboard. We introduce Legion, a system that allows end users to easily capture existing GUIs and outsource them for collaborative, real-time control by the crowd.
To use Legion, users first select a portion of their desktop interface that they would like the crowd to control, provide a natural language description of the task for the crowd to perform, and offer a price that they are willing to pay (Figure 2). Legion then forwards a video feed of the interface to the crowd and forwards key presses and mouse clicks made by the crowd back to the interface. To improve reliability, multiple workers are recruited to collaboratively complete the task. A fundamental question that we explore in this paper is how to effectively mediate crowd work to balance reliability with the desire for real-time control of the interface. Legion coordinates task completion by recruiting crowd workers, distributing the video feed, and providing a flexible mediation framework to synthesize the input of the workers.
Legion lets end users leverage crowdsourcing in ways previously not possible. Our original motivation was to provide a quick way of bootstrapping highly-robust, intelligent assistive robots. Such systems usually require significant (and costly) training to work automatically, are prone to errors, and so can often be controlled remotely by experts. We imagined a hybrid system in which robots could operate mostly automatically, but in which new tasks could be crowdsourced on demand for real-time control. Legion supports the flexible control of such existing remote-control interfaces.
We have used Legion to turn an inexpensive robot into one that intelligently follows natural language commands. We have outsourced bits of office work using a word processor or spreadsheet. We have used it to fill in for us while playing games requiring constant attention while we got a drink. We have used it to provide the intelligence of a predictive keyboard to make its suggestions quicker and more accurate. As we will highlight, not all of these use cases currently work flawlessly, but they illustrate the broad possibilities of outsourcing existing interfaces and motivate our work on real-time crowd control.
Legion supports experimentation with different ways of combining the input of multiple crowd workers in real-time while retaining reliability guarantees. Although there are numerous approaches that could be taken, we have implemented five mediation strategies in Legion that we compare in this paper: (i) control by a single crowd worker, (ii) mob rule in which all input from all workers is serialized and forwarded to the interface, (iii) voting over small time windows in which only the most popular input is forwarded to the interface, (iv) dynamically choosing a random worker to put in control, switching only when they become inactive, and (v) using crowd-agreement to dynamically elect leaders whose input is immediately forwarded to the interface and whose time in control is a function of their reputation built over time. The most appropriate mediation strategy is context dependent, as we will demonstrate with experiments across several different types of applications and tasks.
Our contributions are the following:
- We articulate the idea of real-time crowd control of existing interfaces, and describe considerations for the design of applications in this space.
- We present a system, Legion, that lets end users easily outsource existing interfaces to the crowd and exposes a framework for mediating the inputs of crowd workers.
- We formulate several mediation strategies for aggregating the input of multiple crowd workers, and investigate these strategies in experiments with a diverse set of applications.
- We further validate Legion by showing several new types of applications that we created that illustrate interactive crowd assistance, programming by demonstration, and the mash-up of several desktop applications.
BACKGROUND
Individual users have controlled interfaces remotely as long as networked systems have existed, dating back to early terminals that allowed users to log in and control time-sharing systems. With graphical user interfaces came remote display protocols such as the X Window System [18], and Virtual Network Computing (VNC) [17] became popular. Remote control has also been used to compensate for limitations in mobile browsers. For instance, Highlight runs a full browser on its server, which is remote controlled by the mobile browser [14]. Specialized remote control systems even allow aircraft to be piloted remotely (Figure 1). The main difference between these prior systems and Legion is the idea that multiple workers could collectively control the end user’s interface directly.
Real time groupware allows remote users to collaborate in shared online spaces [8], and many online games and multi-user dungeons (MUDs) likewise allow users to play or interact in the same space with one another. In contrast, Legion synthesizes the input of multiple workers to act as a single controller of existing interfaces. A few web-based games allow multiple users to control a single interface. For example, Massively Multiplayer Pong allows all of the current players to control the paddle [13]. Its interface displays both the “real” paddle position and the user-specific paddle position. Maynes-Aminzade et al. have brought these techniques into the real world by enabling large audiences to collectively control a projected interface with collective actions like leaning to the left or right [12].
In machine learning, meta-learners combine multiple weak learners for better performance [16]. A specific class of meta-learners called arbiters learn to combine the input of multiple base classifiers in order to arrive at a final decision in a supervised manner and can work in an online fashion [4]. Legion is able to use the metric of crowd agreement that we have defined to learn how to combine crowd input in an unsupervised manner.
Prior work has considered how graphical user interfaces could be controlled automatically. Early work in this area used operating system APIs, but these projects quickly ran into problems because limitations in the APIs meant that many interfaces could not be correctly interpreted and manipulated in this way. The CoScripter [11] web automation system leverages the openness of the web to reliably interpret and manipulate the web interface, affording the freedom to focus on high-level problems like end user programming and intelligent interfaces for interface automation. Recent projects have taken a more robust low-level, pixel-based approach to interpreting and manipulating GUI components [23, 7]. Legion crowdsources not only the interpretation and manipulation of GUI components but also higher-level planning, allowing greater flexibility in how end users decide to automate their interfaces and what can be automated.
Human computation was introduced to integrate people into computational processes to solve problems too difficult for computers to solve alone, but has not been applied to real-time control problems. Human computation has been shown useful in writing and editing [2], image description and interpretation [3, 22], and protein folding [6], among many other areas. Existing abstractions focus on obtaining quality work, and generally introduce redundancy and layering into tasks so that multiple workers contribute and verify results at each stage: for instance, guaranteeing reliability through answer agreement [22] or the find-fix-verify pattern of Soylent [2]. Unfortunately, this takes time, which makes these approaches unsuitable for real-time control. Naive solutions like recruiting a single online worker may allow for real-time control, but would subvert existing methods of achieving reliability and are not robust to workers leaving (common in the crowd). As a result, new abstractions are necessary.
Several systems have explored how to make human computation interactive. As an example, VizWiz [3] answers visual questions for blind people quickly. It uses quikTurkit to pre-queue workers on Amazon’s Mechanical Turk so that they
will be available when needed. Legion needs multiple users to be available at the same time in order for its input mediators to work correctly. Prior systems have also needed multiple workers to be available. For instance, the ESP Game encouraged accurate image labels by pairing players together and requiring them both to enter the same label, although ESP Game players could also be paired with simulated players [22]. Seaweed reliably got Mechanical Turk workers to be available at the same time to play economic games by requiring the first worker to arrive to wait (generally for a few seconds) [5]. Legion similarly utilizes the input of multiple workers and asks workers to wait until enough workers have arrived, but engages workers for longer control tasks.
Prior systems have enabled real-time control from the web, most often in the context of robotics [19]. Osentoski et al. used a web-based interface to a robot to crowdsource a substantial amount of training data that they then used to train a system for automatic real-time control of a robot [15]. Goldberg et al. enabled groups of web users to collectively control a web cam [20] and make navigation decisions for a human actor [9] by interactively choosing regions of interest in captured images. Such systems are generally created only for the control of a particular system, whereas Legion can be used to control a variety of interfaces that were not originally intended to be crowdsourced. Legion might help researchers train other types of systems for real-time control.
THE CROWD
We define the crowd as a dynamic pool of anonymous workers of varying reliability. Because the pool is dynamic, workers come and go, and no specific worker can be relied upon to be available at a given time or to continue working on a job for a set amount of time. Workers cannot be relied upon to provide high-quality work of the type one might expect from a traditional employee for various reasons including misunderstanding of task directives, laziness, or even maliciousness. Finally, workers may experience delays that are beyond their control, such as network bandwidth variability.
For enabling real-time control, the dimensions of the crowd that are most relevant are (i) the time each recruited worker continues working on the task and (ii) the quality of the worker’s output. These can be measured empirically for a specific crowd source, but are expected to be task-dependent [21]. A related dimension is the latency required to recruit workers to a particular job. For this paper, we assume that workers can be made available quickly, recruited and kept available using systems like quikTurkit [3].
Our experiments are run on Amazon’s Mechanical Turk because of the ease by which workers can be recruited. Nevertheless, our framework is compatible with worker pools from other marketplaces, volunteers drawn from social networks, or any other group of workers available.
CONTROL CONSIDERATIONS
Legion can control a variety of applications, but several dimensions are especially important for enabling real-time crowd control. As we explain later in this paper, Legion synthesizes input from multiple workers by identifying when workers in the crowd provided the same input at the same time. To recognize when multiple inputs agree, Legion requires the input to be discrete, and, to associate input over time, Legion uses fixed time windows. As we will see, this does not mean worker input needs to be delayed until the end of a window.
The input space is defined by the application that is being controlled. GUI applications vary from being controllable by a few discrete keys to using the full continuous input of a pointing device. Key presses are already discrete. Pointer clicks are also discrete in the space of the pixel locations. Legion reduces the size of this space to a fixed grid in order to aggregate clicks. Legion does not currently handle other pointer interactions, such as movement paths or dragging.
Many tasks have several correct ways of completing them. For instance, if the task is to navigate a robot around an obstacle to a specified location there are at least two reasonable, high-level paths (going left around the obstacle or going right). Applications can be characterized by the number and degree of these decision points. Crowd input can be expected to diverge more at such decision points.
To correlate worker inputs over time, we divide time into discrete windows called epochs and associate inputs received in the same epoch together. Tasks with more decision points may be more easily solved by mediation strategies that allow for longer term strategies over multiple epochs.
LEGION
Legion is comprised of (i) an end user client application for capturing and controlling interfaces with the crowd, (ii) a server-side framework for recruiting and mediating input from crowd workers, and (iii) a web-based front end on which crowd workers complete the specified task (Figure 2).
End User Client Application
The Legion client allows users to select a portion of their screen to be controlled by the crowd by drawing a box around it. We chose to allow users to flexibly choose the region of the screen to export, instead of choosing a particular window, because it allows users to (i) exercise control over the information shared with the crowd, (ii) expose simpler interfaces for workers comprising only necessary interface components, and (iii) create new mash-ups by sharing pieces of multiple applications arranged next to one another (discussed later). Smaller regions also lower the required bandwidth.
Users provide a natural language description of the task that they would like the crowd to complete, and create a legend of keys that the crowd can use and where they are able to click. To simulate mouse clicks and keyboard events locally, Legion uses the OS X Application Services framework to post Quartz events into the event stream at specific screen locations. The CamTwist library captures video from the user’s screen and sends it to the server. Only specified keys and mouse clicks in the defined region are simulated (we use a white list). This does not completely ensure security, but reduces what workers are able to control on the client GUI. Future work may explore how to further isolate crowd input.
Figure 2: Legion is a framework that allows existing interfaces to be outsourced to the crowd. In this example, a user has outsourced control of her Rovio robot. The Legion client allows end users to choose a portion of their screen to send to crowd workers, sends a video stream of the interface to the server, and simulates events (key presses, mouse clicks) when instructed by the server. The Legion server recruits workers, aggregates the input of multiple crowd workers using flexible input mediators, and forwards the streaming video from the client to the crowd workers. The web interface presents the streaming video, collects worker input (key presses and mouse clicks), and gives workers feedback.
Users decide when the crowd is done, and decide whether the crowd as a whole successfully completed their given task.
Server-Side Framework
The Legion server is a Java application that recruits multiple workers from the crowd; collects, mediates, and forwards worker key presses and mouse clicks to the client application; and pays workers for their work. To recruit workers, Legion uses quikTurkit, a script for the TurKit platform [10] that maintains an active pool of workers. The HITs are described using the short task description provided by the end user. Workers can either be recruited on-demand or automatically when the user opens the application in anticipation of future need. If workers arrive early, they are paid to wait. quikTurkit was able to satisfactorily maintain the dynamic pool of at least 3–5 workers needed for Legion.
The Legion server also includes Flash Media Server (FMS)². Video is streamed from the client application to FMS, which supports multiple workers receiving the stream via their web browsers. The video is compressed using the On2 VP6 codec and sent using the User Datagram Protocol (UDP), which, unlike TCP, allows packets to be dropped. As a result, if bandwidth is temporarily reduced between our server and the workers, frames will drop instead of being queued, helping to ensure that the frame currently being shown is the most recent one.
²http://www.adobe.com/products/flashmediaserver/
Figure 3: Our method for discretizing mouse clicks. The screen is split into a grid without knowledge of the interface beneath it, and so some inputs that are semantically the same will be discretized into different grid cells (as here). When simulating the click, Legion either uses a coordinate directly or averages the positions of the clicks, depending on the input mediator.
Worker key presses and mouse clicks are sent to the server, which aggregates them and uses one of several input mediators to choose which events to send to the client. The server framework allows inputs to be analyzed, aggregated and filtered in a number of ways. It includes a number of procedures for deciding which inputs to pass through, for blocking or allowing inputs only from certain workers, or for analyzing inputs over time windows. End users will neither write these procedures nor need to decide between them, but the flexibility of the framework may provide opportunities for researchers looking to evaluate different strategies for mediating input from multiple crowd workers controlling an interface in real-time.
The input mediators require input to be discrete. This is straightforward for key presses. To discretize mouse clicks, we divide the screen into a grid and use the grid cell in which the click occurred (Figure 3). For instance, the event descriptor 12,20 refers to a mouse click in the grid cell at (12,20). Later, this can be generalized by clustering the mouse clicks in order to find a discrete set of possible actions to select from. Discrete inputs allow the input mediators to compare the inputs of crowd workers.
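Concretely, the discretization is integer division by the cell size. The following Haskell sketch is illustrative only (the 16-pixel cell size is a hypothetical value, not one reported in the paper):

```haskell
-- Minimal sketch of Legion's click discretization; the 16-pixel
-- cell size is a hypothetical value, not one from the paper.
type Pixel = (Int, Int)
type Cell  = (Int, Int)

cellSize :: Int
cellSize = 16

-- Map a raw click to its grid cell; e.g. a click at pixel (200, 325)
-- yields the event descriptor (12, 20) with a 16-pixel grid.
toCell :: Pixel -> Cell
toCell (x, y) = (x `div` cellSize, y `div` cellSize)

-- Two clicks agree when they discretize to the same cell.
agree :: Pixel -> Pixel -> Bool
agree p q = toCell p == toCell q
```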
The input mediators that we have implemented are described in the next section.
**Worker Web Page**
Workers control the interfaces via a web page hosted by the Legion server. This web page displays the live video feed of the client interface, collects the key presses that workers make, and sends them back to the client. As workers are queued, they play a simple game in which they are shown a letter and asked to type it. Although we do not currently do so, this could serve as a simple test to weed out workers who provide bad input or whose latency is too high.
Providing good feedback is difficult because a worker’s input may not always be followed. In the robot control case, for instance, a worker may tell the robot to turn right, but the robot may go left because that action is preferred by other workers. As users press keys or click the mouse, their input is reflected back to them visually in the interface (Figure 2). They also see whether their input or the crowd’s input was last sent to the client.
**Crowd Control**
Legion aggregates the control input from all of the workers in the crowd into a single input that is forwarded to the interface as if it was sent by a single user. We developed the five input mediators described below. Each has strengths and weaknesses, which we expect to be application dependent and compare later in the paper. The input mediators each balance reliability and latency differently. For example, the solo input mediator recruits a single worker to directly control the interface. Latency will be low but so will reliability.
**Mob**
The mob input mediator simply serializes the inputs from each of the workers and sends the single stream of actions to the interface. Each input from a worker is immediately sent to the interface being controlled. This approach may work for systems in which large numbers of redundant inputs are either ignored or handled gracefully. In this case, the ‘wisdom’ of the crowd is preserved by the fact that the majority of the inputs to the system will be those most agreed upon by the crowd. For applications in which excess or redundant input leads to loss of accuracy, such as editing a document, this style of mediation will perform poorly.
**Vote**
The vote input mediator attempts to address the problem of unreliable individual crowd workers. We use a weighted vote, in which each user has a corresponding weight that acts as an influence measure. At the end of each epoch, each worker’s most recent input is collected as a vote and scaled by that worker’s weight; the scaled votes are then summed to find the action with the highest weighted value amongst all participating workers. This action is sent to the client, and the weight of each worker is recomputed according to the following formula:
\[
w_i^{(t+1)} = \alpha\, w_i^{(t)} + (1 - \alpha)\, \frac{\sum_{j=1}^{N_{A_i}} w_j^{(t)}}{N} \tag{1}
\]
where \( t \) is the current epoch, \( N_{A_i} \) is the number of workers that voted for the selected action \( A_i \), and \( N \) is the total number of workers casting votes. \( \alpha \) is a discount factor selected such that \( \alpha < 1 \); its effect is that a worker’s influence depends more on recent agreement with the crowd than on historical agreement.
We expect that using worker inputs as votes will improve accuracy, but at the cost of slowing down response time. This is because the votes of crowd workers will not be temporally synchronized, meaning that epoch windows can only be reduced so far before corresponding votes start falling into different epochs, thus skewing the vote. We used an epoch of 1 second in our experiments.
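To make the epoch mechanics concrete, below is a minimal Haskell sketch of one vote epoch (our illustration, not the system’s actual Java code). We read the summed term of Eq. 1 as the weight mass behind the action that worker \( i \) voted for, and we default unseen workers to weight 1; both readings are assumptions.

```haskell
import           Data.List       (maximumBy)
import qualified Data.Map.Strict as M
import           Data.Ord        (comparing)

type Worker = String
type Action = String

-- Hypothetical discount factor; the paper only requires alpha < 1.
alpha :: Double
alpha = 0.8

-- Sum the weight behind each action and pick the heaviest
-- (assumes at least one vote was cast this epoch).
selectAction :: M.Map Worker Double -> M.Map Worker Action -> Action
selectAction weights votes =
  fst . maximumBy (comparing snd) . M.toList $
    M.fromListWith (+)
      [ (a, M.findWithDefault 1 w weights) | (w, a) <- M.toList votes ]

-- Eq. 1: a worker's new weight is the discounted old weight plus
-- the normalized weight mass behind the action that worker chose.
updateWeights :: M.Map Worker Double -> M.Map Worker Action -> M.Map Worker Double
updateWeights weights votes = M.mapWithKey upd weights
  where
    n      = fromIntegral (M.size votes)
    mass a = sum [ M.findWithDefault 1 v weights
                 | (v, a') <- M.toList votes, a' == a ]
    upd w old = case M.lookup w votes of
      Nothing -> old                          -- cast no vote this epoch
      Just a  -> alpha * old + (1 - alpha) * (mass a / n)
```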
**Leader**
In order to reduce the latency inherent in gathering votes over the span of an epoch, the leader input mediator selects the highest influence worker at the beginning of each epoch to assume direct control for its duration. This means that each input entered by the leader is immediately forwarded to the interface being controlled. Since the leader is elected based on weight, they serve as leader for as long as they remain in agreement with the crowd, on average.
The leadership model provides a means for real-time responses to feedback from the system without sacrificing the benefits of crowd agreement and enables longer term plans. For example, suppose a navigation task requires a decision to be made as to which path to take in order to avoid an obstacle. In the vote or mob input mediators, close crowd decisions can result in actions belonging to disparate plans being performed in a sequence because of crowd weight and participation fluctuations. This may lead to a course of action which is not in alignment with any individual worker’s plan. In the navigation example, this may result in colliding with the obstacle. Electing a single leader allows them to take consecutive actions coinciding with their individual plan.
To choose a leader after each epoch, worker weights are calculated using a bag-of-votes model, comparing the normalized actions of each worker to the actions of the crowd with the vector-cosine as follows:
\[
VC(a_i, c) = \frac{a_i \cdot c}{||a_i|| \cdot ||c||}
\]
where \( a_i \) is a vector of the proportion of votes cast for each action by the \( i^{th} \) worker and \( c \) is a vector of the same dimension computed for the whole crowd. We then recompute each worker’s weight after each epoch as before, but use the vector-cosine as the new crowd agreement metric:
$$w_i^{(t+1)} = \alpha w_i^{(t)} + (1 - \alpha) VC(a_i^{(t)}, c) \tag{2}$$
where $\alpha < 1$ is the same discount factor as in Eq. 1.
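Below is a matching sketch of the agreement computation (again illustrative Haskell; the zero-norm guard is our own addition for workers who have not yet cast any votes):

```haskell
import qualified Data.Map.Strict as M

type Action = String

-- Eq. 2: cosine between a worker's vote-proportion vector a_i and
-- the crowd's vector c, both keyed by action.
vectorCosine :: M.Map Action Double -> M.Map Action Double -> Double
vectorCosine ai c
  | na == 0 || nc == 0 = 0        -- guard: no votes recorded yet
  | otherwise          = dot / (na * nc)
  where
    dot  = sum [ x * M.findWithDefault 0 a c | (a, x) <- M.toList ai ]
    norm = sqrt . sum . map (\x -> x * x) . M.elems
    na   = norm ai
    nc   = norm c

-- Leader weight update: discounted old weight plus the cosine
-- agreement, exactly as in the displayed formula.
updateLeaderWeight :: Double -> Double -> Double -> Double
updateLeaderWeight alpha old agreement =
  alpha * old + (1 - alpha) * agreement
```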
**Active**

The active input mediator is a variation on leader that we created in order to tease apart the two benefits of leader: (i) control by a single worker in the crowd, and (ii) the selection of a worker that has been shown to agree the most with the crowd. The active input mediator randomly selects a worker, who maintains control as long as they continue to provide input. If they fail to provide input during some number of consecutive epochs (here we used 5), a new random worker is selected.
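A sketch of this rotation rule follows (illustrative Haskell; only the five-epoch threshold and the random reselection come from the text, and the `random` package’s `randomRIO` stands in for whatever selection mechanism the server actually uses):

```haskell
import System.Random (randomRIO)

-- Epochs of silence tolerated before control is reassigned (from
-- the text); everything else here is an illustrative assumption.
maxIdle :: Int
maxIdle = 5

data ActiveState = ActiveState
  { currentWorker :: Int   -- index of the worker in control
  , idleEpochs    :: Int   -- consecutive epochs without their input
  }

-- Called once per epoch with the indices of workers who sent input.
stepEpoch :: [Int] -> ActiveState -> IO ActiveState
stepEpoch inputsFrom st
  | currentWorker st `elem` inputsFrom =
      pure st { idleEpochs = 0 }
  | idleEpochs st + 1 < maxIdle || null inputsFrom =
      pure st { idleEpochs = idleEpochs st + 1 }
  | otherwise = do
      i <- randomRIO (0, length inputsFrom - 1)
      pure (ActiveState (inputsFrom !! i) 0)
```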
**Mediation Strategies Not Explored**

There are clearly a number of mediation strategies beyond the five described here. In particular, we did not explore hierarchical input mediators in which different workers have different roles. For instance, some workers could be responsible for controlling the interface and a separate group of workers could vote whether (or even which of) the workers were doing a good job. Our input mediators also do not allow workers to indicate their confidence in the actions they suggest, for instance determining that they are so confident of an action that they would like to wager that it will end up being a good action to take in the long run. We could imagine variations on the strategies above that allow limited communication between workers to help them devise and exercise long-term plans.
MEDIATOR EVALUATION
We evaluated our mediation strategies on two applications: robot navigation and data entry into a spreadsheet. These applications varied in characteristics that we hypothesize will manifest in the mediation strategies. Specifically, the robot task has a small input space and is designed such that workers can follow only one general path to complete it. The spreadsheet entry task has a large input space (all keys and mouse clicks) and can reasonably be completed in different orders.
We paid workers 5 cents per task in our experiments if the crowd completed the task. Workers could earn up to 10 more cents based on their crowd agreement level and the total time taken to complete the task. We waited for at least 3 workers to be available before starting the task, which generally took less than one minute.
As an initial feasibility test, we used Legion to control the web application in Figure 5 to measure latency. The application displayed a random key, crowd workers typed it, and then repeated. They were not shown the latency measurements. Timing is done in the web application, which we ran on our machine, and sums both system and human latency on this simple task. On average the recorded latency was 854.6 milliseconds (SD=743.0). This gives a sense of the lowest overall latency supported by Legion on Mechanical Turk, although realized latency may be task dependent.
**Robot Navigation**
Robot control is natural as a motivating task because it exercises three main contributions of Legion: robots need to be controlled in real-time, creating a new robot control interface for web workers would require significant one-off programming effort, and no obvious way previously existed for multiple crowd workers to simultaneously, collectively control robots in real-time. Robot control is difficult, so enabling even simple navigation tasks can require substantial
customization for the robot and the environment. Even relatively simple tasks can become complex - the robot we used drifts severely to the left when told to go straight. Completing tasks with a robot can require longer-term strategies to be executed, e.g. it may need to go around a barrier, moving farther from its goal before reaching it. Finally, tasks naturally take some time to complete, and so workers available at the start might leave before the end. Eventually, our aim is to use crowds to control assistive robots that both navigate through and manipulate the environment.
We used an inexpensive remote-controlled mobile webcam called Rovio³ as a platform for experiments with robot navigation. This device can be controlled over wifi via its web-based interface (Figure 2). Although it is marketed as a “robot,” it does not contain any functionality for automatic decision-making or navigation. By connecting the Rovio to the crowd with Legion, we created an intelligent mobile robot that accepts natural language commands.
The navigation task was to drive the robot from a start position into a tea kettle a few feet away. Although the goal could be seen from the start of the task, the direct path to it was obstructed by various objects (Figure 2). Legion captured video at 320x240 resolution. As a baseline, we asked three local students to complete the robot navigation task three times as quickly as possible. On average these local workers required 46.3 seconds (SD=12.4) to complete the task.
We ran 10 trials for each of the five mediation strategies. The total number of unique voters (those who voted at least once) varied from 1 to 14 per task, although on average 3.1 workers voted during each epoch. These numbers highlight the highly dynamic nature of the crowd on Mechanical Turk. We did not detect significant differences in the number of engaged workers across conditions. We ended trials that lasted more than 10 minutes.
In both the active and leader conditions, all trials were successfully completed, as compared to 8/10 successful trials with vote and mob, and only 4/10 successful trials with solo (Figure 7). Few trials were completed successfully with solo because workers who did not quickly complete the task disconnected. The crowd seemed unable to exercise a consistent strategy with the vote and mob input mediators. vote was further hampered by its slow rate of issuing actions.
We also considered task completion time (Figure 6). When the chosen worker completed the task, solo was the fastest, averaging just 56.0 seconds (SD=12.9). Trials in the leader condition were completed faster than trials in the active condition, 101.7 seconds (SD=81.0) vs 165.7 seconds (SD=166.3), a significant difference ($F_{1,9}=6.96$, $p < .05$). This suggests that choosing the leader based on crowd agreement, rather than randomly, leads to better results. vote performed no better than mob, 232.4 seconds (SD=110.5) vs. 205.8 seconds (SD=140.1), a difference that was not statistically significant.
**Spreadsheet Transcription**
Our next experiment explored the vote, active, and leader mediators on a simple spreadsheet transcription task (Figure 8). Prior work has demonstrated the value and feasibility of crowd assistance in word processing tasks [2]. In this experiment, we used Legion to capture both a Google spreadsheet and a picture of a table scribbled on a whiteboard (the timing results from the robot navigation task), and workers were asked to transcribe the table into the spreadsheet. Legion captured video at 640x480 resolution. This task is interesting because of the large input space (numbers, letters, arrow keys, and mouse clicks), which makes it more difficult for the crowd to agree. Furthermore, while Google spreadsheets already enable collaborative use, there is no protection against malicious users. As such, it is not suitable for collaborative use by the crowd.
We again conducted 10 trials of each of the input mediators, and ended trials lasting more than 10 minutes. Trials were deemed successful when all of the labels and numbers were entered into the spreadsheet correctly.
³http://www.wowwee.com/en/support/rovio
None of the vote trials were successful, whereas 9 of 10 trials with both active and leader were successful. With the vote input mediator, it was particularly difficult for the crowd to converge on a single plan. We did not test the mob mediator because we expected similar performance, and we expect the solo mediator to again perform like active but with fewer successful completions. Task completion times were again lower for the leader condition as compared to active, 78.2 seconds (SD=72.3) vs. 100.0 seconds (SD=74.8), although this difference was not statistically significant.
VALIDATION IN NEW TYPES OF APPLICATIONS
Legion enables end users to flexibly and creatively apply crowdsourcing to their existing interfaces. In this section, we explore several new types of applications enabled by Legion that we created using the framework. Although additional work is necessary to fully evaluate the utility of these applications, we present them here as a demonstration of the broad set of possibilities Legion opens up.
Interactive End User Assistance
Most of our discussion of Legion has assumed that a user will outsource full control of her interface to the crowd, but Legion can be used much more flexibly to provide interactive assistance to end users.
User Support Legion enables end users to outsource certain functions to the crowd, which can then work cooperatively with them as they work. We used this idea to make a crowd-powered predictive keyboard out of an existing predictive keyboard software program called KeyStrokes⁴. On-screen keyboards are used by many people with motor impairments, who use trackballs, head pointers, and other devices to type. Because typing in this way can be slow, KeyStrokes predicts what word the user is likely trying to type and displays suggestions that can be selected with a mouse click. We used Legion to outsource the selection of suggestions in order to bring human intelligence to prediction and possibly make typing faster (Figure 9). When using this application, we found workers would often disrupt our typing by choosing suggestions prematurely; applications of this type clearly need to manage the initiative between users and the crowd.
Co-Pilot Mode In co-pilot mode, the end user controls the interface herself with the crowd standing by in case she needs to temporarily attend to something else. We implemented this in the Legion framework by creating a modified version of the Legion client that captures and sends local user input to the server, a modified version of the server that can accept input from the local user in addition to the crowd, and a modified Leader input mediator that artificially gives the end user a weight high enough that she will always be the leader.
The result is that a user can control her interface as she normally would, but the system will automatically and quickly transition control to the crowd when she leaves. This mode may be particularly useful in real-time gaming or other applications that require continuous attention and control. Currently, users must stay at their computers, or signal they will be away from keyboard (afk).
The Co-Pilot Apprentice An interesting extension of co-pilot mode is for the end user to train the crowd to control an interface. Because the end user is always the leader through an artificially high weight, the best way for the crowd to increase their agreement score (and receive a higher bonus) is to mimic what the end user does. The co-pilot application can thus be used to program the crowd by demonstration.
Programming by Demonstration
We used Legion to enable basic programming-by-demonstration across diverse applications. The eventual goal is for automatic systems to learn to complete new tasks with existing interfaces by watching the crowd complete those tasks, but as a proof of concept we implemented a simple recording mechanism on the Legion server that can capture the inputs provided by the crowd and then replay them. We successfully recorded and replayed tasks with both the Rovio robot and Google spreadsheet.
⁴http://www.assistiveware.com
Mash-Ups of Desktop Applications
Finally, we explored how Legion could be used to create a novel type of desktop mash-up in which pieces of multiple existing interfaces are sent to the crowd for control.
We have already seen an example of this in the second experiment in the previous section, in which we combined a simple photo viewer and Excel to enable the crowd to fill in a spreadsheet with the numbers sketched on a whiteboard.
As a second example, we created a new video-enabled robot by combining a remote-controlled Scribbler robot and an iPod Touch running Skype (Figure 10). The Scribbler driving platform was controlled over Bluetooth from a terminal window. To construct the robot, the iPod was simply taped to the platform. On the desktop, we moved the Skype and terminal windows close together and then used the Legion end user interface to select the relevant parts of both of these windows. This mash-up allowed workers to see where the robot was going, and type commands to control it.
DISCUSSION
We have introduced Legion, a system for real-time crowd control of existing interfaces. Although Legion can control a wide variety of interfaces, our experiments highlighted the fact that different input mediators may be appropriate for different types of applications. Applications with a large input space, such as the spreadsheet, proved most difficult for the input mediators that did not select a single individual. Tasks in which the crowd was presented with multiple reasonable courses of action and a large input space made it especially difficult to achieve high crowd agreement levels.
The input mediator that did consistently well was the leader input mediator, which elects a leader who has direct control over the interface as long as he remains in power. This would go against the notion of the wisdom of the crowds, if the leader had not been elected by the crowd. Nevertheless, input mediators that allow a single crowd worker to control an existing interface trade expediency for trust. As a result, applications in domains in which errors have consequences may need to trade higher latencies for reliability.
It was clear that the crowd faced challenges related to missing feedback. Because multiple workers were controlling the same interface, a worker’s actions would not always be reflected in the behavior of the interface. We received several emails from workers wanting to let us know that they had been trying to control the application as instructed but that it did not seem to be following their instructions. These concerns may dissipate as workers become accustomed to interacting in this way, but we may also need to find ways to give better feedback via the interface. Such concerns may have real implications, as we suspect that workers who felt that they were not being listened to quit the task earlier. Our current interface shows workers when the crowd decision was taken over their own, but it would be nice to give users the impression that the interface followed their input.
Legion does not support communication between workers, which is unusual for collaborative systems. Early experiments that showed workers what action the crowd chose resulted in poor quality input as workers mimicked what the crowd was already doing. Nevertheless, watching the crowd seem to struggle against one another to complete tasks suggests that a form of limited communication may be helpful.
Our experiments show that it is possible to have multiple crowd workers collaboratively control an existing interface in real-time. Our experiments were conducted on Amazon’s Mechanical Turk, and many of our workers came from countries other than the United States. We expect that other crowds would face fewer technological limitations.
FUTURE WORK
Future work may explore how to better enable group work in Legion. For instance, new input mediators may help facilitate consistency. Our leader input mediator enables a consistent strategy as long as the same leader remains in control. Intelligent input mediators might be able to automatically cluster input into distinct strategies, allowing leaders to be replaced by other workers likely to execute similar plans. Other promising approaches include allowing workers limited communication or enforcing a management hierarchy.
We are currently working on ways of extending Legion to more robustly and automatically adjust to different domains. It may be useful, for example, to dynamically switch between multiple input mediators. For instance, the mob could normally provide control with the benefit of a collective crowd voice, but leader could take over at a decision point at which the collective voice might not agree.
Complex applications may be better controlled by multiple specialized groups (Figure 1). For instance, an assistive robot requires control for both navigation and manipulation; each function may be best handled by different sets of crowd workers. For some tasks, limited communication between workers may facilitate collaboration. Finding effective ways of moderating communication to avoid excessive or malicious messaging in this context is ongoing work.
As explored briefly in this paper, Legion can be used as a basis for interactive programming by demonstration. Future work may look to better support end users training a crowd to help them on their tasks, or new input mediators that will continue to reward the crowd for exercising control similar to what the end user demonstrated even after the user has left.
Workers often found the feedback provided by Legion confusing. Because the control input sent to the interface is based on the crowd, an individual may find that the interface does not do what they tell it. It may be interesting to explore how to allow different users to take different paths. In some systems with real-world consequences this would not be possible, e.g. in the robot domain, but for many others it may be possible to simulate the effects of different actions being sent to the interface. For instance, copies of the application could be run simultaneously in virtual machines and only merged when the crowd’s consensus was clear. We also plan to explore giving workers more information about their current status, for instance whether they are the current leader.
CONCLUSION
We have presented Legion, a system that enables real-time control of existing interfaces by the crowd. Prior approaches to crowdsourcing require programmers to encapsulate tasks into new interfaces, cannot be used for continuous control, and use abstractions for reliability that introduce additional latency. We have implemented and evaluated several ways of combining input from multiple workers in real-time, and have demonstrated how letting the crowd control existing interfaces allows for several new kinds of applications.
ACKNOWLEDGMENTS
The authors would like to acknowledge the contributions of Craig Harman and Robin Miller.
REFERENCES
M. RITTRI
Retrieving library functions by unifying types modulo linear isomorphism
*Informatique théorique et applications*, vol. 27, no. 6 (1993), pp. 523–540
<http://www.numdam.org/item?id=ITA_1993__27_6_523_0>
© AFCET, 1993, all rights reserved.
Access to the archives of the journal “Informatique théorique et applications” implies agreement with the general conditions of use (http://www.numdam.org/conditions). Any commercial use or systematic printing constitutes a criminal offense. Any copy or printing of this file must contain this copyright notice.
**Numdam**
Article digitized as part of the program “Digitization of old mathematical documents”
http://www.numdam.org/
RETRIEVING LIBRARY FUNCTIONS BY UNIFYING TYPES
MODULO LINEAR ISOMORPHISM (*)
by M. RITTRI (1)
Communicated by G. LONGO
Abstract. — An improved method to retrieve a library function via its Hindley/Milner type is described. Previous retrieval systems have identified types that are isomorphic in any Cartesian closed category (CCC), and have retrieved library functions of types that are either isomorphic to the query, or have instances that are. Sometimes it is useful to instantiate the query too, which requires unification modulo isomorphism. Although unifiability modulo CCC-isomorphism is undecidable, it is decidable modulo linear isomorphism, that is, isomorphism in any symmetric monoidal closed (SMC) category.
We argue that the linear isomorphism should retrieve library functions almost as well as CCC-isomorphism, and we report experiments with such retrieval from the Lazy ML library. When unification is used, the system retrieves too many functions, but sorting by the sizes of the unifiers tends to place the most relevant functions first.
Résumé. — This article presents a new method for retrieving a function from a program library via its type (in the sense of Hindley/Milner). Methods used so far identify types that are isomorphic in any Cartesian closed category (CCC), and the retrieved type is either isomorphic to the query or a generalization of it. It is sometimes useful to instantiate the query as well, which requires solving a unification problem modulo isomorphisms. Although unification modulo CCC-isomorphism is undecidable, the problem is decidable modulo linear isomorphisms, that is, isomorphisms in a symmetric monoidal closed category.
Our thesis is that retrieval modulo linear isomorphism should be as useful as retrieval modulo CCC-isomorphism. We present some experimental results, obtained with the Lazy ML function library. With unification, the system finds too many functions, but this problem can be addressed by ranking the substitutions by their size.
(*) Received March 1992, accepted May 1993.
(1) Department of Computing Science, Chalmers University of Technology and University of Göteborg, S-412 96 Göteborg, Sweden, rittri@cs.chalmers.se.
1. BACKGROUND
There are many general-purpose methods for automated retrieval of documents from a database. For software libraries, one can use the special structure of software to improve the retrieval, as surveyed by Frakes [7].
In functional languages, polymorphic types work well as queries [13, 17, 18, 20]. For instance, the function that reverses lists has the type \( \forall \alpha. [\alpha] \to [\alpha] \), and there are few common functions of this type, since they cannot examine the list elements. Some retrieval systems allow the query to be a type augmented with a formal specification [3, 15, 19].
I have developed a retrieval system based purely on types [17, 18]; it has become popular in the Lazy ML community at Chalmers. This article describes how the system was improved by using unification modulo type isomorphism.
1.1 Isomorphic types
In my previous papers [17, 18], I wanted to abstract from details like the currying and argument order of functions, so I needed a notion of type isomorphism that expressed the abstraction. A library function should then be retrieved if its type was isomorphic to the query, since a bijection (like \( curry \)) could convert the function into the query type. It turned out that the so-called CCC-isomorphism in category theory was suitable.
Categories are mathematical structures that possess types (or objects), functions (or arrows) between types, and a notion of type isomorphism. Some categories can be seen as models for various versions of \( \lambda \)-calculus; the most well-known are the Cartesian closed categories, or CCCs. We do not have to define the CCC-isomorphism in a categorical way; we can use a result by Lambek [10] instead: two types \( A \) and \( B \) are isomorphic in all Cartesian closed categories if, and only if, there are \( \lambda \)-expressions \( f : A \to B \) and \( g : B \to A \) such that the equalities \( g \circ f = \text{id}_A \) and \( f \circ g = \text{id}_B \) hold in simply typed \( \lambda \beta \eta \)-calculus with surjective pairing. I write this as \( A \cong B \) or
\[
A \;\underset{g}{\overset{f}{\cong}}\; B.
\]
A statement of the form \( A \cong B \) will be called an isomorphism. The functions \( f \) and \( g \) are usually also called isomorphisms, but I will call them bijections.
### Table 1

Equational axioms (with associated bijections) for isomorphism in all Cartesian closed categories. The first five axioms are linear; the two (Dist) axioms are not.

<table>
<thead>
<tr>
<th>Axiom</th>
<th>Name</th>
<th>Bijections</th>
</tr>
</thead>
<tbody>
<tr>
<td>\( A \times B \simeq B \times A \)</td>
<td>(Com-2)</td>
<td>\( exch \) (its own inverse)</td>
</tr>
<tr>
<td>\( (A \times B) \times C \simeq A \times (B \times C) \)</td>
<td>(Ass-2)</td>
<td>\( assr, assl \)</td>
</tr>
<tr>
<td>\( 1 \times A \simeq A \)</td>
<td>(Ass-0)</td>
<td>\( dell, insl \)</td>
</tr>
<tr>
<td>\( (A \times B) \rightarrow C \simeq A \rightarrow (B \rightarrow C) \)</td>
<td>(Cur-2)</td>
<td>\( curry, uncurry \)</td>
</tr>
<tr>
<td>\( 1 \rightarrow A \simeq A \)</td>
<td>(Cur-0)</td>
<td>\( appunit, absunit \)</td>
</tr>
<tr>
<td>\( A \rightarrow (B \times C) \simeq (A \rightarrow B) \times (A \rightarrow C) \)</td>
<td>(Dist-2)</td>
<td>\( distrib, collect \)</td>
</tr>
<tr>
<td>\( A \rightarrow 1 \simeq 1 \)</td>
<td>(Dist-0)</td>
<td>\( unarr, arr \)</td>
</tr>
</tbody>
</table>
Remark 1. — I use the axioms also on Hindley/Milner types, which may contain variables that may be bound at the top-level, simply by allowing renaming of bound variables. When used in this way, the axioms are not quite complete for Hindley/Milner types. Some additional axioms, like
\[
\forall \alpha .\; A \times B \;\overset{\lambda p.\,(\mathrm{fst}\,p,\ \mathrm{snd}\,p)}{\simeq}\; \forall \alpha\, \beta .\; A \times (B[\beta/\alpha]),
\]
would make them complete, but these extra axioms can instead be used directly by the type-deriver [6], in which case the retrieval system does not need them.
Remark 2. — The axioms are not valid in all \(\lambda\)-calculi or functional languages, but they hold in an approximate sense, which should be enough for software retrieval.
1.2. Matching and unification
Independently, Runciman and Toyn suggested retrieving library functions whose types are unifiable with the query, as well as functions with extra arguments [20]. They did not use any equivalence relation, though.
When I tried to unite the best parts of Runciman and Toyn’s work and my own, my first intention was to implement matching and unification modulo CCC-isomorphism (that is, to seek substitutions that can make types isomorphic). Matching, or one-sided unification, allows us to retrieve library functions of types more general than the query, modulo isomorphism. I will assume henceforth that we wish to do so, since it is useful when we overlook a possible generalization. I gave motivating examples and an algorithm for such matching in [18]. Morgan uses a similar algorithm [15].
However, unifiability modulo CCC-isomorphism is undecidable, as was shown by Narendran, Pfenning and Statman [16]. But by removing the distributivity axiom (Dist-2), they were able to construct a unification algorithm, which I have implemented and used in a software retrieval system for the functional language Lazy ML.
This article has two main parts. In section 2, I argue that the removal of the Dist axioms usually does not harm the retrieval, and in section 3, I present some experiences with the retrieval system.
2. LINEAR ISOMORPHISM FOR LIBRARY SEARCH
If we want unifiability modulo equivalence to be decidable, we are forced to remove the (Dist-2) axiom, and since the (Dist-2) and the (Dist-0) axioms are two instances of a similar n-tuple axiom (Dist-n), it seems most consistent to remove the (Dist-0) axiom as well. The removal means that some queries will retrieve fewer functions, which is bad if the omitted functions are useful, but good if they are irrelevant. What can we expect?
Let us study the individual axioms of table 1. The axioms (Com-2), (Ass-2), and (Cur-2) are crucial for function retrieval, as they abstract from argument order and currying. (Cur-0) is useful in a strict language, if lazily evaluated expressions of type $C$ are simulated by functions of type $1 \to C$. If we have (Cur-0,2), then (Ass-0) holds “to the left of an arrow” [since $(1 \times A) \to B \simeq_{(Cur-2)} 1 \to (A \to B) \simeq_{(Cur-0)} A \to B$], so we may as well include (Ass-0) in general. Finally, the main motive for the (Dist-0,2) axioms has been to get a nice semantics of the equivalence relation. Isomorphism in all CCCs seemed appropriate for functional programming, since when two types...
are isomorphic, one can easily convert back and forth, so the choice between them seems arbitrary and unguessable.
But using ideas from linear logic [8, 9], we see that all bijections in table 1 are linear, except those for the (Dist) axioms. (A closed \( \lambda \)-expression is linear if every variable is bound once and used once [9, section 7]. \textit{distrib} and \textit{collect} use variables twice, while \textit{unarr} and \textit{arr} bind variables they do not use.) Non-linear bijections will change the amount of sharing, and a library function often has a natural amount of sharing, which a user can guess. In these cases, the non-linear bijections are not needed for library search. This is illustrated best by examples.
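To make the linearity distinction concrete, here is a sketch of a few Table 1 bijections in Haskell (standing in for the paper’s \( \lambda \)-calculus); note how \textit{distrib} uses \( f \) twice and \textit{collect} uses its argument twice:

```haskell
-- Some Table 1 bijections as Haskell functions (Haskell standing in
-- for the paper's lambda-calculus; Prelude's curry and uncurry
-- witness the (Cur-2) axiom).
exch :: (a, b) -> (b, a)              -- (Com-2), its own inverse
exch (x, y) = (y, x)

assr :: ((a, b), c) -> (a, (b, c))    -- (Ass-2)
assr ((x, y), z) = (x, (y, z))

assl :: (a, (b, c)) -> ((a, b), c)    -- inverse of assr
assl (x, (y, z)) = ((x, y), z)

-- The non-linear (Dist-2) pair: distrib uses f twice, and collect
-- uses x twice; this duplication is what changes sharing.
distrib :: (a -> (b, c)) -> (a -> b, a -> c)
distrib f = (fst . f, snd . f)

collect :: (a -> b, a -> c) -> (a -> (b, c))
collect (g, h) = \x -> (g x, h x)
```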
No user should miss the (Dist-0) axiom, which says that \((A \to 1) \simeq 1\). In lazy languages, hardly any functions have the result type 1, so we have few opportunities to apply the axiom. In strict languages, such functions are common but have side-effects, for instance \(cd:\text{[Char]} \to 1\), which changes the working directory. The (Dist-0) axiom identifies for instance \([\text{Char}] \to 1\), \(\text{Bool} \to 1\), and \(\text{Int} \to 1\), which seems bad in the presence of side-effects.
The (Dist-2) axiom says that a function that returns a pair can be translated to two functions that return the components. But it is quite unlikely that a pair of two functions is named as a library item, so the (Dist-2) axiom will have little effect at the top-level of a type. (Of course, a query that is a Cartesian product could make the system look for possible components, but I have not implemented this. I think that if the retrieval system tries to combine different library items, too many possibilities will arise.) The (Dist-2) axiom can be applied to parts of a type, but since the \textit{distrib} and \textit{collect} bijections change the sharing, the user can often guess which variant occurs in a library function. Roughly, if a function returns a \(B\)-value and a \(C\)-value in a single computation for every \(A\)-value, its most natural type is \(A \to (B \times C)\), but if it computes only \(B\)-values for some \(A\)-values and only \(C\)-values for others, it is more natural to split it into a function-pair of the distributed type \((A \to B) \times (A \to C)\).
\textit{Example 1.} — The \textit{choplist} function, predefined in Lazy ML [0], takes a function \(f\) and a list \(xs\). \(f\) can take a list of the same type as \(xs\) and chop off a prefix, to return a pair of the chopped part and the rest of the list. \textit{choplist} applies \(f\) to \(xs\) repeatedly to get a list of chopped parts, e.g., if \textit{takeword} chops off the first lexeme of a string, then \textit{choplist takeword} will return a list of the lexemes in a string. \textit{choplist} can be defined by
\[
\text{choplist} : \forall \alpha.\; ([\alpha] \to [\alpha] \times [\alpha]) \to [\alpha] \to [[\alpha]]
\]
choplist f [ ] = [ ]
choplist f xs = let (ys, zs) = f xs in ys :: choplist f zs
where "::" is infix cons. By using (Dist-2), we find an alternative definition with a CCC-isomorphic type:
choplist' : \forall \alpha.\; ([\alpha] \to [\alpha]) \times ([\alpha] \to [\alpha]) \to [\alpha] \to [[\alpha]]
choplist' (g, h)[ ] = [ ]
choplist' (g, h) xs = g xs :: choplist' (g, h) (h xs)
so that choplist' (g, h) = choplist f if (g, h) = distrib (f). Now, the normal situation is that g and h do similar work. (In the lexeme example above, g should find the first lexeme of a string and keep it, whereas h should find the first lexeme and discard it.) If g and h are not encapsulated into an f function, their common work will not be shared. In this case, a library programmer can be expected to write the original version of choplist and the user will guess the non-distributed type. The (Dist-2) axiom is not necessary.
Example 2. — maplast is like map, but applies a different function to the last element of the list (useful for formatting with separators and terminators). Thus,
maplast : \forall \alpha\, \beta.\; (\alpha \to \beta) \to (\alpha \to \beta) \to [\alpha] \to [\beta]
maplast g h [ ] = [ ]
maplast g h [x] = [h x]
maplast g h (x1 :: x2 :: xs) = g x1 :: maplast g h (x2 :: xs)
By using the axioms (Cur-2) and (Dist-2), we find an alternative definition with a CCC-isomorphic type
maplast' : \forall \alpha\, \beta.\; (\alpha \to \beta \times \beta) \to [\alpha] \to [\beta]
maplast' f [ ] = [ ]
maplast' f [x] = [snd (f x)]
maplast' f (x1 :: x2 :: xs) = fst (f x1) :: maplast' f (x2 :: xs)
so that maplast' f = maplast g h if distrib (f) = (g, h). The maplast' version computes both g(x) and h(x) for every element in the list, only to throw away one of them. In a strict language, this might be much more work; in a lazy language, the unwanted value need not be fully computed, but there is still unnecessary overhead in building the pair and the representation of g(x)
or \( h(x) \). So both a library programmer and a user should feel that the distributed version of the type is more natural for \textit{maplast}. □
These examples are representative for those I have come across: usually there is a natural amount of sharing. So the removal of the (Dist) axioms should not harm the retrieval too much.
2.1. Theoretical aspects
The exact definition of linear \( \lambda \)-expressions can be found in [9, section 7], except that we do not need the if-then-else. Our \( 1 \) corresponds to the tensor unit, our \( \times \) corresponds to the tensor product \( \otimes \), and our \( \rightarrow \) corresponds to the linear implication \( \multimap \). Two types are linearly isomorphic if there are linear bijections between them; this is a stronger requirement than linear logical equivalence. It is far from obvious that the five linear axioms in table 1 form a complete equational axiomatization of linear isomorphism, but I had the good luck to be able to contact Sergei Soloviev, who found a proof [23]. Instead of generating the isomorphisms that hold in any Cartesian closed category, the five axioms generate those that hold in all symmetric monoidal closed (SMC) categories, sometimes just called closed categories [12, section VII.7].
The equational axioms in table 1 are decorated with bijections, and Soloviev’s proof implies that the decorations survive (in modified forms) during equational reasoning. This gives an inductive way of generating the linear bijections—we just extend Birkhoff’s rules for equational reasoning [2] to handle decorations (table 2).
Rule 2(v) may need some examples. What happens if \( F \) is \( \text{List} \)? Then the rule says that from
\[
A \;\underset{f^{-1}}{\overset{f}{\cong}}\; B
\]
we can infer that
\[
\mathrm{List}(A) \;\underset{\mathrm{map}_{\mathrm{List}}(f^{-1},\, f)}{\overset{\mathrm{map}_{\mathrm{List}}(f,\, f^{-1})}{\cong}}\; \mathrm{List}(B)
\]
but what is \( \mathrm{map}_{\mathrm{List}} \)? It is simply the ordinary \( \mathrm{map} \) function over lists, except that it has an extra argument \( f^{-1} \), which is ignored:
\[
\mathrm{map}_{\mathrm{List}}(f, f^{-1})\,[x_1, \ldots, x_n] = [f(x_1), \ldots, f(x_n)].
\]
TABLE 2

How Birkhoff's laws for equational reasoning modify the bijections.

(i) Reflexivity: \( A \underset{\mathrm{id}}{\overset{\mathrm{id}}{\cong}} A \).

(ii) Symmetry: from \( A \underset{f^{-1}}{\overset{f}{\cong}} B \), infer \( B \underset{f}{\overset{f^{-1}}{\cong}} A \).

(iii) Transitivity: from \( A \underset{f^{-1}}{\overset{f}{\cong}} B \) and \( B \underset{g^{-1}}{\overset{g}{\cong}} C \), infer \( A \underset{f^{-1} \circ g^{-1}}{\overset{g \circ f}{\cong}} C \).

(iv) Stability: from \( A \underset{f^{-1}}{\overset{f}{\cong}} B \), infer \( \sigma(A) \underset{f^{-1}}{\overset{f}{\cong}} \sigma(B) \) for any substitution \( \sigma \).

(v) From \( A_i \underset{f_i^{-1}}{\overset{f_i}{\cong}} B_i \) for \( i = 1, \ldots, n \), infer
\[
F(A_1, \ldots, A_n) \;\underset{\mathrm{map}_F(f_1^{-1},\, f_1,\, \ldots,\, f_n^{-1},\, f_n)}{\overset{\mathrm{map}_F(f_1,\, f_1^{-1},\, \ldots,\, f_n,\, f_n^{-1})}{\cong}}\; F(B_1, \ldots, B_n)
\]
for any \( n \)-ary type operator \( F \).
Similarly, \( \mathrm{map}_{\times} \) is defined by
\[
\mathrm{map}_{\times}(f_1, f_1^{-1}, f_2, f_2^{-1})(a_1, a_2) = (f_1(a_1), f_2(a_2)),
\]
and also ignores its \( f_1^{-1} \) and \( f_2^{-1} \) arguments.
The general rule 2 (v) must allow the map function to use every \( f_i^{-1} \) as well as every \( f_i \), because it needs them when \( F \) is contravariant in an argument (roughly: an argument of \( F \) occurs to the left of an arrow). For instance, when \( F \) itself is the arrow, we have to define \( \mathrm{map}_{\to} \) by
\[
\mathrm{map}_{\to}(f_1, f_1^{-1}, f_2, f_2^{-1})(g) = f_2 \circ g \circ f_1^{-1}.
\]
As another example, let
\text{type}\; F(\alpha) = C1(\alpha) + C2(\alpha \rightarrow \text{Int}),
then
\mathrm{map}_{F}(f, f^{-1})(C1(a)) = C1(f(a))
\mathrm{map}_{F}(f, f^{-1})(C2(g)) = C2(g \circ f^{-1}).
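As a sketch of how these decorated maps compose in code (our own Haskell rendering, not from the paper), one can package a bijection as a pair of functions; the contravariant position in \( C2 \) is exactly where the inverse direction is consumed:

```haskell
-- A decorated isomorphism carries both directions, like the
-- paper's f : A ~ B with inverse f^-1 (names are ours).
data Bij a b = Bij { fwd :: a -> b, bwd :: b -> a }

-- map for the arrow operator: contravariant in its first argument,
-- so the forward direction needs the inverse of the first bijection.
mapArrow :: Bij a1 b1 -> Bij a2 b2 -> Bij (a1 -> a2) (b1 -> b2)
mapArrow f1 f2 =
  Bij (\g -> fwd f2 . g . bwd f1)
      (\h -> bwd f2 . h . fwd f1)

-- The example operator F(alpha) = C1(alpha) + C2(alpha -> Int).
data F a = C1 a | C2 (a -> Int)

mapF :: Bij a b -> Bij (F a) (F b)
mapF f = Bij to from
  where
    to   (C1 x) = C1 (fwd f x)
    to   (C2 g) = C2 (g . bwd f)    -- contravariant position uses f^-1
    from (C1 y) = C1 (bwd f y)
    from (C2 h) = C2 (h . fwd f)
```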
The decorated rules in table 2 give an inductive way to construct the set of linear bijections, which we can call Linb. Starting from the five linear axioms in table 1, we get that \text{exch}, \text{assr}, \text{assl}, \text{dell}, \text{insl}, \text{curry}, \text{uncurry}, \text{appunit}, and \text{absunit} belong to Linb. Rule 2 (i) says that \text{id} is in Linb. Rule 2 (iii) says that if $g$ and $f$ are in Linb, so is \text{g} \circ f. Finally, rule 2 (v) says that if...
$F$ is an $n$-ary type operator, and $f_1, \ldots, f_n$ are in $\text{Linb}$, then so is $\text{map}_F(f_1, f_1^{-1}, \ldots, f_n, f_n^{-1})$.
A consequence of this inductive definition is that whenever a function is in $\text{Linb}$, then so is its inverse.
The set $\text{Linb}$ depends on which type operators exist in our language. If the only type operators are $\times$ and $\rightarrow$, then Soloviev's proof of equational completeness [23] says that $\text{Linb}$ contains exactly the bijections that exist between types that are isomorphic in every SMC category. But when we search functional libraries, we must allow all type operators that occur in the library.
3. EXPERIMENTS WITH EQUATIONAL UNIFICATION
When we retrieve library functions via types, it is reasonable to allow instantiation of library types, since a polymorphic library function can be used in a less general context. Some retrieval systems also allow unrestricted instantiation of the query [13, 19, 20], but this can be slow and too permissive. We can give the user some control over query instantiation if the queries are formulated explicitly by him (rather than derived from examples, say). A query variable can then express either polymorphism or an unknown subtype, and only in the latter case should it be instantiated. Since polymorphic variables are bound at top-level, we can use free variables to stand for unknown types. For example, when we seek the reverse function on lists, we know that its type $\forall \alpha. [\alpha] \rightarrow [\alpha]$ is polymorphic, so we do not want to retrieve functions of type $[\text{Float}] \rightarrow [\text{Float}]$. On the other hand, the query $\alpha \rightarrow \text{Float}$ (without quantifier) would retrieve all functions that return floats.
We still need an algorithm for unification modulo isomorphism. There is no general method to unify in a given equational theory, and there are theories in which unifiability is undecidable. Usually, one has to resort to ad hoc algorithms. Siekmann has made a comprehensive survey [21]. Narendran, Pfenning and Statman [16] proved that unifiability modulo CCC-isomorphism is undecidable, but gave an algorithm for unification modulo linear isomorphism, which I have implemented on top of a Standard ML program for associative-commutative unification [11]. I have added the restriction that variables in library types must not be instantiated to $1$, as this seems to retrieve only rubbish.
The retrieval system is still often too liberal; for instance, if the user searches for a function of type $Q$, and allows extra arguments by submitting
the query $\varepsilon \to Q$, then every function of a type $\forall \alpha. A \to \alpha$ will be retrieved via the substitution $\{ \alpha := Q, \varepsilon := A[Q/\alpha] \}$. Although the query and the answer are unifiable in this case, they need not be similar in any other way. This mechanism alone can retrieve a lot of rubbish, since many library functions have types that end with "\ldots \to \alpha".
Fortunately, the useful library types can be unified with the query by fairly simple unifiers, while the rubbish tends to require more complex ones (in the example above, $Q$ is probably a medium-sized type, while $A[Q/\alpha]$ can be large). This observation can be explained as follows: the more general type a function has, the less it can do, since it cannot examine the internal structure of its polymorphic arguments; therefore, the more instantiation is needed to fit a library type to a query, the less likely it is that the associated function is useful.
Therefore, my system ranks the retrieved items by the sizes of the unifiers. (When several most general unifiers exist, the smallest one is used for the ranking). The effect is that library functions whose types need only be instantiated a little (or not at all) are placed first.
3.1. Defining the size of substitutions
The size of a substitution can be defined in various ways. There are two primitive ways to specialize a type: either you make two variables equal, or you replace a variable by a constant or an operator applied to new, distinct variables. Therefore, it is reasonable to measure a substitution $\{ \alpha_1 := t_1, \ldots, \alpha_n := t_n \}$ by assigning one weight to each repeated variable, and another weight to each occurrence of a constant or operator in the types $t_1, \ldots, t_n$. In my first tests, both weights were 1, but I got slightly better results by increasing the weight of operators and constants to 2. This fits intuition: it is a bigger step to introduce a constant or operator out of the blue, than to identify two variables that already exist.
*Example 3.* — A variable renaming, like
$$\{ \alpha := \beta, \beta := \gamma, \gamma := \alpha \},$$
has size 0, since no variable is repeated among the right hand sides, and no operators or constants occur. □
*Example 4.* — The substitution
$$\{ \alpha := \delta, \beta := \delta, \gamma := \delta \},$$
has the size 2, since $\delta$ is repeated twice. [A variable that occurs $n$ times is repeated $(n-1)$ times.]\hfill\square
Example 5. — The substitution
\[
\{ \alpha := \text{Foo} (\text{Foo} (\beta)) \}
\]
has the size 4, since each occurrence of the operator \text{Foo} carries the weight 2. \hfill\square
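As a sanity check of this measure, here is a small Python sketch (the term representation, a string for a variable and an (operator, arguments) pair otherwise, is my own assumption; the actual system is written in Standard ML). On Examples 3, 4 and 5 it yields 0, 2 and 4, respectively.

```python
def substitution_size(subst, var_weight=1, op_weight=2):
    # subst maps variable names to terms; a term is either a string
    # (a type variable) or an (operator, arguments) pair.
    seen = set()
    size = 0
    def walk(term):
        nonlocal size
        if isinstance(term, str):
            if term in seen:
                size += var_weight   # a repeated variable
            else:
                seen.add(term)
        else:
            op, args = term
            size += op_weight        # an operator or constant occurrence
            for arg in args:
                walk(arg)
    for t in subst.values():
        walk(t)
    return size

assert substitution_size({"a": "b", "b": "g", "g": "a"}) == 0    # Example 3
assert substitution_size({"a": "d", "b": "d", "g": "d"}) == 2    # Example 4
assert substitution_size({"a": ("Foo", [("Foo", ["b"])])}) == 4  # Example 5
```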
A complication is that the unifiers can affect both polymorphic library variables and free query variables. At first, I treated these the same when I measured size. But the argument above, that more polymorphic functions can do less, only concerns the polymorphic library variables; it says nothing about the free query variables. Therefore, I now split each unifier in two parts: one that acts on library variables and one that acts on free query variables. Both are measured separately, but the former is most significant and the latter is only used to break ties. This gave better results than equal significance.
In summary, the definition of substitution size is a heuristic intended to place the most relevant functions first. It works well, but can probably be improved after more experiments.
3.2. Examples of retrieval
We will look at some examples of use. The size of the smallest substitution is given as the pair of sizes of the library-variable part and the free query-variable part. The times are averages of ten trials and are CPU seconds on a SPARC Server 10, model 41, with 32 Mbyte; the system was compiled by Standard ML of NJ, version 0.75.
Example 6. — We wish to print a floating point number. If we query the Lazy ML library with the type \text{Float} $\rightarrow$ [\text{Char}], we retrieve only
\[
\text{ftos} : \text{Float} \rightarrow [\text{Char}] \ (0, 0)
\]
Time: 0.12 s.
which indeed prints numbers in a standard way. If we now suspect that there is a more flexible print routine, which allows the user to choose the format, we should query with a type $\varepsilon \times \text{Float} \to [\text{Char}]$. Since $\varepsilon$ is a free type variable, it can be instantiated to any type, which is fortunate since we do not know
the type of the extra formatting information. This query retrieves
\[
\begin{align*}
ftos & : Float \to [\text{Char}] & (0, 2) \\
fmtf & : [\text{Char}] \to Float \to [\text{Char}] & (0, 4) \\
ftosf & : Int \to Int \to Float \to [\text{Char}] & (0, 6) \\
show\_pair & : \forall \alpha\beta. (\alpha \to [\text{Char}]) \times (\beta \to [\text{Char}]) \to \alpha \times \beta \to [\text{Char}] & (2, 17) \\
\text{while} & : \forall \beta. (\beta \to \text{Bool}) \to (\beta \to \beta) \to \beta \to \beta & (8, 40)
\end{align*}
\]
\[
\text{Time: 1.13 s.}
\]
The standard formatter \( \text{ftos} \) is retrieved again, via the substitution \( \{ \varepsilon := 1 \} \). But we also find two more flexible formatters \( \text{fmtf} \) and \( \text{ftosf} \), via the substitutions \( \{ \varepsilon := \text{Char} \} \) and \( \{ \varepsilon := \text{Int} \times \text{Int} \} \). The first function takes a formatting string in the style of the \texttt{printf} of \( C \), the other takes a minimum field width and a number of significant digits. We also retrieve thirty-two useless functions, but this does not matter much, as the useful ones were placed first. But the possibility of instantiating library types gave nothing useful in this example.
**Example 7.** Let us look for a function to check membership in a list. First we submit the query \( \forall \alpha. \alpha \times [\alpha] \to \text{Bool} \), which retrieves
\[
\begin{align*}
\text{mem} & : \forall \beta. \beta \to [\beta] \to \text{Bool} & (0, 0) \\
\text{Time: 0.75 s.}
\end{align*}
\]
via the substitution \( \{ \beta := \alpha \} \).
To try Runciman and Toyn's strategy [20] to allow extra arguments to library functions, we can query with \( \forall \alpha. \varepsilon \times \alpha \times [\alpha] \to \text{Bool} \). Since \( \varepsilon \) is a free variable, unlike \( \alpha \), it can be instantiated to the unknown type of the extra argument(s). This query retrieves
\[
\begin{align*}
\text{mem} & : \forall \beta. \beta \to [\beta] \to \text{Bool} & (0, 2) \\
\text{member} & : \forall \beta\gamma. (\beta \to \gamma \to \text{Bool}) \to \beta \to [\gamma] \to \text{Bool} & (1, 7) \\
(=) & : \forall \beta. \beta \to \beta \to \text{Bool} & (5, 5) \\
\text{while} & : \forall \beta. (\beta \to \text{Bool}) \to (\beta \to \beta) \to \beta \to \beta & (9, 47) \\
\text{Time: 10.1 s.}
\end{align*}
\]
Now we also retrieve a function \textit{member} that can take an equivalence test as an argument, but since nothing forces this test to take arguments of the same
type, the type of member is more general than the query. The required
substitution is
\[ \{ \beta := \alpha, \gamma := \alpha, \varepsilon := (\alpha \rightarrow \alpha \rightarrow \text{Bool}) \} \]. □
**Example 8.** – This query was suggested by Dan Synek. He had defined
\[ \text{norm} : \forall \alpha \beta . (\alpha \rightarrow [\beta] \rightarrow [\beta]) \rightarrow [\alpha] \rightarrow [\beta] \]
\[ \text{norm} f [ ] = [ ] \]
\[ \text{norm} f (x :: xs) = f x (\text{norm} f xs) \]
and wondered if it was already defined in the standard library. If we use the
type of norm as a query, nothing is retrieved. But if we again allow extra
arguments in library types, by querying with
\[ \forall \alpha \beta . \varepsilon \rightarrow (\alpha \rightarrow [\beta] \rightarrow [\beta]) \rightarrow [\alpha] \rightarrow [\beta] \]
we retrieve
\[ \text{itlist} : \forall \gamma \delta . (\gamma \rightarrow \delta \rightarrow \delta) \rightarrow [\gamma] \rightarrow \delta \rightarrow \delta \] \hspace{1cm} (2, 2)
\[ \text{revitlist} : \forall \gamma \delta . (\gamma \rightarrow \delta \rightarrow \delta) \rightarrow [\gamma] \rightarrow \delta \rightarrow \delta \] \hspace{1cm} (2, 2)
\[ \text{reduce} : \forall \gamma \delta . (\gamma \rightarrow \delta \rightarrow \delta) \rightarrow \delta \rightarrow [\gamma] \rightarrow \delta \] \hspace{1cm} (2, 2)
\[ (\cdot ) : \forall \alpha \beta \gamma . (\alpha \rightarrow \beta) \rightarrow (\gamma \rightarrow \alpha) \rightarrow (\gamma \rightarrow \beta) \] \hspace{1cm} (10, 9)
... twenty-six functions omitted...
\[ \text{while} : \forall \beta . (\beta \rightarrow \text{Bool}) \rightarrow (\beta \rightarrow \beta) \rightarrow \beta \rightarrow \beta \] \hspace{1cm} (19, 90)
Time: 31.8 s.
If you are familiar with the itlist/revitlist functions, also known as foldr/foldl,
you will realize that norm could have been defined by letting \( \text{norm f l} = \text{itlist f l} [ ] \), so it is just a special case of itlist, where the “start-state” of
itlist has been frozen to the empty list. This also instantiates the type of itlist,
since norm can return lists only. Therefore, to retrieve itlist, it was necessary
both to instantiate \( \varepsilon \) to the extra argument, and to instantiate the library
variable \( \delta \) to \([\beta]\). The substitution becomes \( \{ \gamma := \alpha, \delta := [\beta], \varepsilon := [\beta] \} \). □
The test library contains 294 identifiers, whose types can be divided into
148 linear-isomorphism classes. The implementation of the retrieval is rather
naïve; it just tests the classes one by one against the query. To get faster
retrieval, it should be possible to organize the classes by their result types. It
is also likely that the unification algorithm would be more efficient if it were
based directly on associative-commutative-unit unification, which gives fewer unifiers than associative-commutative unification.
3.3. Comparisons between equivalence, matching, and unification
Equational unification is normally more complex than matching and plain equivalence tests. The CCC-isomorphism test for Hindley/Milner types is graph-isomorphism-complete [1]; such problems are believed to be between polynomial and NP-complete. CCC-matchability is NP-complete, and CCC-unifiability is undecidable, although the restriction to linear CCC-isomorphism makes the unifiability NP-complete and thus decidable [16].
My implementation can usually test CCC-isomorphism of a query against 148 library types in less than a second. To find the library types that are more general than the query (modulo CCC-isomorphism), the time can be several seconds. And to test unifiability (modulo linear CCC-isomorphism), the time can be half a minute or more for the queries I have tried, but is usually less.
| query | CCC-iso. | CCC-match. | lin-unif. |
|---|---|---|---|
| $\text{Float} \to [\text{Char}]$ | 0.22 (1) | 0.27 (1) | 0.12 (1) |
| $\varepsilon \times \text{Float} \to [\text{Char}]$ | 0.21 (0) | 0.25 (0) | 1.13 (35) |
| $\forall \alpha.\ \alpha \times [\alpha] \to \text{Bool}$ | 0.21 (1) | 0.24 (1) | 0.75 (1) |
| $\forall \alpha.\ \varepsilon \times \alpha \times [\alpha] \to \text{Bool}$ | 0.20 (0) | 0.27 (0) | 10.1 (38) |
| $\forall \alpha\beta.\ (\alpha \to [\beta] \to [\beta]) \to [\alpha] \to [\beta]$ | 0.24 (0) | 0.46 (0) | 1.28 (0) |
| $\forall \alpha\beta.\ \varepsilon \to (\alpha \to [\beta] \to [\beta]) \to [\alpha] \to [\beta]$ | 0.25 (0) | 0.61 (0) | 31.8 (31) |
Table 3 gives some times for various queries. The tests of isomorphism and matching treated the free variables in the queries as if they were bound, that is, they were not instantiated, but possibly renamed. The figures in parentheses are the number of items retrieved from the library.
The queries were taken from examples 6-8. For other queries, and still using the Lazy ML library of 294 items, the number of retrieved items can be around half a dozen for isomorphism, and around a dozen for matching (that is, for checking if library types are more general than the query).
3.4. A user interface with windows
Thomas Hallgren has made a window-based user interface to the retrieval system. The answers to a query appear in one window (fig. 1).
4. SUGGESTIONS FOR FUTURE WORK
4.1. Conjunctive queries
The free type variables in queries make conjunctive queries possible. For instance, the query
\[ A \rightarrow \beta, \quad \beta \rightarrow C \]
(where $\beta$ is free) should return all pairs of functions $f$ and $g$ such that $f$ has type $A \rightarrow B$ and $g$ has type $B \rightarrow C$, for some type $B$. But it is not clear how to implement this efficiently.
4.2. Unifiers of bounded size
It may be possible to find a threshold size for unifiers, such that larger ones hardly ever retrieve useful functions. It would then suffice to check if substitutions smaller than the threshold are unifiers, and that could save time. What is more, there will be a finite number of substitutions to check, so the procedure will terminate even if the full CCC-isomorphism is used. This is an alternative if one wants to keep the distributivity axioms.
4.3. Retrieving proved lemmas
In semi-automated theorem proving or program verification, it would be useful to have easy access to a library of previously proved lemmas. Since lemmas can be seen as a kind of types which are more expressive than Hindley/Milner types (using the Curry/Howard correspondence), it may be possible to extend the retrieval method of this report to such types. To avoid undecidable problems, such a retrieval system must necessarily give only approximative results, but even a simple system could be useful in practice. The two basic questions are: what equivalence relation on types should be used, and to what extent should instantiation be allowed.
ACKNOWLEDGEMENTS
I thank Paliath Narendran, Frank Pfenning and Richard Statman for inventing the unification algorithm, Erik Lindström for implementing associative-commutative unification, and Sergei Soloviev for proving equational completeness. This work would not be possible without assistance from them. My retrieval system is much nicer to use since Thomas Hallgren and Staffan Truvé made a window-based user interface. I am also grateful to Magnus Carlsson, Thierry Coquand, Roberto Di Cosmo, Peter Dybjer, Yves Lafont, Pierre Lescanne, Giuseppe Longo, G. E. Mints, and Antti Valmari for help and advice. And my local user group, especially Annika Aasa, has provided valuable feedback.
REFERENCES
1. D. A. Basin, Equality of Terms Containing Associative-Commutative Functions and Commutative Binding Operators is Isomorphism Complete, In M. E. Stickel
14. L. Meertens and A. Siefkes, Universal Type Isomorphisms in Cartesian Closed Categories– Preliminary Version, Centrum voor Wiskunde en Informatica, Amsterdam, the Netherlands (lambert@cwi.nl and arno@cwi.nl), 1990.
23. S. V. Soloviev, The Ordinary Identities Form a Complete Axiom System for Isomorphism of Types in Closed Categories, 1991, The Institute for Informatics and Automation of the Academy of Sciences, 199178, St. Petersburg, Russia, e-mail via S. Baranoff, sergei@iias.spb.su. (The author is currently at Aarhus University, Denmark, e-mail: soloviev@daimi.aau.dk).
Chapter 1
Algorithms
The term “computer” used to be a job description for a person doing the same tedious computations over and over, hopefully without error. When electrical computers became available, these human computers often transitioned to become computer programmers. Instead of doing the computations themselves, they told the computer what to do.
**Definition 1.1 (Algorithm).** An *algorithm* is a sequence of computational instructions that solves a class of problems. Often the algorithm computes an output for a given input, i.e., a mathematical function.
**Remarks:**
- While the number of algorithms is theoretically unlimited, surprisingly many problems can be solved with just a few algorithmic paradigms that we will review in this chapter. A simple yet powerful algorithmic concept is recursion. Let us start with an example.
### 1.1 Recursion
You have won an *all-you-can-carry* run through an electronics store. The rules are simple: Whatever you manage to carry, you can have for free. Being well-prepared you bring a high-capacity backpack to the event. Which items should you put into your backpack such that you can carry the maximum possible value out of the store?
**Problem 1.2 (Knapsack).** An *item* is an object that has a *name*, a *weight* and a *value*. Given a list of *items* and a knapsack with a weight *capacity*, what is the maximal value that can be packed into the knapsack?
**Remarks:**
- An algorithm solving Knapsack computes a function; the inputs of this function are the set of possible *items* and the *capacity* limit of the knapsack, the output is the maximal possible *value*.
- A simple way to solve Knapsack is to check for every *item* whether it should be packed into the knapsack or not, expressed as the following recursion:
```python
def knapsack(items, capacity):
    if len(items) == 0:
        return 0
    first, *rest = items
    take = 0
    if first.weight <= capacity:
        take = knapsack(rest, capacity-first.weight) + first.value
    skip = knapsack(rest, capacity)
    return max(take, skip)
```
Algorithm 1.3: A recursive solution to Knapsack.
Remarks:
- Algorithm 1.3 may look like pseudo-code, but really is correct Python.
- In Lines 7 and 8, the algorithm calls itself. This is called a recursion.
Definition 1.4 (Recursion). An algorithm that splits up a problem into sub-problems and invokes itself on the sub-problem is called a recursive algorithm. A recursion ends when reaching a simple base case that can be solved directly. Also, see Definition 1.4.
Remarks:
- In mathematics, we find a similar structure in some prominent inductive functions such as the Fibonacci function.
- Recursive algorithms are often easy to comprehend, but not necessarily fast.
- How can we measure “fast”?
Definition 1.5 (Time Complexity). The time complexity of an algorithm is the number of basic arithmetic operations (+, −, ×, ÷, etc.) performed by the algorithm with respect to the size $n$ of the given input.
Remarks:
- Each variable assignment, if statement, iteration of a for loop, comparison (==, <, >, etc.) or return statement also counts as one basic arithmetic operation, and so do function calls (len(), max(), knapsack()).
- Unfortunately, there is no agreement on how the size of the input should be measured. Often the input size $n$ is the number of input items. If input items get large themselves (e.g., the input may be a single but huge number), $n$ refers to the number of bits needed to represent the input.
- We are usually satisfied if we know an approximate and asymptotic time complexity. The time complexity should be a simple function of $n$, just expressing the biggest term as $n$ goes to infinity, ignoring constant factors. Such an asymptotic time complexity can be expressed by the “big O” notation.
**Definition 1.6** (O-notation). The O-notation is used to denote a set of functions with similar asymptotic growth. More precisely,
$$O(f(n)) = \left\{ g(n) \mid \lim_{n \to \infty} \frac{g(n)}{f(n)} < \infty \right\}.$$
Remarks:
- In other words, $O(f(n))$ is the set of functions $g(n)$ that asymptotically do not grow much faster than $f(n)$.
- For example, $O(1)$ includes all constants and $O(n)$ means “linear in the input size $n$”.
- Admittedly, the O-notation is quite crude, but it is nevertheless useful, both in theory and in practice.
- Other useful asymptotic notations are $\Omega()$ for lower bounds, but also $o()$, $\omega()$, $\Theta()$, etc.
**Lemma 1.7.** The time complexity of Algorithm 1.3 is $O(2^n)$.
*Proof.* Each call of the knapsack()-procedure performs constantly many basic arithmetic operations itself and makes (at most) two additional calls to the knapsack()-procedure. Hence, it suffices to count the total number of knapsack()-invocations. We get 1 invocation on the first item, at most 2 on the second, 4 on the third, \ldots, and $2^{n-1}$ on the last. Hence, there are fewer than $2^n$ invocations of the knapsack()-function. \hfill \Box
Remarks:
- The time complexity of Algorithm 1.3 is exponential in the number of items. Even if there were only $n = 100$ items to be evaluated, the currently fastest supercomputer in the world would take $2^{100}$ ops/$(148 \cdot 10^{15}$ ops/s) $\approx 271\,000$ years to compute our knapsack function. So for many realistic inputs, Algorithm 1.3 is not usable. We need a better approach!
1.2 Greedy
What about sorting all the items by their value-to-weight ratio, and then simply greedily packing them!?
```python
def knapsack(items, capacity):
    items.sort(key=lambda item: -item.value/item.weight)
    value = 0
    for item in items:
        if item.weight <= capacity:
            capacity -= item.weight
            value += item.value
    return value
```
Algorithm 1.8: A naive greedy algorithm for Knapsack.
Remarks:
- Algorithm 1.8 is fast, with a time complexity of $O(n \log n)$, just for calling the sorting function in Line 2. So a large input is no problem.
- Also, the output of Algorithm 1.8 often seems reasonable. However, Algorithm 1.8 does not solve Knapsack optimally. For example, assume a capacity-6 knapsack, two items each with value 3 and weight 3, and one higher-ratio item with value 5 and weight 4; a concrete test follows these remarks.
- Can we gain a speed-up from first sorting the elements?
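To see the failure concretely, here is a tiny test of Algorithm 1.8 (the Item record is my own choice; the notes do not fix a representation):

```python
from collections import namedtuple

Item = namedtuple("Item", "name weight value")

# Two ratio-1 items and one ratio-1.25 item: greedy grabs the
# ratio-1.25 item first, after which nothing else fits.
items = [Item("a", 3, 3), Item("b", 3, 3), Item("c", 4, 5)]
print(knapsack(items, 6))  # greedy returns 5, but the optimum is 3 + 3 = 6
```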
1.3 Backtracking
Definition 1.9 (Backtracking). A backtracking algorithm solves a computational problem by constructing a candidate solution incrementally, until either a solution or a contradiction is reached. In case of a contradiction, the algorithm “backtracks” (i.e. reverts) its last steps to a state where another solution is still viable. Efficient backtracking algorithms have two main ingredients:
- **Look-ahead**: We order the search space such that the most relevant solutions come up first.
- **Pruning**: We identify sub-optimal paths early, allowing to discard parts of the search space without explicitly checking.
Remarks:
- Algorithm 1.3 was an inefficient backtracking algorithm.
- Our look-ahead idea is to sort the items by value-to-weight ratio as in Algorithm 1.8.
- The algorithm prunes the solution space if it cannot possibly achieve the best solution so far.
Algorithm 1.10: An efficient backtracking solution to Knapsack.
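The listing for Algorithm 1.10 did not survive extraction; the following is a hedged reconstruction consistent with the remarks below. The helper name `search` and its third parameter (the additional value required to surpass the previously best solution, the "missing parameter" of the first remark) are my guesses, not the original code.

```python
def knapsack(items, capacity):
    # Look-ahead: try promising items (high value-to-weight ratio) first;
    # assumes positive weights.
    items = sorted(items, key=lambda item: -item.value / item.weight)
    return search(items, capacity, 0)

def search(items, capacity, required):
    # Pruning: if even taking every remaining item cannot supply the
    # additionally required value, this branch cannot beat the best
    # solution found so far, so it is safe to give up on it.
    if not items or sum(item.value for item in items) <= required:
        return 0
    first, *rest = items
    take = 0
    if first.weight <= capacity:
        take = first.value + search(rest, capacity - first.weight,
                                    required - first.value)
    skip = search(rest, capacity, max(required, take))
    return max(take, skip)
```

With `required` initialized to 0 and tightened to the best value seen so far, a pruned branch may return an underestimate, but only one that is already dominated, so the overall maximum is unaffected.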
Remarks:
- The missing parameter is the additional value that is required to surpass the previously best solution.
- The time complexity of Algorithm 1.10 is still $O(2^n)$ in the worst case. Can we do better?
### 1.4 Dynamic Programming
**Definition 1.11 (Dynamic Programming).** *Dynamic programming (DP) is a technique to reduce the time complexity of an algorithm by utilizing extra memory. To that end, a problem is divided into sub-problems that can be optimized independently. Intermediate results are stored to avoid duplicate computations.*
Remarks:
- Knapsack can be solved with dynamic programming. To that end, we store a value matrix $V$ where $V[i][c]$ is the maximum value that can be achieved with capacity $c$ using only the first $i$ items.
Algorithm 1.12: A dynamic programming solution to Knapsack.
Remarks:
- Note that Algorithm 1.12 is not correct Python. Line 3 is just pseudo-code, far from actual Python notation. Line 4 could be Python, but unfortunately needs an extra `enumerate()` function.
- Line 6 is incorrect: If `item.weight > c`, `c-item.weight` becomes negative. The programmer of Algorithm 1.12 assumed that accessing a negative index of an array returns 0; however, most programming languages raise an error (and Python would silently index from the end of the row, which is also wrong here). We can fix Line 6 by adding the conditional expression `if c >= item.weight else 0` to the first term of the `max()` function. (A corrected sketch follows these remarks.)
- The time complexity of Algorithm 1.12 is $O(n \cdot \text{capacity})$. In Definition 1.5 we postulated that the time complexity should be a function of $n$. So the DP approach only makes sense when \text{capacity} is a natural number with $\text{capacity} < 2^n/n$.
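Since the listing for Algorithm 1.12 was also lost in extraction, here is a reconstruction with the conditional fix already applied (variable names are my own; weights are assumed to be integers, as in Lemma 1.14):

```python
def knapsack(items, capacity):
    # V[i][c]: maximum value achievable with capacity c using only
    # the first i items (computed bottom-up).
    V = [[0] * (capacity + 1)]
    for i, item in enumerate(items):
        V.append([0] * (capacity + 1))
        for c in range(capacity + 1):
            take = (V[i][c - item.weight] + item.value
                    if c >= item.weight else 0)
            V[i + 1][c] = max(take, V[i][c])
    return V[len(items)][capacity]
```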
**Definition 1.13** (Space Complexity). The **space complexity** of an algorithm is the amount of memory required by the algorithm, with respect to the size $n$ of the given input.
Remarks:
- As for Definition 1.5, we are usually satisfied if we know the approximate (asymptotic) space complexity.
- Also, the amount of memory can be measured in bits or memory cells.
- The space complexity of Algorithm 1.12 is $O(n \cdot \text{capacity})$.
- For reasonably small \text{capacity}, Algorithm 1.12 is faster than Algorithms 1.3–1.10, but is it correct?
**Lemma 1.14.** Assuming that all items have integer weights, Algorithm 1.12 solves Knapsack correctly.
**Proof.** We show the correctness of each entry in the matrix $V$ by induction. As a base case, we have $V[0] = [0, \ldots, 0]$ since without any item, no value larger than 0 can be achieved. For the induction step, assume that $V[i]$ correctly contains the maximum values that can be achieved using only the first $i$ items. When we set a value $V[i+1][c]$, we can either include the item $i+1$ or keep the optimal solution for Knapsack with capacity $c$ using only the first $i$ items. Algorithm 1.12 stores the $\max()$ of these two values in $V[i+1][c]$ (for all $c \in \{0, \ldots, \text{capacity}\}$), which is optimal.
Hence, the value $V[n][\text{capacity}]$ contains the maximum value that can be achieved with the weight \text{capacity}, using any combination of the $n$ items. □
Remarks:
- Line 6 of Algorithm 1.12 is typical for dynamic programming algorithms: either the previous best solution can be improved, or it remains unchanged. This is called Bellman’s principle of optimality.
- The computation order of Algorithm 1.12 is important. For example, we can only compute the entry \( V[i+1][c] \) once we have computed both \( V[i][c-item.weight] \) and \( V[i][c] \).
- The sub-problem dependencies can be visualized as a dependency graph. In order to apply dynamic programming, this graph must be a directed acyclic graph (DAG).
- Algorithm 1.12 is a so-called \textit{bottom-up} dynamic programming algorithm as it begins computing the entries of matrix \( V \) starting with the simple cases.
- But do we really need to compute the entire matrix \( V \)?
\textbf{Definition 1.15 (Memoization).} \textit{Memoization} generally refers to a technique that avoids duplicate computations by storing intermediate results.
```python
def knapsack(items, capacity, memo={}):
index = (len(items), capacity)
if index in memo:
return memo[index]
if len(items) == 0:
return 0
first, *rest = items
take = 0
if first.weight <= capacity:
take = knapsack(rest, capacity-first.weight, memo)
take += first.value
skip = knapsack(rest, capacity, memo)
memo[index] = max(take, skip)
return memo[index]
```
Algorithm 1.16: A top-down DP solution to Knapsack.
Remarks:
- Memoization can be used to implement \textit{top-down} DP algorithms.
- This is not so different from our initial Algorithm 1.3!
- We only changed Line 1 and added Line 2 to set up memoization, which is then used in Lines 3–4 and 13–14.
- Top-down DP inherits the best of recursion and bottom-up DP. Consequently, the time complexity of Algorithm 1.16 is
\[ O(\min(2^n, n \cdot \text{capacity})). \]
- So far we have learned a family of related algorithmic techniques: recursion, backtracking, dynamic programming, and memoization. Together, this family can help solve many demanding algorithmic problems.
- However, there are powerful algorithmic paradigms beyond this family of techniques, for instance linear programming.
1.5 Linear Programming
So far, we were only considering unsplittable items. However, for liquid goods, Knapsack can be solved quickly using a greedy method (Algorithm 1.8). What if we had more than one constraint?
**Problem 1.17 (Liquid Knapsack).** A beverage has a name, a value per liter and a preparation time per liter. Given $t$ hours to prepare for a party and a fridge with a storage capacity, what is the maximal value that can be prepared and stored in the fridge?
**Remarks:**
• With more than one constraint, the greedy method does not work.
• However, this problem has a nice property: the objective and the constraints are linear functions of the quantity of each prepared beverage. We call such problems linear programs.
**Definition 1.18 (Linear Program or LP).** A linear program (LP) is an optimization problem with $n$ variables and $m$ linear inequalities
\[
\begin{align*}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &\leq b_1 \\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &\leq b_2 \\
&\;\;\vdots \\
a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n &\leq b_m
\end{align*}
\]
We are interested in finding a point \( \mathbf{x} = (x_1, \ldots, x_n)^T \) with \( x_i \geq 0 \), respecting all these constraints, and maximizing a linear function
\[ f(\mathbf{x}) = c_1x_1 + c_2x_2 + \cdots + c_nx_n \]
where \( a_{ij} \), \( b_i \), and \( c_i \) are given real-valued parameters. We call the point \( \mathbf{x} \) an optimum of the LP.
Remarks:
• There is also a short hand notation using linear algebra
\[ \text{max}\{c^T x \mid Ax \leq b, x \geq 0\}, \]
where \( A \) is the matrix with entries \( a_{ij} \) and \( b \) and \( c \) the vectors given by the \( b_i \) and \( c_i \), respectively.
• In general, if you have the problem of maximizing or minimizing a linear function under constraints that are linear (in)equalities, there is a way to formulate it in the canonical form above. For instance, a constraint \( a^T x = b \) can be rewritten as a combination of \( a^T x \leq b \) and \( a^T x \geq b \), where the latter can itself be rewritten as \(-a^T x \leq -b\). Also, minimizing a linear function with coefficients \( c_1, \ldots, c_n \) is the same as maximizing a linear function with coefficients \(-c_1, \ldots, -c_n\).
• It is possible to model some functions which do not look linear at first sight. For example, minimizing an objective function \( f(x) = |x| \) can be expressed as \( \text{min}\{t \mid x \leq t, -x \leq t\} \).
Definition 1.19 (Feasible Point). Given an LP, a point is feasible if it is a solution of the set of constraints.
Remarks:
• Geometrically, the set of feasible points of an LP corresponds to an \( n \)-dimensional convex polytope. The hyperplanes bounding the polytope are given by the restricting inequalities.
• Polytopes are a generalization of 2D polygons to an arbitrary number of dimensions. Convexity, however, deserves a more formal definition.
Definition 1.20 (Convex Set). A set of points in \( \mathbb{R}^n \) is convex if for any two points of the set, the line segment joining them is also entirely included in the set.
Lemma 1.21. The set of feasible points of an LP is convex.
Proof. Given two feasible points \( x_1 \) and \( x_2 \), any point in the line segment joining them can be written as \( x_1 + \lambda(x_2 - x_1) \) for \( \lambda \in [0, 1] \). For any constraint \( a^T x \leq b \), we compute
\[ a^T[x_1 + \lambda(x_2 - x_1)] = (1 - \lambda)a^T x_1 + \lambda a^T x_2 \leq (1 - \lambda)b + \lambda b = b. \]
Definition 1.22. Given an LP, we call polytope the set of feasible points. A constraint \( a^T x \leq b \) is tight at \( x \) if \( a^T x = b \). For an LP with \( n \) variables, feasible points at which \( n \) (resp. \( n - 1 \)) linearly independent constraints are tight are called the nodes (resp. edges) of the polytope. Each edge links two nodes \( x_1, x_2 \) that share \( n - 1 \) tight constraints; we say that the two nodes \( x_1, x_2 \) are neighbors.
Remarks:
- A polytope can be unbounded, i.e. infinitely large. If the convex polytope is unbounded, it is often rather called a convex polyhedron. In some cases, it is even possible to have an infinitely large solution, e.g., \( \max\{x|x \geq 0\} \). Following our definition, the LP does not admit an optimum in this case.
- In order to solve an LP, one has to find a point in the polytope that maximizes our objective function \( f(x) \).
**Theorem 1.23.** If the polytope of an LP is bounded, then at least one node of the polytope is an optimum of the LP.
*Proof.* For any value \( y \) that the objective function can take, the set of points reaching this value is given by the hyperplane \( c^T x = y \). We can find an optimum of the LP by sliding this hyperplane until the boundary of the polytope is reached, which happens at some node of the polytope.
Remarks:
- One popular method exploiting Theorem 1.23 for solving LPs is the simplex algorithm. The idea is simple: starting from a node of the LP polytope, greedily jump to a neighboring node having a better objective until you cannot improve the solution anymore.
```python
def simplex(polytope, f, x):
for y in neighbors(x, polytope):
if f(y) > f(x):
return simplex(polytope, f, y)
return x
```
Algorithm 1.24: Simplex Algorithm.
Remarks:
- While the simplex algorithm performs well in practice, there are instances where its time complexity is exponential in the size of the input. Other LP algorithms known as interior point methods are provably fast.
- In practice, we do not build and store the whole polytope of the LP, as the polytope could have an exponential number of nodes! Instead, we represent a node as a set of tight constraints. To find its neighbors, we remove a constraint of the set, add another constraint and check if the point is feasible.
- The node returned by the simplex algorithm is better than any neighboring node by construction, but how can we convince ourselves that no other point anywhere in the feasible polytope is better?
Definition 1.25 (Local Optimum). A feasible node $x$ is a **local optimum** if $f(x) \geq f(y)$ for any neighboring node $y$.
Remarks:
- In contrast to a local optimum, an optimum from Definition 1.18 is called **global optimum**.
- While it is easy to find a local optimum, finding a global optimum is often difficult. However, it turns out that every local optimum of an LP is also a global optimum!
**Theorem 1.26.** The node $x^*$ returned by the simplex algorithm is an optimum.
**Proof.** Let us consider the hyperplane $c^T x = f^*$, where $f^* = c^T x^*$. We know that all the neighbors of node $x^*$ are on the side $c^T x \leq f^*$. Since the polytope is convex, we know that the whole polytope must be on this side of the hyperplane. Hence no node $x'$ in the polytope can be on the side $c^T x > f^*$, and hence the node $x^*$ is a global optimum.

Remarks:
- So we have seen that every local optimum of an LP is also a global optimum. This important property in optimization is true for convex functions in general, and as such LPs are only a special case of convex optimization.
- We call Algorithm 1.24 with $x$ being any node of the polytope. But wait, how do we find such a start node?! It turns out that we can construct an auxiliary LP:
**Definition 1.28 (Phase 1 LP).** Given an LP
$$\max \{c^T x \mid Ax \leq b, x \geq 0\},$$
we build the so-called **phase 1 LP** by replacing every constraint $a_i^T x \leq b_i$ with $a_i^T x - y_i \leq b_i$, introducing a new artificial variable $y_i$. If we minimize all artificial variables $y_i$, we get:
$$\max \{-1^T y \mid Ax - Iy \leq b, x \geq 0, y \geq 0\}.$$
Lemma 1.29. Setting each $x_i = 0$ and each $y_i = \max(0, -b_i)$ yields a feasible node of the phase 1 LP.
Proof. With each original variable $x_i = 0$, each constraint is reduced to $-y_i \leq b_i$, which is satisfied when $y_i = \max(0, -b_i)$.
Also, this point is a node of the polytope: Algebraically, a point is a node if at least $n$ linearly independent constraints are tight at this point. The constraint $x_i \geq 0$ is tight for each original variable $x_i$ and either $a_i x - y_i \leq b_i$ or $y_i \geq 0$ is tight for each artificial variable $y_i$, depending on the sign of $b_i$. Thus, the number of tight constraints is at least equal to the number of variables, and this point is a node of the polytope.
Lemma 1.30. If the original LP is feasible, then the phase 1 LP will find a feasible node.
Proof. If the original LP is feasible, then its polytope is not empty, i.e., there exists a feasible node $x$ in the original LP. Together with $y = 0$, node $x$ is also feasible in the phase 1 LP. Since maximizing $-\mathbf{1}^T y$ is the same as minimizing $\sum_i y_i$, and $y \geq 0$, the phase 1 objective is optimal exactly when $y = 0$. With Theorem 1.26, we know that the phase 1 LP will find such a node, i.e., a feasible node of the original LP.
Remarks:
- Algorithm 1.31 is the complete procedure to solve an LP. This process is often called the two-phase simplex algorithm.
- In Python, one can solve an LP using the function `linprog` from the module `scipy.optimize`.
```python
def solveLP(A, b, c):
    x, y = simplex(polytope([A, -I], b), -1, (0, max(0, -b)))
    if sum(y) == 0:
        return simplex(polytope(A, b), c, x)
    else:
        return 'no solution'
```
Algorithm 1.31: Two-phase simplex algorithm to solve LPs.
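As a concrete illustration of the `linprog` remark above, here is a toy instance of Liquid Knapsack (Problem 1.17); the numbers are invented for the example:

```python
import numpy as np
from scipy.optimize import linprog

values = np.array([4.0, 3.0])      # value per liter of each beverage
prep   = np.array([0.5, 0.2])      # preparation time per liter
A = np.vstack([prep, np.ones(2)])  # one row per constraint
b = np.array([8.0, 30.0])          # t = 8 hours, 30 liter fridge

# linprog minimizes, so negate the objective to maximize total value;
# the default bounds already enforce x >= 0.
res = linprog(c=-values, A_ub=A, b_ub=b)
print(res.x, -res.fun)
```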
1.6 Linear Relaxation
Linear programming is covering a broad class of problems, but we are often confronted with discrete tasks, for which we need an integer solution.
Definition 1.32 (Integer Linear Programming or ILP). An integer linear program (ILP) is an LP in which all variables are restricted to integers.
Remarks:
• In a lot of combinatorial problems, variables are restricted to just two values \( \{0, 1\} \). Such variables are called indicator (“to be or not to be”) variables. We call such programs binary ILPs.
• Apart from LP and ILP, there exist many other optimization techniques: Mixed Integer Linear Programming (MILP) with both integer and continuous variables, Quadratic Programming (QP), Semidefinite Programming (SDP), …
Problem 1.33 (ILP Knapsack). We can model Knapsack (Problem 1.2) with capacity \( c \) and \( n \) items of value \( v_i \) and weight \( w_i \) as a binary ILP, using indicator variables \( x_i \):
\[
\begin{align*}
\text{maximize} & \quad \sum v_i x_i \\
\text{subject to:} & \quad \sum w_i x_i \leq c \\
& \quad x_i \in \{0, 1\}.
\end{align*}
\]
Remarks:
• Unlike LPs, no efficient algorithm solving ILPs is known.
• It is tempting to relax the constraints \( x_i \in \{0, 1\} \) to \( 0 \leq x_i \leq 1 \), apply the simplex algorithm, and round the possible solution to the nearest feasible point.
Definition 1.34 (Linear Relaxation). Given a binary ILP, we construct the linear relaxation of the ILP by replacing the constraint \( x \in \{0, 1\}^n \) with the constraint \( 0 \leq x_i \leq 1 \).
Remarks:
• However, in general, there is no guarantee that a linear relaxation finds the optimum.
• In the case of Knapsack, the solution of the linear relaxation is similar to Algorithm 1.8. All items \( i \) with a high value-to-weight ratio will get an indicator variable \( x_i = 1 \), all items with a low value-to-weight ratio will get an indicator variable \( x_i = 0 \). The critical item(s) in the middle will get a non-integer indicator variable which we must round down to 0 to get a valid solution. This solution can be arbitrarily bad, as the best (highest value-to-weight ratio) item might already be too heavy; we might end up without any object in the knapsack.
• However, a linear relaxation sometimes has the same optimum as its ILP. In particular, this is true for some classes of constraint matrices, e.g., totally unimodular matrices.
• A matrix is totally unimodular if every square submatrix has determinant \(-1, 0\) or \(+1\). This is a non-trivial property to check. For a certain class of problems we know that the constraint matrices are always totally unimodular.
Problem 1.35 (Assignment Problem). Given a list of customers and a list of cabs, how to match customers to cabs in order to minimize the total waiting time?
Algorithm 1.36. This problem can be modeled as an ILP. We denote the waiting time of customer \( i \) for cab \( j \) by \( w_{i,j} \). Also, we introduce a set of indicator variables \( x_{i,j} \) describing the assignment: \( x_{i,j} = 1 \) if and only if customer \( i \) is assigned to cab \( j \). We get:
\[
\begin{align*}
\text{minimize} & \quad \sum_{i,j} x_{i,j} w_{i,j} \\
\text{subject to:} & \quad \sum_{j} x_{i,j} = 1 \quad \text{for each customer } i \\
& \quad \sum_{i} x_{i,j} \leq 1 \quad \text{for each cab } j \\
& \quad x_{i,j} \in \{0,1\}
\end{align*}
\]
This ILP can be solved optimally with linear relaxation: the constraint matrix is totally unimodular.
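In practice one need not set up this LP by hand: SciPy's `linear_sum_assignment` solves the assignment problem directly, and, consistent with the total-unimodularity claim above, it returns an integral assignment. The waiting times below are invented:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

w = np.array([[4, 1, 3],   # w[i][j]: waiting time of customer i for cab j
              [2, 0, 5],
              [3, 2, 2]])
customers, cabs = linear_sum_assignment(w)  # minimizes total waiting time
print(list(zip(customers, cabs)), w[customers, cabs].sum())
```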
1.7 Flows
Graphs and flows are useful algorithmic concepts, related to LPs and linear relaxations.
Definition 1.37 (Graph). A graph \( G \) is a pair \( (V,E) \), where \( V \) is a set of nodes and \( E \subseteq V \times V \) is a set of edges between the nodes. The number of nodes is denoted by \( n \) and the number of edges by \( m \).
Remarks:
- A directed graph \( G = (V,E) \) is a graph, where each edge has a direction, i.e., we distinguish between edges \((u,v)\) and \((v,u)\). If all edges of a graph are undirected, then the graph is called undirected.
- In a directed graph, we note \( \text{in}(u) \) (resp. \( \text{out}(u) \)) the set of edges entering (resp. leaving) node \( u \).
- A weighted graph \( G = (V,E,\omega) \) is a graph, where \( \omega : E \to \mathbb{R} \) assigns a weight \( \omega(e) \) for each edge \( e \in E \).
- Weights can for instance be used for delay \( d(e) \) or capacity \( c(e) \) of an edge.
- In the rest of this chapter, we consider capacitated directed graphs.
- Consider a company that wants to optimize the flow of goods in a transportation network from their factory to a customer.
Definition 1.38 (Flow). Formally, an s-t-flow from a source node \( s \) to a target node \( t \) is given as a function \( f : E \to \mathbb{R}_{\geq 0} \) such that
\[
\begin{align*}
f(u,v) & \leq c(u,v) \quad \text{for all } (u,v) \in E \quad \text{(capacity constraints)} \\
\sum_{e \in \text{in}(u)} f(e) & = \sum_{e \in \text{out}(u)} f(e) \text{ for all } u \in V \setminus \{s,t\} \quad \text{(flow conservation)}
\end{align*}
\]
We call the total flow reaching \( t \) the value of \( f \), i.e. \(|f| = \sum_{(u,t) \in E} f(u,t)\).
Problem 1.39 (Max-Flow). What is the maximum flow that can be established between a source and a target node in a network?
Remarks:
- Max-Flow can be written as an LP maximizing the value of the flow.
- Flows are also useful to model discrete (integral) data. Imagine traffic flow, for example: every road has some capacity of cars, and at each intersection every whole car getting in is expected to eventually get out!
- Fortunately, we can use the linear relaxation of the ILP and be guaranteed to have the optimal solution!
Theorem 1.40 (Integral Flow Theorem). If the capacity of each edge is an integer, then there exists a maximum flow such that every edge has an integral flow.
Proof. Assume you have an optimal but non-integral flow. If there is a path from $s$ to $t$ with every edge being non-integral, we can increase the flow on that path, so our original flow was not optimal. Hence, there cannot be a non-integral path from $s$ to $t$.
Let $u$ be a node adjacent to an edge $e$ with non-integral flow. Then $u$ needs at least another edge $e'$ with non-integral flow because of flow conservation at node $u$. We can follow these non-integral edges. Since they cannot include both $s$ and $t$, we must find a cycle $C$ of non-integral edges. All edges in $C$ can both change their flow by $\pm \varepsilon$, without changing the flow from $s$ to $t$. We change the flow of all edges in $C$ until a first edge in $C$ has integral flow. Now we have one edge less with non-integral flow. If there is still an edge with non-integral flow, we repeat this procedure, until all edges have integral flow.
Remarks:
- Thanks to Theorem 1.40, we can solve a discrete maximum flow problem with the linear relaxation of the ILP formulation and the simplex algorithm!
- There are also more efficient algorithms, known as augmenting paths algorithms.
Lemma 1.41. The following LP can be used to solve the flow problem:
\[
\begin{align*}
\text{maximize} & \quad \sum_{(s,v) \in \text{out}(s)} x_{(s,v)} \\
\text{subject to:} & \quad x_{(u,v)} \geq 0 & \quad \text{for each edge } (u,v) \in E \\
& \quad x_{(u,v)} \leq c_{(u,v)} & \quad \text{for each edge } (u,v) \in E \\
& \quad \sum_{e \in \text{in}(u)} x_e - \sum_{e \in \text{out}(u)} x_e = 0 & \quad \text{for all } u \in V \setminus \{s,t\}
\end{align*}
\]
Proof. The flow $f$ is represented by one variable $x_{(u,v)}$ for every directed edge $(u,v) \in E$ that indicates the value on that edge, i.e. $f(u,v) = x_{(u,v)}$. We maximize the total flow value by looking at the flow that leaves $s$. The first constraint ensures that the flow is non-negative, while the second enforces the capacity constraint and the third one flow conservation.
Definition 1.42 (Augmenting Path). We define an augmenting path as a path from $s$ to $t$ such that the flow of each edge does not reach its capacity or flow can be pushed back. This is the case if the residual capacity on every edge of the path is greater than 0, where the residual capacity $r$ of an edge is defined as:
$$\text{residual}(u,v) = c(u,v) - f(u,v) + f(v,u)$$
Remarks:
- We can find an augmenting path in linear time, using a recursive algorithm!
- Instead of using the residual capacity defined above, we can also add all missing directed edges to the graph and give them capacity 0. Then, when we add flow to an edge $(u,v)$ we decrease the flow on the reverse edge $f(v,u)$ by the same amount. In this case $c(u,v) - f(u,v) > 0$ if and only if $\text{residual}(u,v) > 0$, so we can use the former, simpler check for finding edges with non-zero residual capacity.
```python
def find_augmenting_path(u, t, G, flow, residual, visited):
    # Depth-first search for a path whose edges all have positive
    # residual capacity (Definition 1.42); `residual` is the residual-
    # capacity table, passed in explicitly here. The path is returned
    # as a list of edges, ordered from t back towards s.
    visited.add(u)
    for v in G.neighbors(u):
        if v not in visited and residual[u, v] > 0:
            path = find_augmenting_path(v, t, G, flow, residual, visited)
            if len(path) > 0 or v == t:
                path.append((u, v))
                return path
    return []
```
Algorithm 1.43: Find augmenting path
Remarks:
- If the network has an augmenting path, then none of the edges of this path is at full capacity and we can add some flow on this path. This gives us a greedy algorithm: Find an augmenting path, push as much flow as possible on this path, then try again. This is known as the Ford-Fulkerson algorithm.
Algorithm 1.44: Ford-Fulkerson algorithm
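The listing for Algorithm 1.44 did not survive extraction either; the following is a hedged reconstruction matching the remark above. The `residual` table is assumed to be initialized with residual[u, v] = c(u, v) for every edge, and 0 for the added reverse edges:

```python
from collections import defaultdict

def ford_fulkerson(G, s, t, residual):
    flow = defaultdict(int)
    while True:
        path = find_augmenting_path(s, t, G, flow, residual, set())
        if len(path) == 0:
            return flow                   # no augmenting path is left
        # Push as much flow as possible: the path's bottleneck capacity.
        bottleneck = min(residual[u, v] for (u, v) in path)
        for (u, v) in path:
            flow[u, v] += bottleneck
            residual[u, v] -= bottleneck  # forward edge loses capacity
            residual[v, u] += bottleneck  # reverse edge gains capacity
```

For integral capacities each iteration increases the flow value by at least 1, so the loop terminates, in line with Theorem 1.40.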
Remarks:
- The maximum flow is closely related to the minimum cut.
Definition 1.45 (Cut). An s-t cut is a partition of the vertices $V$ into two sets $S$ and $T = V \setminus S$, such that $s \in S$ and $t \in T$. Each valid cut has a set of edges $C$ which point from nodes in $S$ to nodes in $T$. The minimum s-t cut is the cut whose edge set $C$ has minimum total capacity.
Theorem 1.46 (Max-Flow Min-Cut). The maximum s-t flow (Definition 1.38) is equal to the minimum s-t cut (Definition 1.45).
Chapter Notes
The word algorithm is derived from the name of Muhammad ibn Musa al-Khwarizmi, a Persian mathematician who lived around AD 780–850. Some algorithms are as old as civilizations. A division algorithm was already used by the Babylonians around 2500 BCE [2]. Analyzing the time efficiency of recursive algorithms can be a difficult task. An easy but powerful approach is given by the master theorem [1]. Linear programming is an old concept whose origins lie in solving logistic problems during World War 2. Back in the days, the term programming meant optimization, and not coding. Maximum flow has been studied since the 1950s, when it was formulated to study the Soviet railway system. The classic algorithm is by Ford and Fulkerson [4]. However just recently there has been progress, and Chen et al. [3] managed to solve maximum flow in pretty much linear time. This chapter was written in collaboration with Henri Devillez and Roland Schmid.
Bibliography
DESCRIPTION
Information and Software Technology is the international archival journal focusing on research and experience that contributes to the improvement of software development practices. The journal's scope includes methods and techniques to better engineer software and manage its development. Articles submitted for review should have a clear component of software engineering or address ways to improve the engineering and management of software development. Areas covered by the journal include:
- Software management, quality and metrics,
- Software processes,
- Software architecture, modelling, specification, design and programming
- Functional and non-functional software requirements
- Software testing and verification & validation
- Empirical studies of all aspects of engineering and managing software development
Short Communications is a new section dedicated to short papers addressing new ideas, controversial opinions, "negative" results, and much more. Read the Guide for Authors for more information.
The journal encourages and welcomes submissions of systematic literature studies (reviews and maps) within the scope of the journal. Information and Software Technology is the premier outlet for systematic literature studies in software engineering. Guidelines for conducting systematic reviews are provided here.
Special Issues and Special Sections proposals
To submit a proposal for a special issue (original contributions on a topic within the scope of the journal) or a special section with extended papers from a conference or workshop within the scope of the journal, please contact the Special Content Editor, Prof. C. Wohlin (claes.wohlin@bth.se).
Benefits to authors
We also provide many author benefits, such as free PDFs, a liberal copyright policy, special discounts on Elsevier publications and much more. Please click here for more information on our author services.
Please see our Guide for Authors for information on article submission. If you require any further information or help, please visit our Support Center.
AUDIENCE
Software project managers, management information systems managers, information centre managers, software engineers and developers in industry and commercial organizations, software and systems houses, total solution vendors, academics.
IMPACT FACTOR
2018: 2.921 © Clarivate Analytics Journal Citation Reports 2019
ABSTRACTING AND INDEXING
Web of Science
Ergonomics Abstracts
INSPEC
IT-Digest
Science Citation Index Expanded
ACM Guide to Computing Literature
Applied Science and Technology Index
Computer Literature Index
CompuScience
Current Contents
Deadline Newsletter
Engineering Index
Research Alert
Scopus
EDITORIAL BOARD
Editor-in-Chief
Günther Ruhe, University of Calgary Department of Computer Science, 2500 University Drive NW, Calgary, T2N 1N4, Alberta, Canada
Special Content Editor
Jeffrey Carver, The University of Alabama, Tuscaloosa, Alabama, United States
Associate Editors
Tracy Hall, Lancaster University School of Computing and Communications, B40, InfoLab21, Lancaster, United Kingdom
Tim Menzies, North Carolina State University, Raleigh, North Carolina, 27695, United States
Guilherme Horta Travassos, Federal University of Rio de Janeiro Centre of Technology, Cidade Universitária, 21941-909, RIO DE JANEIRO, Brazil
Emeritus Editor
Claes Wohlin, Blekinge Institute of Technology Department of Software Engineering, 37179, Karlskrona, Sweden
Editorial Board
Bram Adams, Montreal Polytechnic, Montreal, Quebec, Canada
Christian Bird, Microsoft Research, Redmond, Washington, United States
Sjaak Brinkkemper, Utrecht University, Utrecht, Netherlands
Yuanfang Cai, Drexel University, Philadelphia, Pennsylvania, United States
Ivica Crnkovic, Chalmers University of Technology Department of Computer Science and Engineering, Göteborg, Sweden
Maya Daneva, University of Twente, Enschede, Netherlands
Tore Dybå, SINTEF, Trondheim, Norway
Sebastian Elbaum, University of Nebraska-Lincoln, Lincoln, Nebraska, United States
Michael Felderer, University of Innsbruck, Department of Computer Science, Innsbruck, Austria
Xavier Franch, Polytechnic University of Catalonia Department of Service and Information System Engineering, Barcelona, Spain
Sudipto Ghosh, Colorado State University, Fort Collins, Colorado, United States
Paul Grünbacher, Johannes Kepler University Linz, Linz, Austria
GUIDE FOR AUTHORS
Your Paper Your Way
We now differentiate between the requirements for new and revised submissions. You may choose to submit your manuscript as a single Word or PDF file to be used in the refereeing process. Only when your paper is at the revision stage will you be requested to put your paper into a 'correct format' for acceptance and provide the items required for the publication of your article.
To find out more, please visit the Preparation section below.
INTRODUCTION
Original high-quality research and review papers falling within the Aims and Scope of the journal will be considered for publication. Contributions are normally received with the understanding that they comprise original, unpublished material and are not being submitted for publication elsewhere. Translated material, which has not been published in English, will also be considered.
Types of Paper
Research Papers, Short Communications and Review Articles. We also actively encourage the submission of Systematic Review Articles.
Submission checklist
You can use this list to carry out a final check of your submission before you send it to the journal for review. Please check the relevant section in this Guide for Authors for more details.
Ensure that the following items are present:
One author has been designated as the corresponding author with contact details:
• E-mail address
• Full postal address
All necessary files have been uploaded:
Manuscript:
• Include keywords
• All figures (include relevant captions)
• All tables (including titles, description, footnotes)
• Ensure all figure and table citations in the text match the files provided
• Indicate clearly if color should be used for any figures in print
Graphical Abstracts / Highlights files (where applicable)
Supplemental files (where applicable)
Further considerations
• Manuscript has been 'spell checked' and 'grammar checked'
• All references mentioned in the Reference List are cited in the text, and vice versa
• Permission has been obtained for use of copyrighted material from other sources (including the Internet)
• A competing interests statement is provided, even if the authors have no competing interests to declare
• Journal policies detailed in this guide have been reviewed
• Referee suggestions and contact details provided, based on journal requirements
For further information, visit our Support Center.
BEFORE YOU BEGIN
Ethics in publishing
Please see our information pages on Ethics in publishing and Ethical guidelines for journal publication.
Declaration of interest
All authors must disclose any financial and personal relationships with other people or organizations that could inappropriately influence (bias) their work. Examples of potential conflicts of interest include employment, consultancies, stock ownership, honoraria, paid expert testimony, patent applications/registrations, and grants or other funding. Authors should complete the declaration of interest
statement using this template and upload it to the submission system at the Attach/Upload Files step. If there are no interests to declare, please choose: 'Declarations of interest: none' in the template. This statement will be published within the article if accepted. More information.
**Submission declaration and verification**
Submission of an article implies that the work described has not been published previously (except in the form of an abstract, a published lecture or academic thesis, see 'Multiple, redundant or concurrent publication' for more information), that it is not under consideration for publication elsewhere, that its publication is approved by all authors and tacitly or explicitly by the responsible authorities where the work was carried out, and that, if accepted, it will not be published elsewhere in the same form, in English or in any other language, including electronically without the written consent of the copyright-holder. To verify originality, your article may be checked by the originality detection service Crossref Similarity Check.
**Preprints**
Please note that preprints can be shared anywhere at any time, in line with Elsevier's sharing policy. Sharing your preprint, e.g. on a preprint server, will not count as prior publication (see 'Multiple, redundant or concurrent publication' for more information).
**Use of inclusive language**
Inclusive language acknowledges diversity, conveys respect to all people, is sensitive to differences, and promotes equal opportunities. Articles should make no assumptions about the beliefs or commitments of any reader, should contain nothing which might imply that one individual is superior to another on the grounds of race, sex, culture or any other characteristic, and should use inclusive language throughout. Authors should ensure that writing is free from bias, for instance by using 'he or she', 'his/her' instead of 'he' or 'his', and by making use of job titles that are free of stereotyping (e.g. 'chairperson' instead of 'chairman' and 'flight attendant' instead of 'stewardess').
**Author contributions**
For transparency, we encourage authors to submit an author statement file outlining their individual contributions to the paper using the relevant CRediT roles: Conceptualization; Data curation; Formal analysis; Funding acquisition; Investigation; Methodology; Project administration; Resources; Software; Supervision; Validation; Visualization; Roles/Writing - original draft; Writing - review & editing. Authorship statements should be formatted with the names of authors first and CRediT role(s) following. More details and an example are available.
**Changes to authorship**
Authors are expected to consider carefully the list and order of authors before submitting their manuscript and provide the definitive list of authors at the time of the original submission. Any addition, deletion or rearrangement of author names in the authorship list should be made only before the manuscript has been accepted and only if approved by the journal Editor. To request such a change, the Editor must receive the following from the corresponding author: (a) the reason for the change in author list and (b) written confirmation (e-mail, letter) from all authors that they agree with the addition, removal or rearrangement. In the case of addition or removal of authors, this includes confirmation from the author being added or removed. Only in exceptional circumstances will the Editor consider the addition, deletion or rearrangement of authors after the manuscript has been accepted. While the Editor considers the request, publication of the manuscript will be suspended. If the manuscript has already been published in an online issue, any requests approved by the Editor will result in a corrigendum.
**Copyright**
Upon acceptance of an article, authors will be asked to complete a 'Journal Publishing Agreement' (see more information on this). An e-mail will be sent to the corresponding author confirming receipt of the manuscript together with a 'Journal Publishing Agreement' form or a link to the online version of this agreement.
Subscribers may reproduce tables of contents or prepare lists of articles including abstracts for internal circulation within their institutions. Permission of the Publisher is required for resale or distribution outside the institution and for all other derivative works, including compilations and translations. If excerpts from other copyrighted works are included, the author(s) must obtain written permission from the copyright owners and credit the source(s) in the article. Elsevier has preprinted forms for use by authors in these cases.
For gold open access articles: Upon acceptance of an article, authors will be asked to complete an 'Exclusive License Agreement' (more information). Permitted third party reuse of gold open access articles is determined by the author's choice of user license.
Author rights
As an author you (or your employer or institution) have certain rights to reuse your work. More information.
Elsevier supports responsible sharing
Find out how you can share your research published in Elsevier journals.
Role of the funding source
You are requested to identify who provided financial support for the conduct of the research and/or preparation of the article and to briefly describe the role of the sponsor(s), if any, in study design; in the collection, analysis and interpretation of data; in the writing of the report; and in the decision to submit the article for publication. If the funding source(s) had no such involvement then this should be stated.
Open access
Please visit our Open Access page for more information.
Elsevier Researcher Academy
Researcher Academy is a free e-learning platform designed to support early and mid-career researchers throughout their research journey. The "Learn" environment at Researcher Academy offers several interactive modules, webinars, downloadable guides and resources to guide you through the process of writing for research and going through peer review. Feel free to use these free resources to improve your submission and navigate the publication process with ease.
Language (usage and editing services)
Please write your text in good English (American or British usage is accepted, but not a mixture of these). Authors who feel their English language manuscript may require editing to eliminate possible grammatical or spelling errors and to conform to correct scientific English may wish to use the English Language Editing service available from Elsevier's Author Services.
Submission
Our online submission system guides you stepwise through the process of entering your article details and uploading your files. The system converts your article files to a single PDF file used in the peer-review process. Editable files (e.g., Word, LaTeX) are required to typeset your article for final publication. All correspondence, including notification of the Editor's decision and requests for revision, is sent by e-mail.
Please also note that the maximum length for a research paper is 15,000 words, with the exception of systematic literature review or systematic mapping studies, where the maximum length is 20,000 words. Also note that figures and tables count as 200 words each. Manuscripts longer than the respective limits will be sent back to the authors.
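The length arithmetic is easy to script as a rough pre-submission check. A minimal sketch, assuming a plain-text export of the manuscript and a hand-counted number of figures and tables (the file name and count below are hypothetical):
```
# Effective length = word count + 200 words for each figure and table (journal rule above)
words=$(wc -w < manuscript.txt)    # manuscript.txt: hypothetical plain-text export
figs_and_tables=10                 # assumed count; adjust to your paper
echo "Effective length: $(( words + 200 * figs_and_tables )) words (limit: 15000)"
```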
Referees
Please submit the names and institutional e-mail addresses of at least 3 potential referees. For more details, visit our Support site. Note that the editor retains the sole right to decide whether or not the suggested reviewers are used.
PREPARATION
NEW SUBMISSIONS
Submission to this journal proceeds totally online and you will be guided stepwise through the creation and uploading of your files. The system automatically converts your files to a single PDF file, which is used in the peer-review process. As part of the Your Paper Your Way service, you may choose to submit your manuscript as a single file to be used in the refereeing process. This can be a PDF file or a Word document, in any format or layout that can be used by referees to evaluate your manuscript. It should contain high enough quality figures for refereeing. If you prefer to do so, you may still provide all or some of the source files at the initial submission. Please note that individual figure files larger than 10 MB must be uploaded separately.
References
There are no strict requirements on reference formatting at submission. References can be in any style or format as long as the style is consistent. Where applicable, author(s) name(s), journal title/book title, chapter title/article title, year of publication, volume number/book chapter and the article number or pagination must be present. Use of DOI is highly encouraged. The reference style used by the journal will be applied to the accepted article by Elsevier at the proof stage. Note that missing data will be highlighted at proof stage for the author to correct.
Formatting requirements
There are no strict formatting requirements but all manuscripts must contain the essential elements needed to convey your manuscript, for example Structured Abstract, Keywords, Introduction, Materials and Methods, Results, Conclusions, Artwork and Tables with Captions. If your article includes any Videos and/or other Supplementary material, this should be included in your initial submission for peer review purposes. Divide the article into clearly defined sections.
Figures and tables embedded in text
Please ensure the figures and the tables included in the single file are placed next to the relevant text in the manuscript, rather than at the bottom or the top of the file. The corresponding caption should be placed directly below the figure or table.
SHORT COMMUNICATIONS
Short communications at IST are a means to quickly disseminate novel and impactful results. Short Communications have a limit of 2,500 words in length (approx. 4 pages; figures and tables count as 200 words each) and must have no more than 10 references.
To meet a vital need to rapidly disseminate current scientific findings, short communications will be reviewed using a streamlined process. Papers are peer reviewed and either (1) accepted as written or (2) rejected within four (4) weeks of submission. Minor revisions are allowed when an accept decision is likely. The review and decision process will primarily focus on (i) novelty, (ii) technical soundness, (iii) expected impact on the state of the art, and (iv) overall presentation and readability.
Peer review
This journal operates a single blind review process. All contributions will be initially assessed by the editor for suitability for the journal. Papers deemed suitable are then typically sent to a minimum of two independent expert reviewers to assess the scientific quality of the paper. The Editor is responsible for the final decision regarding acceptance or rejection of articles. The Editor's decision is final. More information on types of peer review.
REVISED SUBMISSIONS
Use of word processing software
Regardless of the file format of the original submission, at revision you must provide us with an editable file of the entire article. Keep the layout of the text as simple as possible. Most formatting codes will be removed and replaced on processing the article. The electronic text should be prepared in a way very similar to that of conventional manuscripts (see also the Guide to Publishing with Elsevier). See also the section on Electronic artwork.
To avoid unnecessary errors you are strongly advised to use the 'spell-check' and 'grammar-check' functions of your word processor.
LaTeX
You are recommended to use the Elsevier article class elsarticle.cls to prepare your manuscript and BibTeX to generate your bibliography. Our LaTeX site has detailed submission instructions, templates and other information.
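A minimal sketch of the usual build sequence for an elsarticle manuscript with a BibTeX bibliography (the file name `manuscript.tex` is hypothetical; `elsarticle.cls` ships with Elsevier's LaTeX bundle):
```
pdflatex manuscript.tex    # first pass writes the .aux file
bibtex manuscript          # builds the bibliography from the .aux entries
pdflatex manuscript.tex && pdflatex manuscript.tex   # resolve citations and cross-references
```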
Article structure
Subdivision - numbered sections
Divide your article into clearly defined and numbered sections. Subsections should be numbered 1.1 (then 1.1.1, 1.1.2, …), 1.2, etc. (the abstract is not included in section numbering). Use this numbering also for internal cross-referencing: do not just refer to 'the text'. Any subsection may be given a brief heading. Each heading should appear on its own separate line.
Essential title page information
- **Title.** Concise and informative. Titles are often used in information-retrieval systems. Avoid abbreviations and formulae where possible.
- **Author names and affiliations.** Please clearly indicate the given name(s) and family name(s) of each author and check that all names are accurately spelled. You can add your name between parentheses in your own script behind the English transliteration. Present the authors’ affiliation addresses (where the actual work was done) below the names. Indicate all affiliations with a lowercase superscript letter immediately after the author’s name and in front of the appropriate address. Provide the full postal address of each affiliation, including the country name and, if available, the e-mail address of each author.
- **Corresponding author.** Clearly indicate who will handle correspondence at all stages of refereeing and publication, also post-publication. This responsibility includes answering any future queries about Methodology and Materials. Ensure that the e-mail address is given and that contact details are kept up to date by the corresponding author.
- **Present/permanent address.** If an author has moved since the work described in the article was done, or was visiting at the time, a ‘Present address’ (or ‘Permanent address’) may be indicated as a footnote to that author’s name. The address at which the author actually did the work must be retained as the main, affiliation address. Superscript Arabic numerals are used for such footnotes.
**Highlights**
Highlights are optional yet highly encouraged for this journal, as they increase the discoverability of your article via search engines. They consist of a short collection of bullet points that capture the novel results of your research as well as new methods that were used during the study (if any). Please have a look at the examples here: example Highlights.
Highlights should be submitted in a separate editable file in the online submission system. Please use ‘Highlights’ in the file name and include 3 to 5 bullet points (maximum 85 characters, including spaces, per bullet point).
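The 85-character limit is easy to verify mechanically before upload. A minimal sketch, assuming one bullet per line in a hypothetical `highlights.txt`:
```
# Flag any Highlights bullet longer than 85 characters (including spaces)
awk 'length($0) > 85 {print "line " NR ": " length($0) " chars"}' highlights.txt
```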
**Abstract**
A concise and factual abstract of no more than 300 words, including headings, is required. To support this, the journal has used "structured abstracts" since July 1, 2009. A structured abstract should contain the following headings (as in-line or run-in headings in bold): Context, Objective, Method, Results and Conclusions. An abstract is often presented separately from the article, so it must be able to stand alone. For this reason, references should be avoided, but if essential, then cite the author(s) and year(s). Also, non-standard or uncommon abbreviations should be avoided, but if essential they must be defined at their first mention in the abstract itself. Please see below for an example of a structured abstract:
**Context:** Throughout an organisation, people have different responsibilities and work tasks; hence, it is probable that different roles have different priorities when it comes to what should be improved within a company. This has been found in previous studies in marketing, but is this true for software improvement as well?
**Objective:** This paper evaluates how different roles in a software development organization view different issues in software process improvement and if such differences could be used in order to provide more tailor-made process improvements within an organization and uses this as a working hypothesis.
**Method:** A quantitative questionnaire containing five different weighted questions related to software process improvement was developed. 84 employees from all levels of a Swedish telecommunication company were then approached, of which 63 responded.
**Results:** The different roles disagreed in three of the questions while they agreed in two of the questions. The disagreement was related to issues about importance of improvement, urgency of problems, and threat against successful process management, while the questions where the roles agreed focused on communication of the processes (documentation and teaching).
**Conclusion:** It is concluded that it is important to be aware of and take into account the different needs of different roles. This will make it possible to provide improvements tailored to specific roles, which will probably help to overcome resistance to process improvements. It is also important to look into other areas and companies (for example, marketing) where it could be beneficial when conducting process improvements.
Graphical abstract
Although a graphical abstract is optional, its use is encouraged as it draws more attention to the online article. The graphical abstract should summarize the contents of the article in a concise, pictorial form designed to capture the attention of a wide readership. Graphical abstracts should be submitted as a separate file in the online submission system. Image size: Please provide an image with a minimum of 531 × 1328 pixels (h × w) or proportionally more. The image should be readable at a size of 5 × 13 cm using a regular screen resolution of 96 dpi. Preferred file types: TIFF, EPS, PDF or MS Office files. You can view Example Graphical Abstracts on our information site. Authors can make use of Elsevier's Illustration Services to ensure the best presentation of their images and in accordance with all technical requirements.
Keywords
Immediately after the abstract, provide a maximum of 6 keywords, using British spelling and avoiding general and plural terms and multiple concepts (avoid, for example, 'and', 'of'). Be sparing with abbreviations: only abbreviations firmly established in the field may be eligible. These keywords will be used for indexing purposes.
Abbreviations
Define abbreviations that are not standard in this field in a footnote to be placed on the first page of the article. Such abbreviations that are unavoidable in the abstract must be defined at their first mention there, as well as in the footnote. Ensure consistency of abbreviations throughout the article.
Acknowledgements
Collate acknowledgements in a separate section at the end of the article before the references and do not, therefore, include them on the title page, as a footnote to the title or otherwise. List here those individuals who provided help during the research (e.g., providing language help, writing assistance or proof reading the article, etc.).
Formatting of funding sources
List funding sources in this standard way to facilitate compliance with funders' requirements:
Funding: This work was supported by the National Institutes of Health [grant numbers xxxx, yyyy]; the Bill & Melinda Gates Foundation, Seattle, WA [grant number zzzz]; and the United States Institutes of Peace [grant number aaaa].
It is not necessary to include detailed descriptions on the program or type of grants and awards. When funding is from a block grant or other resources available to a university, college, or other research institution, submit the name of the institute or organization that provided the funding.
If no funding has been provided for the research, please include the following sentence:
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Nomenclature and Units
All measurements and data should be given in SI units or, if SI units do not exist, in an internationally accepted unit. If you use any symbol or unit that is not generally recognised, please include an explanation the first time it is used.
Math formulae
Please submit math equations as editable text and not as images. Present simple formulae in line with normal text where possible and use the solidus (/) instead of a horizontal line for small fractional terms, e.g., X/Y. In principle, variables are to be presented in italics. Powers of e are often more conveniently denoted by exp. Number consecutively any equations that have to be displayed separately from the text (if referred to explicitly in the text).
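As a hedged illustration of these conventions in LaTeX (the surrounding sentence is invented for the example):
```
% Inline fraction with a solidus; variables are italic by default in math mode
... as the ratio $X/Y$ increases, the decay term $\exp(-t/\tau)$ dominates ...
```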
Footnotes
Footnotes should be used sparingly. Number them consecutively throughout the article. Many word processors build footnotes into the text, and this feature may be used. Should this not be the case, indicate the position of footnotes in the text and present the footnotes themselves separately at the end of the article.
Artwork
Electronic artwork
General points
• Make sure you use uniform lettering and sizing of your original artwork.
• Preferred fonts: Arial (or Helvetica), Times New Roman (or Times), Symbol, Courier.
• Number the illustrations according to their sequence in the text.
• Use a logical naming convention for your artwork files.
• Indicate per figure if it is a single, 1.5 or 2-column fitting image.
• For Word submissions only, you may still provide figures and their captions, and tables within a single file at the revision stage.
• Please note that individual figure files larger than 10 MB must be provided in separate source files.
A detailed guide on electronic artwork is available. You are urged to visit this site; some excerpts from the detailed information are given here.
Formats
Regardless of the application used, when your electronic artwork is finalized, please 'save as' or convert the images to one of the following formats (note the resolution requirements for line drawings, halftones, and line/halftone combinations given below):
EPS (or PDF): Vector drawings. Embed the font or save the text as 'graphics'.
TIFF (or JPEG): Color or grayscale photographs (halftones): always use a minimum of 300 dpi.
TIFF (or JPEG): Bitmapped line drawings: use a minimum of 1000 dpi.
TIFF (or JPEG): Combinations bitmapped line/half-tone (color or grayscale): a minimum of 500 dpi is required.
Please do not:
• Supply files that are optimized for screen use (e.g., GIF, BMP, PICT, WPG); the resolution is too low.
• Supply files that are too low in resolution.
• Submit graphics that are disproportionately large for the content.
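The resolution requirements above are straightforward to check before upload. A minimal sketch using ImageMagick (the file name is hypothetical, and the reported units depend on the file's metadata):
```
# Print the stored resolution (and its units) of a figure file
identify -format "%x x %y %U\n" figure1.tiff
```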
Color artwork
Please make sure that artwork files are in an acceptable format (TIFF (or JPEG), EPS (or PDF), or MS Office files) and with the correct resolution. If, together with your accepted article, you submit usable color figures then Elsevier will ensure, at no additional charge, that these figures will appear in color online (e.g., ScienceDirect and other sites) regardless of whether or not these illustrations are reproduced in color in the printed version. For color reproduction in print, you will receive information regarding the costs from Elsevier after receipt of your accepted article. Please indicate your preference for color: in print or online only. Further information on the preparation of electronic artwork.
Figure captions
Ensure that each illustration has a caption. A caption should comprise a brief title (not on the figure itself) and a description of the illustration. Keep text in the illustrations themselves to a minimum but explain all symbols and abbreviations used.
Tables
Please submit tables as editable text and not as images. Tables can be placed either next to the relevant text in the article, or on separate page(s) at the end. Number tables consecutively in accordance with their appearance in the text and place any table notes below the table body. Be sparing in the use of tables and ensure that the data presented in them do not duplicate results described elsewhere in the article. Please avoid using vertical rules and shading in table cells.
References
Citation in text
Please ensure that every reference cited in the text is also present in the reference list (and vice versa). Any references cited in the abstract must be given in full. Unpublished results and personal communications are not recommended in the reference list, but may be mentioned in the text. If these references are included in the reference list they should follow the standard reference style of the
journal and should include a substitution of the publication date with either 'Unpublished results' or 'Personal communication'. Citation of a reference as 'in press' implies that the item has been accepted for publication.
**Reference links**
Increased discoverability of research and high quality peer review are ensured by online links to the sources cited. In order to allow us to create links to abstracting and indexing services, such as Scopus, CrossRef and PubMed, please ensure that data provided in the references are correct. Please note that incorrect surnames, journal/book titles, publication year and pagination may prevent link creation. When copying references, please be careful as they may already contain errors. Use of the DOI is highly encouraged.
A DOI is guaranteed never to change, so you can use it as a permanent link to any electronic article. An example of a citation using DOI for an article not yet in an issue is: VanDecar J.C., Russo R.M., James D.E., Ambeh W.B., Franke M. (2003). Aseismic continuation of the Lesser Antilles slab beneath northeastern Venezuela. Journal of Geophysical Research, https://doi.org/10.1029/2001JB000884. Please note the format of such citations should be in the same style as all other references in the paper.
**Web references**
As a minimum, the full URL should be given and the date when the reference was last accessed. Any further information, if known (DOI, author names, dates, reference to a source publication, etc.), should also be given. Web references can be listed separately (e.g., after the reference list) under a different heading if desired, or can be included in the reference list.
**Data references**
This journal encourages you to cite underlying or relevant datasets in your manuscript by citing them in your text and including a data reference in your Reference List. Data references should include the following elements: author name(s), dataset title, data repository, version (where available), year, and global persistent identifier. Add [dataset] immediately before the reference so we can properly identify it as a data reference. The [dataset] identifier will not appear in your published article.
**References in a special issue**
Please ensure that the words 'this issue' are added to any references in the list (and any citations in the text) to other articles in the same Special Issue.
**Reference management software**
Most Elsevier journals have their reference template available in many of the most popular reference management software products. These include all products that support Citation Style Language styles, such as Mendeley. Using citation plug-ins from these products, authors only need to select the appropriate journal template when preparing their article, after which citations and bibliographies will be automatically formatted in the journal's style. If no template is yet available for this journal, please follow the format of the sample references and citations as shown in this Guide. If you use reference management software, please ensure that you remove all field codes before submitting the electronic manuscript. More information on how to remove field codes from different reference management software.
Users of Mendeley Desktop can easily install the reference style for this journal by clicking the following link:
http://open.mendeley.com/use-citation-style/information-and-software-technology
When preparing your manuscript, you will then be able to select this style using the Mendeley plug-ins for Microsoft Word or LibreOffice.
**Reference formatting**
There are no strict requirements on reference formatting at submission. References can be in any style or format as long as the style is consistent. Where applicable, author(s) name(s), journal title/book title, chapter title/article title, year of publication, volume number/book chapter and the article number or pagination must be present. Use of DOI is highly encouraged. The reference style used by the journal will be applied to the accepted article by Elsevier at the proof stage. Note that missing data will be highlighted at proof stage for the author to correct. If you do wish to format the references yourself they should be arranged according to the following examples:
**Reference style**
**Text:** Indicate references by number(s) in square brackets in line with the text. The actual authors can be referred to, but the reference number(s) must always be given.
Example: '..... as demonstrated [3,6]. Barnaby and Jones [8] obtained a different result ....'
**List:** Number the references (numbers in square brackets) in the list in the order in which they appear in the text.
Examples:
Reference to a journal publication:
Reference to a journal publication with an article number:
Reference to a book:
Reference to a chapter in an edited book:
Reference to a website:
Reference to a dataset:
Video
Elsevier accepts video material and animation sequences to support and enhance your scientific research. Authors who have video or animation files that they wish to submit with their article are strongly encouraged to include links to these within the body of the article. This can be done in the same way as a figure or table by referring to the video or animation content and noting in the body text where it should be placed. All submitted files should be properly labeled so that they directly relate to the video file's content. In order to ensure that your video or animation material is directly usable, please provide the file in one of our recommended file formats with a preferred maximum size of 150 MB per file, 1 GB in total. Video and animation files supplied will be published online in the electronic version of your article in Elsevier Web products, including ScienceDirect. Please supply 'stills' with your files: you can choose any frame from the video or animation or make a separate image. These will be used instead of standard icons and will personalize the link to your video data. For more detailed instructions please visit our video instruction pages. Note: since video and animation cannot be embedded in the print version of the journal, please provide text for both the electronic and the print version for the portions of the article that refer to this content.
Data visualization
Include interactive data visualizations in your publication and let your readers interact and engage more closely with your research. Follow the instructions here to find out about available data visualization options and how to include them with your article.
Supplementary material
Supplementary material such as applications, images and sound clips, can be published with your article to enhance it. Submitted supplementary items are published exactly as they are received (Excel or PowerPoint files will appear as such online). Please submit your material together with the article and supply a concise, descriptive caption for each supplementary file. If you wish to make changes to supplementary material during any stage of the process, please make sure to provide an updated file. Do not annotate any corrections on a previous version. Please switch off the 'Track Changes' option in Microsoft Office files as these will appear in the published version.
Research data
This journal encourages and enables you to share data that supports your research publication where appropriate, and enables you to interlink the data with your published articles. Research data refers to the results of observations or experimentation that validate research findings. To facilitate reproducibility and data reuse, this journal also encourages you to share your software, code, models, algorithms, protocols, methods and other useful materials related to the project.
Below are a number of ways in which you can associate data with your article or make a statement about the availability of your data when submitting your manuscript. If you are sharing data in one of these ways, you are encouraged to cite the data in your manuscript and reference list. Please refer to the "References" section for more information about data citation. For more information on depositing, sharing and using research data and other relevant research materials, visit the research data page.
Data linking
If you have made your research data available in a data repository, you can link your article directly to the dataset. Elsevier collaborates with a number of repositories to link articles on ScienceDirect with relevant repositories, giving readers access to underlying data that gives them a better understanding of the research described.
There are different ways to link your datasets to your article. When available, you can directly link your dataset to your article by providing the relevant information in the submission system. For more information, visit the database linking page.
For supported data repositories a repository banner will automatically appear next to your published article on ScienceDirect.
In addition, you can link to relevant data or entities through identifiers within the text of your manuscript, using the following format: Database: xxxx (e.g., TAIR: AT1G01020; CCDC: 734053; PDB: 1XFN).
Mendeley Data
This journal supports Mendeley Data, enabling you to deposit any research data (including raw and processed data, video, code, software, algorithms, protocols, and methods) associated with your manuscript in a free-to-use, open access repository. During the submission process, after uploading your manuscript, you will have the opportunity to upload your relevant datasets directly to Mendeley Data. The datasets will be listed and directly accessible to readers next to your published article online.
For more information, visit the Mendeley Data for journals page.
Data in Brief
You have the option of converting any or all parts of your supplementary or additional raw data into one or multiple data articles, a new kind of article that houses and describes your data. Data articles ensure that your data is actively reviewed, curated, formatted, indexed, given a DOI and publicly available to all upon publication. You are encouraged to submit your article for Data in Brief as an additional item directly alongside the revised version of your manuscript. If your research article is accepted, your data article will automatically be transferred over to Data in Brief where it will be editorially reviewed and published in the open access data journal, Data in Brief. Please note an open access fee of 600 USD is payable for publication in Data in Brief. Full details can be found on the Data in Brief website. Please use this template to write your Data in Brief.
MethodsX
You have the option of converting relevant protocols and methods into one or multiple MethodsX articles, a new kind of article that describes the details of customized research methods. Many researchers spend a significant amount of time on developing methods to fit their specific needs or setting, but often without getting credit for this part of their work. MethodsX, an open access journal, now publishes this information in order to make it searchable, peer reviewed, citable and reproducible. Authors are encouraged to submit their MethodsX article as an additional item directly alongside the revised version of their manuscript. If your research article is accepted, your methods article will automatically be transferred over to MethodsX where it will be editorially reviewed. Please note an open access fee is payable for publication in MethodsX. Full details can be found on the MethodsX website. Please use this template to prepare your MethodsX article.
Data statement
To foster transparency, we encourage you to state the availability of your data in your submission. This may be a requirement of your funding body or institution. If your data is unavailable to access or unsuitable to post, you will have the opportunity to indicate why during the submission process, for example by stating that the research data is confidential. The statement will appear with your published article on ScienceDirect. For more information, visit the Data Statement page.
AFTER ACCEPTANCE
**Online proof correction**
To ensure a fast publication process of the article, we kindly ask authors to provide us with their proof corrections within two days. Corresponding authors will receive an e-mail with a link to our online proofing system, allowing annotation and correction of proofs online. The environment is similar to MS Word: in addition to editing text, you can also comment on figures/tables and answer questions from the Copy Editor. Web-based proofing provides a faster and less error-prone process by allowing you to directly type your corrections, eliminating the potential introduction of errors.
If preferred, you can still choose to annotate and upload your edits on the PDF version. All instructions for proofing will be given in the e-mail we send to authors, including alternative methods to the online version and PDF.
We will do everything possible to get your article published quickly and accurately. Please use this proof only for checking the typesetting, editing, completeness and correctness of the text, tables and figures. Significant changes to the article as accepted for publication will only be considered at this stage with permission from the Editor. It is important to ensure that all corrections are sent back to us in one communication. Please check carefully before replying, as inclusion of any subsequent corrections cannot be guaranteed. Proofreading is solely your responsibility.
**Offprints**
The corresponding author will, at no cost, receive a customized Share Link providing 50 days free access to the final published version of the article on ScienceDirect. The Share Link can be used for sharing the article via any communication channel, including email and social media. For an extra charge, paper offprints can be ordered via the offprint order form which is sent once the article is accepted for publication. Both corresponding and co-authors may order offprints at any time via Elsevier's Author Services. Corresponding authors who have published their article gold open access do not receive a Share Link as their final published version of the article is available open access on ScienceDirect and can be shared through the article DOI link.
**AUTHOR INQUIRIES**
Visit the Elsevier Support Center to find the answers you need. Here you will find everything from Frequently Asked Questions to ways to get in touch.
You can also check the status of your submitted article or find out when your accepted article will be published.
Getting started: performing basic operations on Beagle2
- Basics about the system
- Basics about programming environment
- Modules and Programming Environment (PrgEnv)
- How to work on the filesystem
- Description of the filesystem
- HIPAA
- Lustre
- Useful commands on lustre
- Striping
- Useful commands for striping
- How to move data to and from Beagle
- How to submit jobs
- Projects
- Basics about job submission on Beagle2
- Job Submission Best Practices
- Batch jobs
- Commands for submitting and inquiring about jobs
- PBS (batch) scripts
- Aprun
- Memory usage
- Running Swift on Beagle2
- Additional resources:
- In case you need help/support
Note: All policies and approaches are subject to change. While we will do our best to keep users informed of such changes, it is not always possible to do so.
### Basics about programming environment
The operating system on Beagle2 is the native Cray Linux Environment (CLE).
On login nodes, CLE is very similar to a conventional Linux environment.
On compute nodes, it is available as:
- CLE Static (which only allows the utilization of statically linked software, and it is the basic OS used for large simulations in the "Extreme Scalability Mode" (ESM))
- CLE with Dynamic Shared Objects and Libraries (DSL) — see How to develop/port programs for/to Beagle
The `xtnodestat` command shows:
- The current configuration of Beagle2's nodes: which blades are compute and which are service, and where they are located in the machine.
- The current workload of the machine.
Type `man xtnodestat` for more details. Please note that nodes shown as free by `xtnodestat` are not always available for your use.
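A minimal usage sketch (the output format varies with the CLE release, so none is reproduced here):
```
# Show node layout and current workload, paging through the output
xtnodestat | less
# Full explanation of the status symbols
man xtnodestat
```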
### Modules and Programming Environment (PrgEnv)
Programming environments support the creation, modification, execution and debugging of programs. Programming Environments available on Beagle2 are: Cray Programming Environment and the GNU programming environment. The programming environment is managed by the `module` command. To learn more about modules see this page.
When working with the Cray Linux Environment, you will usually have to load a "module", see Environment User's Guide
**Module** is a "package" on a Cray system that enables you to dynamically modify the user environment by installing or uninstalling "modulefiles". Module contains commands to configure the shell environment for a particular compiler or library. It allows multiple versions of software to be installed simultaneously; the user can choose which version to use while compiling code or running their jobs.
**Default compiler** on Beagle2 is the Cray compiler (PrgEnv-cray). If you want to switch to PrgEnv-gnu:
```
module swap PrgEnv-cray PrgEnv-gnu
```
The **module** command provides a number of capabilities to the user including:
- `module load` load a module
- `module unload` unload a module
- `module swap` unload a module file and load another (module switch produce the same effect)
- `module list` listing which module files are currently loaded
- `module avail` determining which module files can be loaded; lists all available modules on the system
- `module use dir` prepend directory `dir` to the MODULEPATH environment variable, i.e., add it to the list of places where the module command looks for modulefiles
- `module use --append dir` will append the directory to MODULEPATH.
- `module unuse dir` will remove directory `dir` from the MODULEPATH environment variable.
**Note**: in situations where a new compiler has to be utilized, `module swap` might be a more appropriate strategy; see the session sketch below.
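A minimal session sketch tying these subcommands together (the library module name is an assumption; verify what is installed with `module avail`):
```
module list                          # show currently loaded modules
module avail PrgEnv                  # list the available programming environments
module swap PrgEnv-cray PrgEnv-gnu   # switch from the Cray to the GNU compilers
module load cray-libsci              # assumed library module; confirm it exists first
```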
The modules that a user has loaded persist only for the duration of the login session.
To add modules permanently to your environment you can add module commands to a file in your home directory called `.modules`. For example if you want to always use the GNU programming environment you would add:
```
ams@login1:~> cat ~/.modules
module unload PrgEnv-cray
module load PrgEnv-gnu
```
### How to work on the filesystem
#### Description of the filesystem
Beagle2 now mounts the following filesystems:
- **/home**: CI home directories **(read-only on compute nodes, will soon be removed)**
- Reliable for small storage of data like source code, shell scripts, etc.
- Slow. It is not tuned for high performance parallel jobs.
- **Should not be used for calculations on Beagle!**
- **10 GB quotas** and they are enforced!
- Referenced by the environment variable `$HOME`
- **/lustre/beagle2**: local Lustre filesystem **(this is where batch jobs should do most of their I/O)**
- It's a parallel distributed file system.
- Scratch filesystem. NO BACKUP.
- Files in Lustre are subject to purging. **It is the users’ responsibility to protect themselves from data loss!**
- Referenced by the environment variable `$LUSTREDIR`
- **450TB of usable space**
- While there are currently no restrictions in terms of usage and capacity, these conditions will likely change.
- Allows users to control the **striping parameters** when storing data on the filesystem. Tuning these parameters correctly can lead to better I/O performance; see the striping sketch after this section.
- **/soft**: local Cray software repository (read-only)
**NOTE**: Home directories are not mounted on the compute nodes (for performance reasons), so you'll always want to be working out of the Lustre scratch filesystem (`/lustre/beagle2/<your_user_name>`). Make sure to copy everything you're working on out of your home directory to your Lustre directory and work out of that Lustre directory whenever you're on Beagle.
- **/ufs**: internal filesystem for ALPS scheduler (read-write)
- **/tmp, /var, /opt, /dev** and so on are in general read-only from any node, and usually even more restricted on the compute nodes.
**NOTE:** The CI Systems Group reserves the right to rebuild the Lustre filesystem at any time. While best efforts are always made to recover data, the primary focus will be to return the filesystem to availability as quickly as possible. Advance notice will be given as early as possible. Aside from unexpected disaster recovery, all attempts will be made to limit outages to necessary maintenance, reconfiguration, and reliability testing.
If you encounter undesirable behavior with the Lustre filesystem, please contact beagle-support@ci.uchicago.edu. (It is assumed that the filesystem will need some tuning as its use and activity increase.)
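A minimal sketch of the striping commands referenced above (the paths, stripe count, and stripe size are assumptions chosen to illustrate the flags; check `man lfs` on Beagle2 for the local defaults):
```
# Show how an existing file or directory is striped
lfs getstripe $LUSTREDIR/myfile
# New files created under 'bigdata' will be striped across 8 OSTs in 4 MiB chunks
# (-S is the stripe-size flag on recent Lustre releases; older ones use lowercase -s)
lfs setstripe -c 8 -S 4m $LUSTREDIR/bigdata
```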
### Research and HIPAA Privacy Protections
#### Content Authors
- Reid Cushman, PhD
CITI Program
This module is for educational purposes only. It is not designed to provide legal advice or legal guidance. You should consult with your organization's attorneys if you have questions or concerns about the relevant laws and regulations discussed in this module.
#### HIPAA’s Regulatory Scope
HIPAA’s protections focus on “individually identifiable health information,” which HIPAA defines as information in “any form or medium” that “[r]elates to the past, present, or future physical or mental health or condition of an individual; the provision of healthcare to an individual; or the past, present, or future payment for the provision of health care to an individual” (Security and Privacy 2013).
HIPAA’s protections reach only a subset of individually identifiable health information — formally called protected health information or simply “PHI” — created in or by what HIPAA calls covered entities. Covered entities include individual healthcare providers, healthcare provider organizations, health plans, and health information clearinghouses that engage in electronic healthcare transactions (see Health and Human Services Covered Entity Decision Charts). HIPAA’s protections for PHI extend to non-U.S. citizens’ data as well.
Some identifiable health information used for research originates outside of covered entities, and so may not be covered by HIPAA. However, you must check with your organization’s privacy authorities before assuming your situation falls outside HIPAA’s scope.
#### What Kinds of Users and Uses Are Covered?
HIPAA regulations set requirements for use and disclosure of PHI by covered entities, and by extension on all members of a covered entity’s workforce that have contact with PHI. HIPAA’s data protection requirements also apply “in the same manner” to business associates (and by extension to the workforce of such business associates) that perform functions using PHI on a covered entity’s behalf.
Researchers may be part of the workforce of a covered entity, or may be covered entities themselves if they are also healthcare providers. If so, they are directly affected by the HIPAA’s research rules. Researchers who meet neither of these conditions are still indirectly affected by HIPAA rules if a covered entity is the source of their data and those data meet the definition of PHI.
HIPAA’s rules on use and disclosure are generally “purpose-based” — that is, the intended use sets the rules more than the type of data itself. The research rules discussed here are different from those for, say, treatment or treatment-related payments (relatively liberal), or for marketing or fundraising (relatively strict). A few types of data, such as psychotherapy notes, do receive special protection under HIPAA. State laws also often have many categories of data with special protections, with which you should be familiar (or be in contact with an organizational official who has that knowledge).
#### What Constitutes "Research"?
Like the Common Rule, HIPAA defines research as a "systematic investigation, including research development, testing, and evaluation, designed to develop and contribute to generalizable knowledge" (Protection of Human Subjects 2009; Security and Privacy 2013). Note that some kinds of investigative activities that use patient data are excluded in this definition. For example:
1. Quality assessment and improvement, including outcomes evaluation and development of clinical guidelines or protocols, fall under the category of healthcare operations under HIPAA – provided the primary aim is not obtaining generalizable knowledge.
2. Activities that aim primarily for generalizable knowledge of population health can fall into the category of public health activity under HIPAA.
The regulations are complex. So, as with the covered entity status, a determination by an organization’s IRB, designated privacy official(s), or legal counsel is usually required to assure that an activity is “not research” and therefore subject to different HIPAA rules.
#### Who Enforces the HIPAA Research Protections?
A covered entity may choose to rely on an IRB to assess compliance with both the FDA and Common Rule requirements and HIPAA research requirements. Alternatively, HIPAA provides that covered entities may create a Privacy Board to handle some research-related issues, notably determinations about eligibility for waivers, alterations, and exemptions from authorization processes. A covered entity may also leave some decisions about compliance with the research provisions of HIPAA to its designated privacy officer. It is critical that you understand the allocation of responsibilities at your organization.
Research subjects, like patients generally, have recourse to both your organization’s authorities and to federal and state agencies in the event they wish to file complaints about or have questions regarding an organization’s protective efforts.
As with any other planned activity related to protected health information, research must be mentioned in a privacy notice that HIPAA requires be provided by covered entities to their patients/customers. The privacy notice must include the ways in which data subjects may register complaints and report problems, either locally or with federal authorities. Every researcher should be familiar with their organization’s privacy notice, particularly the persons or departments it identifies as enforcement authorities for the organization.
#### HIPAA Research-Related Rules
If the data in question meet the definition of PHI and are being used for purposes that fall within HIPAA’s definition of research, HIPAA generally requires explicit written authorization (consent) from the data subject for research uses.
However, HIPAA allows for research-related access to individuals’ identifiable health data without authorization under certain circumstances:
1. The research involves only minimal risk.
2. The research is used solely for activities preparatory to research.
3. Only deceased individuals’ information is used.
4. It is “grandfathered” research where all legal permissions were in place before HIPAA took effect.
Data that do not identify individuals can be used for research without specific authorization if:
1. Only fully de-identified data are used.
2. A “limited data set” is used, under an approved “data use agreement.”
Each of these conditions is described in the sections below.
#### Waivers or Alterations of the Authorization Requirement Due to Minimal Risk
An organization’s IRB or Privacy Board (and in some organizations a designated privacy official) may determine that a waiver or alteration of the authorization requirement is appropriate. The conditions are modeled on the criteria for a waiver of informed consent in the Common Rule.
Use or disclosure of the PHI must involve no more than minimal risk to the privacy of the research subjects, and include the following elements:
- An adequate plan to protect any data identifiers from improper use and disclosure.
- An adequate plan to destroy data identifiers at the earliest opportunity consistent with conduct of the research (unless there is a health or research justification for retaining the identifiers, or such retention is otherwise required by law).
- Adequate written assurances that the PHI will not be reused or disclosed to any other individual or entity, except as required by law for authorized oversight of the research project, or for other research for which the use or disclosure of PHI would be permitted by HIPAA.

In addition, the research could not practicably be conducted without access to and use of the PHI, nor without the waiver or alteration of the authorization.
More about what counts as a data identifier is provided in the sections below on de-identified data and limited data sets.
#### Activities Preparatory to Research; Decedents’ Information Exceptions
HIPAA provides for two more exceptions to the authorization requirement for identifiable data:
- Where the PHI will be used solely for reviews preparatory to research (for example, for protocol development or identifying potential subjects) and will not leave the covered entity.
- Where the PHI refers solely to deceased individuals (the covered entity may ask for documentation of death of all data subjects).
In each case, the researcher must make a written or oral representation to the covered entity’s designated officials -- someone from the IRB, the Privacy Board, or a privacy officer/designee -- that such access is necessary for the research purposes; those officials then determine the appropriateness of the request.
#### Grandfathered Research
If all informed consents and other legal permissions required at the time were in place before HIPAA took effect (April 2003 in most cases), and have not changed since, a new HIPAA authorization is not required even for identified data. Obviously, this is no longer a commonly used pathway to bypass authorizations.
#### De-identified Data
A researcher may use fully de-identified health data without any authorization from individual data subjects. As the name implies, de-identified information must have all direct and indirect identifiers removed, to eliminate (or at least make highly improbable) re-identification using statistical techniques. De-identified information is no longer considered PHI, because by definition it is no longer individually identifiable.
HHS issued its Guidance Regarding Methods for De-identification of Protected Health Information in 2012. This guidance provides a detailed description of alternative methods, and should be considered required reading for anyone contemplating a de-identification strategy.
Under the HIPAA regulations, successful de-identification may be based on an “Expert Determination” by an “individual with appropriate knowledge” of statistical techniques who has analyzed the data set and can attest that the risk of re-identification is “very small.” (Very small is not defined in the regulations.) Alternatively, covered entities may use the “Safe Harbor” method of removing 18 types of identifying elements specified in the HIPAA regulations. In either case, the covered entity must have no actual knowledge that re-identification is possible or likely, for example by linking to other known data sets.
#### Limited Data Sets and Data Use Agreements
De-identification trades privacy protection for research productivity. Sometimes the trade-off is too steep, and a fully de-identified data set will not meet a research need. As an alternative, a covered entity may disclose PHI in a limited data set (LDS) to a researcher who has entered into an appropriate data use agreement. An LDS must have all direct identifiers removed; however, it may still include information that could “indirectly” identify the subject using statistical methods. That is, the disclosure risk is greater than “very small.”
The data use agreement for an LDS must:
- Delineate the permitted uses and disclosures of such information by the recipient, consistent with the purposes of the research;
- Limit the individuals who can use or receive the data; and
- Require the recipient to agree not to re-identify the data or contact the individuals.
#### Minimum Necessary Uses and Disclosures
Uses and disclosures of data for research that are allowed to bypass the authorization requirement are still subject to the minimum necessary standard -- that is, the uses/disclosures must be no more than the minimum required for the described research purpose. A covered entity may rely on a researcher’s documentation -- or the assessment of an IRB or Privacy Board -- that the information requested is the minimum necessary for the research purpose.
By contrast, research information obtained using an authorization is not bound by the minimum necessary standard -- on the theory that the data subject has given explicit permission in accordance with the signed authorization. However, be aware that while HIPAA may not require a minimum necessary justification at all times, an IRB's evaluation of risks and burdens on human research subjects arguably does.
#### Disclosure Accounting
Individuals whose health information is covered by HIPAA have the right to an “accounting of disclosures” of their PHI. In this context, a “disclosure” occurs when PHI is communicated to an outside individual or entity, including another covered entity. Access within the covered entity -- for example, by members of a research team who are all part of the same organization’s workforce -- is considered a “use” not a disclosure. There is no accounting requirement for these internal uses for research.
In addition to being limited to external disclosures, disclosure accounting is not required for:
- Disclosures made under authority of a consent/authorization, on the theory that individuals are aware of what they have expressly permitted for that research.
- Disclosures to the individual directly about him/herself.
- Limited data set disclosures subject to a data use agreement.
- De-identified information that no longer qualifies as PHI.
When an accounting is required, it must include disclosures during the six years prior to the data subject’s request, and include certain types of information depending on the size of the protocol.
While HIPAA may not require it, many organizations will require that researchers maintain logs of all disclosures from research data collections as a security measure, including transfers to other individuals within the covered entity. Electronic data storage will increasingly offer this capability cheaply and automatically; older collections will require manual logging.
#### Characteristics of Authorizations
If a research activity meets none of the bypass criteria above, an authorization (consent) is required. When required, authorizations must be:
- In “plain language,” so that individuals can understand the information contained in the form and are therefore able to make an informed decision.
- Executed in writing and signed by the research subject (or an authorized personal representative).
Authorizations must include a specific description of the PHI to be used or disclosed, the name(s) or other identification of individuals involved in the research, and description of each purpose of the requested use or disclosure.
HIPAA authorizations are normally required to have an explicit expiration date. In the context of research, it is sufficient to specify an expiration “event” -- such as “the end of the study.” A research authorization can also have no expiration date at all, as would be the case for a research database or repository, or other future use, though this absence must be clearly indicated.
HIPAA authorizations cannot normally be combined with other types of documents (such as a privacy notice). However, HIPAA research authorizations can be combined with any other legal permission related to the study, including an informed consent that meets Common Rule or FDA regulations or another type of authorization.
As with any informed consent document, researchers are strongly urged to rely on standard models rather than creating their own authorization forms, lest they make a critical error in format or content. Most organizations will already have standard documents available; check with your IRB, Privacy Board, or privacy officer.
If there are multiple documents that limit information use or disclosure, the most restrictive one applies. Whether in a single instrument or several, the core requirement is to provide enough information for the data subject to make an informed choice.
#### Revocations of Authorizations
Like other kinds of HIPAA authorizations, those for research may be revoked by the subject at any time, provided that the revocation is in writing. Revocation of an authorization is not valid to the extent that the covered entity has taken actions relying on it, such as in the provision of prior treatment. Such revocations may be limited “as necessary to maintain the integrity of the research study.”
#### Recruiting into Research
It is still permissible under HIPAA to discuss recruitment into research with patients for whom such involvement might be appropriate. This common practice is considered to fall within the definition of treatment, at least when the conversation is undertaken by one of the patient’s healthcare providers.
Remember, however, that a data subject’s information cannot generally be disclosed to a third party -- even another care provider -- for a research use without an authorization from the individual or an approved waiver, alteration, or exception to authorization.
HHS guidance on HIPAA has affirmed that recruitment efforts can qualify as a “preparatory to research” activity that would allow a researcher to identify potential research participants, and even contact them for purposes of seeking their authorization (HHS 2004). However, such efforts must be approved, and the PHI used for this purpose cannot leave the covered entity during this activity.
"Retrospective" Research
As electronic health data collections grow in scale and scope it is an increasingly common practice to “browse” them, looking for interesting patterns that could translate into research possibilities. Indeed, bio-repositories of tissue and data created just for this purpose are increasingly common, and the scope and scale of such repositories grow daily. (Retrospective analysis of paper charts hasn’t gone away either.)
Use or disclosure of PHI for retrospective research studies may be done only with patient authorization -- or with a waiver, alteration, or exception determination from an IRB or Privacy Board. It should not be difficult to meet one of the criteria for the latter for such exploratory efforts. Alternatively, the data collection itself may have been created with an explicit authorization from subjects for future research. However, remember that you generally cannot proceed on your own without some approval from an IRB, Privacy Board, or other designated governing entity.
#### Security Rule
Efforts to meet the Common Rule, FDA, and HIPAA regulations’ privacy requirements are only part of the researcher’s task. HIPAA also has a Security Rule that complements its Privacy Rule. The Security Rule requires that PHI collections receive appropriate information security protections for as long as they exist. If you do not know how to do that, find a resource at your organization that does. In addition to a privacy officer, HIPAA requires designation of a security official, who should be able to help assure appropriate data protection.
It is important to note that HIPAA’s requirements include reporting of security breaches and data exposures. In addition to notifying affected individuals, HHS must be notified of exposures of PHI; in addition to potentially triggering an investigation, exposures involving more than 500 persons are posted on the HHS “Breach Portal” website for all the world to see. State laws may also include breach-reporting requirements.
#### Conclusion
Although the specifics are lengthy, the net administrative burden that HIPAA adds to existing Common Rule and FDA regulations is generally not a large one. Compared to protocol approval generally -- and the details of informed consent particularly -- a HIPAA authorization is relatively easy. Additionally, as noted, there are several pathways around the authorization requirement.
To approve a study under the Common Rule and FDA requirements, IRBs have long been required to determine that there are adequate provisions to protect the privacy of subjects and to maintain the confidentiality of data. Where researchers are meeting those requirements, HIPAA should change very little beyond the additional “paperwork.”
As noted, HIPAA applies to covered entities and their business associates, and to the PHI that originates in or by them. Research conducted by organizations that do not qualify as such, using data that does not derive from any covered entity source, is not reached by HIPAA. In such cases, the requirements of the Common Rule and FDA remain as protections for human subjects' privacy and other interests. The issue then is not "PHI" but what the Common Rule defines as identifiable "private information."
Here are the key points:
1. HIPAA privacy protections supplement those of other federal regulations (viz., the Common Rule and FDA), state law, and certification/accreditation requirements.
2. HIPAA protects identifiable health information (PHI) originating or held in covered entities or their business associates. De-identified data is not protected, and not all identifiable health information is considered PHI either.
3. Under HIPAA, research activity using PHI generally requires authorization. However, there are several alternatives that allow bypassing the authorization requirement.
4. Minimum necessary standards, disclosure accounting requirements, and the characteristics of authorizations (when required) must be understood by researchers when HIPAA applies.
5. Privacy protection includes a commitment to data security throughout the lifecycle of your data.
6. If you are unsure about the particulars at your organization or have questions, consult with your organization's IRB, Privacy Board, or privacy official. For data security issues, consult with your organization's security official.
#### Acknowledgements
The author would like to thank the following individuals for their editorial and content review of this and prior versions: Jaime Arango, Evelyne Bital, Helenemarie Blake, Joey Casanova, Anita Cava, Amanda Coltes-Rojas, Ken Goodman, Karen Hansen, Margaret Rankovic, Daniel Smith, and Sally Mann.
### Additional Resources
**Lustre**
Useful commands on Lustre:
- `lfs df`: display system configuration information
- `lfs find [directory | file name]`: find a file or directory
- `lfs quota -u $LOGNAME /login/beagle`: display your quota
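For example (a sketch; the search pattern is illustrative, and the quota path is as given above):

```bash
lfs df -h                             # per-OST/MDT usage in human-readable units
lfs find $LUSTREDIR -name "*.dat"     # locate matching files under your scratch directory
lfs quota -u $LOGNAME /login/beagle   # show your current usage against the quota
```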
**Striping**
Useful commands for striping:
- `lfs setstripe`: create a file or directory with a specific striping pattern
- `lfs getstripe`: display file striping patterns

To find out more, use `man lfs`.
The default striping is 2: each file created is split across 2 OSTs (potentially doubling read/write bandwidth).
- Usually good values are between one and four.
- Striping can be set at either the file or the directory level.
- The stripe pattern of an existing file cannot be changed.
- The stripe pattern of a directory can be changed.
- Striping must be set on a directory before files in it are created.
- New files inherit the striping of the parent directory.
**NOTE**: Striping over too many OSTs will cause unnecessary overhead and lead to a loss in performance! We do NOT recommend changing striping settings unless you absolutely know what you are doing. The striping configuration is already set to Cray recommendations for a volume of this size.
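If you do have such a case (for example, many processes writing to one large shared file), the usual pattern is to set the stripe count on a directory before creating files in it. A minimal sketch, with an illustrative directory name and stripe count:

```bash
# Create a directory whose new files will be striped across 4 OSTs
mkdir -p $LUSTREDIR/wide_io
lfs setstripe -c 4 $LUSTREDIR/wide_io

# Files created in this directory inherit the 4-way striping
cp big_input.dat $LUSTREDIR/wide_io/

# Verify the resulting layout
lfs getstripe $LUSTREDIR/wide_io/big_input.dat
```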
### How to move data to and from Beagle
**Beagle is not HIPAA-compliant — do not put PHI (Protected Health Information) data on Beagle2 !!!**
*Make sure that you are properly handling PHI data; the consequences of mishandling could be considerable both for you and for the institutions you work for.*
**Factors for choosing a data movement tool:**
- Make sure you have permission to move such data from its source to its target if you are not the owner or the sole owner.
- Consider carefully the structure of Beagle's filesystem before deciding where to move your data:
  - **Relatively small files** (say, < 1 GB) that should be considered permanent: `/home/<username>` (disk quota 10 GB).
  - **Larger data to be used for calculations**, which does not need to be backed up locally: `/lustre/beagle2` (currently there is no disk quota).
**Recommended data movement tools:**
- **scp/sftp**
  - Quick to initiate, but
  - slow and not scalable.
- **Globus Online** (see also Globus Tools and Grid Services)
  - Provides high performance and is easy to use from either a command line or a web browser.
  - Provides fault-tolerant, fire-and-forget transfers.
  - Preferred for moving larger data, or when scp is too slow or unreliable.
**Globus Online** addresses the challenges faced by researchers in moving, sharing, and archiving large volumes of data among distributed sites. With Globus Online, you hand-off data movement tasks to a hosted service that manages the entire operation, monitoring performance and errors, retrying failed transfers, correcting problems automatically whenever possible, and reporting status to keep you informed so that you can focus on your research. Command line and web-based interfaces are available. The command line interface, which requires only ssh to be installed on the client, is the method of choice for script-based workflows. Globus Online also has a REST-style transfer API.
After you register, simply use the **Beagle2 endpoint** "ci#beagle" together with other sources or destinations. The Beagle2 endpoint's server nodes are tuned especially for WAN data movement tasks. As the collection of Globus Online endpoints grows, you'll be using the highest-performing WAN-tuned systems with minimal effort.
**By default, any file transfer command will be initiated on the service/login node.** The user can also bundle commands into a batch script and submit it to the scheduler. Users can also chain multiple batch scripts with job dependencies: move data onto the machine using a few processors, run the computation with many processors, and then move the results off the machine. Here's an example of such a chain:
```bash
#!/bin/bash
# Stage input data using a single processing element
JOB1=`qsub -lmppwidth=1 copy_input.pbs`
# Run the computation on 128 PEs only after the copy succeeds
JOB2=`qsub -lmppwidth=128 -W depend=afterok:$JOB1 run.pbs`
# Copy results off once the run completes successfully
JOB3=`qsub -lmppwidth=1 -W depend=afterok:$JOB2 copy_results.pbs`
```
**How to submit jobs**
**Projects**
A valid HPC project is required to submit jobs.
To join an HPC project visit [http://www.ci.uchicago.edu/hpc/projects](http://www.ci.uchicago.edu/hpc/projects)
- `projects`: check whether or not you're a member of a project and see which projects you belong to (do this when you log in on Beagle).
- `projects --available`: list the projects that are available for your use.
- `projects --set my_project_code`: set one of the projects available to you as your default project.
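A typical session might look like this (the project code shown is illustrative):

```bash
projects                        # list the projects you currently belong to
projects --available            # list projects you are eligible to use
projects --set CI-ABC000123     # make one of them your default (illustrative code)
```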
**Basics about job submission on Beagle2**
To run a batch job on Beagle2:
1. Prepare a PBS script that specifies the application you want to run and the resources it will require.
**Note:** Your application's executable line must start with one of the application launch commands (aprun for ESM jobs; ccmrun for CCM jobs).
2. Submit your job PBS script using the TORQUE `qsub` command.
3. Monitor your job's progress using the TORQUE `qstat` command or the Moab `showq` command.
- When jobs are executed, they are allocated at least one node. Each node has 32 cores on Beagle2.
- If a user wants to run a different computation on each of the cores of a node, the Swift scripting language should be used (see the Swift web site).
- We are using PBS scripts with Moab as the scheduler (see HPC Scheduling) and TORQUE as the resource manager (see HPC Job Management).
- A PBS script consists of PBS directives, comments, and executable statements (`aprun`).
- Every executable needs to be initiated by the `aprun` command.
- It is necessary to properly match your aprun parameters with your PBS parameters.
- On Beagle2, `qsub` simply reserves the node(s) for your usage; the commands in your batch script still run on a login node.
- In order to actually run on the compute nodes `qsub` has reserved for you, you must use `aprun`.
- A `job_id` is assigned after the `qsub` command is executed. Use it to control your job!
- Batch jobs are submitted using the `qsub` command, e.g., `qsub myjob.pbs`, where `myjob.pbs` is a script that will be described below.
**Reservations:**
Jobs can be sent to the queues available on Beagle2, or users can ask for reservations: nodes specifically set aside for a task. In general, reservations are granted when a job has specific needs that cannot easily be met by the standard queues.
To request a reservation, send an email to beagle-support@ci.uchicago.edu.
**Job Submission Best Practices**
*How many tasks per node?* -- On Beagle2 the number of cores per node is 32. Take this into account when submitting jobs.
*What if tasks are memory intensive?* -- Each compute node has 64 GB of memory and 32 cores. If the memory requirements of your tasks are measured in gigabytes, request many fewer than 32 tasks per node; see the sketch below.
*How much wall-time to request?* -- Request a relatively short walltime for your jobs when possible. The scheduler employs a technique called backfilling that can be advantageous for jobs with shorter walltimes. If your application is long-running, a checkpointing mechanism can be used to submit it as a series of shorter fragments.
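For example, if each MPI task needs roughly 8 GB, at most eight tasks fit in a node's 64 GB. A minimal sketch of the matching request (the executable name is illustrative):

```bash
#!/bin/bash
#PBS -l mppwidth=256    ## 8 nodes' worth of cores (8 * 32); all cores are charged
#PBS -l walltime=2:00:00

cd $PBS_O_WORKDIR
# 64 tasks total, only 8 per node, so each task gets roughly 8 GB;
# -S 2 spreads the 8 tasks evenly over the node's 4 NUMA nodes
aprun -n 64 -N 8 -S 2 ./my_memory_heavy_app
```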
**Batch jobs**
**Commands for submitting and inquiring about jobs**
Batch jobs are controlled by PBS (batch) scripts written by the user and submitted to a batch system that manages the compute resource and schedules the job to run based on a set of policies.
**NOTE:** `job_id`, the numerical identifier associated with a batch job, is assigned after the `qsub` command is executed.
- `qsub` batch jobs are submitted using the `qsub` command, e.g., `qsub myjob.pbs`, where `myjob.pbs` is a script that will be described below.
- `qdel job_id` to delete a job. Users can only delete their own jobs.
- `qhold job_id` to request that the scheduler place one or more holds on a job. A job that has a hold is not eligible for execution (only for jobs the user owns).
- `qrls job_id` to release holds on batch jobs. A job may be blocked by one or more types of holds: USER, OTHER, and SYSTEM. A USER hold can be removed by the job's owner.
- `qalter new_options job_id` to modify a job's attributes. If any of the specified attributes cannot be modified for a job, none of that job's attributes will be modified.
- `qmove new_queue job_id` to move a job from one queue to another.
- `qstat` shows the jobs the resource manager, Torque, knows about (i.e., all those submitted using `qsub`).
  - `qstat -a`: show all jobs in submit order
  - `qstat -a -u username`: show all jobs of a specific user in submit order
  - `qstat -t job_id`: receive a detailed report on the job's status
  - `qstat -n job_id`: show which nodes a job is running on
  - `qstat -q`: list the queues available on Beagle2
- `showq` shows all jobs in priority order. Tells which jobs Moab, the scheduler, is considering eligible to run or is running.
- `showres` shows all the reservations currently in place or that have been scheduled (e.g., maintenance reservations, training reservations and specific user reservations) See Adaptive Computing: showres for more details.
- `showbf` shows what resources are available for immediate use as backfill. See Adaptive Computing: showbf for more details.
- `showstart` displays the estimated start time of a job. It is important to realize that this prediction is not strictly deterministic, because jobs can finish earlier than forecast. The command always assumes the job is the next to run, so it is only useful for the top job in the queue. See Adaptive Computing: showstart for more details.
**NOTE:** The behavior of all these commands can be affected by command line arguments; see the man pages for more details, e.g., by typing `man qsub` for the qsub command when logged in on Beagle2.
For more Moab commands and their descriptions, see the Adaptive Computing Scheduler Commands page
**To submit batch job:**
From the directory that contains the script file, type:
```
qsub myjob.pbs
```
**NOTE:** Scripts submitted via qsub run under a default bash shell, so make sure you load any modules and set any environment variables you need inside the submit script.
**PBS (batch) scripts**
A PBS job script is a text file you prepare that specifies which application to run and the resources required to run it. A detailed FAQ about PBS scripts is available from the Adaptive Computing Scheduler Commands page, where users can learn the basics of building their scripts. **Note:** The TORQUE directives in your PBS script must precede your executable lines (lines that begin with one of the application launch commands, `aprun` for ESM jobs or `ccmrun` for CCM jobs, or with `module load` commands); if directives occur on later lines, they will be ignored. More specifically for Beagle, these are some of the directives that can be given:
- `#PBS -A my_project_code` sets the project to which this run will be charged.
- `#PBS -N job_name` sets the job name.
- `#PBS -l mppwidth=nodes*cores_per_node` is the number of processing elements (instances of an executable) requested and corresponds to the number of MPI or executable tasks. Default is one.
- `#PBS -l mppdepth=threads_per_MPI_task` is the number of threads per MPI task. Default is one. Use for OpenMP. The number cannot be larger than the number of cores per node (32). In some situations multiple threads can be run on the same core; see Cray Doc: aprun or type `man aprun` for details.
- `#PBS -l mppnppn=tasks_per_node` is the number of processing elements (or MPI tasks) per node. A PE is one instance of an executable propagated by the Application Level Placement Scheduler.
  **NOTE:** If using OpenMP, it is necessary to add `export OMP_NUM_THREADS=<number_of_threads>` in the PBS script before the `aprun` line that launches the OpenMP program (scripts run under bash, so use `export`).
- `#PBS -l walltime=hh:mm:ss`, i.e., in hours, minutes and seconds. Be mindful that specific queues might not allow all job-time lengths.
- `#PBS -q queue_name` submits a job to a specific queue (use `qstat -q` to find the available queues). `batch` is the default queue.
- `#PBS -o job_output_file_name` connects a specific file to the output of the PBS script.
- `#PBS -j oe` joins the output and error files.
- `#PBS -l advres=res_id` is needed if a user is running a job that requires a reservation; to send computations to the reservation, add this line to the PBS script.
- `#PBS -V` Please don't use this option! It can propagate large numbers of environment variable settings from the submitting shell into a job.
NOTE: In the script, these directives can be followed by other instructions and, at the end, by the `aprun` command to run the executable. Without `aprun`, you would be attempting to run your calculations on a login node rather than on the reserved compute nodes!
For pure MPI jobs running a single program, the total number of nodes requested is the number of PEs requested divided by the number of PEs per node, rounded up.
For mixed MPI/OpenMP jobs, the total number of nodes is ceiling(mppwidth * mppdepth / 32); for example, mppwidth=100 with mppdepth=2 needs ceiling(200/32) = 7 nodes. Type `man aprun` for details.
NOTE: Since Moab assigns entire nodes to jobs, the total number of cores requested should be a multiple of 32. If it is smaller, Moab will effectively round it up to the next multiple of 32, in the sense of locking up those resources.
Type `man pbs_resources` when logged into Beagle for more information and more options.
Example of a PBS script:
```bash
#!/bin/bash
#PBS -N myjob
#PBS -l walltime=10:00:00
#PBS -l mppwidth=544 ## ceiling(100 tasks / 6 tasks per node) * 32 cores per node = 17 * 32 = 544
#PBS -j oe ## join standard output and standard error - recommended!

# Initialize the module system in this bash shell
. /opt/modules/default/init/bash
# Move to the directory the job was submitted from
cd $PBS_O_WORKDIR
# 100 MPI tasks, 6 tasks per node
aprun -n 100 -N 6 ./myexecutable
```
- Job directive lines begin with #PBS. These directives tell the batch system how many nodes to reserve for your job and how long to reserve those nodes.
- `$PBS_O_WORKDIR` holds the path to the directory from which you submitted your job. While not required, most batch scripts have `cd $PBS_O_WORKDIR` as the first command after the directives.
- The aprun command is used to start execution of your code on Beagle2’s compute nodes.
- Remember you can request up to 500 compute nodes for your batch jobs.
NOTE: All options may be specified as either (1) qsub command-line options (see below) or (2) as directives in the batch script as #PBS options (for batch jobs). We recommend putting your directives (options) in the script instead. Then you will have a record of the directives you used, which is useful for record-keeping as well as debugging should something go wrong.
**aprun**
All codes that execute on Beagle2’s compute nodes must be started with the "aprun" command. Without the "aprun" command, the code will run (if it runs at all) on the shared MOM node that executes your batch job commands.
When invoking aprun, use options analogous to those given to the PBS script for qsub. Here are the equivalent aprun options:
| aprun option | qsub `-l` option | Description |
|--------------|------------------|-------------|
| `-n NMPI` | `-l mppwidth=nodes*cores_per_node` | Width (number of PEs), i.e., the number of MPI tasks. There are 32 cores per node on Beagle2. |
| `-d mm` | `-l mppdepth=threads_per_MPI_task` | Depth (the number of threads to run for each PE), i.e., the number of OpenMP threads per MPI task. For an OpenMP job you must also set the environment variable `OMP_NUM_THREADS` to this same value. Make sure this value multiplied by the value for `-N` does not exceed 32. |
| `-N NPEs` | `-l mppnppn=tasks_per_node` | Number of PEs per node, i.e., the number of MPI tasks to run on each node. |
| `-B` | (none) | Reuse the width, depth, nppn and memory specified with qsub: there is no need to specify the aprun options `-n`, `-d`, `-N`, and `-m`; aprun will exit with an error if the user specifies these together with `-B`. |
| `-S PEs_per_NUMA_node` | (none) | Number of PEs to allocate per NUMA node. You'll get better performance if you distribute your MPI tasks among the 4 NUMA nodes (each NUMA node has 8 cores). The value can be 1-8; the default is 8. |
Example of a batch script for running an MPI/OpenMP code on 8 nodes (256 cores, i.e., 32 MPI tasks with 8 threads each):
```bash
#!/bin/bash
#PBS -l mppwidth=256   ## 8 nodes * 32 cores per node
#PBS -l walltime=1:00:00

# Initialize the module system in this bash shell
. /opt/modules/default/init/bash
cd $PBS_O_WORKDIR
# Must match the aprun -d value below
export OMP_NUM_THREADS=8
# 32 MPI tasks, 4 per node, 8 threads each, 1 task per NUMA node
aprun -n 32 -N 4 -d 8 -S 1 ./myjob
```
**Memory usage**
Our compute nodes have 64 GB of physical memory (2 GB per core), but not all of it is available to user programs. “System overhead” requires memory to run the node, message-passing library buffers consume memory, and so does loading the executable itself. Thus the precise memory available to an application varies; if you are using all 32 cores per node, you will get a bit less than 2 GB per MPI task on average.
If you see the error message “OOM killer terminated this process.” in your job output, it means that your code has exhausted the memory available on the node (OOM stands for “out of memory”). One simple thing to try when your code runs into an OOM error is to use more nodes and fewer cores per node: you can launch fewer than 32 tasks per node to increase the memory available to each MPI task. Note that your account will be charged for all 32 cores per node, regardless of how many cores you actually use.
For aprun options refer to our wiki page or man page.
https://wiki.uchicago.edu/display/Beagle/Getting+started%3A+performing+basic+operations+on+Beagle2
https://wiki.uchicago.edu/display/Beagle/Examples+of+PBS+scripts
For example, if you would like to run 64 MPI tasks using only 16 cores per compute node:
```bash
#PBS -l mppwidth=128
aprun -n 64 -N 16 -S 4 ./a.out
```
This example uses `#PBS -l mppwidth=128` because 128 cores are required and the request must be a multiple of 32 (64 MPI tasks / 16 tasks per compute node * 32 cores per compute node = 128). The `-S 4` option places the 16 MPI tasks per compute node on cores from all four NUMA nodes (4 tasks per NUMA node), ensuring the best performance and access to all of the compute node's memory. This option is needed because, by default, aprun packs the NUMA nodes, which would put the 16 tasks on just two NUMA nodes.
Here `-S` specifies the number of PEs to allocate per NUMA node. Each NUMA node has 8 cores; the value for `-S` can be 1-8, and the default is 8.
If you are using OpenMP please refer to this page:
https://wiki.uchicago.edu/display/Beagle/Examples+of+PBS+scripts
For more information see the CrayDoc page http://docs.cray.com/cgi-bin/craydoc.cgi?mode=Show;q=f=man/alpsm/31/cat1/aprun.1.html or type man aprun.
**Running Swift on Beagle2**
Swift is now installed on Beagle2 as a module. Swift supports a many-task computing model on Beagle2: Swift scripts and the Swift runtime are used to submit and manage large numbers of small process executions across Beagle2's massive number of cores. Swift can do this without overloading the Beagle2 scheduler by using a user-space scheduler called Coasters.
- The Swift web site is here.
- Swift documentation is here.
- To get started with Swift on Beagle2 follow the steps outlined here.
Additional resources:
- Workload Management and Application Placement for the Cray Linux Environment from CrayDoc
- HPC Scheduling and HPC Job Management on job management
**In case you need help/support**
- please email beagle-support@lists.uchicago.edu. This will create a ticket in our ticketing system so that we can best track and resolve your issues.
Formal methods for GPGPU programming: is the demand met?
Lars B. van den Haak¹, Anton Wijs¹, Mark van den Brand¹, and Marieke Huisman²
¹ Eindhoven University of Technology, l.b.v.d.haak@tue.nl
² University of Twente
Abstract. Over the years, researchers have developed many formal method tools to support software development. However, hardly any studies have been conducted to determine whether the actual problems developers encounter are sufficiently addressed. For the relatively young field of GPU programming, we would like to know whether the tools developed so far are sufficient, or whether some problems still need attention. To this end, we first look at what kinds of problems programmers encounter in OpenCL and CUDA. We gather problems from Stack Overflow and categorise them with card sorting. We find that problems related to memory, synchronisation of threads, threads in general, and performance are essential topics. Next, we look at (verification) tools in industry and research, to see how these tools address the problems we discovered. We think many problems are already properly addressed, but there is still a need for easy-to-use sound tools. Alternatively, languages or programming styles can be created that allow for easier soundness checking.
Keywords: GPU · GPGPU · Formal methods · Verification · Bugs · CUDA · OpenCL
1 Introduction
General-purpose GPU (GPGPU) programming has been around for over 10 years now, but is notoriously hard to do. In this work, we want to explore what kind of problems people experience during GPGPU programming and understand what the difficulties are in overcoming these problems. We accomplish this in two steps. First we find the problems and next we analyse current solutions in the domain of formal methods. We view this work as a way of identifying further research challenges and directions in this domain, with the aim to ease the difficulty of programming for a GPU.
To find the problems programmers encounter, we looked at Stack Overflow, which is a widely known website where programmers can ask questions related to programming. We took a sample of questions that are related to OpenCL and CUDA, the two dominant GPGPU programming languages, and categorise them using card sorting. These categories give us an up-to-date overview of (most) problems people encounter.
The next step is finding verification tools. Many tools have been developed that help people in their GPU programming work, like GPUVerify [17], Oclgrind [42], GKLEE [33], VerCors [18] and CUDA-MEMCHECK [2], although only some of these have been picked up by developers of GPGPU programs. We look at scientific conferences and industry companies for tools. We narrow the scope to correctness issues, link the tools that solve these issues, and indicate what improvements research can make.
In conclusion, in this work, we aim to help other researchers focus their research on GPGPU programming problems that are not yet, or only incompletely, addressed by existing tools.
We make the following contributions.
1. An overview of common problems people struggle with whilst programming a GPGPU (Section 3).
2. A discussion of the problems from Section 3 where we think formal methods can make a direct contribution, covering solutions offered by existing tools and new research opportunities (Section 4).
2 Background
We base this section mostly on the CUDA Programming Guide [3]. GPUs are massively parallel compute devices that work via the Single Instruction Multiple Threads (SIMT) execution model, which means that multiple threads execute the same instruction in parallel, but on different data. In this paper, we consider mainly the CUDA and OpenCL programming languages. We work with the CUDA terms, but give the corresponding OpenCL terms in parentheses in this section. CUDA compiles to PTX [7], a pseudo-assembly language, which we call the instruction level; similarly, OpenCL compiles to SPIR [8].
Functions that are executed on the GPU are called kernels. One can start kernels from the CPU, which we call the host. The GPU itself is called the device. Data stored on the RAM is not automatically accessible on the GPU and must be sent from the host to the device before invoking the kernel that uses the data. The programmer can schedule memory transfers and kernel executions in a queue.
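As a small illustration of this host/device split (ours, not from the sources above; error handling omitted for brevity), a typical CUDA host program allocates device memory, copies data over, launches a kernel, and copies the results back:

```cuda
#include <cuda_runtime.h>

// Kernel: each thread scales one element of the array.
__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    float *host = new float[n];               // data in host RAM
    for (int i = 0; i < n; ++i) host[i] = 1.0f;

    float *dev;
    cudaMalloc(&dev, n * sizeof(float));      // allocate global memory on the device
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);

    scale<<<(n + 255) / 256, 256>>>(dev, 2.0f, n);   // launch the kernel

    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev);
    delete[] host;
    return 0;
}
```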
**Threads (Work-items)** When scheduling a kernel, you specify how many threads (work-items) are going to execute this kernel. Threads are grouped together in thread blocks (workgroups), and all the thread blocks together form the grid (NDRange). From the hardware perspective, thread blocks are subdivided into warps (sub-groups, or wavefronts in AMD's terminology), which typically have a size of 32 threads (64 on AMD devices). Threads of a warp are executed in lockstep, meaning that they all execute the same instruction at the same time.³
---
³ Although this is not exactly true any more for Nvidia’s Volta architecture and onward. See https://developer.nvidia.com/blog/inside-volta/
When threads of a warp take different execution paths, e.g. due to if statements, the warp executes each path, but disables the threads that are not on that path. This is called thread divergence, which can lead to performance loss.
A GPU consists of multiple streaming multiprocessors, which execute the warps in lockstep. Each thread block is assigned to one streaming multiprocessor.
**Memory model** A programmer has to manage the memory of a GPU manually. It has global memory, where data transferred from the host is stored, and which any thread can access. Shared memory (local memory) is shared within a thread block and is faster than global memory. One can use it to share results within a thread block or to have faster access when data is reused. Per-thread data is automatically stored in fast-access registers, or in slow local memory in case not enough registers are available. For optimal global memory accesses, the accesses should be fully coalesced: this happens if the threads of a warp access consecutive memory addresses and the first address is a multiple of the warp size.
**Synchronization** When two threads perform a read and a write, or two writes, to the same memory address, and this could happen simultaneously, this is called a data race. Data races lead to non-determinism and are considered a bug. A GPU can synchronize with a barrier at the thread block level, which ensures that all threads wait for each other before continuing execution. It also makes sure that after the synchronization all writes to global and shared memory have been performed, or, depending on the barrier, only those to shared memory. Thus, barriers can prevent intra-block data races in a thread block. All threads in a thread block must reach the same barrier; otherwise the result is undefined behaviour, called barrier divergence.
Between threads of different thread blocks, synchronization is not possible with a (standard) global barrier, although Sorensen et al. [50] show how such a barrier can be constructed. Data races between thread blocks are called inter-block data races. When lockstep execution of warps is not ensured, intra-warp data races can also occur.
Synchronization can also be achieved via fine-grained synchronization using locks or atomics. Locks can make sure that only one thread has access to a specific memory address. Atomics allow for communication via memory without the risk of data races, and GPUs typically implement them more efficiently than locks. A GPU has a weak memory model [10], which means that memory actions within a thread can be reordered by the hardware if there exist no dependencies within the thread. Therefore, when using fine-grained synchronization, specific memory actions may not yet be visible to other threads. Memory fences can be inserted to enforce a memory order, which might be needed to make sure that no weak-memory data races occur.
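To make these mechanisms concrete, the following sketch (ours, not taken from the cited sources) shows a per-block sum reduction that uses shared memory, block-level barriers to avoid intra-block data races, and one atomic per block to avoid inter-block data races:

```cuda
// Sum-reduce n floats into *out. Assumes a block size of 256 and that
// *out was zero-initialized before the launch.
__global__ void sumReduce(const float *in, float *out, int n) {
    __shared__ float buf[256];                 // shared (local) memory
    int tid = threadIdx.x;
    int gid = blockIdx.x * blockDim.x + threadIdx.x;

    buf[tid] = (gid < n) ? in[gid] : 0.0f;
    __syncthreads();                           // barrier: all writes to buf now visible

    // Tree reduction within the thread block. The barrier in each round
    // prevents intra-block races between the reads and writes of buf.
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (tid < s)
            buf[tid] += buf[tid + s];
        __syncthreads();                       // reached by ALL threads: no barrier divergence
    }

    // One atomic addition per block avoids inter-block data races on *out.
    if (tid == 0)
        atomicAdd(out, buf[0]);
}
```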
**Other features** Some other features are less used, although we do want to mention them since they come up in this work. Dynamic parallelism allows parent kernels to launch child kernels. A parent and a child kernel have a consistent view of global memory at the start of the launch, but this is not guaranteed while executing. The parent kernel can synchronize with the child kernels it launched. A child kernel can recursively call a new child kernel. Warp-level primitives (sub-group primitives) allow communication between the threads of a warp via the faster registers. For instance, one can use them to implement faster scan and reduction operations.
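As an illustration (again ours, assuming CUDA 9 or later), a warp-level sum can be written with the shuffle primitive; the full mask assumes all 32 lanes of the warp are active:

```cuda
// Each lane contributes v; after the loop, lane 0 holds the warp's total.
__device__ float warpSum(float v) {
    for (int offset = 16; offset > 0; offset >>= 1)
        v += __shfl_down_sync(0xffffffffu, v, offset);
    return v;
}
```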
3 GPGPU Programming Problems
To know how formal methods can help solve GPGPU problems, we first need to know what actual developers are struggling with. Therefore, we look at Stack Overflow, which is the go-to place for programming-related questions and is used by many programmers as a reference. Of the languages programmers use for GPGPU programming, CUDA (729 questions), OpenMP (471) and OpenCL (311) are the most popular, based on the number of questions asked on Stack Overflow in 2019⁴. We focus on CUDA and OpenCL, since OpenMP does not solely focus on the GPU.
We first explain our approach for gathering and categorizing the results (3.1). Next, we present the categories of programming problems we found, which we again ordered into themes and sub-themes for a clear overview (3.2).
3.1 Approach
Gathering Problems As argued above, we look at OpenCL and CUDA on Stack Overflow. Looking at the general tags gpgpu, cuda and opencl, we found that the 7 most closely related tags are gpu, c++, nvidia, c, parallel-processing, thrust and nvcc. The first five tags we consider too general, and they would pollute our results. The tags thrust and nvcc refer to a specific CUDA library and compiler, on which we do not want to focus. Therefore, we stick with the tags gpgpu, cuda and opencl. On March 2, 2020 there were 17,539 questions on Stack Overflow with the tag cuda, opencl or gpgpu.⁵ We look at 376 Stack Overflow questions, which is a representative sample with a confidence level of 95% and a confidence interval of 5%. Thus, with a 95% chance, we identify the problems which are present in at least 5% of the questions in the tags mentioned above.
Categorizing Problems On the gathered questions, we performed open card sorting [37, Card-sorting: From Text To Themes], which creates categories in an unknown data set. We decided to look at the title, body and answers of each question to determine the categories. The first author, together with another PhD student, sorted the first 84 questions, through which they achieved a mutual understanding of the categories and held discussions for any corner cases.
⁴ https://data.stackexchange.com/stackoverflow/query/1258739/gpgpu-tags
⁵ https://data.stackexchange.com/stackoverflow/query/1258838/gpgpu
The next 43 cards were sorted separately, but in the same room, which allowed discussion of difficult cards. Eventually, this led to 26 different categories. The last 260 cards were sorted by the first author alone, and we ended up with 34 categories. For cards we could sort into multiple categories, we made a new overlapping category or sorted them into the most appropriate category. After the sorting, we went over the relevant questions once more, to see if a newly made category would be more suitable.
Relevant Problems for Formal Methods In the 34 categories, we make two distinctions. First, we mark problems that are interesting for GPGPU programming: these are 28 of the 34 categories. The non-relevant categories are related to (GPU) hardware, errors in the host code (unrelated to CUDA or OpenCL API calls), installing the correct CUDA and OpenCL drivers or libraries, setting up a development environment, linking libraries and questions related to OpenGL. In total, we found that 220 of the 376 were relevant to GPGPU programming.
We present the 28 GPGPU categories in the remainder of this section. We mark the ones (10) where we think formal methods are directly applicable to solve correctness problems underlying these questions.
3.2 Results

The results of the card sort are shown in Figure 1. To organize them, we identified two themes: memory and threads and synchronization; the remaining categories we place in the general theme. Within each theme, we distinguish between bugs and performance-related questions as sub-themes. The result of this grouping is shown in Figure 2. We explain each theme with its associated categories in the following subsections.
Memory We first consider the bugs sub-theme categories: ‘memory transfer bug’, ‘out of bounds’ and ‘memory bug’. An out of bounds error occurs when an array is indexed outside its bounds, which will only be reported at runtime. A memory transfer bug happens when not all necessary data was transferred to the device, causing uninitialized memory accesses. We assign the category memory bug to questions where a memory error happened, but the cause was unclear from the post. We think that formal methods could help detect these bugs or possibly assure programmers that such bugs are not present in their program. For instance, CUDA-MEMCHECK [2] and ESBMC-GPU [38] are tools that can detect these kinds of bugs.
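To make the first two bug classes concrete, here is a minimal sketch (all names are illustrative) of an out-of-bounds index and a partial host-to-device copy of the kind these tools flag:

```cuda
// Out of bounds: no guard on the thread index.
__global__ void scale(float *a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    a[i] *= 2.0f;   // invalid access whenever i >= n;
                    // the fix is: if (i < n) a[i] *= 2.0f;
}

void run(float *h, int n) {
    float *d;
    cudaMalloc(&d, n * sizeof(float));
    // Memory transfer bug: only half of the input is copied, so the
    // kernel reads uninitialized device memory in the upper half.
    cudaMemcpy(d, h, (n / 2) * sizeof(float), cudaMemcpyHostToDevice);
    scale<<<(n + 255) / 256, 256>>>(d, n);
}
```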
Next we consider the memory performance sub-theme: ‘manage memory spaces’ and ‘memory performance’. A GPU has to manage its own (faster) shared memory space. This management can be difficult and error-prone to do but is an essential optimization strategy. We also added questions related to a better understanding of the memory model here. We label other questions as memory performance when they are related to access patterns (coalesced) or other ways to optimize memory usage.
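The access-pattern issue can be illustrated with a small sketch (illustrative kernels, not taken from the questions themselves), contrasting coalesced with strided global-memory access:

```cuda
// Coalesced: neighbouring threads touch neighbouring addresses,
// so a warp's loads combine into few memory transactions.
__global__ void coalesced(float *out, const float *in, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i];
}

// Strided: neighbouring threads read addresses far apart,
// which typically multiplies the number of transactions.
__global__ void strided(float *out, const float *in, int n, int stride) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[(i * stride) % n];
}
```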
Fig. 1. Results of open card sorting 376 GPGPU related questions. We only show the 220 questions and categories relevant to GPGPU programming. The categories labelled FM Opportunities are the ones where we think formal methods could play a role in solving the underlying correctness issues.
The last two categories are ‘host transfer’ and ‘data types’. Both are related to getting memory from the host to the device. The host transfer category is more general. It is related to doing transfers efficiently, asynchronously, transferring the data back, transferring arrays, parameters or constants, and handling arrays too big for global memory. We also assign questions related to aligning and pinning memory here. Actual bugs related to this we report in the ‘memory transfer bug’ category. We assign questions about overlapping transfers to the ‘optimizing kernel launches’ category. The data types category is more specific. It contains questions related to correctly transferring a composite data type (‘struct’ in C) and making sure it has a correct corresponding data type on the device. We also consider questions related to Structure of Arrays (SoA) or Arrays of Structures (AoS) here. Although we think that tools can help to
Fig. 2. Overview of the card sort, where we place the categories under themes and sub-themes. Similar to Figure 1 we only show categories relevant to GPGPU programming. The underlined questions are the ones where we think formal methods could play a role in solving the underlying correctness issues. The percentages indicate how many questions are under a specific category, where 100% corresponds to all 220 relevant GPGPU questions.
solve problems in checking correct correspondence of data types, a programming language could do this automatically.
**Threads & Synchronization** Under the bug sub-theme, we consider ‘data races’, ‘incorrect thread configuration’ and ‘barrier divergence’. We assign the category data race to questions where this occurs. A data race is a problem that is hard to detect: it is non-deterministic by nature, and it is hard to reason about. Incorrect thread configuration happens when a programmer configures the wrong number of threads or goes over the maximum number of threads possible. Some incorrect configurations will be reported at runtime, while others will run without errors but do not process all the input. We assign barrier divergence to questions where not all threads in a thread block reach the same barrier. This is not allowed in the general GPU programming model and leads to undefined results. Data races and barrier divergence bugs are already the study of many formal method tools, like GPUVerify [17] and GKLEE [33]. We think formal methods can also reason about thread configurations, where a tool checks whether the indexing of the input by the threads corresponds to the size of the input, or detects memory-related bugs caused by incorrect configurations. Another idea is to check whether kernels work the same for each thread configuration.
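Two of these bug classes are easy to exhibit in a few lines; the sketch below (our illustrative kernels) shows a data race and a divergent barrier of the kind the above tools check for:

```cuda
// Data race: thread i reads a cell that thread i+1 writes, with no
// ordering between them, so the result is non-deterministic.
__global__ void racy(int *a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i + 1 < n) a[i] = a[i + 1];
}

// Barrier divergence: only part of the block reaches the first
// __syncthreads(), which is undefined behaviour.
__global__ void divergent(int *data) {
    if (threadIdx.x < 16) {
        data[threadIdx.x] += 1;
        __syncthreads();   // BUG: not reached by all threads
    }
    __syncthreads();       // fine: reached by every thread
}
```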
The ‘optimise thread configuration’ and ‘threads divergence’ categories are related to the performance sub-theme. When optimising the amount of threads, one can choose the number of threads per thread block and how much work each thread does, which both influence performance. Thread divergence, on the other hand, could lead to bad performance, which a programmer can sometimes avoid.
The threads - general category consists of questions related to understanding the execution model of threads and what correct configurations are. Synchronization is used to prevent data races by using barriers or atomics. We give the synchronization tag to general questions on how to use it, on warp primitives, and on what can and cannot be synchronized. We think formal methods can help people understand when barriers are necessary, or maybe even place barriers automatically. For instance, the Simulee [51] tool can detect unnecessary barriers.
General First we consider the bug sub-theme. We have a general bug category, which means something is wrong in the program, but not one of the previously mentioned bugs. This can be incorrect usage of the available resources (e.g. registers), people computing something incorrectly, incorrect use of the Thrust library, or it is not yet clear what is wrong. Formal methods, for instance VerCors [18], can check for functional correctness of programs when something is calculated incorrectly. The ‘bug in dependency’ category consists of all bugs in Thrust that were fixed in later versions of the library; we therefore do not consider them for formal methods later on. The ‘dynamic parallelism bug’ category consists of a single question (so/19527391), where a bug was encountered using dynamic parallelism, although it is unclear what exactly went wrong. Formal methods tools could also reason about correctness in this case, although dynamic parallelism would have to be supported.
The ‘general performance’ category contains questions where people want to understand the performance of a given program, algorithm or function, and how to improve it. Questions about overlapping computations and memory transfers, and about the ideal scheduling of computation kernels, we place in the ‘optimizing kernel launches’ category.
We came across many questions where people wondered how a specific problem or algorithm should be programmed on the GPU, or whether a library contained a specific functionality. We placed these in the ‘how to do algorithm’ category. Formal methods could help to prove the equivalence between a sequential and a parallel implementation. The ‘basics’ category has questions related to how certain concepts are called (e.g. what a thread block is), how they work, how specific API calls work, or some basic, easy-to-fix mistakes that prevent correct compilation.
Some questions arose about using higher-level patterns in CUDA and OpenCL, for instance using templated functions. We think these problems are best solved by a beginner's GPU programming book or by using a higher-level programming language. ‘Profiling’ covers questions on how to use the available profiling tools or how to measure runtimes correctly. ‘Sparse matrices’ covers questions on how to process such matrices, or on how to use the cuSparse library. ‘Multi GPU’ covers questions on how to use multiple GPUs for computations. The ‘limitation’
category consists of questions related to the limitations of the CUDA/OpenCL programming model. For example, the CUDA runtime library can only be called from the main scope of a C++ program (so/55819758). ‘Kernel launches’ covers questions on how to start a computation on the GPU correctly. ‘CUDA memcheck’ is about using that specific tool for debugging.
3.3 Insights
Summarizing, we observe that 32.3% of the relevant questions are related to performance, 34.1% to memory, 20% to bugs and 18.2% to threads and synchronization. These are the areas developers on Stack Overflow are most interested in. Performance makes sense, since programmers use a GPU to get better performance; otherwise they would have used the CPU. Memory-related questions are important since memory management works quite differently from CPU programs: transferring data is error-prone, and managing memory without race conditions is hard. We also think that many developers are just interested in the result: having a faster parallel version of their original (sequential) code, which is related to our ‘how to do algorithm’ category. Concluding, there is potential for formal methods to help solve correctness-related issues that GPGPU programmers experience. We will further discuss this in Section 4.
3.4 Threats to Validity
External Validity There is a bias in the results, since we only look at questions posted on Stack Overflow. This may not reflect the general population of GPGPU developers. We suspect that there will be more questions by beginning GPGPU programmers than by more experienced ones. Therefore, we might not discover the problems of more experienced users.
Internal Validity As the categories have been manually created, there is an internal bias, meaning that if other people were to perform this study with the same questions, there could be a different outcome. We think that although the categories might be different, the general topics would be similar. Also, part of the categorizing is done together with another PhD student for exactly this reason.
4 Formal verification solutions
In Section 3, we looked at problems that programmers struggle with when coding in CUDA and OpenCL. In this section we focus on the problems where we think formal methods can make a direct contribution, and provide an overview of tools that (partially) solve these problems. Again, we focus mainly on correctness. First we explain how we selected these verification tools (Section 4.1). Next, we discuss for each of the selected problems the available solutions and possible research directions (Section 4.2).
4.1 Approach
In order to find as many tools as possible that target the verification of GPU applications, we took the following steps. First, we looked at industry: we considered the websites of Nvidia, AMD (gpuopen.com), the Khronos group, and a list found on the IWOCL conference site. Next, we looked at important conferences, based on Microsoft Academic's field ratings. We looked in the areas of programming languages, software verification and parallel computing, and selected the following conferences: PLDI, POPL, TACAS, CAV and IPDPS. For each of these conferences, we looked through the years 2015–2020.
This was the initial set of tools we considered, and we snowballed, by looking at any tools that the original papers referenced. Lastly, we searched Google Scholar with the following query: “(cuda OR opencl OR gpu) AND (bugs OR problems OR verification OR formal)”.
4.2 Available solutions
In this section we consider the problems that we discussed in Section 3, where we identified categories. In Table 1, we provide an overview of the tools we found. We distinguish between three types of tools (inspired by Donaldson et al. [24, Chapter 1]): Dynamic tools check for one specific input. Symbolic tools execute the input symbolically, allowing for more different paths to be tested at once. Static tools make (sound) approximations of the source code and will try to prove existence or absence of bugs. We indicate if a tool checks for data races (Race), barrier divergence (Bar), memory problems (Mem), functional correctness (Func) or equivalence (Eq), or if it helps with synchronization (Sync) or thread configuration (Thr) in the ‘Solves’ column. With ‘Auto’, we refer to the degree of automation: is it completely automatic, or does the user need to be involved in the checking, for instance by writing annotations? The Corr. column indicates if the tool can prove the absence of bugs in certain settings. We also list any limitations or other remarks in the table.
Data races Ideally, a tool in this category reports existing data races with precise details or guarantees that data races are not present.
Many dynamic tools are practical and require no manual input, but do not guarantee the absence of data races. For checking a specific input, we think CURD is most suitable: it checks at the instruction level and can thus also be used for higher-level languages. Only the approach of Leung et al. [31] gives some guarantees for a dynamic tool and can be used to ‘prove’ the absence of data races for specific invariant kernels. One can combine this approach with other dynamic tools.
Symbolic tools, such as GKLEE and ESBMC-GPU, can test for more inputs, and one can (mostly) use them automatically, although they can also suffer from longer verification times.
---
6 https://www.iwocl.org/resources/opencl-libraries-and-toolkits/
7 https://academic.microsoft.com/home
| Tool | Type | Solves | Auto. | Languages | Limitations and remarks |
|---|---|---|---|---|---|
| CURD [36] | Dynamic | Race, Bar | High | PTX | Faster version of BARRACUDA. |
| Leung et al. [31] | Dynamic | Race, Bar | | CUDA | No atomics. Checks races for one input and determines whether memory accesses are the same for each input; if they are, this proves race freedom for all inputs. |
| ARCHER [13] | Dynamic | Race, Bar | Medium | OpenMP | Runs dynamically on the CPU; not GPU specific. |
| KLEE-CL [22] | Symbolic | Race, Bar, Func | Medium | OpenCL | Checks for equivalence on symbolic output, although false positives are possible for this. |
| SESF [34] | Symbolic | Race, Bar | High | CUDA (LLVM-IR) | Similar to GKLEE, but uses concrete values when possible to reduce runtimes. Can be sound and complete under specific circumstances. |
| Xing et al. [72] | Static | Race, Bar | High | PTX | Can check fine-grained synchronization. It has to unroll loops, which can cause unsoundness. |
| Banerjee et al. [15] | Static | Race, Bar, Func | High | OpenMP | Equivalence checking is sound, but might not be possible for complex programs; the equivalent version should be similar. |
| WEFT [47] | Static | Race, Bar | High | PTX (CUDA) | No global memory and atomics for Race. Based on a warp-specialized programming model; can only verify programs which are completely predictable, e.g. memory locations and control flow may not depend on the input. Checks named barriers, which are only accessible via PTX. |
| CIVL [46] | Symbolic | Race, Bar | Medium | OpenMP | No atomics. Can use the languages interchangeably, but has no support for specific GPU capabilities. Needs some annotations for checking. |
| Alur et al. [12] | Symbolic | Thr, Func | High | LLVM-IR (CUDA) | Can only prove block-size independence for synchronization-free programs. |
| Simulee [51] | Dynamic | Race, Bar, Func | High | LLVM-IR (CUDA) | Simulates a GPU memory model and generates memory for it via evolutionary computing. |
| Vericuda [30] | Static | Func | Low | CUDA | Needs annotations to prove correctness and can only prove it for race-free programs. |

Table 1. Overview of the different tools we discuss in this section. We indicate the type of tool, the problems (which we consider in this section) they solve (Solves), the degree of automation (Auto.), on which languages they work, and any limitations and other remarks, including correctness guarantees (Corr.) where given.
GPUVerify is the most practical static verifier, although it needs annotations to overcome false positives. The tool from Xing
et al. is interesting and checks on instruction level, but uses loop unrolling, which makes it unsound. It could use ideas from GPUVerify, which generates loop invariants. VerCors can give the most guarantees but needs a serious effort in annotating. For example, see the work of Safari et al. [45], which verifies a prefix-sum algorithm.
WEFT, CIVL, Archer, and the tool of Banerjee et al. [15] serve a more specific purpose, like checking OpenMP or warp-specialised programs.
Overall, many steps have been made towards verifying data races in GPGPU programs. Checking at the instruction level is a good idea, since other programming languages benefit from this as well. We also think good progress has been made on checking fine-grained synchronisation and the memory fences one needs for this kind of synchronisation (e.g., BARRACUDA checks this). From the benchmarks that the authors of the tools consider, it seems clear though that no tool always detects or proves the absence of data races. Also, each author uses a different set of benchmarks. It would be interesting to test all the mentioned tools with the benchmark suite created by Schmitz et al. [46], for a fair comparison between tools.
**Memory bugs** Here we look for solutions for the categories: ‘memory bug’, ‘out of bounds’ and ‘memory transfer bug’. Thus, tools should check that memory addresses which are accessed are valid and initialised.
CUDA-MEMCHECK detects the above errors dynamically for CUDA; the Oclgrind tool does the same for OpenCL. ESBMC-GPU and CIVL check for array indices out of bounds. These tools can also check for memory leaks.
For these memory issues, we see an opportunity to check at the instruction level. The dynamic tools seem to cover the properties of interest, but this is not yet the case for the (symbolic) verification tools. For instance, it is unclear if ESBMC-GPU checks for accesses to uninitialised memory. Lastly, only VerCors could guarantee correctness for the ‘out of bounds’ issues, but it only checks kernels, not host code, and needs annotations.
**Barriers & synchronization** Barrier divergence is also a source of bugs, which can be verified by GPUVerify and GKLEE. CUDA-MEMCHECK detects this dynamically. Another interesting topic, which can help developers with ‘synchronisation’, is placing barriers automatically or notifying the user about unnecessary barriers. The Simulee tool checks for the latter, but no tool addressed the former to the best of our knowledge. Automatic barrier placement could be implemented together with race check tools to afterwards verify for race freedom.
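The kind of unnecessary barrier such a tool could report is easy to picture; in the sketch below (our illustrative kernel, assuming a block size of 256), each thread writes and reads only its own shared-memory slot, so the barrier adds no ordering that is actually needed:

```cuda
__global__ void privateSlots(float *out, const float *in) {
    __shared__ float buf[256];
    buf[threadIdx.x] = in[threadIdx.x] * 2.0f;
    __syncthreads();                      // unnecessary: no thread reads
    out[threadIdx.x] = buf[threadIdx.x];  // another thread's slot
}
```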
**Thread configuration** The tool by Alur et al. [12] can verify if a synchronisation-free program is block-size independent: does the program behave the same if the number of blocks is changed, but the total number of threads stays the same. We think such an approach can be helpful for newer programmers. (And would
be a good programming style to begin with.) By making one’s program work for any block size, it is easier to optimise. Or even better, verify that one’s program behaves the same for any number of threads\(^8\). A thread-invariant program lets one freely try different thread configurations without introducing new bugs. Thus, we see an opportunity for verification tools addressing this.
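A common idiom that achieves this by construction is the grid-stride loop; the sketch below (illustrative kernel) computes the same result for any launch configuration, which is the kind of thread-invariance property discussed above:

```cuda
// Grid-stride loop: each thread processes every gridDim.x * blockDim.x-th
// element, so any <<<blocks, threads>>> configuration covers all n elements.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    for (int i = blockIdx.x * blockDim.x + threadIdx.x;
         i < n;
         i += gridDim.x * blockDim.x)
        y[i] = a * x[i] + y[i];
}
```

With this style, the block size becomes a pure tuning knob that can be changed freely without reintroducing indexing bugs.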
**Dynamic Parallelism** As far as we know, there are no tools that support dynamic parallelism, although we are not sure if tools working at the instruction level, e.g. BARRACUDA, support this. Support for dynamic parallelism is the first step to ensure that a tool can check kernels using this concept. One can also come across new bugs like data races between parent and child kernels. Specific to dynamic parallelism is the fact that there is a maximum recursion depth of new kernels and a maximum number of child kernels. A formal methods tool can check both of these restrictions.
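The shape of such a kernel, including the bounded recursion depth a checker could verify, is sketched below (illustrative names; dynamic parallelism requires compute capability 3.5+ and compilation with -rdc=true, and the device-side cudaDeviceSynchronize() shown here has been removed in recent CUDA releases):

```cuda
__global__ void child(int *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1;
}

__global__ void parent(int *data, int n, int depth) {
    if (threadIdx.x == 0 && depth < 2) {        // bounded recursion depth
        child<<<(n + 127) / 128, 128>>>(data, n);
        cudaDeviceSynchronize();                // parent waits for child
        parent<<<1, 32>>>(data, n, depth + 1);  // recursive child launch
    }
}
```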
**Functional correctness** VerCors \([19]\) allows deductive checking of functional correctness of programs, although it needs non-trivial annotations.
In a similar vein, the work of Kojima et al. [29] proposes a Hoare logic for GPU programs, which the Vericuda tool [30] verifies when one provides Hoare triples. However, the latter tool requires that the checked program is data race free, which must be verified separately.
ESBMC-GPU, CIVL, GPUVerify and GKLEE allow the programmer to place assertions. These assertions do not give complete correctness but allow more flexibility in checking certain aspects of the program.
We think VerCors has potential, although the need for annotations makes it difficult to use out of the box. An interesting research direction is making the reuse of annotations easier after a program has been slightly changed, e.g. due to an optimisation.
**Equivalence checking** Instead of fully verifying a specification, one can do equivalence checking: take a (simple), possibly sequential version of a program which you know is correct, and prove that a parallel implementation is equivalent. The CIVL tool can do this. Kamil et al. [28] use a similar approach: they transform Fortran stencil codes to Halide (an image processing DSL) and prove functional equivalence, while being able to optimise the program further in Halide. The tool by Banerjee et al. [15] does something similar: it verifies equivalence for parallelising loop transformations from OpenMP and also verifies data race freedom.
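The property being proved can be illustrated as a runtime comparison (our illustrative sketch; this tests a single input, whereas the tools above prove equivalence for all inputs):

```cuda
#include <cmath>

__global__ void scaleGPU(float *a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) a[i] *= 2.0f;
}

// Trusted sequential reference implementation.
void scaleSeq(float *a, int n) { for (int i = 0; i < n; ++i) a[i] *= 2.0f; }

// Check that GPU and sequential versions agree on one input.
bool equivalentOnInput(float *h, int n) {
    float *ref = new float[n], *d;
    for (int i = 0; i < n; ++i) ref[i] = h[i];
    scaleSeq(ref, n);
    cudaMalloc(&d, n * sizeof(float));
    cudaMemcpy(d, h, n * sizeof(float), cudaMemcpyHostToDevice);
    scaleGPU<<<(n + 255) / 256, 256>>>(d, n);
    cudaMemcpy(h, d, n * sizeof(float), cudaMemcpyDeviceToHost);
    bool same = true;
    for (int i = 0; i < n; ++i)
        same = same && (fabsf(h[i] - ref[i]) < 1e-6f);
    delete[] ref; cudaFree(d);
    return same;
}
```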
### 4.3 Research directions
We think much progress has already been made by formal methods that address many issues that developers encounter. We make the following observations.
In general, we think that checking at the instruction level is valuable. Typically, all GPU programs will eventually compile to the instruction level, which allows such a tool to be used for more programming languages.
No verification tool is completely sound yet, which might be impossible for the full flexibility of the CUDA and OpenCL languages, but should be the goal. Tools should support as many program features as possible while staying sound. Certainly, since programmers use a lot of low-level features when optimising code, this is an ambitious goal.
Another take on this is to identify which patterns and programming features are sound to verify. This can give rise to a particular programming style, which can be enforced by a different (domain-specific) language.
In the papers presenting the various tools, those tools are compared with each other to show that for specific kernels, the new tool is, at that point in time, the best. It would be better to use a standard benchmark suite, like the suite by Schmitz et al. [46], which is uniformly used and addresses the errors we mention in this paper. Additionally, it should support all the CUDA and OpenCL features. This suite then makes it clear what errors tools can check and what programming features they do or do not support. For instance, we think that tools that deal with fine-grained synchronisation are essential.
5 Related Work
GPGPU problems The study by Wu et al. [51] is similar to our work. Instead of Stack Overflow, they look at open-source repositories on GitHub to collect CUDA bugs. They identify 5 root causes for bugs, which is coarser than our results. We can match most of our categories with one of their root causes. Only their ‘poor portability’ we cannot match; it is more related to specific platform issues, which are questions we marked as irrelevant. Also, the nature of Stack Overflow means we have more questions related solely to understanding GPU programming (e.g. ‘basics’ or ‘how to do algorithm’), which are not things you could find in commit messages. For that reason, the exact numbers on how often certain issues arise are hard to compare, but we do not think that is too important. Both of these methods give a good overview of what kind of bugs to expect whilst GPGPU programming.
The work of Donaldson et al. [24, Chapter 1] gives an overview of what kind of correctness issues occur with GPGPU programming and gives a comparison between the tools GPUVerify, GKLEE, Oclgrind and CUDA-MEMCHECK. They name four different correctness issues: data races, weak memory behaviours, lack of forward progress guarantees and floating point accuracy. Of these issues, we have only come across data races in our study. We think the other issues are more particular to experienced users, and less so to novice users. As mentioned before, we think Stack Overflow attracts mostly novice users. The taxonomy made by Donaldson et al. of the considered tools inspired the current work, although we consider a wider range of tools overall.
Stack Overflow studies Many other studies were performed on Stack Overflow concerning other subjects, for example concurrency [41,9], mobile development [44] and machine learning [27]. In [41,9,44], topic modelling is used to categorize all the questions. We chose not to use topic modelling, since we think that we can make a finer subdivision of the categories with open card sorting. In [27] something more related to our work was done, but experts pre-determined the categories. In our case the goal was to discover problems; therefore it makes no sense to pre-determine the categories.
6 Discussion
In this work, we showed the problems GPGPU programmers struggle with, while programming for the GPU using OpenCL or CUDA. We see that memory, synchronization, threads and performance are essential topics for GPGPU programming. Next, we looked at (formal method) tools and how they address the correctness issues we found. In general, the research community addresses most problems, but we identified several interesting research directions. The data used for the categorization with card sorting is available here: https://github.com/sakehl/StackOverflow-GPU-Questions.
Acknowledgements We want to thank Jan Martens for his help with the card sorting.
References
2. CUDA-MEMCHECK. https://docs.nvidia.com/cuda/cuda-memcheck (Jun 2020)
5. OpenACC. https://www.openacc.org/ (Jun 2020)
44. Rosen, C., Shihab, E.: What are mobile developers asking about? A large scale study using stack overflow. Empir Software Eng 21(3), 1192–1223 (Jun 2016)
50. Sorensen, T., Donaldson, A.F., Batty, M., Gopalakrishnan, G., Rakamaric, Z.: Portable Inter-workgroup Barrier Synchronisation for GPUs p. 20
LARS: A Logic-based Framework for Analyzing Reasoning over Streams
Harald Beck and Minh Dao-Tran and Thomas Eiter and Michael Fink
Institute of Information Systems, Vienna University of Technology
Favoritenstraße 9-11, A-1040 Vienna, Austria
{beck,dao,eiter,fink}@kr.tuwien.ac.at
Abstract
The recent rise of smart applications has drawn interest to logical reasoning over data streams. Different query languages and stream processing/reasoning engines were proposed. However, due to a lack of theoretical foundations, the expressivity and semantics of these diverse approaches were only informally discussed. Towards clear specifications and means for analytic study, a formal framework is needed to characterize their semantics in precise terms. We present LARS, a Logic-based framework for Analyzing Reasoning over Streams, i.e., a rule-based formalism with a novel window operator providing a flexible mechanism to represent views on streaming data. We establish complexity results for central reasoning tasks and show how the prominent Continuous Query Language (CQL) can be captured. Moreover, the relation between LARS and ETALIS, a system for complex event processing is discussed. We thus demonstrate the capability of LARS to serve as the desired formal foundation for expressing and analyzing different semantic approaches to stream processing/reasoning and engines.
Introduction
The emergence of sensors, networks, and mobile devices has generated a trend towards pushing rather than pulling of data in information processing. In stream processing (Babu and Widom 2001), studied by the database community, input tuples dynamically arrive at systems in form of possibly infinite streams. To deal with unboundedness of data, the systems typically apply window operators to obtain snapshots of recent data. The user runs continuous queries on the latter that are triggered either periodically or by events, e.g., by the arrival of new input. A prominent stream processing language is the Continuous Query Language (CQL) (Arasu, Babu, and Widom 2006), which has an SQL-like syntax and a clear operational semantics.
Recently, the rise of smart applications such as smart cities, smart home, smart grid, etc., has raised interest in the topic of stream reasoning (Della Valle et al. 2009), i.e., logical reasoning on streaming data. Consider the following example.
Example 1 To monitor a city’s public transportation, the city traffic center has a static background data set for the assignment of trams to lines of the form line(Id, L), where Id is the tram and L the line identifier. The planned travelling time (duration Z) between stops X and Y with line L is stored by rows plan(L, X, Y, Z). Facts of the form old(Id) classify old trams which are inconvenient for travelling with baby strollers. Moreover, sensor data tram(Id, X) and jam(X) report the appearance of tram Id and traffic jams at stop X, respectively. Based on this, reports on the traffic status and suggested updates for travel routes shall be provided in real time.
Consider Bob travelling with his baby on line ℓ3 (Fig. 1a). He is currently at Haydn Street (h) and wants to go to Strauß Avenue (s), so he has different options to change trams at Mozart Circus (m). Thus, he wants to know (i) the expected arrival times of the next trams at m and (ii) which connection is convenient for the stroller. Fig. 1b depicts arrival times, e.g., tram(a1, b) at t = 36 represents that tram a1 arrived at stop Beethoven Square at minute 36. Furthermore, consider the following background data tables, which specify the planned travel time between stops (plan), the association between lines and their trams (line) and which trams are old and thus not suitable for strollers (old).
\[
\begin{align*}
\text{plan} &= \{(\ell_1, b, m, 8), (\ell_2, g, m, 7), (\ell_3, h, m, 3), \ldots\} \\
\text{line} &= \{(a_1, \ell_1), (a_2, \ell_2), (a_3, \ell_3), \ldots\} \\
\text{old} &= \{(a_1), \ldots\}
\end{align*}
\]
Based on this input stream and the static background data, we expect the following reports (i) and (ii):
Figure 1: (a) Transportation map (b) Timeline (minutes)
(i) Tram a₂ is expected to arrive at m at minute 44, and a₃ should arrive at m one minute earlier, i.e., at minute 43.
(ii) Switching from line ℓ₃ to ℓ₂ at m satisfies the short waiting time requirement. However, since tram a₁ is old, it is not a good connection with the stroller. ■
Different research communities have contributed to various aspects of this topic, leaving several challenges to overcome. First, these predominantly practical approaches often define semantics only informally, which makes them hard to predict and hard to compare. Second, advanced reasoning features are missing, e.g., nonmonotonicity, nondeterminism or model generation. Corresponding techniques have been studied almost exclusively on static data.
**Contributions.** We present LARS, a Logic-based framework for Analyzing Reasoning over Streams, providing (i) a rule-based formalism with (ii) different means to refer to or abstract from time, including (iii) a novel window operator, i.e., a flexible mechanism to change the view on stream data. To date, no stream reasoning language with these features exists. Moreover, LARS features a model-based semantics, and it offers besides monotonic also nonmonotonic semantics that can be seen as an extension of Answer Set Programming (ASP) for stream reasoning.
We analyze the complexity of central reasoning tasks (model checking and satisfiability) in LARS, establishing that they do not get harder compared to ASP, provided that nesting of window operators is bounded (in particular, if no nesting occurs). Moreover, we demonstrate how the semantics of CQL can be expressed in LARS and study the relation of LARS and ETALIS (Anicic et al. 2010), a monotonic rule-based system for complex event processing.
The presented framework yields (a) a common ground to express various semantic concepts of different stream processing/reasoning formalisms and engines, which (b) can now be formally characterized in a common language, and thus (c) be compared analytically.
**Streams**
**Streaming Data.** We use mutually disjoint sets of predicates 𝒫 and constants 𝐶. The set 𝐴 of atoms is defined as \{p(c₁,...,cₙ) | p ∈ 𝒫, c₁,...,cₙ ∈ 𝐶\}. If $i, j \in \mathbb{N}$, we call the set $[i, j] = \{k \in \mathbb{N} \mid i \leq k \leq j\}$ an interval. We divide 𝒫 into two disjoint subsets, namely the extensional predicates 𝒫ₑ and the intensional predicates 𝒫ᵢ. The former is used for input streams and background data, while the latter serves for intermediate and output streams. Additionally, we assume basic arithmetic operations (+, -, ×, ÷) and comparisons (=, ≠, <, >, ≤, ≥) are predefined by designated predicates 𝐵 ⊆ 𝒫ₑ, and used also in infix notation.
We now present the central notion of streams.
**Definition 1 (Stream)** Let 𝑇 be an interval and 𝑣: ℕ → 2^𝒜 an evaluation function such that 𝑣(𝑡) = ∅ for all 𝑡 ∈ ℕ \ 𝑇. Then, the pair 𝑆 = (𝑇, 𝑣) is called a stream, 𝑇 is the timeline of 𝑆, and the elements of 𝑇 are time points.
Given two streams 𝑆 = (𝑇, 𝑣) and 𝑆′ = (𝑇′, 𝑣′), we say 𝑆′ is a substream or window of 𝑆, denoted 𝑆′ ⊆ 𝑆, if 𝑇′ ⊆ 𝑇 and 𝑣′(𝑡) ⊆ 𝑣(𝑡) for all 𝑡 ∈ 𝑇′. We call 𝑆′ a proper substream of 𝑆, denoted 𝑆′ ⊂ 𝑆, if 𝑆′ ⊆ 𝑆 and 𝑆′ ≠ 𝑆.
Moreover, we define the size $\#S$ of $S$ by $\sum_{t \in T} |v(t)|$. The restriction $S|_{T'}$ of $S$ to $T' \subseteq T$ is the stream $(T', v|_{T'})$, where $v|_{T'}$ restricts the domain of $v$ to $T'$, i.e., $v|_{T'}(t) = v(t)$ for all $t \in T'$, else $v|_{T'}(t) = \emptyset$.
A data stream contains only atoms with extensional predicates.
**Example 2 (cont’d)** Consider again the scenario of Example 1. We can model the input as the data stream \(D = (T, v)\) with timeline \(T = [0, 50]\) and evaluation \(v(36) = \{\text{tram}(a_1, b)\}\), \(v(40) = \{\text{tram}(a_3, h)\}\), and \(v(t) = \emptyset\) for all \(t \in T \setminus \{36, 40\}\). We will also represent the evaluation function \(v\) by according mappings, i.e., by \(\{36 \mapsto \{\text{tram}(a_1, b)\}, 40 \mapsto \{\text{tram}(a_3, h)\}\}\).
**Windows.** An essential aspect of stream reasoning is to restrict data to so-called windows, i.e., recent substreams to limit the amount of data and forget outdated information.
**Definition 2 (Window function)** A window function $w$ takes as input a stream $S = (T, v)$, a time point $t \in T$, called the reference time point, and a vector of window parameters $x$, and returns a substream $S'$ of $S$.
The most common types of windows in practice are time-, tuple-, and partition-based windows. We associate them with three window functions \(𝑤_𝑡\), \(𝑤_𝑢\), and \(𝑤_𝑖\), respectively. Traditionally (Arasu et al., 2006), these window functions take a fixed size ranging back in time from a reference time point \(𝑡\); we generalize this by allowing to look back and forth from \(𝑡\). Intuitively, these functions work as follows.
- **Time-based:** \(𝑥 = (ℓ, 𝑢, 𝑑)\), where \(ℓ, 𝑢 ∈ 𝑁 \cup \{∞\}\) and \(𝑑 ∈ 𝑁\). The function \(𝑤_𝑡(𝑆, 𝑡, 𝑥)\) returns the substream of \(𝑆\) that contains all tuples of the last \(ℓ\) time units and the next \(𝑢\) time units relative to a pivot time point \(𝑡′\) derived from \(𝑡\) and the step size \(𝑑\) (Fig. 2). We use \(ℓ = ∞\) (resp. \(𝑢 = ∞\)) to take all previous (resp. later) tuples.
- **Tuple-based:** $x = (\ell, u)$, where $\ell, u \in \mathbb{N}$. The function $w_u(S, t, x)$ selects a substream of $S$ with the shortest interval $[t_\ell, t_u] \subseteq T$ as timeline, where $t_\ell \leq t \leq t_u$, such that $\ell$ tuples are in $[t_\ell, t]$ and $u$ tuples are in $[t+1, t_u]$. Exactly $\ell$, resp. $u$, tuples are returned; in case of multiple options due to multiple tuples at the boundary time points $t_\ell$, resp. $t_u$, tuples from there are removed at random.
- **Partition-based:** $x = (idx, n)$, where $idx$ and $n$ are two total functions $idx: \mathcal{A} \to I \subseteq \mathbb{N}$ and $n: I \to \mathbb{N} \times \mathbb{N}$. Here, $I$ is a finite index set. Applying $w_i(S, t, x)$ first splits the input stream $S = (T, v)$ into substreams $S_i = (T, v_i)$, one for each $i \in I$, by taking $v_i(t) = \{a \in v(t) \mid idx(a) = i\}$. Then, a tuple-based window $w_u$ is applied on each $S_i$ with parameters taken from $n(i) = (\ell_i, u_i)$. The output streams after $w_u$ are then merged into the result window.
Here, we gave a slight generalization of window functions as presented (more formally) in (Beck et al. 2014), using the general parameter vector \(𝑥\). This will be more useful when we discuss window applications with flexible sizes represented by variables.
Due to space reasons, we present only the adaptation of time-based windows formally. The same idea can be applied to the other types of windows straightforwardly.
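As a worked instance of the time-based case (our own illustration; we assume the pivot point is $t' = t$ when the step size is $d = 1$): applying a window of the last $\ell = 20$ time units with $u = 0$ to the data stream $D$ of Example 2 at reference time $t = 42$ keeps both tram reports:

\[
w_t(D, 42, (20, 0, 1)) = \big([22, 42],\ \{36 \mapsto \{\text{tram}(a_1, b)\},\ 40 \mapsto \{\text{tram}(a_3, h)\}\}\big).
\]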
Here, all operators extract tuples from the second stream. The symbol \( \boxplus_t^4 \) abbreviates the time-based window operator that takes all tuples of the last 4 time points, while \( \boxplus_t^{+5} \) takes all tuples of the next 5 time points. Moreover, \( \boxplus_u^1 \) takes the latest tuple which arrived until the reference time point.
**Syntax.** In addition to window operators, we use further means to refer to or abstract from time. Similarly as in modal logic, we use operators \( \Box \) and \( \Diamond \) to represent that a tuple (atom) or formula holds at all times respectively some time in a window. Moreover, an *exact* operator \( @ \) is used to refer to specific time points.
**Definition 5 (Formulas)** Let \( a \in A \) be an atom and \( t \in \mathbb{N} \). The set \( F \) of formulas is defined by the following grammar:
\[
\alpha ::= a \mid \neg\alpha \mid \alpha \land \alpha \mid \alpha \lor \alpha \mid \alpha \rightarrow \alpha \mid \Diamond\alpha \mid \Box\alpha \mid @_t\,\alpha \mid \boxplus^w \alpha
\]
Intuitively, given a stream \( S^* \) and a considered window \( S \) (which initially is \( S^* \)), a formula \( \alpha \) will be evaluated based on a reference time point \( t \) within \( S \). An application of a window operator \( \otimes_{t,ch} \) creates a new window \( S' \) that depends on \( S^* \) and \( S \) as specified by the stream choice. Within the current window \( S \), \( \Diamond \alpha \) (resp. \( \Box \alpha \)) holds if \( \alpha \) holds at some time point (resp. at all time points) in \( S \). Relative to \( t \), the formula \( @_{t'}\alpha \) holds if \( t' \) is in the timeline of \( S \) and \( \alpha \) is true at \( t' \). That is, the operator \( @ \) allows one to jump to a specific time point within the window.
**Semantics.** In addition to streams, we consider background knowledge in form of a static data set, i.e., a set \( B \subseteq A \) of atoms which does not change over time. From a semantic perspective, the difference to streams is that static data is always available, regardless of window applications.
**Definition 6 (Structure)** Let \( S = (T, v) \) be a stream, \( W \) be a set of window functions and \( B \subseteq A \) a set of facts. Then, we call \( M = (T, v, W, B) \) a structure, \( S \) the interpretation stream and \( B \) the data set or background data of \( M \).
We now define when a formula holds in a structure.
**Definition 7 (Entailment)** Let \( M = (T^*, v^*, W, B) \) be a structure, \( S^* = (T^*, v^*) \), and let \( S = (T, v) \) be a substream of \( S^* \). Moreover, let \( t \in T \). The entailment relation \( \models \) between \((M, S, t)\) and formulas is defined as follows. Let \( a \in A \) be an atom, and let \( \alpha, \beta \in F \) be formulas. Then,

\[
\begin{array}{lll}
M, S, t \models a & \text{iff} & a \in v(t) \text{ or } a \in B,\\
M, S, t \models \neg\alpha & \text{iff} & M, S, t \not\models \alpha,\\
M, S, t \models \alpha \land \beta & \text{iff} & M, S, t \models \alpha \text{ and } M, S, t \models \beta,\\
M, S, t \models \alpha \lor \beta & \text{iff} & M, S, t \models \alpha \text{ or } M, S, t \models \beta,\\
M, S, t \models \alpha \rightarrow \beta & \text{iff} & M, S, t \not\models \alpha \text{ or } M, S, t \models \beta,\\
M, S, t \models \Diamond\alpha & \text{iff} & M, S, t' \models \alpha \text{ for some } t' \in T,\\
M, S, t \models \Box\alpha & \text{iff} & M, S, t' \models \alpha \text{ for all } t' \in T,\\
M, S, t \models @_{t'}\,\alpha & \text{iff} & M, S, t' \models \alpha \text{ and } t' \in T,\\
M, S, t \models \otimes_{t,ch}\,\alpha & \text{iff} & M, S', t \models \alpha, \text{ where } S' = w_t(ch(S^*, S), t, x).
\end{array}
\]
If \( M, S, t \models \alpha \) holds, we say that \((M, S, t)\) *entails* \( \alpha \). Moreover, \( M \) satisfies \( \alpha \) at time \( t \), if \((M, S^*, t)\) entails \( \alpha \). In this case we write \( M, t \models \alpha \) and call \( M \) a model of \( \alpha \) at time \( t \). Satisfaction and the notion of a model are extended to sets of formulas as usual.
**Example 4 (cont’d)** Let \( D = (T, v) \) be the data stream of Ex. 3 and \( S^* = (T^*, v^*) \supseteq D \) be a stream such that \( T^* = T \) and
\[
v^* = \bigg\{ 36 \mapsto \{ \text{tram}(a_1, b) \}, \ 40 \mapsto \{ \text{tram}(a_3, h) \}, \ 43 \mapsto \{ \text{exp}(a_3, m) \}, \ 44 \mapsto \{ \text{exp}(a_1, m) \} \bigg\}.
\]
Let $M = (T^*, v^*, W, B)$, where $W = \{w_t\}$, and $B$ is the set of facts from the data tables in Example 1. Then it holds that $M, S^*, 42 \models \boxplus_t^{+5} \Diamond \exp(a_3, m)$: the window operator $\boxplus_t^{+5}$ selects $S' = (T', v')$, with timeline $T' = [42, 47]$ and $v' = \{43 \mapsto \{\exp(a_3, m)\}, 44 \mapsto \{\exp(a_1, m)\}\}$, i.e., there is some $t' \in T'$ ($t' = 43$) s.t. $M, S', t' \models \exp(a_3, m)$.
**Programs.** Now we define a rule language for stream reasoning with semantics similar to Answer Set Programming.
**Definition 8 (Rule, Program)** A program $P$ is a set of rules, i.e., expressions of the form
$$\alpha \leftarrow \beta_1, \ldots, \beta_j, \text{not } \beta_{j+1}, \ldots, \text{not } \beta_n,$$
(1)
where $\alpha, \beta_1, \ldots, \beta_n \in F$ are formulas and $\alpha$ contains only intensional predicates.
Suppose we want to evaluate a program $P$ on a data stream $D$. Let $I = (T, v)$ be a stream such that $D \subseteq I$. If at every time point in $I$, all atoms that occur in $I$ but not in $D$ have intensional predicates, then we call $I$ an interpretation stream for $D$ and a structure $M = (T, v, W, B)$ an interpretation (for $D$). For any rule $r$ of form (1), let $\beta(r) = \beta_1 \land \ldots \land \beta_j \land \neg \beta_{j+1} \land \ldots \land \neg \beta_n$.\(^1\) We then say that $M$ is a model of $P$ (for $D$ at time $t$), denoted $M, t \models P$, if $M, t \models \beta(r) \rightarrow \alpha$ for all rules $r \in P$. We call $M$ a minimal model, if no model $M' = (T', v', W, B)$ of $P$ (for $D$ at time $t$) exists such that $(T', v') \subset (T, v)$. The reduct of a program $P$ w.r.t. $M$ at time $t$ is defined by $P^{M,t} = \{r \in P \mid M, t \models \beta(r)\}$, i.e., the subset of rules whose bodies are satisfied.

\(^1\) Thus, “not” and “¬” coincide, as well as “∧” and “,”.
**Definition 9 (Answer Stream)** Let $M = (T, v, W, B)$ be a structure, where $I = (T, v)$ is an interpretation stream for a data stream $D$, let $P$ be a program and $t \in T$. Then, $I$ is called an answer stream of $P$ for $D$ at time $t$ (relative to $W$ and $B$) if $M$ is a minimal model of the reduct $P^{M,t}$.
For ASP fragments of LARS, answer streams correspond to answer sets as defined by the FLP-reduct (Faber et al., 2004), which we formulated for LARS programs above. More precisely, consider an interpretation stream $I = (\{t\}, v')$ for a data stream $D = (\{t\}, v)$, and let $P$ be a program where in each rule of form (1) all body formulas $\beta$ are atoms and the head $\alpha$ is a disjunction of atoms with intensional predicates. Then, $I$ is an answer stream of $P$ for $D$ at $t$ relative to some $W$ and $B$ iff $v'(t)$ is an answer set of $P \cup v(t) \cup B$.
Towards more conciseness, we consider schematic programs with variables of two sorts, namely constant variables and time variables. The semantics of these nonground programs is given by the answer streams of according groundings, obtained by replacing variables with constants from $C$, respectively time points from $T$, in all possible ways.
**Example 5 (cont’d)** The requests (i) and (ii) from Example 1 can be formulated by rules (2) and (3), respectively.

$$@_T\, \text{exp}(Id, Y) \leftarrow \boxplus_p\, @_{T_1} \text{tram}(Id, X),\ \text{line}(Id, L),\ \text{plan}(L, X, Y, Z),\ \text{not } \boxplus_t^{20} \Diamond\, \text{jam}(X),\ T = T_1 + Z.$$

(2)

$$\text{gc}(Id_1, Id_2, X) \leftarrow @_T\, \text{exp}(Id_1, X),\ @_T\, \boxplus_t^{+5} \Diamond\, \text{exp}(Id_2, X),\ Id_1 \neq Id_2,\ \text{not old}(Id_2).$$

(3)
Rule (2) encodes when a tram is expected at later stops. For the partition-based window operator $\boxplus_p$, we use $idx(at) = i$ for an atom $at$ of form $\text{tram}(a_i, X)$ and $idx(at) = 0$ otherwise. By the tuple-based windows of sizes $n(i) = (1, 0)$ for $i > 0$ and $n(0) = (0, 0)$ applied on the obtained substreams, we thus get for each tram $a_i$ only its most recent appearance at some stop $X$. Usually, the expected arrival time at the next stop can be computed from the travelling duration according to the table $\text{plan}$. In case of traffic jams within the last 20 minutes, we block such conclusions by means of default negation.
Next, rule (3) builds on the expected arrival times of rule (2) to identify good connections where the targeted tram is not old and the expected waiting time is at most 5 minutes. It uses a time-based window that looks 5 minutes ahead from the time when $\exp(\text{Id}_1, X)$ is concluded and checks the existence (operator $\Diamond$) of an expected (different) tram $\text{Id}_2$.
We observe that the interpretation stream of the structure $M$ of Example 4 is an answer stream of $P$ for $D$ at time $t$. Note that $\text{gc}(a_3, a_1, m)$ is not derived: tram $a_1$ appears one minute after $a_3$ at Mozart Circus, but it is old. The next example demonstrates another advantage of our rule-based approach, namely the possibility to obtain different models for nondeterministic choices.
**Example 6 (cont’d)** Consider an extended scenario where a tram with identifier $a_2$ of line $\ell_2$ is reported at Guda Lane ($g$) at time point $38$. This updates the data stream $D = (T, v)$ in Example 2 to $D' = (T, v')$, where $v' = v \cup \{38 \mapsto \{\text{tram}(a_2, g)\}\}$. By the entries $\text{line}(a_2, \ell_2)$ and $\text{plan}(\ell_2, g, m, 7)$ in $B$, rule (2) derives that tram $a_2$ is expected to arrive at Mozart Circus at $t = 45$. Furthermore, we now assume that tram $a_1$ is not old, i.e., $\text{old}(a_1) \notin B$. This gives Bob three good connections at stop $m$, when leaving tram $a_3$ at minute 43:
$$\text{G} = \{\text{gc}(a_3, a_1, m), \text{gc}(a_1, a_2, m), \text{gc}(a_3, a_2, m)\}$$
Bob is not interested in the connection from $a_1$ to $a_2$, since he is currently travelling with $a_3$. His smart phone streams an according tuple $\text{on}(a_3)$ at query time. This leaves him two options: He can either change to line $\ell_1$ (and take tram $a_1$ after 1 minute at time point $44$), or to line $\ell_2$ (and take tram $a_2$ after 2 minutes at $45$). The following two rules formalize the possibility to either change trams or skip a good connection:
$$\text{change}(Id_1, Id_2, X) \leftarrow \text{on}(Id_1),\ \text{gc}(Id_1, Id_2, X),\ \text{not skip}(Id_1, Id_2, X).$$
(4)
$$\text{skip(Id}_1, \text{Id}_2, X) \leftarrow \text{gc(Id}_1, \text{Id}_2, X), \text{change(Id}_1, \text{Id}_3, X), \text{Id}_2 \neq \text{Id}_3.$$
(5)
Consider the program $P$ consisting of the rules (2)–(5). Moreover, let $D'' = (T, v'')$ be the data stream obtained from $D'$ by adding $\{42 \mapsto \{\text{on}(a_3)\}\}$ to the evaluation, and let $I_0 = (T, v_0)$, $I_1 = (T, v_1)$ and $I_2 = (T, v_2)$ be the following interpretation streams for $D''$: We take $v_0 = v'' \cup \{42 \mapsto G,\ 43 \mapsto \{\exp(a_3, m)\},\ 44 \mapsto \{\exp(a_1, m)\},\ 45 \mapsto \{\exp(a_2, m)\}\}$. And for $i \in \{1, 2\}$, let $v_i = v_0 \cup \{42 \mapsto \text{choice}_i\}$, where
$\text{choice}_1 = \{\text{change}(a_3, a_1, m), \text{skip}(a_3, a_2, m)\}$ and
$\text{choice}_2 = \{\text{change}(a_3, a_2, m), \text{skip}(a_3, a_1, m)\}$.
Then, \( I_1 \) and \( I_2 \) are (the only) two answer streams of \( P \) for \( D'' \) at time 42 relative to \( W = \{ w_t, w_p \} \) and \( B \), i.e., we get the user choices as separate models.
Note that in this example we did not constrain good connections by the actual destination Bob wants to reach. By means of the presented formalism, such reachability relations can be expressed elegantly through recursion as in Datalog.
Another benefit of our approach for advanced stream reasoning is the possibility to retract previous conclusions due to new input data. Combined with (minimal) model generation, i.e., alternatives that may be enumerated, compared under preference etc., such nonmonotonic reasoning allows for sophisticated AI applications in data stream settings.
Example 7 (cont’d) If the lines \( \ell_1 \) and \( \ell_2 \) have the same travelling time from Mozart Circus to Strauß Avenue, Bob will pick choice \( 1 \) (answer stream \( I_1 \)), since at \( t = 42 \) tram \( a_1 \) is expected to arrive one minute earlier than tram \( a_2 \).
Suppose a few seconds later (still at \( t = 42 \)) a traffic jam is reported for Beethoven Square. Thus, we now consider the data stream \( D_j = (T, v_3) \), where \( v_3 = v' \cup \{ 42 \mapsto \{ \text{on}(a_3), \text{jam}(b) \} \} \). We then have no expectation anymore of when tram \( a_1 \) will arrive at Mozart Circus: \( \text{exp}(a_1, m) \) cannot be concluded for \( t = 44 \), and as a consequence, \( gc(a_3, a_1, m) \) does not hold anymore. Thus, the previous two answer streams are discarded and only change \( (a_3, a_2, m) \) remains recommended in the resulting unique answer stream.
### Complexity of Reasoning in LARS
Let \( \alpha \) be a formula, \( P \) a program, \( W \) a set of window functions evaluable in polynomial time, and let \( B \subseteq A \) be a set of atoms. We say that a stream \( S = (T, v) \) is over \( A' \subseteq A \), if \( v(t) \setminus A' = \emptyset \) for all \( t \in T \).
We study the complexity of the following reasoning tasks:
1. **Model checking (MC).** Given \( M = (T, v, W, B) \) and a time point \( t \), check whether
- for a stream \( S \subseteq (T, v) \) and formula \( \alpha \) it holds that \( M, S, t \models \alpha \); resp.
- \( I = (T, v) \) is an answer stream of a program \( P \) for \( D \subseteq I \) at \( t \).
2. **Satisfiability (SAT).** For decidability, we assume that relevant atoms are confined to (polynomial) \( A' \subseteq A \). The reasoning tasks are:
- Given \( W, B, A' \), a timeline \( T \) and a time point \( t \), is there an evaluation function \( v \) on \( T \) such that \( M, S, t \models \alpha \), where \( M = (T, v, W, B) \) and \( S = (T, v) \) is over \( A' \)?
- Given \( W, B \) and a data stream \( D \), does there exist an answer stream of \( P \) for \( D \) over \( A' \) (relative to \( W \) and \( B \)) at \( t \)?
Table 1 shows the complexity of reasoning in ground LARS, where \( \alpha^- \), \( P^- \) are formulas resp. programs with nesting of window operators bounded by a constant. Note that the problems refer to the more general notion of entailment but (hardness) results carry over to satisfaction. The complexity of the general case is based on the following theorem.
**Theorem 1** Given a structure \( M = (T, v, W, B) \), a stream \( S \), a time point \( t \), and an arbitrary ground formula \( \alpha \), deciding \( M, S, t \models \alpha \) is PSPACE-complete, and PSPACE-hardness holds already for \( S = (T, v) \).
Intuitively, PSPACE-membership can be shown by a depth-first-search evaluation of a formula along its tree representation. At each node of the tree, we need to store only the window content resulting from the window operators applied on the path from the root, which requires polynomial space.
PSPACE-hardness can be shown by a reduction from evaluating QBFs \( \exists x_1 \forall x_2 \ldots Q_n x_n\, \phi(x_1, \ldots, x_n) \) to LARS model checking. A LARS formula \( \alpha \) is constructed on the timeline \( T = [0, 1] \), in which window operators fix the possible truth assignments to each \( x_i \) at the time points \( 0 \) or \( 1 \), and the operators \( \Diamond \) and \( \Box \) naturally encode the quantifiers \( \exists \) and \( \forall \), respectively.
The next result addresses the complexity of MC for ground LARS programs.
**Theorem 2** MC for LARS programs, i.e., given a structure \( M = (T, v, W, B) \), a data stream \( D \), a program \( P \), and a time point \( t \), decide whether \( I = (T, v) \) is an answer stream of \( P \) for \( D \) at time \( t \), is PSPACE-complete.
**Proof.** To decide the problem, we can (a) check that \( I \) is an interpretation stream for \( D \), (b) compute the reduct \( P^{M,I} \), and (c) check that \( I \) is a minimal model of \( P^{M,I} \), i.e., that (c.1) \( M, t \models P^{M,I} \) and (c.2) no \( M' = (T', v', W, B) \) with \( (T', v') \subset (T, v) \) exists such that \( M', t \models P^{M,I} \).
1. step (a) is trivially polynomial;
2. steps (b) and (c.1) are feasible in polynomial time using a PSPACE oracle; and
3. step (c.2) is feasible in nondeterministic polynomial time using a PSPACE oracle (guess \( (T', v') \) and check \( M', t \models P^{M,I} \)).
Overall, the computation is feasible in NPSPACE, thus in PSPACE (as NPSPACE = PSPACE).
PSPACE-hardness of the problem is immediate from Theorem 1: take \( P = \{ \alpha \leftarrow \top \} \), where \( \top \) is an arbitrary tautology, and exploit that hardness already holds for \( S = (T, v) \). \( \square \)
Under restrictions, however, MC may be tractable. This holds e.g. for the important case where only time-based windows are allowed. In case of \( \alpha^- \), the evaluation tree for MC has only polynomially many window contents to process, and we can use a standard labeling technique to evaluate formulas bottom up (from subformulas) in polynomial time.
SAT for \( \alpha \) (resp. \( \alpha^- \)) is in PSPACE (resp. NP) as guess and check establishes membership, and hardness is inherited from MC (resp. propositional SAT). For monotone (e.g., time-based) window functions, the results apply setting \( A' \) to the atoms in \( \alpha \); also for tuple-based and partition-based windows reasonable assumptions (e.g., \( t, v \ll |S| \) and idx monotone) yield only polynomially larger \( A' \).
For LARS programs, building the reduct \( P^{M,I} \) is feasible in polynomial space for \( P \) (resp. in polynomial time for \( P^- \)), and the minimality check is feasible in polynomial space (resp. requires a polynomial guess to refute minimality). A more detailed complexity analysis including schematic programs (with \( A \) possibly infinite) is subject to ongoing work.
**Capturing CQL**
The Continuous Query Language (CQL) (Arasu, Babu, and Widom 2006) is an SQL-based language for maintaining continuous queries over streaming data. It extends SQL with different operators; two important ones are:
- **Stream-to-relation (S2R)** operators apply window functions to the input stream to create a mapping from execution times to bags of valid tuples (w.r.t. the window) without timestamps. This mapping is called a *relation*.
- **Relation-to-relation (R2R)** operators can manipulate relations similarly as in relational algebra, respectively SQL.
**Example 8** The request (i) from Example 1 can be represented by the following CQL query.
```
SELECT ID, PLAN.Y, T2
FROM TRAM[PARTITION BY ID ROWS 1], LINE, PLAN
WHERE TRAM.ID=LINE.ID AND LINE.L=PLAN.L AND TRAM.ST=PLAN.X AND T2=TRAM.T+PLAN.Z
AND NOT EXISTS
(SELECT * FROM JAM[RANGE 20] WHERE JAM.ST=TRAM.ST)
```
Note that these streams have designated timestamp fields (such as TRAM.T).
To capture CQL queries by LARS programs, we exploit two well-known translations: from SQL to relational algebra (Dadashzadeh and Stemple 1990) and from relational algebra to Datalog (Garcia-Molina, Ullman, and Widom 2009). Let us call the former $\Delta_\rho$ and the latter $\Delta_\delta$.
The idea is to have a 3-step process, given a CQL query $q$:
1. apply the translation $\Delta_{\text{SRC}}$ (Table 2) to the input streams (with windows) and tables in the FROM and WHERE clauses of $q$; let $\Delta_{\text{SRC}}(q)$ denote the result. Considering the formulas introduced by $\Delta_{\text{SRC}}$ as table names, we get an SQL query;
2. apply $\Delta_\rho$ on this query to get a relational algebra expression; and
3. apply $\Delta_\delta$ on the expression to get a program.
Considering the translated table names as LARS formulas, we get a LARS program. Formally speaking, the translation of a CQL query $q$ is $\Delta(q) = \Delta_\delta(\Delta_\rho(\Delta_{\text{SRC}}(q)))$, and that of a set $Q$ of CQL queries is given by $\Delta(Q) = \bigcup_{q \in Q} \Delta(q)$.
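As a minimal illustration of this composition (a sketch with hypothetical function arguments standing in for the concrete translations $\Delta_{\text{SRC}}$, $\Delta_\rho$, $\Delta_\delta$), the pipeline is plain function composition per query, with a union over query sets:

```python
def translate_cql(q, delta_src, delta_rho, delta_delta):
    """Delta(q) = Delta_delta(Delta_rho(Delta_SRC(q))): CQL query ->
    SQL query over translated sources -> relational algebra -> rule set."""
    return delta_delta(delta_rho(delta_src(q)))

def translate_query_set(Q, delta_src, delta_rho, delta_delta):
    """Delta(Q): union of the per-query rule sets."""
    rules = set()
    for q in Q:
        rules |= translate_cql(q, delta_src, delta_rho, delta_delta)
    return rules
```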
**Example 9** The translation of the CQL query in Ex. 8 is:
\[
\begin{align*}
q_1(P_1) &\leftarrow \boxplus^{p_{\text{idx},n}} \Diamond\, \text{tram}(Id, ST, T_1), \text{line}(Id, L), \\
&\quad\;\; \text{plan}(L, X, Y, Z), ST = X, T_2 = T_1 + Z. \\
q_2(P_2) &\leftarrow \boxplus^{20} \Diamond\, \text{jam}(ST, T_3), \boxplus^{p_{\text{idx},n}} \Diamond\, \text{tram}(Id, ST, T_1). \\
q_{12}(P_{12}) &\leftarrow q_1(P_1), q_2(P_2). \\
q(P) &\leftarrow q_1(P_1), \text{not } q_{12}(P_{12})
\end{align*}
\]
where \( P_1 = Id, ST, X, Y, Z, T_1, T_2; \ P_2 = Id, ST, T_1, T_3; \ P_{12} = P_1, T_3; P = Id, Y, T_2; \) and idx, n are from Ex. 5.
| Input source $s$ | $\Delta_{\text{SRC}}(s)$ |
|---|---|
| $s$ | $s$ |
| $s$ [RANGE $\ell$] | $\boxplus^{\ell} \Diamond s$ |
| $s$ [RANGE $\ell$ SLIDE $d$] | $\boxplus^{\ell,d} \Diamond s$ |
| $s$ [RANGE UNBOUNDED] | $\boxplus^{\infty} \Diamond s$ |
| $s$ [NOW] | $\boxplus^{0} \Diamond s$ |
| $s$ [ROWS $n$] | $\boxplus^{\#n} \Diamond s$ |
| $s$ [PARTITION BY $X_1, \ldots, X_k$ ROWS $n$] | $\boxplus^{p_{\text{idx},n}} \Diamond s$ |
**Table 2: Translation function $\Delta_{\text{SRC}}$**
To establish the correspondence between the result of a set $Q$ of CQL queries and its LARS translation $\Delta(Q)$, we first formally build a conversion of CQL streams to a LARS input stream. W.l.o.g., assume that $Q$ is evaluated on a background data table $B$ and input streams $S_1, \ldots, S_n$, and that any stream is only used in one place in the FROM clause in a single query (we can always duplicate streams and rename them). These input streams can be represented as the set $\mathcal{S} = \{(S_i(a_{ij}), t_{ij}) \mid 1 \leq i \leq n, 1 \leq j \leq m_i\}$.
The corresponding representation of the input stream in LARS is defined by $\mathit{str}(\mathcal{S}) = (T_{\mathcal{S}}, v_{\mathcal{S}})$, where for $1 \leq i \leq n$ and $1 \leq j \leq m_i$:
$$
T_{\mathcal{S}} = [t_{\min}, t_{\max}], \quad t_{\min} = \min_{i,j} t_{ij}, \quad t_{\max} = \max_{i,j} t_{ij}; \qquad v_{\mathcal{S}}(t') = \{S_i(a_{ij}) \mid t_{ij} = t'\} \text{ for all } t' \in T_{\mathcal{S}}.
$$
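For concreteness, here is a small Python sketch of this conversion (a hypothetical helper name; atoms are arbitrary hashable values): timestamped tuples are grouped by timestamp into the evaluation function, over the timeline spanned by the minimal and maximal timestamps:

```python
from collections import defaultdict

def to_lars_stream(timestamped_atoms):
    """timestamped_atoms: iterable of (atom, t) pairs.
    Returns (timeline, v) with v(t') = set of atoms stamped t'."""
    groups = defaultdict(set)
    for atom, t in timestamped_atoms:
        groups[t].add(atom)
    t_min, t_max = min(groups), max(groups)
    timeline = range(t_min, t_max + 1)
    v = {t: frozenset(groups.get(t, ())) for t in timeline}
    return timeline, v
```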
Let $res(q, t)$ denote the set of all answers to $q$ at time $t$, and let $res(Q, t) = \bigcup_{q \in Q} res(q, t)$. The following theorem shows that the translation $\Delta$ faithfully captures CQL.
**Theorem 3** Let $Q$ be a set of CQL queries to be evaluated on input streams $\mathcal{S} = S_1, \ldots, S_n$ and a background data table $B$, let $P = \Delta(Q)$, and let $t$ be a time point. Then:
(a) There exists an answer stream $I = (T, v)$ of $P$ for $\mathit{str}(\mathcal{S})$ at $t$ such that $v(t)|_{Q} = res(Q, t)$.
(b) If $I = (T, v)$ is an answer stream of $P$ for $\mathit{str}(\mathcal{S})$ at $t$, then $res(Q, t) = v(t)|_{Q}$.
Intuitively, (b) establishes the soundness and (a) the completeness of the translation $\Delta$.
**Proof (Sketch).** Given a set of CQL queries $Q$ and its translated LARS program $\Delta(Q)$, we establish the correspondence between their answers by a translation from $Q$ to a Datalog program $\Delta_D(Q)$. Briefly, $\Delta_D(Q)$ is constructed in a similar way as $\Delta(Q)$, except that the first step translates the input streams (with windows) to plain table names instead of LARS formulas. Formally speaking, this is done by a renaming function $\text{ren}$ instead of $\Delta_{\text{SRC}}$. Then, we apply $\Delta_\rho$ and $\Delta_\delta$ to get $\Delta_D(Q) = \bigcup_{q \in Q} \Delta_\delta(\Delta_\rho(\text{ren}(q)))$.
Note that both $\Delta(Q)$ and $\Delta_D(Q)$ are acyclic programs, thus each of them has a unique minimal model.
By the correctness of $\Delta_\rho$ and $\Delta_\delta$, the unique answer set of $\Delta_D(Q)$ and the result set of $Q$ correspond. Moreover, one can guarantee that $\Delta(Q)$ and $\Delta_D(Q)$ are evaluated essentially on the same input (despite slightly different representation) when computing answers for the same reference time point. As moreover the programs are structurally the same, they correspond on their unique answer set/answer stream.
The two above observations yield the desired correspondence result between the results of \( Q \) and the answer stream of the respective LARS program \( \Delta(Q) \).
\( \square \)
**Relation to ETALIS**
Related to stream processing is *complex event processing* (CEP), which is concerned with describing and detecting complex events (high-level information) based on atomic events (low-level information) of a stream. Complex events are typically expressed over time intervals. By briefly studying the well-known CEP language ETALIS (Anicic et al. 2010), we will draw a line between stream reasoning and complex event processing by means of our formalization.
In ETALIS, an event stream \( e \) maps atomic events (ground atoms) to time points. Instead of non-negative rational numbers, we use natural numbers, which suffice for practical purposes. *Complex events* can be constructed by rules on *event patterns*, which are similar to interval relations by Allen (1983). An interpretation \( \mathcal{I} \) maps atoms to sets of pairs \((t_1, t_2) \in \mathbb{N} \times \mathbb{N} \), which represent intervals \([t_1, t_2] \). Intuitively, \( \mathcal{I} \) satisfies a rule \( a \leftarrow pt \), if the atom \( a \) holds at least in the set of intervals where the event pattern \( pt \) holds. For an event stream \( e \) and a rule base \( \mathcal{R} \), Anicic et al. define ETALIS semantics in terms of minimal models that (i) map each atomic event \( a \) to the interval \((t, t)\) if \( a \) occurs in \( e \) at time point \( t \), and (ii) satisfy each rule \( r \in \mathcal{R} \).
**Intervals in LARS.** Although LARS is based on time points, we can express ETALIS patterns that are based on intervals. Consider a window function \( w_{int} \) that selects the substream of (the greatest timeline within) a given interval \([\ell, u]\), and let \( \boxplus_{[\ell,u]} \) be the window operator that employs \( w_{int} \) on the input stream. Furthermore, let \( \boxdot_{[\ell,u]} \) denote the combined operator that first creates a view on the entire input stream, jumps to reference time \( u \), then selects the substream of the timeline \([\ell, u]\) via \( \boxplus_{[\ell,u]} \) and applies \( \Box \). Then, the formula \( \boxdot_{[\ell,u]}\, a \) holds iff \( a \) holds at every time point in the interval \([\ell, u]\), regardless of the query time. Similarly, we can define \( \hat{\boxdot}_{[\ell,u]}\, a \) such that it holds iff \([\ell, u]\) is a maximal interval in which \( a \) always holds.
**Example 10** Consider the events \( x \) and \( y \) which hold in the intervals given by the pairs \((t_1, t_2)\) and \((t_3, t_4)\), respectively, where \( t_2 < t_3 \). Then, the ETALIS rule \( z \leftarrow x\ \text{SEQ}\ y \) assigns the pair \((t_1, t_4)\) to \( z \). It may be modelled in LARS by the rule \( \boxdot_{[t_1,t_4]}\, z \leftarrow \hat{\boxdot}_{[t_1,t_2]}\, x \wedge \hat{\boxdot}_{[t_3,t_4]}\, y \): if \( x \) holds throughout (exactly) the interval \([t_1, t_2]\) and \( y \) holds throughout the later interval \([t_3, t_4]\), then \( z \) must hold throughout \([t_1, t_4]\).
However, we cannot fully express the ETALIS semantics in LARS by this straightforward encoding, since ETALIS allows atoms to be assigned to multiple intervals that need not be disjoint. In LARS, we assign atoms to a single timeline by the evaluation \( v : T \rightarrow 2^A \). Unless we explicitly use time points in atoms, we can encode intervals only by assigning atoms to consecutive time points. Overlapping or adjacent intervals for the same atom are indistinguishable from a merged view of them.
We call \( \mathcal{I} \) separable, if such overlaps do not occur. If the minimal model of an event stream \( e \) and a rule base \( \mathcal{R} \) is separable, we also call the pair \( e, \mathcal{R} \) separable. In this case, the approach of Ex. 10 allows us to capture ETALIS. In our subsequent results we confine to positive rule bases, i.e., without the NOT pattern. Our notion of minimality is based on set inclusion, whereas ETALIS defines minimality in terms of a different preference relation that ensures the minimal length and the supportedness of inferred intervals. By this, the minimal model is always unique, while a natural translation of negation in LARS would give multiple models in general. Capturing ETALIS’ minimal model semantics of NOT patterns in LARS would require a more involved and less direct translation (which is beyond the scope of this work).
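Separability is easy to test when a candidate model is given explicitly. A minimal sketch (a hypothetical helper, assuming closed intervals over the natural numbers) sorts each atom's intervals and rejects overlaps as well as adjacency, since both would be merged by the point-wise encoding:

```python
def is_separable(interpretation):
    """interpretation: dict mapping each atom to an iterable of closed
    intervals (t1, t2) with t1 <= t2."""
    for atom, intervals in interpretation.items():
        ivs = sorted(intervals)
        for (_, b1), (a2, _) in zip(ivs, ivs[1:]):
            if a2 <= b1 + 1:   # overlap (a2 <= b1) or adjacency (a2 == b1+1)
                return False
    return True
```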
**Theorem 4** Let \( e \) be an event stream and let \( \mathcal{R} \) be a positive rule base (i.e., without negation) such that \( e, \mathcal{R} \) is separable, and let \( \mathcal{I} \) be an interpretation for \( e, \mathcal{R} \). Then one can construct a LARS data stream \( D_e \), a program \( P_{\mathcal{R}} \), and an interpretation stream \( I_{\mathcal{I}} = (T, v) \) for \( D_e \), such that for each \( t \in T \): \( \mathcal{I} \) is the minimal model of \( e, \mathcal{R} \) iff \( I_{\mathcal{I}} \) is the answer stream of \( P_{\mathcal{R}} \) for \( D_e \) at time \( t \) relative to \( W = \{w_{int}\} \) and \( B = \emptyset \).
Taking LARS and ETALIS as reference languages, separability can thus be regarded as the dividing line between stream reasoning and complex event processing. Under a stream reasoning view on ETALIS, focusing on truth at single time points, we get correspondence.
**Corollary 1** Let \( e \) be an event stream and \( \mathcal{R} \) a positive rule base such that the minimal model \( \mathcal{I} \) of \( e, \mathcal{R} \) is separable, and let \( I_{\mathcal{I}} = (T, v) \) be the corresponding answer stream of \( P_{\mathcal{R}} \). Then, for all atoms \( a \in A \) and all \( t \in T \), \( a \in v(t) \) iff there exists an interval \((t_1, t_2) \in \mathcal{I}(a)\) such that \( t \in [t_1, t_2] \).
In summary, we have shown how LARS operators can be naturally used to reason over time intervals. However, the presented intuitive approach is less expressive than ETALIS, where an atom can be assigned to overlapping intervals. On the other hand, the minimal model of the monotonic ETALIS semantics can be constructed by computing fixed-points for intervals of increasing size. Hence, with an explicit encoding of intervals \([t_1, t_2]\) into atoms that contain \( t_1 \) and \( t_2 \) as terms, one can mimic the bottom-up evaluation of such models with ASP and thus also with LARS. It is a research topic on its own to find a suitable extension of LARS for nonmonotonic complex event processing that builds upon an evaluation \( v : T \times T \rightarrow 2^A \) mapping intervals to atoms.
**Related Work**
In the Semantic Web area, the SPARQL language was extended to queries on streams of RDF triples; respective engines such as CQELS (Phuoc et al. 2011), C-SPARQL (Barbieri et al. 2010), and SPARQLStream (Calbimonte, Corcho, and Gray 2010) follow the snapshot semantics approach of CQL. However, they face difficulties with extensions incorporating the Closed World Assumption, nonmonotonicity, or nondeterminism. Such features are important to deal with missing or incomplete data, which can, e.g., temporarily happen due to unstable network connections or hardware failure. In this case, these engines remain idle, while some output based on default reasoning might be useful.
In KR&R, first attempts towards expressive stream reasoning have been recently carried out and reveal many open problems. The plain approach of Do, Loke, and Liu (2011) periodically calls the dlvhex solver (Eiter et al. 2006) without incremental reasoning and thus cannot handle heavy data load. StreamLog (Zaniolo 2012) extends Datalog towards stream reasoning, based on stratification (which guarantees a single model), while OSMS (Ren and Pan 2011) considers streams of ontologies. Both StreamLog and OSMS have no window mechanisms. Time-decaying logic programs (Gebser et al. 2012) aim to implement time-based windows in reactive ASP (Gebser et al. 2008), whose relation to other stream processing/reasoning approaches is unexplored.
Moreover, as observed by Dindar et al. (2013), conceptually identical queries may produce different results on different engines. This may be due to differences (i.e., flaws) in implementations, but might also arise from (correct implementations of) different semantics. Comparisons between different approaches are confined to experimental analysis (Phuoc et al. 2012) or informal examination on specific examples. For the user it is important to know the exact capabilities and semantic behaviors of given approaches for systematic analysis and comparison.
**Conclusion**
We presented LARS, an expressive rule-based modelling language with an idealized, model-based, nonmonotonic semantics, which serves to formalize and analyze stream reasoning semantics. For practical concerns, tractable and efficient fragments of LARS are of interest; related to this are operational characterizations of its semantics. Later, along the lines of Brewka et al. (2014), we aim at a formalism for stream reasoning in distributed settings across heterogeneous nodes that have potentially different logical capabilities.
Fast In-Memory XPath Search using Compressed Indexes
Diego Arroyuelo, Francisco Claude, Sebastian Maneth, Veli Mäkinen, Gonzalo Navarro, Kim Nguyễn, Jouni Sirén, Niko Välimäki
Abstract—A large fraction of an XML document typically consists of text data. The XPath query language allows text search via the equal, contains, and starts-with predicates. Such predicates can be efficiently implemented using a compressed self-index of the document’s text nodes. Most queries, however, contain some parts querying the text of the document, plus some parts querying the tree structure. It is therefore a challenge to choose an appropriate evaluation order for a given query, which optimally leverages the execution speeds of the text and tree indexes. Here the SXSI system is introduced. It stores the tree structure of an XML document using a bit array of opening and closing brackets plus a sequence of labels, and stores the text nodes of the document using a global compressed self-index. On top of these indexes sits an XPath query engine that is based on tree automata. The engine uses fast counting queries of the text index in order to dynamically determine whether to evaluate top-down or bottom-up with respect to the tree structure. The resulting system has several advantages over existing systems: (1) on pure tree queries (without text search) such as the XPathMark queries, the SXSI system performs on par or better than the fastest known systems MonetDB and Qizx, (2) on queries that use text search, SXSI outperforms the existing systems by 1–3 orders of magnitude (depending on the size of the result set), and (3) with respect to memory consumption, SXSI outperforms all other systems for counting-only queries.
I. INTRODUCTION
As more and more data is stored, transmitted, queried, and manipulated in XML form, the popularity of XPath and XQuery as languages for querying semi-structured data grows quickly. Evaluating those queries efficiently has proved to be quite challenging, and has triggered much research. Today there is a wealth of public and commercial XPath/XQuery engines, apart from several theoretical proposals.
In this paper we focus on XPath, which is simpler and forms the basis of XQuery. XPath query engines can be roughly divided into two categories: sequential and indexed. In the former, which follows a streaming approach, no preprocessing of the XML data is necessary. Each query must sequentially read the whole collection, and the goal is to be as close as possible to making just one pass over the data, while using as little main memory as possible to hold intermediate results and data structures. Instead, the indexed approach preprocesses the XML collection to build a data structure on it, so that later queries can be solved without traversing the whole collection. A serious challenge of the indexed approach is that the index can use much more space than the original data, and thus may have to be manipulated on disk. There are two approaches for dealing with this problem: (1) to load the index only partially (by using clever clustering techniques), or (2) to use less powerful indexes which require less space.
Examples of systems using these approaches are Qizx/DB [1], MonetDB/XQuery [2] and Tauro [3].
In this work we aim at an index for XML that uses little space compared to the size of the data, so that the indexed collection can fit in main memory for moderate-sized data, thereby solving XPath queries without any need of resorting to disk. An in-memory index should outperform streaming approaches, even when the data fits in RAM. Note that usually, main memory XML query systems (such as Saxon [4], Galax [5], Qizx/Open [1], etc.) use machine pointers to represent XML data. We observed that on various well-established DOM implementations, this representation blows up memory consumption to about 5–10 times the size of the original XML document.
An XML collection can be regarded essentially as a text collection (that is, a set of strings) organized into a tree structure, so that the strings correspond to the text data and the tree structure corresponds to the nesting of tags. The problem of manipulating text collections within compressed space is now well understood [6]–[8], and also much work has been carried out on compact data structures for trees (see, e.g., [9] and references therein). In this paper we show how both types of compact data structures can be integrated into a compressed index representation for XML data, which is able to efficiently solve XPath queries.
A feature inherited from its components is that the compressed index replaces the XML collection, in the sense that the data (or any part of it) can be efficiently reproduced from the index (and thus the data itself can be discarded). The result is called a self-index, as the data is inextricably tied to its index. A self-index for XML data was recently proposed [10], [11], yet its support for XPath is reduced to a very limited class of queries that are handled particularly well.
The main value of our work is to provide the first practical and public tool for compressed indexing of XML data, dubbed Succinct XML Self-Index (SXSI), which takes little space, solves a significant portion of XPath (currently we support at least Core XPath [12], i.e., all navigational axes, plus the three text predicates = (equality), contains, and starts-with), and largely outperforms the best public softwares supporting XPath we are aware of, namely MonetDB and Qizx. The main challenges in achieving our results have been to obtain practical implementations of compact data structures (for texts, trees, and others) that are at a theoretical stage, to develop new compact schemes tailored to this particular problem, and to develop query processing strategies tuned for the specific cost model that emerges from the use of these compact data structures. The limitations of our scheme are that it is in-memory (this is a basic design decision, actually), that it is static (i.e., the index must be rebuilt when the XML data changes), and that it does not handle XQuery. The last two limitations are subject of future work.
II. BASIC CONCEPTS AND MODEL
We regard an XML collection as (i) a set of strings and (ii) a labeled tree. The latter is the natural XML parse tree defined by the hierarchical tags, where the (normalized) tag name labels the corresponding node. We add a dummy root so that we have a tree instead of a forest. Moreover, each text node is represented as a leaf labeled \( \$ \). Attributes are handled as follows in this model. Each node with attributes is added a single child labeled \( @ \), and for each attribute \( @\text{attr}=\text{value} \) of the node, we add a child labeled \( \text{attr} \) to its \( @ \)-node, and a leaf child labeled \( \% \) to the \( \text{attr} \)-node. The text content value is then associated to that leaf. Therefore, there is exactly one string content associated to each tree leaf. We will refer to those strings as texts.
Let us call \( T \) the set of all the texts and \( u \) its total length measured in symbols, \( n \) the total number of tree nodes, \( \Sigma \) the alphabet of the strings and \( \sigma = |\Sigma| \), \( t \) the total number of different tag and attribute names, and \( d \) the number of texts (or tree leaves). These receive text identifiers which are consecutive numbers assigned in a left-to-right parsing of the data. In our implementation \( \Sigma \) is simply the set of byte values 1 to 255, and 0 will act as a special terminator called \( \$ \). This symbol occurs exactly once at the end of each text in \( T \). We can easily support multi-byte encodings such as Unicode.
To connect tree nodes and texts, we define global identifiers, which give unique numbers to both internal and leaf nodes, in depth-first preorder. Fig. 1 shows a toy collection (top left) and our model of it (top right), as well as its representation using our data structures (bottom), which serves as a running example for the rest of the paper. In the model, the tree is formed by the solid edges, whereas dotted edges display the connection with the set of texts. We created a dummy root labeled \( \$ \), as well as dummy internal nodes \( @ \), \( \% \), and \( \# \). Note how the attributes are handled. There are 6 texts, which are associated to the tree leaves and receive consecutive text numbers (marked in italics at their right). Global identifiers are associated to each node and leaf (drawn at their left). The conversion between tag names and symbols, drawn within the bottom-left component, is used to translate queries and to recreate the XML data, and will not be further mentioned.
Some notation and measures of compressibility follow, preceding a rough description of our space complexities. Logarithms will be in base 2. The empirical \( k \)-th order entropy [13] of a sequence \( S \) over an alphabet of size \( \sigma \), \( H_k(S) \leq \log \sigma \), is a lower bound to the output size per symbol of any \( k \)-th order compressor applied to \( S \). We will build on self-indexes capable of handling text collections \( T \) of total length \( u \) within \( uH_k(T) + o(u \log \sigma) \) bits [6], [8], [14]. On the other hand, representing an unlabeled tree of \( n \) nodes requires \( 2n - O(\log n) \) bits, and several representations using \( 2n + o(n) \) bits support many tree query and navigation operations in constant time (e.g., [9]). The labels require in principle another \( n \log t \) bits. Sequences \( S \) can be stored within \( |S| \log \sigma (1 + o(1)) \) bits (and even \( |S| H_0(S) + o(|S| \log \sigma) \) bits), so that any element \( S[i] \) can be accessed, and they can also answer queries \( \text{rank}_c(S, i) \) (the number of \( c \)'s in \( S[1, i] \)) and \( \text{select}_c(S, j) \) (the position of the \( j \)-th \( c \) in \( S \)) efficiently [14]–[16]. These are essential building blocks for more complex functionalities, as seen later.
The final space requirement of our index will include:
1. \( uH_k(T) + o(u \log \sigma) \) bits for representing the text collection \( T \) in self-indexed form. This supports the string searches of XPath and can (slowly) reproduce any text.
2. \( 2n + o(n) \) bits for representing the tree structure. This supports many navigational operations in constant time.
3. \( d \log d + o(d \log d) \) bits for the string-to-text mapping, e.g., to determine to which text a string position belongs, or restricting string searches to some texts.
4. Optionally, \( u \log \sigma \) or \( uH_k(T) + o(u \log \sigma) \) bits, plus \( O(d \log \frac{u}{d}) \) bits, to achieve faster text extraction than in 1.
5. \( 4n \log t + O(n) \) bits to represent the tags in a way that they support very fast XPath searches.
6. \( 2n + o(n) \) bits for mapping between tree nodes and texts.
As a practical yardstick: without the extra storage of texts (item 4) the memory consumption of our system is about the size of the original XML file (and, being a self-index, includes it!), and with the extra store the memory consumption is between 1 and 2 times the size of the original XML file.
In Section III we describe our representation of the set of strings, including how to obtain text identifiers from text positions. This explains items 1, 3, and 4 above. Section IV describes our representation for the tree and the labels, and the way the correspondence between tree nodes and text identifiers works. This explains items 2, 5, and 6. Section V describes how we process XPath queries on top of these compact data
structures. In Section VI we empirically compare our SXSI engine with the most relevant public engines we are aware of.
III. TEXT REPRESENTATION
Text data is represented as a succinct full-text self-index [6] that is generally known as the FM-index [17]. The index supports efficient pattern matching that can be easily extended to support different XPath predicates.
A. FM-Index and Backward Searching
Given a string $T$ of total length $u$, from an alphabet of size $\sigma$, the alphabet-friendly FM-index [14] requires $uH_k(T) + o(u \log \sigma)$ bits of space. The index supports counting the number of occurrences of a pattern $P$ in $O(|P| \log \sigma)$ time. Locating the occurrences takes $O(\log^{1+\epsilon} u)$ time per answer, for any constant $\epsilon > 0$.
The FM-index is based on the Burrows–Wheeler transform (BWT) of string $T$ [18]. Assume $T$ ends with the special end-marker $\$$. Let $\mathcal{M}$ be a matrix whose rows are all the cyclic rotations of $T$ in lexicographic order. The last column $L$ of $\mathcal{M}$ forms a permutation of $T$ which is the BWT string $L = T^{\text{bwt}}$. The matrix is only conceptual: the FM-index uses only the $T^{\text{bwt}}$ string. See Fig. 1 (bottom right). Note $L[i]$ is the symbol preceding the $i$-th lexicographically smallest row of $\mathcal{M}$.
The resulting permutation is reversible. The first column of $\mathcal{M}$, denoted $F$, contains all symbols of $T$ in lexicographic order. There exists a simple last-to-first mapping from symbols in $L$ to $F$ [17]: Let $C[e]$ be the total number of symbols in $T$ that are lexicographically less than $e$. Now the LF-mapping can be defined as $LF(i) = C[L[i]] + \text{rank}_{L[i]}(L, i)$. The symbols of $T$ can be read in reverse order by starting from the end-marker location $i$ and applying $LF(i)$ recursively: we get $T^{\text{bwt}}[i], T^{\text{bwt}}[LF(i)], T^{\text{bwt}}[LF(LF(i))]$ etc. and finally, after $u$ steps, get the first symbol of $T$. The values $C[e]$ can be stored in a small array of $\sigma \log u$ bits. Function $\text{rank}_e(L, i)$ can be computed in $O(\log \sigma)$ time with a wavelet tree data structure requiring only $uH_k(T) + o(u \log \sigma)$ bits [14], [15].
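To make the LF-mapping concrete, here is a minimal Python sketch (plain lists and a naive rank instead of the compressed structures) that rebuilds $T$ from $L = T^{\text{bwt}}$; since the end-marker is the smallest symbol, row 0 of $\mathcal{M}$ is the rotation starting with \$:

```python
def inverse_bwt(L):
    """Reconstruct T (terminated by '$') from its BWT string L."""
    C = {c: sum(1 for x in L if x < c) for c in set(L)}  # C[c] as in the text
    counts, occ = {}, []          # occ[i] = rank of L[i] among equal symbols
    for c in L:
        counts[c] = counts.get(c, 0) + 1
        occ.append(counts[c])
    out, i = [], 0                # row 0 starts with the end-marker
    for _ in range(len(L) - 1):
        out.append(L[i])
        i = C[L[i]] + occ[i] - 1  # LF(i), 0-based
    return ''.join(reversed(out)) + '$'
```

For instance, `inverse_bwt("annb$aa")` yields `"banana$"`.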
Pattern matching is supported via backward searching on the BWT [17]. Given a pattern $P[1, m]$, the backward search starts with the range $[sp, ep] = [1, u]$ of rows in $\mathcal{M}$. At each step $i \in \{m, m-1, \ldots, 1\}$ of the backward search, the range $[sp, ep]$ is updated to match all rows of $\mathcal{M}$ that have $P[i]$ as a prefix. New range $[sp', ep']$ is given by $sp' = C[P[i]] + \text{rank}_{P[i]}(L, sp-1) + 1$ and $ep' = C[P[i]] + \text{rank}_{P[i]}(L, ep)$. Each step takes $O(\log \sigma)$ time [14], and finally $ep - sp + 1$ gives the number of times $P$ occurs in $T$.
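The backward search itself is equally short (a sketch with a naive $O(i)$ rank; the wavelet tree makes each step $O(\log \sigma)$). It returns the number of occurrences of $P$ and the row range $[sp, ep]$, using 1-based rows as in the text:

```python
def backward_search(P, L, C):
    """Count occurrences of P in T, given L = BWT(T) and the C array."""
    def rank(c, i):               # occurrences of c in L[1..i]
        return L[:i].count(c)
    sp, ep = 1, len(L)
    for c in reversed(P):
        sp = C[c] + rank(c, sp - 1) + 1
        ep = C[c] + rank(c, ep)
        if sp > ep:
            return 0, None        # pattern does not occur
    return ep - sp + 1, (sp, ep)
```

Continuing the example, `backward_search("ana", "annb$aa", C)` with the $C$ array built as above returns `(2, (3, 4))`, matching the two (overlapping) occurrences of "ana" in "banana".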
To find out the location of each occurrence, the text is traversed backwards from each $sp \leq i \leq ep$ (virtually, using $T^{\text{bwt}}$) until a sampled position is found. This sampling is carried out at regular text positions, so that the corresponding positions in $T^{\text{bwt}}$ are marked in a bitmap $B_s[1, u]$, and the text position corresponding to $T^{\text{bwt}}[i]$, if $B_s[i] = 1$, is stored in a samples array $P_s[\text{rank}_1(B_s, i)]$. If every $l$-th position of $T$ is sampled, the extra space is $O((u/l) \log u)$ bits (including the compressed $B_s$ [19]) and the locating takes $O(l \log \sigma)$ time per occurrence. Using $l = \Theta(\log^{1+\epsilon} u / \log \sigma)$ yields $o(u \log \sigma)$ extra space and $O(\log^{1+\epsilon} u)$ locating time.
B. Text Collection and Queries
The textual content of the XML data is stored as \$-terminated strings so that each text corresponds to one string. Let $T$ be the concatenated sequence of the $d$ texts. The sampling is extended to include all text beginning positions, and to record the text number of each sample. This generates a valid $T^{bwt}$ of all the texts and makes it easy to extract the $i$-th text starting from its \$-terminator. The type of wavelet tree actually used is a Huffman-shaped one using uncompressed bitmaps inside [20].
Now $T^{bwt}$ contains all end-markers in some permuted order. This permutation is represented with a data structure $Doc$ that maps from positions of \$s in $T^{bwt}$ to text numbers, and also allows two-dimensional range searching [21] (see Fig. 1, bottom right). Thus the text corresponding to a terminator $T^{bwt}[i] = \$$ is $Doc[\text{rank}_{\$}(T^{bwt}, i)]$. Furthermore, given a range $[sp, ep]$ of $T^{bwt}$ and a range of text identifiers $[x, y]$, $Doc$ can be used to output the identifiers of all \$-terminators within the $[sp, ep] \times [x, y]$ range in $O(\log d)$ time per answer. In practice, because we only use the simpler functionality in the current implementation, $Doc$ is implemented as a plain array using $d \log d$ bits.
The basic pattern matching feature of the FM-index can be extended to support XPath functions such as starts-with, ends-with, contains, and the operators $=, \leq, <, >, \geq$ for lexicographic ordering. Given a pattern and a range of text identifiers to be searched, these functions return all text identifiers that match the query within the range. In addition, existential (is there a match in the range?) and counting (how many matches in the range?) queries are supported. Time complexities are $O(|P| \log \sigma)$ for the search phase, plus extra time for reporting:
1) starts-with($P, [x, y]$): The goal is to find texts in $[x, y]$ range prefixed by the given pattern $P$. After the normal backward search, the range $[sp, ep]$ in $T^{bwt}$ contains the end-markers of all the texts prefixed by $P$. Now $[sp, ep] \times [x, y]$ can be mapped to $Doc$, and existential and counting queries can be answered in $O(\log d)$ time. Matching text identifiers can be reported in $O(\log d)$ time per identifier.
2) ends-with($P, [x, y]$): Backward searching is localized to texts $[x, y]$ by choosing $[sp, ep] = [x, y]$ as the starting interval. After the backward search, the resulting range $[sp, ep]$ contains all possible matches; thus, existential and counting queries can be answered in constant time. To find out the text identifier of each occurrence, the text must be traversed backwards to find a sampled position. The cost is $O(l \log \sigma)$ per answer.
3) operator = ($P, [x, y]$): texts that are equal to $P$, and in range, can be found as follows. Do the backward search as in ends-with, then map to the $\$-terminators in starts-with. Time complexities are same as in starts-with.
4) contains($P, [x, y]$): To find texts that contain $P$, we start with the normal backward search and finish like in ends-with.
In this case there might be several occurrences inside one text, which have to be filtered. Thus, the time complexity is proportional to the total number of occurrences, $O(l \log \sigma)$ for each. Existential and counting queries are as slow as reporting queries, but the $O(|P| \log \sigma)$-time counting of all the occurrences of $P$ can still be useful for query optimization.
5) operators $\leq, <, >, \geq$: The operator $\leq$ matches texts that are lexicographically smaller than or equal to the given pattern. It can be solved like the starts-with query, but updating only the $ep$ of each backward search step, while $sp = 1$ stays constant. If at some point there are no occurrences of $P[i] = c$ within the prefix $L[1, ep]$, we find those of smaller symbols, $ep = C[c]$, and continue for $P[1, i − 1]$. Other operators can be supported analogously, and costs are as for starts-with.
The new XPath extension, XPath Full Text 1.0 [22], suggests a wider selection of functionality for text searching. Implementation of these extensions requires regular expression and approximate searching functionalities, which can be supported within our index using the general backtracking framework [23]: The idea is to alter the backward search to branch recursively to different ranges $[sp', ep']$ representing the suffixes of the text prefixes (i.e. substrings). This is done by computing $sp'_c = C[c] + \text{rank}_c(L, sp − 1) + 1$ and $ep'_c = C[c] + \text{rank}_c(L, ep)$ for all $c \in \Sigma$ at each step and recursing on each $[sp'_c, ep'_c]$. Then the pattern (or regular expression) can be compared with all substrings of the texts, allowing to search for approximate occurrences [23]. The running time becomes exponential in the number of errors allowed, but different branch-and-bound techniques can be used to obtain practical running times [24], [25]. We omit further details, as these extensions are out of the scope of this paper.
C. Construction and Text Extraction
The FM-index can be built by adapting any BWT construction algorithm. Linear time algorithms exist for the task, but their practical bottleneck is the peak memory consumption. Although there exist general time- and space-efficient construction algorithms, it turned out that our special case of text collection admits a tailored incremental BWT construction algorithm [26] (see the references and experimental comparison therein for previous work on BWT construction): The text collection is split into several smaller collections, and a temporary index is built for each of them separately. The temporary indexes are then merged, and finally converted into a static FM-index.
The BWT allows extracting the $i$-th text by successively applying $LF$ from the position of its \$-terminator, at $O(\log \sigma)$ cost per extracted symbol. To enable faster text extraction, we allow storing the texts in plain format in $u \log \sigma$ bits, or in an enhanced LZ78-compressed format (derived from the LZ-index [27]) using $uH_k(T) + o(u \log \sigma)$ bits. These secondary text representations are coupled with a delta-encoded bit vector storing the starting position of each text in $T$. This bitmap requires $O(d \log \frac{u}{d})$ more bits.
IV. TREE REPRESENTATION
A. Data Representation
The tree structure of an XML collection is represented by the following compact data structures, which provide navigation and indexed access to it. See Fig. 1 (bottom left).
1) Par: The balanced parentheses representation [28] of the tree structure. This is obtained by traversing the tree in depth-first-search (DFS) order, writing a "(" whenever we arrive at a node, and a ")" when we leave it (thus it is
easily produced during the XML parsing). In this way, every node is represented by a pair of matching opening and closing parentheses. A tree node will be identified by the position of its opening parenthesis in Par (that is, a node will be just an integer index within Par). In particular, we will use the balanced parentheses implementation of Sadakane [9], which supports a very complete set of operations, including finding the i-th child of a node, in constant time. Overall Par uses \(2n + o(n)\) bits. This includes the space needed for constant-time rank and select on Par, which are very efficient in practice.
2) Tag: A sequence of the tag identifiers of each tree node, including an opening and a closing version of each tag, to mark the beginning and ending point of each node. These tags are numbers in \([1, 2t]\) and are aligned with Par so that the tag of node \(i\) is simply \(\text{Tag}[i]\).
We will also need \(\text{rank}\) and \(\text{select}\) queries on \(\text{Tag}\). Several sequence representations supporting these are known [20]. Given that \(\text{Tag}\) is not too critical in the overall space, but it is in time, we opt for a practical representation that favors speed over space. First, we store the tags in an array using \([\log 2t]\) bits per field, which gives constant time access to \(\text{Tag}[i]\). The rank and select queries over the sequence of tags are answered by a second structure. Consider the binary matrix \(R[1..2t][1..2n]\) such that \(R[i, j] = 1\) if \(\text{Tag}[j] = i\). We represent each row of the matrix using Okanohara and Sadakane’s structure \(\text{sarray}\) [29]. Its space requirement for each row \(i\) is \(k_i \log 2n + k_i(2 + o(1))\) bits, where \(k_i\) is the number of times symbol \(i\) appears in \(\text{Tag}\). The total space of both structures adds up to \(2n \log(2t) + 2nH_0(\text{Tag}) + n(2 + o(1)) \leq 4n \log t + O(n)\) bits. They support access and \(\text{select}\) in \(O(1)\) time, and \(\text{rank}\) in \(O(\log n)\) time.
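A simple Python stand-in for this row-wise layout keeps one sorted position list per tag (same $O(1)$ access and select and logarithmic rank, though without the succinct space bounds):

```python
from bisect import bisect_right

class TagIndex:
    def __init__(self, tags):              # tags: sequence of tag identifiers
        self.tags = list(tags)
        self.pos = {}                      # tag -> sorted positions
        for i, tg in enumerate(self.tags):
            self.pos.setdefault(tg, []).append(i)
    def access(self, i):                   # Tag[i]
        return self.tags[i]
    def rank(self, tag, i):                # occurrences of tag in Tag[0..i]
        return bisect_right(self.pos.get(tag, []), i)
    def select(self, tag, j):              # position of the j-th occurrence
        return self.pos[tag][j - 1]        # j is 1-based
```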
B. Tree Navigation
We define the following operations over the tree structure, which will be useful to support XPath queries over the tree. Most of these operations are supported in constant time, except when a \(\text{rank}\) over \(\text{Par}\) is involved. Let \(\text{tag}\) be a tag identifier.
1) Basic Tree Operations: These are directly inherited from Sadakane’s implementation [9]. We mention only the most important ones for this paper (a naive sketch follows the list); \(x\) is a node (a position in \(\text{Par}\)).
- \(\text{Close}(x)\): The closing parenthesis matching \(\text{Par}[x]\). If the subtree of \(x\) is small, this takes a few local accesses to \(\text{Par}\); otherwise, a few non-local table accesses.
- \(\text{Preorder}(x) = \text{rank}_{(}(\text{Par}, x)\): Preorder number of \(x\).
- \(\text{SubtreeSize}(x) = (\text{Close}(x) - x + 1) / 2\): Number of nodes in the subtree rooted at \(x\).
- \(\text{IsAncestor}(x, y) = x \leq y \leq \text{Close}(x)\): Whether \(x\) is an ancestor of \(y\).
- \(\text{FirstChild}(x) = x + 1\): First child of \(x\), if any.
- \(\text{NextSibling}(x) = \text{Close}(x) + 1\): Next sibling of \(x\), if any.
- \(\text{Parent}(x)\): Parent of \(x\). Somewhat costlier than Close\((x)\) in practice, because the answer is less likely to be near \(x\) in \(\text{Par}\).
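A naive Python rendering of these operations over the parentheses string may help fix the definitions (Close and Parent are linear scans here; the succinct implementation achieves constant time):

```python
class BPTree:
    def __init__(self, par):               # par: string over '(' and ')'
        self.par = par
    def close(self, x):                    # matching ')' of the '(' at x
        depth = 0
        for i in range(x, len(self.par)):
            depth += 1 if self.par[i] == '(' else -1
            if depth == 0:
                return i
    def preorder(self, x):                 # number of '(' in par[0..x]
        return self.par[:x + 1].count('(')
    def subtree_size(self, x):
        return (self.close(x) - x + 1) // 2
    def is_ancestor(self, x, y):
        return x <= y <= self.close(x)
    def first_child(self, x):              # None if x is a leaf
        return x + 1 if self.par[x + 1] == '(' else None
    def next_sibling(self, x):             # None if x is a last child
        c = self.close(x)
        if c + 1 < len(self.par) and self.par[c + 1] == '(':
            return c + 1
        return None
    def parent(self, x):                   # nearest enclosing '('
        depth = 0
        for i in range(x - 1, -1, -1):
            depth += 1 if self.par[i] == '(' else -1
            if depth == 1:
                return i
        return None
```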
2) Connecting to Tags: The following operations are essential for our fast XPath evaluation (a sketch combining the previous two structures follows the list).
- \(\text{SubtreeTags}(x, \text{tag})\): Returns the number of occurrences of \(\text{tag}\) within the subtree rooted at node \(x\). This is \(\text{rank}_{\text{tag}}(\text{Tag}, \text{Close}(x)) - \text{rank}_{\text{tag}}(\text{Tag}, x - 1)\).
- \(\text{Tag}(x)\): Gives the tag identifier of node \(x\). In our representation this is just \(\text{Tag}[x]\).
- \(\text{TaggedDesc}(x, \text{tag})\): The first node labeled \(\text{tag}\) strictly within the subtree rooted at \(x\). This is \(\text{select}_{\text{tag}}(\text{Tag}, \text{rank}_{\text{tag}}(\text{Tag}, x) + 1)\) if it is \(\leq \text{Close}(x)\), and undefined otherwise.
- \(\text{TaggedPred}(x, \text{tag})\): The last node labeled \(\text{tag}\) with preorder smaller than that of node \(x\), and not an ancestor of \(x\). Let \(r = \text{rank}_{\text{tag}}(\text{Tag}, x - 1)\). If \(\text{select}_{\text{tag}}(\text{Tag}, r)\) is not an ancestor of node \(x\), we stop. Otherwise, we set \(r = r - 1\) and iterate.
- \(\text{TaggedFoll}(x, \text{tag})\): The first node labeled \(\text{tag}\) with preorder larger than that of \(x\), and not in the subtree of \(x\). This is \(\text{select}_{\text{tag}}(\text{Tag}, \text{rank}_{\text{tag}}(\text{Tag}, \text{Close}(x)) + 1)\).
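On top of the two sketches above, the first two operations become direct compositions of rank, select and Close (hypothetical glue functions):

```python
def subtree_tags(bp, ti, x, tag):
    """rank_tag(Tag, Close(x)) - rank_tag(Tag, x - 1)."""
    return ti.rank(tag, bp.close(x)) - ti.rank(tag, x - 1)

def tagged_desc(bp, ti, x, tag):
    """First node labeled tag strictly within the subtree rooted at x."""
    j = ti.rank(tag, x) + 1                # next occurrence after position x
    if j > len(ti.pos.get(tag, [])):
        return None
    y = ti.select(tag, j)
    return y if y <= bp.close(x) else None
```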
3) Connecting the Text and the Tree: Conversion between text numbers, tree nodes, and global identifiers, is easily carried out by using \(\text{Par}\) and a bitmap \(B\) of \(2n\) bits that marks the opening parentheses of tree leaves containing text, plus \(o(n)\) extra bits to support rank/select queries. Bitmap \(B\) enables the computation of the following operations:
- \(\text{LeafNumber}(x)\): Gives the number of leaves up to \(x\) in \(\text{Par}\). This is \(\text{rank}_1(B, x)\).
- \(\text{TextIds}(x)\): Gives the range of text identifiers that descend from node \(x\). This is simply \([\text{LeafNumber}(x - 1) + 1, \text{LeafNumber}(\text{Close}(x))]\).
- \(\text{XMLIdText}(d)\): Gives the global identifier for the text with identifier \(d\). This is \(\text{Preorder}(\text{select}_1(B, d))\).
- \(\text{XMLIdNode}(x)\): Gives the global identifier for a tree node \(x\). This is just \(\text{Preorder}(x)\).
C. Displaying Contents
Given a node \(x\), we want to recreate its text (XML) content, that is, return the string. We traverse the structure starting from \(\text{Par}[x]\), retrieving the tag names and the text contents, from the text identifiers. The time is \(O(\log \sigma)\) per text symbol (or \(O(1)\) if we use the redundant text storage described in Section III) and \(O(1)\) per tag.
- \(\text{GetText}(d)\): Generates the text with identifier \(d\).
- \(\text{GetSubtree}(x)\): Generates the subtree at node \(x\).
D. Handling Dynamic Sets
During XPath evaluation we need to handle sets of intermediate results, that is, global identifiers. Due to the mechanics of the evaluation, we need to start from an empty set and later carry out two types of operations:
- Insert a new identifier to the result.
- Remove a range of identifiers (actually, a subtree).
To remove a range faster than by brute force, we use a data structure of \(2n - 1\) bits representing a perfect binary tree over the interval of global identifiers, so that leaves of this binary tree represent individual positions and internal nodes ranges of positions (i.e., the union of their child ranges). A bit mark
at each such internal node can be set to zero to implicitly set all its range to zero. A position is in the set if and only if all of its path from the root to it is not zero. Thus one can easily insert elements in $O(\log n)$ time, and remove ranges within the same time, as any range can be covered with $O(\log n)$ binary tree nodes.
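A sketch of this structure (a hypothetical class name; a lazily cleared perfect binary tree in heap layout): a False internal node means its whole range is implicitly empty, so removing a range only clears the $O(\log n)$ nodes that cover it, and an insertion pushes pending clears down along its path:

```python
class MarkTree:
    def __init__(self, n):
        self.size = 1
        while self.size < n:
            self.size *= 2
        self.mark = [False] * (2 * self.size)  # 1-based heap layout

    def _push(self, v):            # push an implicit clear to the children
        if not self.mark[v]:
            self.mark[2 * v] = self.mark[2 * v + 1] = False

    def insert(self, i):           # O(log n)
        v, lo, hi = 1, 0, self.size - 1
        while v < self.size:
            self._push(v)
            self.mark[v] = True    # subtree of v is now non-empty
            mid = (lo + hi) // 2
            if i <= mid:
                v, hi = 2 * v, mid
            else:
                v, lo = 2 * v + 1, mid + 1
        self.mark[v] = True

    def contains(self, i):
        v, lo, hi = 1, 0, self.size - 1
        while v < self.size:
            if not self.mark[v]:
                return False
            mid = (lo + hi) // 2
            if i <= mid:
                v, hi = 2 * v, mid
            else:
                v, lo = 2 * v + 1, mid + 1
        return self.mark[v]

    def remove_range(self, l, r):  # clear all identifiers in [l, r]
        self._remove(1, 0, self.size - 1, l, r)

    def _remove(self, v, lo, hi, l, r):
        if r < lo or hi < l or not self.mark[v]:
            return
        if l <= lo and hi <= r:
            self.mark[v] = False   # implicitly clears the whole range
            return
        mid = (lo + hi) // 2
        self._remove(2 * v, lo, mid, l, r)
        self._remove(2 * v + 1, mid + 1, hi, l, r)
        self.mark[v] = self.mark[2 * v] or self.mark[2 * v + 1]
```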
V. XPath Queries
The aim is to support a practical subset of XPath, while being able to guarantee efficient evaluation based on the data structures described before. As a first shot, we target the “Core XPath” subset [12] of XPath 1.0. It supports all 12 navigational axes, all node tests, and filters with Boolean operations (and, or, not). In our prototype implementation, all axes have been implemented, but only part of the forward fragment (consisting of child and descendant) has been fully optimized. We therefore focus here only on these two axes. A node test (non-terminal NodeTest below) is either the wildcard (*), a tag name, or a node type test, i.e., one of text() or node().
A data value is the value of an attribute or the content of a text node. Here, all data values are considered as strings. If an XPath expression selects only data values, i.e., its final location step is the attribute-axis or a text() test, then we call it a value expression. Our XPath fragment (“Core”) consists of Core XPath plus the following data value comparisons, which may appear inside filters (that is, may be generated by the nonterminal Pred above). Let $w$ be a string and $p$ a value expression; if $p$ equals . (dot) or self and the XPath expression to the left of the filter is a value expression, then $p$ is a value expression as well.
- $p = w$ (equality): tests if a string selected by $p$ is equal to $w$.
- contains($w, p$): tests if the string $w$ is contained in a string selected by $p$.
- starts-with($p, w$): tests if the string $w$ is a prefix of a string selected by $p$.
A. Tree Automata Representation
It is well-known that Core XPath can be evaluated using tree automata; see, e.g., [30] and [31]. Here we use alternating tree automata (as in [32]). Such automata work with Boolean formulas over states, which must become satisfied for a transition to fire. This allows much more compact representation of queries through automata, than ordinary tree automata (without formulas). Our tree automata work over a binary tree view of the XML tree where the left child is the first child of the XML node and the right child is the next sibling of the XML node.
Definition 5.1 (Non-deterministic marking automaton): An automaton $A$ is a tuple $(\mathcal{L}, \mathcal{Q}, \mathcal{I}, \delta)$, where $\mathcal{L}$ is the infinite set of all possible tree labels, $\mathcal{Q}$ is the finite set of states, $\mathcal{I} \subseteq \mathcal{Q}$ is the set of initial states, and $\delta : \mathcal{Q} \times 2^\mathcal{L} \rightarrow \mathcal{F}$ is the transition function, where $\mathcal{F}$ is a set of Boolean formulas. A Boolean formula $\phi$ is generated by the following EBNF.
$$
\phi ::= \top \mid \bot \mid \phi \lor \phi \mid \phi \land \phi \mid \neg \phi \mid a \mid p \quad \text{(formula)}
$$
$$
a ::= \downarrow_1 q \mid \downarrow_2 q \quad \text{(atom)}
$$
where $p \in P$ is a built-in predicate and $q$ is a state. We call $F$ the set of well-formed formulas.
Definition 5.2 (Evaluation of a formula): Given an automaton $A$ and an input tree $t$, the evaluation of a formula is given by the judgement
$$
\mathcal{R}_1, \mathcal{R}_2, t \vdash_A \phi = (b, R)
$$
where \( \mathcal{R}_1 \) and \( \mathcal{R}_2 \) are mappings from states to sets of subtrees of the input tree, \( t \) is a subtree of the input tree, \( \phi \) is a formula, \( b \in \{\top, \bot\} \) and \( R \) is a set of subtrees. We define the semantics of this judgement by means of inference rules, given in Fig. 2. These rules are pretty straightforward and combine the rules for a classical alternating automaton with the rules of a marking automaton. Rules (or) and (and) implement the Boolean connectives of the formula and collect the markings found in their true subformulas. Rules (left) and (right) (written as a rule schema for concision) evaluate to true if the state \( q \) is in the corresponding set. Intuitively, \( \mathcal{R}_1 \) (resp. \( \mathcal{R}_2 \)) is
the set of states accepted in the left (resp. right) subtree of the input tree. Rule (pred) supposes the existence of an evaluation function for built-in predicates. Among the latter, we suppose the existence of a special predicate mark, which evaluates to \( \top \) and returns the singleton set containing the current subtree.
We can now give the semantics of an automaton by means of a run function.
**Algorithm 5.1 (Top-down run function):**
**Input:** \( A = (\mathcal{L}, \mathcal{Q}, \mathcal{I}, \delta) \), \( t \), \( r \)
**Output:** \( R \)
where \( A \) is the automaton, \( t \) the input tree, \( r \) a set of states, and \( R \) a mapping from states of \( \mathcal{Q} \) to sets of subtrees of \( t \) such that \( \text{dom}(R) \subseteq r \).
```plaintext
1 function top_down_run(A, t, r) =
2   if t is the empty tree then return ∅ else
3   let trans = { (q, ℓ) → φ ∈ δ | q ∈ r and Tag(t) ∈ ℓ } in
4   let r1 = { q' | ↓1 q' occurs in φ for some (q, ℓ) → φ ∈ trans }
5   and r2 = { q' | ↓2 q' occurs in φ for some (q, ℓ) → φ ∈ trans } in
6   let R1 = top_down_run(A, FirstChild(t), r1)
7   and R2 = top_down_run(A, NextSibling(t), r2)
8   in return { q ↦ R | (q, ℓ) → φ ∈ trans and R1, R2, t ⊢_A φ = (⊤, R) }
```
This algorithm works in a very general setting. Considering any subtree \( t \) of our input tree, let \( R \) be the result of \( \text{top_down_run}(A,t,\mathcal{Q}) \). Then \( \text{dom}(R) \) is the set of states that accept \( t \), and for all \( q \in \text{dom}(R) \), \( R(q) \) is the set of subtrees of \( t \) marked during a run starting from the tree \( t \). It is easy to see that the evaluation of \( \text{top_down_run}(A,t,r) \) takes time \( O(|A| \times |t|) \), provided that the operations \( \odot \) and \( \ominus \) can be evaluated in constant time.
**B. From XPath to Automata**
The translation from XPath to alternating automata is simple and can be done in one pass through the parse tree of the XPath expression. Roughly speaking, the resulting automaton is “isomorphic” to the original query (and has approximately the same size). All our optimizations discussed later are on-the-fly algorithms; for instance, we only determinize the automaton during its run on the input tree. We illustrate the process by giving a query and its corresponding automaton. Consider the query
\[ \text{descendant::listitem/descendant::keyword} \]
The corresponding automaton is
\[ A = (\mathcal{L}, \{q_0, q_1\}, \{q_0\}, \delta) \]
where \( \delta \) contains the following transitions:
1. \( q_0, \{\text{listitem}\} \rightarrow \downarrow_1 q_1 \)
2. \( q_0, \mathcal{L} \setminus \{@, \#\} \rightarrow \downarrow_1 q_0 \)
3. \( q_0, \mathcal{L} \rightarrow \downarrow_2 q_0 \)
4. \( q_1, \{\text{keyword}\} \rightarrow \text{mark} \)
5. \( q_1, \mathcal{L} \setminus \{@, \#\} \rightarrow \downarrow_1 q_1 \)
6. \( q_1, \mathcal{L} \rightarrow \downarrow_2 q_1 \)
The automaton starts in state \( q_0 \) and traverses the tree until it finds a subtree labeled listitem. At such a subtree, the automaton moves to the set of states \( \{q_0, q_1\} \) on the left subtree (because it is non-deterministic and two transitions fire), looking for a tag keyword or possibly another tag listitem, and it recurses on the right subtree in state \( q_0 \) again. Transitions 2 and 5 make sure that, according to the semantics of the descendant axis, only element nodes (and not text or attribute nodes) are considered. If, in states \( \{q_0, q_1\} \), it finds a node labeled keyword, then this node is marked as a result node.
**C. General Optimizations, On-the-fly Determinisation**
In Algorithm 5.1 the most expensive operation is in Line 8, which evaluates the set of possible transitions and accumulates the mappings. First, note that only the states outside of filters actually accumulate nodes. All other states always yield empty bindings. Thus we can split the set of states into marking and regular states. This reduces the number of \( \odot \) and \( \ominus \) operations on result sets. Note also that, given a transition \( q_i, \ell \rightarrow \downarrow_1 q_j \lor \downarrow_2 q_k \) where \( q_i, q_j \) and \( q_k \) are marking states, all nodes accumulated in \( q_j \) are subtrees of the left subtree of the input tree. Likewise, all the nodes accumulated in \( q_k \) are subtrees of the right subtree of the input tree. Thus both sets of nodes are disjoint. Therefore, we do not need to keep sorted sets of nodes but only need sequences which support \( O(1) \) concatenation. Computing the union of two result sets \( R_j \) and \( R_k \) can then be done in constant time, and therefore \( \odot \) and \( \ominus \) can be implemented in constant time.
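A minimal sketch of such a sequence type with constant-time concatenation (an append tree, flattened once at the end; the names are illustrative):

```ocaml
(* Result sequences with O(1) concatenation: [concat] just builds a
   node, without copying; the sequence is flattened only once, when the
   final answer is materialized. Since markings coming from the left
   and right subtrees are disjoint, no sorting or duplicate elimination
   is needed. *)
type 'a seq = Empty | One of 'a | Cat of 'a seq * 'a seq

let concat s1 s2 =
  match s1, s2 with
  | Empty, s | s, Empty -> s
  | _ -> Cat (s1, s2)                 (* O(1) *)

(* One O(n) flattening pass, preserving left-to-right order. *)
let to_list s =
  let rec go s acc =
    match s with
    | Empty -> acc
    | One x -> x :: acc
    | Cat (a, b) -> go a (go b acc)
  in
  go s []
```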
Another important practical improvement exploits the fact that the automata are very repetitive. For instance, if an XPath query does not contain any data value predicate (such as \( \text{contains} \)) then its evaluation only depends on the tags of the input tree. We can use this to our advantage and memoize the results based on the tag of the input tree and the set \( r \). Indeed, the set \( r \) and the tag of the input tree \( t \) uniquely determine the set \( \text{trans} \) of possible transitions. So instead of computing this set at every step, we can cache it in a hash table where the key is the pair \( \langle \text{Tag}(t), r \rangle \); this corresponds to an on-the-fly determinization of the automaton. We can apply a similar technique to the other expensive operation, that is, the evaluation of the set of formulas. This operation can be split in two parts: the evaluation of the formulas and the propagation of the result sets for the corresponding marking states. Again, if the formulas do not contain data value predicates, then their value only depends on the states present in \( R_1 \) and \( R_2 \), the results of the recursive calls. Using the same technique, we can memoize the results in a hash table indexed by the key \( \langle \text{dom}(R_1), \text{dom}(R_2) \rangle \). This hash table contains the set \( \text{dom}(R) \) of states in the result mapping and a sequence of assignments of the form \( \langle q_0 := \text{concat}(q_1, q_2, \ldots) \rangle \), which represent results that need to be propagated between the different marking states. Another optimization concerns the result set associated with the initial state of the automaton, which is the answer of the query. This result set is “final” in the sense that anything propagated up to it will be in the result of the query. We can exploit this fact and use a more compact data structure for this set of results (for instance the one described in Section IV-D). Thus we can trade time complexity (since insertion is \( O(\log(n)) \) in this structure) for space. Using this scheme, we are able to answer queries containing billions of result nodes using little memory.
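The caching of transition lookups can be sketched as follows; `transition` and `compute_transitions` are hypothetical stand-ins for the real transition-table lookup, and we assume state sets are kept as sorted lists so that structural hashing gives a stable key:

```ocaml
(* On-the-fly determinization, sketched: the set of firing transitions
   only depends on the pair (Tag(t), r), so we cache it. *)
type state = int
type transition = { src : state; tags : string list; rhs : string }
(* [rhs] abstracts the formula; hypothetical placeholder. *)

(* Hypothetical stand-in for the expensive transition-table filtering. *)
let compute_transitions delta tag r =
  List.filter (fun tr -> List.mem tr.src r && List.mem tag tr.tags) delta

let trans_cache : (string * state list, transition list) Hashtbl.t =
  Hashtbl.create 1024

let relevant_transitions delta tag r =
  match Hashtbl.find_opt trans_cache (tag, r) with
  | Some ts -> ts                          (* cache hit: no recomputation *)
  | None ->
      let ts = compute_transitions delta tag r in
      Hashtbl.add trans_cache (tag, r) ts;
      ts
```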
**D. Leveraging the Speed of the Low-Level Interface**
Conventionally, the run of a tree automaton visits every node of the input tree. This is, for instance, the behaviour of the tree automata presented in [30], which perform two
scans of the whole XML document (the latter being stored on disk in a particular format). For highly efficient XPath evaluation this is not good enough, and we must find ways to restrict the run to the nodes that are “relevant” for the query (this is precisely what is also done through “partitioning and pruning” in the staircase join [33]). Consider the query /descendant::listitem/descendant::keyword from before. Clearly, we only care about listitem and keyword nodes for this query, and how they are situated with respect to each other. This is precisely the information that is provided through the TaggedDesc and TaggedFoll functions of the tree representation. These functions allow us to have a “contracted” view of the tree, restricted to nodes with certain labels of interest (but preserving the overall tree structure). For instance, to solve the above query we can call TaggedDesc(Root, listitem), which selects the first listitem-node \( x \). Now we can recursively apply TaggedDesc(\( x \), keyword) and, for each selected node \( y \), TaggedFoll(\( y \), keyword) in order to select all keyword-descendants of \( x \). We do this optimization of “jumping runs” based on the automaton: for a given set of states of the automaton we compute the set of relevant transitions which cause a state change.
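As an illustration, here is a sketch of this jumping loop for the keyword-descendants of a node, assuming hypothetical OCaml bindings `tagged_desc` and `tagged_foll` to the tree interface of Section III (the real engine binds them to the C++ implementation):

```ocaml
(* Hypothetical externals for the tree-structure interface; [nil]
   stands for "no node". *)
type node = int
let nil : node = -1
external tagged_desc : node -> string -> node = "sxsi_tagged_desc"
external tagged_foll : node -> string -> node = "sxsi_tagged_foll"

(* All keyword-descendants of the subtree rooted at [x]: jump to the
   first keyword below [x], then iterate TaggedFoll. A complete
   implementation must also stop once the traversal leaves the subtree
   of [x]; this bound is omitted here for brevity. *)
let keywords_below (x : node) : node list =
  let rec collect y acc =
    if y = nil then List.rev acc
    else collect (tagged_foll y "keyword") (y :: acc)
  in
  collect (tagged_desc x "keyword") []
```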
**Bottom-up run:** While the previous technique works well for tree-based queries, it still remains slow for value-based queries. For instance, consider the query //listitem//keyword[contains(.,”Unique”)]. The text interface described in Section III can answer the string part of such queries very efficiently, returning the set of text nodes matching the string predicate. It is also able to count the number of such results. If this number is low, then it is faster to take these text nodes as starting points for query evaluation and test whether their path to the root matches the query, which can be determined efficiently through the tree structure interface.
We can start bottom-up by jumping to the keyword nodes and then checking their ancestors for listitem nodes.
The first function in Algorithm 5.2 iterates the function `match_above` over every tree in the sequence \( s \). The `match_above` function is the one “climbing up” the tree. We assume that the `Parent(_)` function returns the empty tree when applied to the root node. If the input tree is not equal to the tree \( stop \) (which is initially the empty tree \#, so that climbing only stops after the root node has been processed), then we first check whether the next potential tree is a descendant of our parent (Line 12; we use the functions `hd` and `tl`, which return the first element of a list and its tail). If it is, then we pause the current branch and recursively call `match_above` with our parent as the \( stop \) tree. Once it returns, we compute all the possible transitions that the automaton can take from the parent node to arrive at the left and right subtrees with the correct configuration (Lines 17–18). Once this is done, we merge both configurations using the same computation as in the top-down algorithm (Line 19). Finally, we recursively call `match_above` on the parent node, with the new configuration and sequence of potential matching nodes (Line 20).
**Algorithm 5.2 (Bottom-up run function):**
**Input:** \( A, s \) **Output:** \( R \)
where \( A \) is an automaton, \( s \) a sequence of subtrees of the input tree, and \( R \) a mapping from states of \( A \) to sets of subtrees of the input tree.
```plaintext
 1 function bottom_up_run(A, s) =
 2   if s = [] then return ∅ else
 3   let t, s' = hd(s), tl(s) in
 4   let R = top_down_run(A, t, Q) in
 5   let R', s'' = match_above(A, t, s', R, #) in
 6   return R' ∪ bottom_up_run(A, s'')
 7
 8 function match_above(A, t, s, R1, stop) =
 9   if t = stop then return (R1, s) else
10   let pt = Parent(t) in
11   let R2, s' =
12     if s = [] or not isAncestor(pt, hd(s)) then (∅, s)
13     else let t2, s'' = hd(s), tl(s) in
14          let R = top_down_run(A, t2, Q) in
15          match_above(A, t2, s'', R, pt)
16   in
17   let trans = { (q, ℓ) → φ ∈ δ | Tag(pt) ∈ ℓ and
18                 some q' ∈ dom(R1) ∪ dom(R2) occurs in φ } in
19   let R' = { q ↦ R | (q, ℓ) → φ ∈ trans and R1, R2, pt ⊢_A φ = (⊤, R) } in
20   return match_above(A, pt, s', R', stop)
```
VI. EXPERIMENTAL RESULTS
We have implemented a prototype XPath evaluator based on the data structures and algorithms presented in previous
sections. Both the tree structure and the FM-Index were developed in C++, while the XPath engine was written using the Objective Caml language.
A. Protocol
To validate our approach, we benchmarked our implementation against two other well established XQuery implementations, namely MonetDB/XQuery and Qizx/DB. We describe our experimental settings hereafter.
Test machine: Our test machine features an Intel Core2 Xeon processor at 3.6 GHz, 3.8 GB of RAM and a S-ATA hard drive. The OS is a 64-bit version of Ubuntu Linux. The kernel version is 2.6.27 and the file system used to store the various files is ext3, with default settings. All tests were run in a minimal environment where only the tested program and essential services were running. We used the standard compiler and libraries available on this distribution (namely g++ 4.3.2, libxml2 2.6.32 for document parsing and OCaml 3.11.0).
Qizx/DB: We used version 3.0 of the Qizx/DB engine (free edition), running on top of the 64-bit version of the JVM (with the -server flag set, as recommended in the Qizx user manual). The maximal amount of memory of the JVM was set to the maximal amount of physical memory (using the -Xmx flag). We also used the flag -r of the Qizx/DB command line interface, which allows us to re-run the same query without restarting the whole program (this ensures that the JVM’s garbage collector and thread machinery do not impact the performance). We used the timing provided by Qizx debugging flags, and report the serialization time (which actually includes the materialization of the results in memory as well as the serialization).
MonetDB/XQuery: We used version Feb2009-SP2 of MonetDB, and in particular, version 4.28.4 of MonetDB4 server and version 0.28.4 of the XQuery module (pathfinder). We used the timing reported by the “-t” flag of MonetDB client program, mclient. We kept the materialization time and the serialization time separated.
Running times and memory reporting: For each query, we kept the best of five runs. For Qizx/DB, each individual run consists of two repeated runs (“-r 2”), the second one always being faster. For MonetDB, before each batch of five runs, the server was exited properly and restarted. We excluded from the running times the time used for loading the index into main memory (based on the engines’ timing reports). We monitored the resident set size of each process, which corresponds to the amount of process memory actually mapped in physical memory. For MonetDB, we kept track of the memory usage of both server and client. The reported memory peak is the maximum, over time, of the sum of the client’s and the server’s memory use at the same instant.
For the tests where serialization was involved, we serialized to the /dev/null device (that is, all the results were discarded without causing any output operation).
B. Indexing
Our implementation features a versatile index. It is divided into three parts. First, the tree representation composed of the parenthesis structure, as well as the tag structure. Second, the FM-Index encoding the text collection. Third, the auxiliary text representation allowing fast extraction of text content.
It is easy to determine from the query which parts of the index are needed in order to solve it, and thus load only those into main memory. For instance, if a query only involves tree navigation, then having the FM-Index in memory is unnecessary. On the other hand, if we are interested in very selective text-oriented queries, then only the tree part and FM-Index are needed (both for counting and serializing the results). In this case, serialization is a bit slower (due to the cost of text extraction from the FM-Index) but remains acceptable since the number of results is low.
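As a minimal illustration of this load decision (the type and the query-analysis helpers `uses_text_predicates` and `serializes_text` are hypothetical), one could sketch it as:

```ocaml
(* Which parts of the versatile index must be loaded for a query. *)
type index_parts = { tree : bool; fm_index : bool; text_repr : bool }

let parts_to_load ~uses_text_predicates ~serializes_text =
  { tree = true;                       (* navigation always needs the tree *)
    fm_index = uses_text_predicates;   (* contains, starts-with, ...       *)
    text_repr = serializes_text && not uses_text_predicates
    (* if the FM-index is loaded anyway, text can be extracted from it,
       at the cost of slower serialization *) }
```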
Figure 3 reports the construction time and memory consumption of the indexing process, the loading time from disk into main memory of a constructed index and a comparison between the size of the original document and the size of our in-memory structures. For these indexes, a sampling factor l = 64 (cf. Section III) was chosen. It should be noted that the size of the tree index plus the size of the FM-index is always less than the size of the original document.
Note that although loading time is acceptable, it dominates query answering time. This is however not a problem for the use case we have targeted: a main memory query engine where the same large document is queried many times. As mentioned in the Introduction, systems such as MonetDB load their indexes only partially; this gives them superior performance in a cold-cache scenario compared to our system.
C. Tree Queries
We benchmarked tree queries using the queries given in Fig. 4. Queries Q01 to Q11 were taken from the XPathMark benchmark [34], derived from the XMark XQuery benchmark suite. Q12 to Q16 are “crash tests” that are either simple (Q12 selects only the root since it always has at least one descendant in our files) or generate the same amount of results but with various intermediate result sizes. For this experiment we used XMark documents of size 116MB and 1GB. In the cases of MonetDB and Qizx, the files were indexed using
the default settings. Fig. 5 reports the running times for counting, materialization (the construction of a result set in memory) and serialization (the output of a result set). As previously mentioned, Qizx interleaves serialization and materialization; therefore the timings we report include both. In this table, we mark in bold font the lowest materialization time for a given query and underline the materialization and serialization times whose sum is the lowest (in other words, underlined numbers correspond to the lowest overall execution time, excluding index loading).
We report in Fig. 6 the peak memory usage for each query, for the 116MB document.
From the results of Fig. 5, we see how the different components of SXSI contribute to the efficient evaluation model. First, queries Q01 to Q06, which are fully qualified paths, illustrate the sheer speed of the tree structure and in particular the efficiency of its basic operations (such as FirstChild and NextSibling, which are used for the child axis), as well as the efficient execution scheme provided by the automaton. Queries Q07 to Q11 illustrate the impact of jumping. Moreover, they show that filters do not impact the execution speed: the conditions they express are efficiently checked by the formula evaluation procedure. Finally, Q12 to Q16 illustrate the robustness of our automata model. Indeed, while such queries might seem unrealistic, the good performance we obtain is a direct consequence of using an automata model, which factors into its states all the necessary computation and thus does not materialize unneeded intermediate results. This, coupled with the compact dynamic set of Section IV-D, allows us to keep a very low memory footprint even when the query returns many results or when each step generates many intermediate results (cf. Fig. 6).
It is well-known that MonetDB’s policy is to use as much memory as available to answer queries efficiently, and to conserve memory only if there is not enough physical memory available. Our goal in providing memory-use experiments was not to argue that we use less memory than, e.g., MonetDB, but rather to show that even though we remain memory-conscious, we can outperform engines using a “greedier” memory policy.
D. Text Queries
We tested the text capabilities of our XPath engine against the most advanced text oriented features of other query engines.
Qizx/DB: We used the newly introduced Full-Text extension of XQuery available in Qizx/DB v. 3.0. We tried to write queries as efficiently as possible while preserving the same semantics as our original queries. The queries we used always gave better results than their pure XPath counterparts. In particular, we used the ftcontains text predicate [22] implemented by Qizx/DB. The ftcontains predicate allows one to express not only contains-like queries but also Boolean operations on text predicates, regular expression matching and so on. It is more efficient than the standard contains. In particular, we used regular expression matching instead of the starts-with and ends-with operators, since the latter were slower in our experiments.
MonetDB: MonetDB supports some full-text capabilities through the use of the PF/Tijah text index [35]. However, this index only supports a complex about operator, similar to contains but returning results ranked by order of relevance. Although its semantics does not exactly match that of contains, its execution is often faster while providing richer results. We measured MonetDB timings both for the standard XPath operator and for about. Experiments were made on a 122MB Medline file. This file contains bibliographic information about life sciences
and biomedical publications. It features 5,732,159 text elements, for a total of 95MB of text content. Fig. 7 shows the text queries we tested. We used count queries for both MonetDB and Qizx (enclosing the query in an fn:count() call), while in our implementation we ran the queries in “materialization” mode but without serializing the output. The table in Fig. 8 summarizes the running times for each query. As we target very selective text queries, we also give, for each query, the number of results returned.
Since for these queries our automata worked in “bottom-up” mode, we detail the two following operations:
- Calling the text predicate globally on the text collection, thus retrieving all the probable matches of the query (the “Text query” line in Fig. 8).
- Running the automaton bottom-up from the set of probable matches to keep those satisfying the path expression (the “Automaton run” line in Fig. 8).
We mark in bold face the fastest query execution time and underline the fastest combined execution and serialization time.
Fig. 7 (figure): Text oriented queries.

Fig. 8 (figure): Running times (in ms) and memory consumption (in MB) for the text-oriented queries.
As is clear from the experiments, the bottom-up strategy pays off. The only downside of this approach is that the automaton uses Parent moves, which are less efficient than FirstChild and NextSibling. This is visible in queries T7 and T8, where the increase in the number of results makes the relative slowness of the automaton more noticeable. However, our evaluator still outperforms the other engines even in those cases.
E. Remarks
We also compared with Tauro [3]. Yet, as it uses a tailored query language, we could not produce comparable results.
We limited the experiments to natural-language XML, although our engine (unlike the inverted-file based engines) also supports queries on XML databases of continuous sequences such as DNA and proteins. Realistic queries on such biosequence XML require approximate and regular expression search functionalities, which we already support but whose experimental study is out of the scope of this paper.
VII. CONCLUSIONS AND FUTURE WORK
We have presented SXSI, a system for representing an XML collection in compact form so that fast indexed XPath queries can be carried out on it. Even in its current prototype stage, SXSI is already competitive with well-known efficient systems such as MonetDB and Qizx. As such, a number of avenues for future work are open. We mention the broadest ones here.
Handling updates to the collections is possible in principle, as there are dynamic data structures for sequences, trees, and text collections [7]–[9]. What remains to be verified is how practical those theoretical solutions can be made.
As seen, the compact data structures support several fancy operations beyond those actually used by our XPath evaluator. A matter of future work is to explore other evaluation semantics, where strings spanning more than one text node can be searched for. This, at least at a rough level, is not hard to achieve with our FM-index, by removing the \$-terminators and marking them on a separate bitmap instead. Beyond that, we would like to extend our implementation to full XPath 1.0, and add core functionalities of XQuery.
ACKNOWLEDGEMENTS
We would like to thank Schloss Dagstuhl for the very pleasant and stimulating research environment it provides; the work of this paper was initiated during the Dagstuhl seminar “Structure-Based Compression of Complex Massive Data” (Number 08261). Diego Arroyuelo and Francisco Claude were partially funded by NICTA, Australia. Francisco Claude was partially funded by NSERC of Canada and the Go-Bell Scholarships Program. Francisco Claude and Gonzalo Navarro were partially funded by Fondecyt Grant 1-080019, Chile. Gonzalo Navarro was partially funded by Millennium Institute for Cell Dynamics and Biotechnology (ICDB), Grant ICM P05-001-F, Mideplan, Chile. Veli Mäkinen and Jouni Sirén were funded by the Academy of Finland under grant 119815. Niko Välimäki was funded by the Helsinki Graduate School in Computer Science and Engineering.
REFERENCES
4 Known Issues and Limitations
4.1 License Orchestrator below 1.0.2 and Univa Grid Engine 8.2
4.2 Job ID’s in command output
4.3 Required changes for existing scripts when read-only threads are enabled
4.4 Cgroups specific limitations
4.5 NUMA specific functionality on AMD processors
4.6 Univa Grid Engine on native Windows
4.6.1 Restricted functionality of administration and submit commands
4.6.2 Restricted functionality of job execution
4.7 Univa Grid Engine, accounting file format, Univa UniSight and (ARCo) reporting
4.8 Problems with loading of shared libraries
1 License
TERM SOFTWARE LICENSE AND SUPPORT AGREEMENT
This agreement is between the individual or entity agreeing to this agreement and Univa Corporation, a Delaware corporation (Univa) with its registered office at 2300 N Barrington Road, Suite 400, Hoffman Estates, IL 60195.
1. SCOPE: This agreement governs the licensing of the Univa Software and Support provided to Customer.
• Univa Software is defined as the Univa software described in the order, all updates and enhancements provided under Support, its software documentation, and license keys (Univa Software), which are licensed under this agreement. This Univa Software is only licensed and is not sold to Company.
• Third-Party Software/Open Source Software licensing terms are addressed on the bottom of this agreement.
2. LICENSE. Subject to the other terms of this agreement, Univa grants Customer, under an order, a non-exclusive, non-transferable, renewable term license up to the license capacity purchased to:
(a) Operate the Univa Software in Customer’s business operations and
(b) Make a reasonable number of copies of the Univa Software for archival and backup purposes.
Customer’s contractors and majority owned affiliates are allowed to use and access the Univa Software under the terms of this agreement. Customer is responsible for their compliance under the terms of this agreement.
The initial term of this license is for a period of one year from date hereof to be automatically renewed at each anniversary unless a written notification of termination has been received 60 days prior to each anniversary.
3. RESTRICTIONS. Univa reserves all rights not expressly granted. Customer is prohibited from:
(a) assigning, sublicensing, or renting the Univa Software or using it as any type of software service provider or outsourcing environment or
(b) causing or permitting the reverse engineering (except to the extent expressly permitted by applicable law despite this limitation), decompiling, disassembly, modification, translation, attempting to discover the source code of the Univa Software or to create derivative works from the Univa Software.
4. PROPRIETARY RIGHTS AND CONFIDENTIALITY.
(a) Proprietary Rights. The Univa Software, workflow processes, designs, know-how and other technologies provided by Univa as part of the Univa Software are the proprietary property of Univa and its licensors, and all rights, title and interest in and to such items, including all associated intellectual property rights, remain only with Univa.
The Univa Software is protected by applicable copyright, trade secret, and other intellectual property laws. Customer may not remove any product identification, copyright, trademark or other notice from the Univa Software.
(b) Confidentiality. Recipient may not disclose Confidential Information of Discloser to any third party or use the Confidential Information in violation of this agreement.
(c) Confidential Information means all proprietary or confidential information that is disclosed to the recipient (Recipient) by the discloser (Discloser), and includes, among other things:
- any and all information relating to Univa Software or Support provided by a Discloser, its financial information, software code, flow charts, techniques, specifications, development and marketing plans, strategies, and forecasts
- as to Univa the Univa Software and the terms of this agreement (including without limitation, pricing information).
(ii) Confidential Information excludes information that:
- was rightfully in Recipient’s possession without any obligation of confidentiality before receipt from the Discloser
- is or becomes a matter of public knowledge through no fault of Recipient
- is rightfully received by Recipient from a third party without violation of a duty of confidentiality
- is independently developed by or for Recipient without use or access to the Confidential Information or
- is licensed under an open source license.
Customer acknowledges that any misuse or threatened misuse of the Univa Software may cause immediate irreparable harm to Univa for which there is no adequate remedy at law. Univa may seek immediate injunctive relief in such event.
5. PAYMENT. Customer will pay all fees due under an order within 30 days of the invoice date, plus applicable sales, use and other similar taxes.
6. WARRANTY DISCLAIMER. UNIVA DISCLAIMS ALL EXPRESS AND IMPLIED WARRANTIES, INCLUDING WITHOUT LIMITATION THE IMPLIED WARRANTY OF TITLE, MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE UNIVA SOFTWARE MAY NOT BE ERROR FREE, AND USE MAY BE INTERRUPTED.
7. TERMINATION. Either party may terminate this agreement upon a material breach of the other party after a 30 day notice/cure period, if the breach is not cured during such time period. Upon termination of this agreement or expiration of an order, Customer must discontinue using the Univa Software, de-install it and destroy or return the Univa Software and all copies, within 5 days. Upon Univa’s request, Customer will provide written certification of such compliance.
8. SUPPORT INCLUDED. Univa’s technical support and maintenance services (Support) is included with the fees paid under an order. Univa may change its Support terms, but Support will not materially degrade during any paid term. More details on Support are located at www.univa.com/support
9. LIMITATION OF LIABILITY AND DISCLAIMER OF DAMAGES. There may be situations in which, as a result of material breach or other liability, Customer is entitled to make a claim for damages against Univa. In each situation (regardless of the form of the legal action (e.g. contract or tort claims)), Univa is not responsible beyond:
(a) the amount of any direct damages up to the amount paid by Customer to Univa in the prior 12 months under this agreement and
(b) damages for bodily injury (including death), and physical damage to tangible property, to the extent caused by the gross negligence or willful misconduct of Univa employees while at Customer’s facility.
Other than for breach of the Confidentiality section by a party, the infringement indemnity, violation of Univa’s intellectual property rights by Customer, or for breach of Section 2 by Customer, in no circumstances is either party responsible for any of the following (even if it knows of the possibility of such damage or loss):
(a) loss of (including any loss of use), or damage to: data, information or hardware
(b) loss of profits, business, or goodwill or
(c) other special, consequential, or indirect damages
10. INTELLECTUAL PROPERTY INDEMNITY. If a third-party claims that Customer’s use of the Univa Software under the terms of this agreement infringes that party’s patent, copyright or other proprietary right, Univa will defend Customer against that claim at Univa’s expense and pay all costs, damages, and attorney’s fees, that a court finally awards or that are included in a settlement approved by Univa, provided that Customer:
(a) promptly notifies Univa in writing of the claim and
(b) allows Univa to control, and cooperates with Univa in, the defense and any related settlement.
If such a claim is made, Univa could continue to enable Customer to use the Univa Software or to modify it. If Univa determines that these alternatives are not reasonably available, Univa may terminate the license to the Univa Software and refund any unused fees.
Univa’s obligations above do not apply if the infringement claim is based on the use of the Univa Software in combination with products not supplied or approved by Univa in writing or in the Univa Software, or Customer’s failure to use any updates within a reasonable time after such updates are made available.
This section contains Customer’s exclusive remedies and Univa sole liability for infringement claims.
11. GOVERNING LAW AND EXCLUSIVE FORUM. This agreement is governed by the laws of the State of Illinois, without regard to conflict of law principles. Any dispute arising out of or related to this agreement may only be brought in the state of Illinois. Customer consents to the personal jurisdiction of such courts and waives any claim that it is an inconvenient forum. The prevailing party in litigation is entitled to recover its attorney’s fees and costs from the other party.
12. MISCELLANEOUS.
(a) Inspection. Univa, or its representative, may audit Customer's usage of the Univa Software at any Customer facility. Customer will cooperate with such audit. Customer agrees to pay within 30 days of written notification any fees applicable to Customer's use of the Univa Software in excess of the license.
(b) Entire Agreement. This agreement, and all orders, constitute the entire agreement between the parties, and supersedes all prior or contemporaneous negotiations, representations or agreements, whether oral or written, related to this subject matter.
(c) Modification Only in Writing. No modification or waiver of any term of this agreement is effective unless signed by both parties.
(d) Non-Assignment. Neither party may assign or transfer this agreement to a third party, except that the agreement and all orders may be assigned upon notice as part of a merger, or sale of all or substantially all of the business or assets, of a party.
(e) Export Compliance. Customer must comply with all applicable export control laws of the United States, foreign jurisdictions and other applicable laws and regulations.
(f) US Government Restricted Rights. The Univa Software is provided with RESTRICTED RIGHTS. Use, duplication, or disclosure by the U.S. government or any agency thereof is subject to restrictions as set forth in subparagraph (c)(1)(ii) of the Rights in Technical Data and Computer Software clause at DFARS 252.227-7013 or subparagraphs (c)(1) and (2) of the Commercial Computer Software Restricted Rights at 48 C.F.R. 52.227-19, as applicable.
(g) Independent Contractors. The parties are independent contractors with respect to each other.
(h) Enforceability. If any term of this agreement is invalid or unenforceable, the other terms remain in effect.
(i) No PO Terms. Univa rejects additional or conflicting terms of a Customer’s form-purchasing document.
(k) Survival. All terms that by their nature survive termination or expiration of this agreement, will survive.
Additional software specific licensing terms:
Grid Engine incorporates certain third-party software listed at the URL below. These licenses are accepted by use of the software and may represent license grants with restrictions in which Univa is bound to provide. We are hereby notifying you of these licenses.
Unicloud Kits
- Third Party Software is defined as certain third-party software which is provided along with the Univa Software, and such software is licensed under the license terms located at: http://www.univa.com/resources/licenses/
- Open Source Software is defined as certain open source software which is provided along with the Univa Software, and such software is licensed under the license terms located at: http://www.univa.com/resources/licenses/
Grid Engine
- Third Party Software is defined as certain third-party software which is provided along with the Univa Software, and such software is licensed under the license terms located at: http://www.univa.com/resources/licenses/
- Open Source Software is defined as certain open source software which is provided along with the Univa Software, and such software is licensed under the license terms located at: http://www.univa.com/resources/licenses/
Rev: August 2014
2 Fixes and Enhancements
2.1 Summary
2.2 Native Windows Port
2.2.1 Windows Domain users in the autoinstallation configuration and in the UGE configuration
In Univa Grid Engine 8.2.1, the “WIN_DOMAIN_ACCESS” entry in the autoinstallation config file is now ignored and might be removed in future versions. Likewise, the “enable_windomacc” execd_params configuration parameter is now ignored and should not be used anymore. This is because using admin, manager, operator and submit users in specific Windows Domains is not supported; instead, all users have to be in the default Windows Domain, so the domain name can always be omitted.
2.2.2 Starting the execution daemon manually
In Univa Grid Engine 8.2.1, if the execution daemon was started manually, it automatically stopped when the console window was closed. This is now fixed. If the starter script in $SGE_ROOT\$SGE_CELL\common\sgeexecd.bat is used to start the execution daemon, the daemon keeps on running when the console window is closed. If the binary itself is started manually, then the console window is broken after daemon start, but if it is closed, the daemon also keeps running.
2.2.3 Supported Functionality on Hosts Running Windows Operating Systems
Univa Grid Engine now supports hosts that run certain versions of the Microsoft Windows operating system as administration, submit or execution hosts, without the need to install and set up SFU/SUA or Cygwin. Most administration and submit commands of Univa Grid Engine are available on Windows, although some of them with limited functionality. It is also possible to execute native Windows applications under the full control of Univa Grid Engine; even GUI applications can show a GUI on the Windows desktop of the currently logged-in user if necessary, e.g. to show MessageBoxes in case of errors.
The Univa Grid Engine master host functionality is NOT available on hosts running Windows Operating Systems, i.e. neither the QMaster, nor the Shadow Daemon, nor the DBWriter functionality are available on Windows. This means that Windows hosts that act as execution, administration or submit hosts have to be connected to a cluster where the QMaster component is running on a UNIX/Linux host. Read further for details about other prerequisites.
2.2.4 Prerequisites to Use a Windows Hosts in a Univa Grid Engine Cluster
The following list shows the supported Microsoft operating system versions and architectures:
Table 1: Supported Windows Systems, Versions and Architectures
Operating System               Version                              Architecture
Windows XP Professional        SP3                                  32bit
Windows Server                 2003, 2003 R2                        32bit
Windows Vista                  Enterprise, Ultimate                 32bit, 64bit
Windows Server                 2008, 2008 R2                        32bit, 64bit
Windows 7                      Professional, Enterprise, Ultimate   32bit, 64bit
Windows 8, 8.1                 Professional, Enterprise             64bit
Please note that the following prerequisites need to be fulfilled before a host running one of the operating systems mentioned above can be used:
- All execution hosts have to be members of one Active Directory domain.
- All user accounts of users that should interact with the Univa Grid Engine system have to be domain users.
- Passwords for those users have to be registered at the Univa Grid Engine system.
- The certificates that are used to encrypt these passwords have to be available on the Windows hosts.
- All user names have to be the same on Unix/Linux and Windows hosts.
- The Univa Grid Engine admin user needs full network access to the $SGE_ROOT directory, to the certificate directory (if these are shared and not copied over), and to the network shares where job output files have to be created.
- During installation, for each Microsoft Windows host, the account of a user with permissions to write to the C:\Windows directory and to the registry is needed. This usually is the local Administrator, but can be any other user with sufficient permissions.
2.3 Architectural Changes in Univa Grid Engine
2.3.1 Areas of Improvement
Several architectural changes have been applied to Univa Grid Engine 8.2 that reduce the time required for job submission and improve scheduling performance, job dispatching and the overall cluster throughput. Compared to previous versions of the product, Univa Grid Engine 8.2 is up to 3x faster.
In particular, big clusters with a large user base and a huge amount of short and medium-sized workload will greatly benefit from these enhancements. For end users of such clusters this will be visible through the improved responsiveness of all client and daemon applications. Administrators will see improved utilization of the multi-core hardware used for the qmaster component, as well as rapid job throughput.
2.3.2 New Architecture
Improved utilization of the underlying qmaster hardware is the reason for the performance improvements realized in Univa Grid Engine 8.2. This is achieved by an additional pool of threads in the qmaster process. The new thread pool (reader threads) is responsible for exclusively processing read-only requests, which are triggered by commands such as qstat, qhost and qselect. Other threads (worker threads), which were already available in previous versions of Univa Grid Engine, can now exclusively process read-write requests. Such requests are generated by commands such as qsub, qalter and qmod. Decoupling read-write and read-only requests is the key to the improved performance, because up to 64 reader threads can now work in parallel.
In addition to the above changes, the internal memory architecture has been changed. The reader and worker thread pools each hold one copy of the configuration/status information. Both datastores are synchronized via events. Reader threads might have a ‘slightly stale’ view of the master state. The result is that all reader threads, and also worker threads, can work in parallel. A new Univa Grid Engine object type named session has been introduced that removes the ‘slightly stale’ view for read requests when this must be avoided.
2.3.3 Sessions
Sessions enforce additional synchronization between client and reader threads, avoiding the polling that would otherwise be required to maintain a consistent view. Sessions may slightly slow down read requests to ensure consistency, but they do not thwart internal operations of the Univa Grid Engine system itself. Usually, synchronization happens so fast that it is not noticed by the end user. Therefore, there is no need to use sessions at all in small clusters.
2.4 Request Limits
Request limits allow administrators to define limits for incoming qmaster requests sent by client commands. Requests that are sent by command line clients might get rejected when a limit is exceeded. This allows regulation and control over client commands before things get critical in the Univa Grid Engine system.
Requests can be filtered according to request type (GET, ADD, MOD, DELETE), request object (Job, Job Class, Queue, …), client command name (qsub, qstat, qalter, qconf), user and hostname. Limits are ignored for managers and administrators to avoid lockout.
2.5 Cgroups Support
Cgroups is a Linux kernel feature to limit, account and isolate the resource usage of process groups. Univa Grid Engine is integrated with this facility because it provides irrevocable CPU isolation, NUMA domain isolation, safer job suspension, job reaping and additional ways to limit main and virtual memory for jobs. Univa Grid Engine uses this functionality and allows additional modifications of existing Cgroups through customizable prolog scripts.
64bit Linux distributions (like RHEL 6.0 / CentOS 6.0 / Ubuntu 12.04 / SUSE 12.3) support Cgroups when the libcgroups library is installed.
If Cgroups functionality is enabled in Univa Grid Engine then it is used for:
2.6 Distributed Resource Management Application API, version 2.0 (DRMAAv2.0)
DRMAA2 defines an open standard for an API that supports the creation of job workflows as well as cluster monitoring applications. It evolved from the widely adopted DRMAA1 specification of the Open Grid Forum (http://www.ogf.org) and offers a set of around 100 standardized C functions. It has a notion of queues, slots, machines, job classes, advance reservations and more. Applications may hold multiple, concurrent and persistent sessions that allow not only job control but also cluster monitoring of machines, queues and non-DRMAA jobs. The internal architecture is event-driven to avoid performance drawbacks through polling. DRMAA2 offers extensible data structures so that Univa Grid Engine specific functionality can be added in later versions of the library without breaking compatibility with existing applications.
The DRMAAv2 specification is currently under final review. Univa Grid Engine 8.2 comes with a developer preview version of a C implementation of the DRMAA2 C language specification. The C API is currently only available for the 64-bit Linux operating system. The specification of other language bindings is currently in progress.
DRMAA1 is fully supported in Univa Grid Engine 8.2 but users are encouraged to adopt the new standard. If you have questions or requirements for specific language bindings then please contact our support team.
2.7 Miscellaneous Enhancements
2.7.1 Scalability and Scheduling
Several bug fixes and improvements have been applied to Univa Grid Engine 8.2. Corrections of the sharetree usage calculation for array tasks as well as fixes for job dependency nets and internal thread synchronization improve the scheduler performance.
With this version of the product, it is also possible to enforce the release of resources that are booked for advance reservations, so that intended jobs can consume the underlying resources.
2.7.2 Job Accounting
Job timestamps are recorded in milliseconds in accounting and reporting. The user name and host are recorded for job deletions and are available in the accounting file, as are the submit host, the submit switches used at the command line, and the specified working directory of a job.
Additional memory metrics can be accessed in the accounting file as well as during runtime of a job. Job usage information is stored as 64bit values.
Univa Grid Engine 8.2 supports 32bit job ID numbers with a configurable rollover.
2.7.3 Cluster Diagnostics
Annotations for queue state changes can be logged to inform other users or managers about the reasons for unavailability.
Details about event clients have been added that make it easy for managers to identify users and hosts that trigger certain commands.
2.7.4 Job Resource Control
Users can now specify dynamic runtime limits for jobs. The limit enforcement of resources is now configurable.
2.7.5 Other
Server-side JSV scripts can now use any client command (like qstat) to retrieve more information from the Univa Grid Engine system. This no longer causes delays due to deadlocks and deadlock detection, as it did in previous versions when Univa Grid Engine command line clients were started in JSV routines.
HP Insight CMU integration is added to Univa Grid Engine. For more information, please contact our sales or support team.
Univa Grid Engine supports the Cray XC-30 system architecture. For more information, please contact our sales or support team.
2.8 Full List of Fixes and Enhancements
Univa Grid Engine 8.1.7p1 - 8.1.7p5
GE-4996 job reporting entry "waiting for license" created in non-LO system
GE-4982 scheduler param MAX_SCHEDULING_TIME can get exceeded as long as jobs can be dispatched
GE-4883 d_rt limit is not documented
GE-4599 string complex with spaces is rejected when initialized on host level
GE-4629 Kill a job when h_rss is exceeded
GE-4728 maxrss and maxpss should be available in online job usage
GE-4738 stop scheduling other jobs until a high priority job has been scheduled
GE-4744 qrsh jobs started in terminal in background are suspended and qdel does not work
GE-4762 GE-4744 new qrsh switch to configure behavior when running in background of a job control enabled shell
GE-4772 qrsh client which cannot obtain exit state from execution host should not terminate with exit state 0
GE-4812 execd aborts when executing parallel jobs and execd_params ENABLE_MEM_DETAILS=true is set
GE-4822 Execution daemon erroneously reconnects to qmaster
GE-4828 Use system defined connection backlog value for UGE server socket setup
GE-4831 Need option to set master task job to failed when not all slave tasks report job finish
GE-4836 cryptic error message regarding the clash of 2 unexpected job states
GE-4840 slave tasks of tightly integrated job running on master task host should be reported before master task termination
Univa Grid Engine 8.2.0 beta 1
GE-3072 GUI jobs on Windows Vista only starting when there is a user logged into the system
GE-4124 Inconsistency in job class manual pages
GE-4141 qstat doesn’t report array job concurrency limit
GE-4202 JC’s that specify a positive priority value cannot be used by non-manager to submit new jobs
GE-4460 replace not thread safe strerror() by sge_strerror()
GE-4704 limit of submission rate on user level
GE-4741 garbled version information and outdated checkin date in man pages
GE-4751 GE-3406 Create native Windows text installer
GE-4769 qconf doesn’t handle full qualified Windows user names properly
GE-4797 gdi_request_limits should allow to define limits for certain users or hosts
GE-4798 command, object and request parts of gdi_request_limits are not verified if they are valid
GE-4799 qstat -j '*' takes very long with more than 100K jobs
GE-4800 Users that are not managers cannot delete own GDI sessions
GE-4801 source token in gdi_request_limits are ignored
GE-4802 request type and object type in gdi_request_limits need to be uppercase
GE-4809 wildcard character for 'source' within gdi_request_limit is rejected
GE-4810 NONE as gdi_request_limit is rejected
GE-4814 qhost -si help output is incorrect
GE-4815 many commands do not accept NONE as session_id for the -si switch
GE-4821 "qconf -stl and -at/-kt "reader" are missing in the help output of qconf"
GE-4826 man pages do not explain GDI sessions and corresponding commands
GE-4849 on native Windows, a job must be set to error state if the job users password can’t be read
GE-4850 on native Windows, the execd can't read spooled jobs after execd restart
GE-4852 on native Windows, PEs that use /bin/true as start_proc_arg fail
GE-4854 on native Windows, the UGE Starter Service fails to start the execd at boot time
GE-4855 on native Windows, after the execd was restarted, it doesn’t recognize jobs end
GE-4857 the native Windows shepherd crashes before or when freeing the job environment
GE-4863 on native Windows, the shepherd crashes if no explicit user home directory is defined
GE-4865 the UGE Job Starter Service starts GUI jobs in the foreground even if the job environment variable SGE_BACKGND_MODE=1 is set
GE-3406 The resulting job environment doesn’t contain the user environment from the Windows user profile and variables specified by -v or -V
GE-4895 GE-3406 use SGE admin user and the local Administrator to install UGE on native Windows
GE-4899 on native Windows, executing a job can cause execd crash if the job user can’t be logged on
GE-4901 on native Windows, any job opens a Window on the visible desktop as long as SGE_BACKGND_MODE=1 is not specified
GE-4902 event clients see incorrect state of JC’s and GDI-get requests show incorrect JC’s
GE-4903 qalter -mods/-adds/-clears switches do not work
GE-4904 Change of certain job attributes do not trigger modify event of job/task
GE-4907 if the job users password is missing in the sgepasswd file, a wrong error message is written to accounting
GE-4915 improve error logging if sge_getpwnam_r() fails
GE-4916 the host isn’t set to error state if the UGE Job Starter Service is not running
GE-4927 shepherd daemon might report incorrect job exit status
GE-4929 manual execd installation creates default queue setup with zero host slots
GE-4934 install_execd.bat fails to install services if the QMaster port is read from /etc/services
GE-4939 job start fails if a starter_method is configured
GE-4942 suspend state of jobs is not visible in qstat after qmod -[u]sq and on suspend on subordinate
Univa Grid Engine 8.2.0 FCS
GE-1039 qmaster logs warnings even when log_level is set to log_err
GE-2544 upgrade qmake using gmake 4.0
GE-2822 tight integration does not work with two queues on one host
GE-3291 Adding a new PE should use NONE instead of /bin/true for start/stop_proc_args
GE-3698 enhancement for qstat/qacct to see cwd and submission command of job
GE-3813 user configurable max job number
GE-3840 openmpi jobs incorrectly get killed due to memory limit
GE-3952 IO in online usage and accounting is not explained
GE-3927 adding a way to switch on/off the limit enforcement by execd
GE-3990 /proc/cpuinfo file is opened when submitting job
GE-4022 update jemalloc in 3rdparty directory of lx-amd64
GE-4049 Use 64 bit values to hold job usage data
GE-4076 During the modification of mail recipients in jobs derived from JC invalid mail addresses will be added
GE-4085 provide more event client information
GE-4203 normal users are allowed to specify positive priority values in JC's
GE-4209 changes to ibm-loadsensor for AIX 6 -> oslevel should be used instead to detect arch string
GE-4246 use more precise timestamps in job reporting and accounting
GE-4247 request a way to be able to control and manage no. of qstat calls
GE-4287 record ‘qdel’ invocation in accounting
GE-4298 write online usage information to reporting file/database
GE-4336 bootstrap man page does not mention Postgres spooling as supported spooling_method
GE-4338 race condition in signalling the job at startup in shepherd
GE-4344 improve shutdown speed of (builtin) interactive jobs
GE-4414 General Annotate Functionality
GE-4420 Provide an easy mechanism to drain the cluster
GE-4475 Make it possible to set queue instances into error state via qmod command
GE-4600 functionality to enable/disable backfilling
GE-4670 Improvements to SGE_JSV_TIMEOUT within script or server side qmaster params
GE-4731 show latest resource reservation in qstat -j <job_id>
GE-4743 packint64() and unpackint64() pack and unpack only 32 bit
GE-4754 at most one resource reservation is done when the cluster is full (all queue instances are full)
GE-4759 qsub -sync yes -t n-m does not print the exit code for every task
GE-4766 qconf command line parsing shows problems when empty strings are used for command line parameters
GE-4768 GE-4085 Enhance qconf -secl to show the owner/user of the event client
GE-4773 Fix memory corruption in UGE Job Starter Service that causes crashes in rare cases
GE-4835 replace confusing "User does not exist" error message if NIS is broken
GE-4842 can start one task too much on slave host of a tightly integrated job
GE-4858 update PostgreSQL libraries to current version 9.3.4
GE-4859 update Berkeley DB libraries to current version 6.0.30
GE-4860 update openssl libraries to current version 1.0.1h
GE-4906 random connect problems for PE slave or qmake jobs when delivering job to execution daemon
GE-4914 make d_rt a queue attribute
GE-4920 add maxrss and maxpss to the accounting file
GE-4924 add submit host to the accounting file
GE-4925 add working directory to the accounting file
GE-4926 add submission command line to accounting file
GE-4931 qrsh client lacks -adds, -mods ... switches.
GE-4933 arseqnum file is not backed up by inst_sge -bup
GE-4946 on native Windows, qrsh output is broken if much output is transferred at once
GE-4950 qmake does not inherit -q switch
GE-4962 online usage is lost for some jobs
GE-4963 broken quoting of job arguments with spaces on win-x86 (native Windows)
GE-4966 The reporting man page has invalid information for the job log
GE-4972 provide a means to identify jobs which lead to high scheduling times
GE-4975 reader event client automatically reregisters after "qconf -kec 3"
GE-4979 installation changes improve install experience and lower CPU+memory impact
GE-4980 improve man page on thread creation/killing options
GE-4982 scheduler param MAX_SCHEDULING_TIME can get exceeded as long as jobs ...
GE-4988 submission of a JC which contains wrong entries triggers a qmaster crash
GE-4996 job reporting entry "waiting for license" created in non-LO system
GE-5021 m_topology_inuse is lost in case of complex_values changes
Univa Grid Engine 8.2.1
GE-2638 advance reservations should support project based access lists
GE-3610 check for GDI-version mismatch at commlib level
GE-4207 qrsh -inherit to a cluster of different version dumps core
GE-4782 the use of binding switch breaks the functionality of -w v/p
GE-4783 jobs are started in queue which should already have been suspended by subordination
GE-4833 gridengine ignores complex request and puts tasks into wrong queue instance
GE-4870 properly translate UGE Job Starter Service error states to shepherd error states
GE-4892 shepherd pid is not moved out of cgroup when shepherd_cmd is set
GE-4954 Add configurable timeout for client-side suspended qrsh jobs
GE-4959 on native Windows, if the execd was started manually, it stops when the console is closed
GE-4964 on native Windows, the job environment doesn't contain SGE_ and -V/-v variables
GE-4973 finished jobs are not stored at all, even if the global config param finished_jobs is greater than zero
GE-5018 cgroup setting "killing=true" causes shepherd to terminate incorrectly
GE-5020 SGE_HGR_ environment variable is not shown in case of host aliasing
GE-5032 jsv jc parameter is not reset in server JSV (bourne shell, TCL) if it was set during previous job verification
GE-5036 native Windows clients crash if the sgepasswd file is corrupted
GE-5041 "sharelog" record timestamp in "reporting" file not in milliseconds
GE-5043 man page gmake(1) refers to wrong gmake version
GE-5046 aix platform needs libxml2.a to be available in LIBPATH
GE-5047 sge_qmaster segmentation fault
GE-5051 util/setfileperm.sh doesn't set ownership of install_execd.bat
GE-5055 sge_qmaster daemon accepts requests from clients using older GDI version
GE-5058 make the auto installer create certificates even if WIN_DOMAIN_ACCESS is false
GE-5059 update script adding wrong default parameter for cgroups_params
GE-5065 garbled error output of "save_sge_config.sh"
GE-5066 GUI installer refers to UGE 8.2.0beta1
GE-5068 upgrade procedure does not check for existence of "bc" command
GE-5071 libdrmaa is missing in sol-sparc packages
GE-5072 stree-edit is not part of the distribution
GE-5075 define a single point to set the Grid Engine version and GDI version
GE-5077 Improve logging for scheduler time analysis
GE-5078 RSMAP attribute in "complex_values" definition masks following attributes
GE-5079 gdi_request_limits man documentation is wrong
GE-5080 invalid "gdi_request_limits" accepted by cluster config change although error message is printed
GE-5086 if execd gets modified execd load report time the change is not immediately effective
GE-5091 automatic session cleanup does not work in root user systems
GE-5092 cwd entry in accounting might break the accounting file format when ":" is used in directory or file names
GE-5093 accounting does not filter "\n" in submission command line
GE-5094 negative performance impact on qmaster due to logging into message file: "session <session_id>: processed all available events till unique ID <event_id>"
GE-5097 new PE parameter daemon_forks_slave / master_forks_slave needs to be compatible with cgroups main memory limitation
GE-5098 execd installation fails with error message "./inst_sge: test: ] missing"
GE-5099 uninstallation fails with error message "./inst_sge: LO_ENABLE_QCONF_OPTIONS=1: is not an identifier"
GE-5100 host isn't set to error state if sgepasswd file can't be read or is broken
GE-5105 sge_execd and sge_shepherd depend on libgcc on sol-amd64
GE-5106 sge_execd on hp11-ia64 does not start (/usr/lib/hpux64/dld.so: Unable to find library 'libxml2.so.11')
GE-5107 jobs are not started on hp11-ia64 (failed 137 : invalid execution state)
GE-5108 qmaster is crashing due to lothread issue, when an array job is deleted
GE-5109 scheduler assigns already used resource map value to job
GE-5110 dmem client failed receiving gdi request response for mid=65535 (got syncron message receive timeout error)
GE-5111 create dl script for native Windows
GE-5112 on native Windows, execd crashes if a load sensor reports too much load at a time
GE-5113 port qping to native Windows (win-x86)
GE-5114 extensive logging in qmaster messages file
GE-5115 change Intel Xeon Phi load sensor to use micmgmt API instead of MicAccessSDK
GE-5116 qdel may crash and cause communication error loggings at qmaster
GE-5117 massive qdel request stresses qmaster daemon
GE-5118 event client (e.g. scheduler) may get triggered events delayed if event interval is changed
GE-5119 installer for CUDA complexes does not work in all shells
GE-5120 on native Windows, the PATH environment variable contains UNIX style parts
GE-5121 on native Windows, it's not possible to specify more than one load sensor
GE-5122 upgrade script fails to upgrade accounting file to 8.2.x format
GE-5123 Documentation shows incorrect UGE version number on title page
3 Supported Platforms and Upgrade Notes
Univa Grid Engine 8.2 supports various hardware architectures and versions of operating systems.
3.1 Upgrading from cgroups enabled UGE installation
With Univa Grid Engine 8.2.1 the cgroups tasks file no longer contains the process (T)IDs of the sge_shepherd daemon. If the cluster to be upgraded has running jobs that use the cgroups_params killing=true or freezer=true, the new version will also terminate the sge_shepherd daemon, because it is still listed in the tasks file of the freezer or cpuset subsystem, and the usage and exit status of these jobs would be incorrect. To avoid this problem, make sure that no jobs started on hosts where cgroups_params killing or freezer was active remain in the system before upgrading to Univa Grid Engine 8.2.1.
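A quick pre-upgrade check might look as follows; a minimal sketch using standard client commands, assuming cgroups_params may be set in the global or host configurations:

```
# Check whether cgroups killing/freezer is active anywhere.
qconf -sconf | grep -i cgroups_params            # global configuration
for h in $(qconf -sel); do                       # all execution hosts
    qconf -sconf "$h" 2>/dev/null | grep -i cgroups_params
done
# List running jobs of all users; these must have finished before the
# upgrade on hosts where killing/freezer was active.
qstat -s r -u '*'
```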
3.2 Supported Operating Systems, Versions and Architectures
<table>
<thead>
<tr>
<th>Operating System</th>
<th>Version</th>
<th>Architecture</th>
</tr>
</thead>
<tbody>
<tr>
<td>SLES</td>
<td>10, 11</td>
<td>x86, x86-64</td>
</tr>
<tr>
<td>RHEL</td>
<td>5 or higher, 6 or higher, 7</td>
<td>x86, x86-64</td>
</tr>
<tr>
<td>CentOS</td>
<td>5 or higher, 6 or higher, 7</td>
<td>x86, x86-64</td>
</tr>
<tr>
<td>Oracle Linux</td>
<td>5 or higher, 6 or higher, 7</td>
<td>x86, x86-64</td>
</tr>
<tr>
<td>Ubuntu</td>
<td>10.04LTS - 14.04LTS</td>
<td>x86, x86-64</td>
</tr>
<tr>
<td>Oracle Solaris</td>
<td>10, 11</td>
<td>x86_64, SPARC 64bit</td>
</tr>
<tr>
<td>HP-UX</td>
<td>11.0 or higher</td>
<td>64bit</td>
</tr>
<tr>
<td>IBM AIX</td>
<td>6.1 or later</td>
<td>64bit</td>
</tr>
<tr>
<td>Apple OS X</td>
<td>10.8 (Mountain Lion) or higher</td>
<td>x86, x86-64</td>
</tr>
<tr>
<td>Microsoft Windows</td>
<td>XP Professional (SP3)</td>
<td>32 bit</td>
</tr>
<tr>
<td>Microsoft Windows</td>
<td>Server 2003 / 2003 R2</td>
<td>32 bit</td>
</tr>
<tr>
<td>Microsoft Windows</td>
<td>Vista Enterprise / Ultimate</td>
<td>32 and 64bit</td>
</tr>
<tr>
<td>Microsoft Windows</td>
<td>Server 2008 / 2008 R2</td>
<td>32 and 64bit</td>
</tr>
<tr>
<td>Microsoft Windows</td>
<td>7 Professional / Enterprise / Ultimate</td>
<td>32 and 64bit</td>
</tr>
</tbody>
</table>
Table 2: Supported Operating Systems, Versions and Architectures
Please Note: Hosts running the Microsoft Windows operating system cannot be used as master or shadow hosts.
PLEASE NOTE: Univa Grid Engine 8.2 qmaster is fully supported on Linux and Solaris. We provide binaries in Univa Grid Engine 8.2 for running the qmaster on other operating systems but they are not supported and delivered as a courtesy. If you require qmaster support on other architectures please contact us at support@univa.com.
PLEASE NOTE: if you require Univa Grid Engine support for older versions of the above operating systems please contact our sales or support team.
3.3 Upgrade Requirements
This is a summary of the Upgrade Matrix that describes how you can carry out the transition from Sun or Oracle Grid Engine 6.2uX, Univa Grid Engine 8.0.X, Univa Grid Engine 8.1.X to Univa Grid Engine 8.2 when you are currently using classic, BDB local spooling or PostgreSQL spooling. If the current version of Grid Engine you are using is missing in the overview, then please look at the full Upgrade Matrix located in the section Updating Univa Grid Engine in the Installation Guide.
<table>
<thead>
<tr>
<th>Version</th>
<th>Upgrade Method</th>
</tr>
</thead>
<tbody>
<tr>
<td>Univa Grid Engine 8.1.X</td>
<td>Backup/Restore</td>
</tr>
<tr>
<td>Univa Grid Engine 8.0.X</td>
<td>Backup/Restore</td>
</tr>
<tr>
<td>Oracle Grid Engine 6.2u6-6.2u8</td>
<td>Backup/Restore</td>
</tr>
<tr>
<td>Sun Grid Engine 6.2u5</td>
<td>Backup/Restore</td>
</tr>
<tr>
<td>Sun Grid Engine 6.2u1-6.2u4</td>
<td>Upgrade to SGE 6.2u5 and then Backup/Restore</td>
</tr>
<tr>
<td>Sun Grid Engine 6.2 FCS</td>
<td>Upgrade to SGE 6.2u5 and then Backup/Restore</td>
</tr>
</tbody>
</table>
Table 3: Upgrading from SGE, OGE, UGE 8.0.X and UGE 8.1.X to Univa Grid Engine 8.2.X
4 Known Issues and Limitations
4.1 License Orchestrator below 1.0.2 and Univa Grid Engine 8.2
Univa Grid Engine 8.2 uses the full range of 32-bit values as IDs for jobs and advance reservations. License Orchestrator below version 1.0.2 cannot handle IDs of that size.
There are two options to address this limitation:
- Upgrade the License Orchestrator cluster to version 1.0.2 before you install/upgrade to Univa Grid Engine 8.2
or
- Define the variable MAX_JOB_ID in the qmaster_params attribute of the global configuration of your Univa Grid Engine 8.2 cluster after the upgrade or installation. Set MAX_JOB_ID to 9999999 there before you connect the Univa Grid Engine 8.2 cluster to License Orchestrator 1.0 or 1.0.1.
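The following sketch shows one way to apply this setting with the standard qconf client; any existing qmaster_params entries in your cluster are unknown here, so merge the new value rather than overwriting:

```
# Show the current qmaster_params of the global configuration.
qconf -sconf global | grep qmaster_params
# Open the global configuration in an editor and set, for example:
#   qmaster_params   MAX_JOB_ID=9999999
qconf -mconf global
```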
4.2 Job IDs in command output
Univa Grid Engine now uses the full 32-bit range for job IDs. Because of this, the output format of client commands has changed so that the job ID can be displayed completely. Existing scripts that parse the output of commands like qstat/qhost might need to be adapted before they can be used with Univa Grid Engine 8.2.
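When adapting such scripts, parsing by whitespace-separated fields is more robust than relying on fixed column offsets; a minimal sketch, assuming the default qstat output format where the job ID and state are the first and fifth fields:

```
# Print job ID and state; the two qstat header lines are skipped.
qstat -u '*' | awk 'NR > 2 { print $1, $5 }'
```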
4.3 Required changes for existing scripts when read-only threads are enabled
Existing scripts that use commands to add/modify/delete Univa Grid Engine objects (like qsub, qalter, qmod, . . . ) and commands that only get information (like qstat, qhost, qselect, . . . ) might not work as expected if they are used unmodified in Univa Grid Engine 8.2 with enabled read-only threads.
The reason for this is that read-only and read-write requests are then executed independently from each other so that read-only requests (like qstat, qhost, qselect, . . . ) might not see the outcome of previously executed read-write requests.
To solve this issue the scripts should use sessions for all commands between which an execution dependency exists. This can be done by creating a session key with the qconf -csi command and by passing this session key to all commands that depend on each other using the -si switch of the corresponding command.
Example:
```
> qconf -csi
5615436
> qsub -si 5615436 ...
Your job 82763 ("JobName") has been submitted
> qstat -si 5615436 -j 82763
```
The Univa Grid Engine system then guarantees that dependent commands can see the outcome of previously executed commands (e.g. qstat will see the previously submitted job 82763). Find more information concerning sessions in section 8.2 "Using sessions to communicate with the system" of the UGE Users Guide.
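In a script, the same pattern might be wrapped as follows; a minimal sketch, assuming a placeholder job script job.sh:

```
#!/bin/sh
# Create a session so that read-only requests (qstat) are guaranteed to
# see the outcome of earlier read-write requests (qsub).
SESSION=$(qconf -csi)
JOB=$(qsub -terse -si "$SESSION" job.sh)   # -terse prints only the job ID
qstat -si "$SESSION" -j "$JOB"
```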
4.4 Cgroups-specific limitations
The current cgroups support only allows installing one UGE execution daemon per host. It is not supported to have another UGE installation that uses cgroups support on the same execution host.
4.5 NUMA-specific functionality on AMD processors
AMD processors have a different NUMA model than Intel processors. Currently the NUMA implementation (per socket memory management) is aligned to the Intel NUMA model. Other features and functions are not affected.
4.6 Univa Grid Engine on native Windows
4.6.1 Restricted functionality of administration and submit commands
- These options will fail or be ignored if a job is submitted to a Windows host:
- qalter, qsub, qresub, qrsh, qrsu
- *-c* - Checkpointing is not supported on Windows
- *-ckpt* - Checkpointing is not supported on Windows
- *-m* - Mail sending is not yet implemented
- *-M* - Mail sending is not yet implemented
- *-notify* - There are no notification signals on Windows
- *-noshell* - The shell concept works differently on Windows
- *-pty yes* - There is no pty on Windows
- *-shell yes* - The shell concept works differently on Windows
- *-S* - The shell concept works differently on Windows
- qlogin is not implemented
- qrsh is available only with command, qrsh without a command is not implemented
- These options will fail or be ignored when run on a Windows host:
- qacct
- *-g [group_id]* - not possible to resolve the UNIX group ID on Windows
4.6.2 Restricted functionality of job execution
- Checkpointing is not supported
- Changing the process priority of running jobs is not possible
4.7 Univa Grid Engine, accounting file format, Univa UniSight and (ARCo) reporting
Univa Grid Engine timestamps have changed from seconds to milliseconds in the Univa Grid Engine accounting file.
The Univa Grid Engine reporting parameters configured by reporting_params have changed. All timestamps that were previously in seconds are now reported in milliseconds. This change affects the reporting file format, UniSight reporting and ARCo.
Users of UniSight should not upgrade to Univa Grid Engine until an update to UniSight is available. Users who use dbwriter to process the Grid Engine reporting data or who created tools which directly process the output of the UGE reporting file should adapt their backend tools to properly process the new timestamps.
In Univa Grid Engine 8.2.1 it is now possible to bind Advance Reservations to a Project. Because of this improvement, it is not allowed to have Advance Reservations in the system during upgrade, no matter if they are active or not. Use qrstat to check if there are Advance Reservations in the system.
4.8 Problems with loading of shared libraries
In Univa Grid Engine 8.2.1, the sgepasswd binary may print that it cannot load the OpenSSL library or that it cannot read the key.pem file even though the file exists in the quoted path. If this error occurs for normal users but not for the root user, the SGE_ROOT/lib/ARCH path has to be declared as a trusted library search path. How this is done depends on the architecture. On Linux, either the file /etc/ld.so.conf has to be edited or a file has to be added to the /etc/ld.so.conf.d directory, depending on the version of Linux. In both cases, the absolute path pointing to SGE_ROOT/lib/ARCH, e.g. /opt/uge/lib/lx-amd64, has to be added to this file, and ldconfig has to be executed afterwards in order to update the caches. The same problem has been observed for the sge_shepherd: if the sge_shepherd does not seem to start, it could be failing before the process itself starts because the loader of the system cannot load the shared libraries.
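On a Linux host the steps might look as follows; a minimal sketch, assuming an installation under /opt/uge on the lx-amd64 architecture (adjust the path and file name to your system):

```
# Declare $SGE_ROOT/lib/$ARCH as a trusted library search path ...
echo "/opt/uge/lib/lx-amd64" > /etc/ld.so.conf.d/uge.conf
# ... and rebuild the loader cache.
ldconfig
```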
The Answer Set Programming Paradigm
Tomi Janhunen and Ilkka Niemelä
Helsinki Institute for Information Technology HIIT
Aalto University School of Science
Department of Computer Science
PO Box 15400, FI-00076 Aalto, Finland
Abstract
In this paper, we give an overview of the answer set programming paradigm, explain its strengths, and illustrate its main features in terms of examples and an application problem.
Introduction
Answer set programming (ASP, for short) is a declarative programming paradigm for solving search problems and their optimization variants. In ASP a search problem is modeled as a set of statements (a program) in a logic programming type of a language in such a way that the answer sets (models) of the program correspond to the solutions of the problem. The paradigm was first formulated in these terms by Marek and Truszczyński (1999) and Niemelä (1999). The ASP paradigm has its roots in knowledge representation and nonmonotonic logics research as described by Marek et al. (2011) in a historic account on the development of the paradigm. A more recent and more technical overview of ASP has been contributed by Brewka et al. (2011).
The ASP paradigm is most widely used with the formalism of logic programming under the semantics given by answer sets (Gelfond and Lifschitz 1988; 1990). The term answer sets was proposed by Gelfond and Lifschitz (1991) for sets of literals, by which programs in an extended syntax are to be interpreted where the classical negation operator and disjunctions of literals are allowed in the heads of program rules. Lifschitz’ article (2016) in this special issue gives an introduction to the notion of an answer set and the language of ASP, as well as a comparison to Prolog systems. An alternative approach to ASP has been to use directly first-order logic as the basis and extend it with inductive definitions. The details can be found in the articles by Denecker and Vennekens (2014), Denecker and Ternovska (2008), East and Truszczyński (2006), and the one by Bruynooghe et al. (2016) in this special issue.
A main reason for the increasing interest in ASP is the availability of fast software tools that makes it possible to tackle problems of practical importance. Most of the current software tools employ two steps commonly referred to as grounding and solving, reflecting the definition of answer sets for programs with variables (Lifschitz 2016). The idea is to separate concerns so that the grounding phase takes care of the evaluation of more complicated data structures and variable instantiations using logic programming and deductive database techniques, and then the solving phase focuses on search for answer sets for a much simpler type of programs by employing advanced search methods. The papers by Kaufmann et al. (2016) and by Gebser and Schaub (2016) in this special issue provide more information on the solving and grounding techniques.
There is a growing number of successful applications of ASP including molecular biology (Gebser et al. 2010a; 2010b), decision support system for space shuttle controllers (Balduccini, Gelfond, and Nogueira 2006), phylogenetic inference (Erdem 2011; Koponen et al. 2015), product configuration (Soininen and Niemelä 1998; Finkel and O’Sullivan 2011) and repair of web-service work flows (Friedrich et al. 2010). Erdem et al. (2016) give an account of the applications of ASP in this special issue.
On the one hand, ASP is closely related to logic programming and Prolog and, on the other hand, to constraint programming (CP), propositional satisfiability (SAT), and linear/integer programming (LP/IP). Unlike Prolog-like logic programming ASP is fully declarative and neither the order of rules in a program nor the order of literals in the rules matter. Moreover, Prolog systems are tailored to find proofs or answer substitutions to individual queries whereas ASP systems are finding answer sets corresponding to complete solutions to a problem instance. The basic idea in ASP is very close to the paradigm of CP, SAT, or LP/IP where problems are represented by constraints and where systems are tailored to find satisfying variable assignments corresponding to complete solutions. However, there are significant differences. The ASP paradigm allows for a very systematic approach to problem representation through uniform encodings where the problem statement can be developed independently of data on a particular instance. This leads to a large degree of elaboration tolerance. The ASP approach enables structured representation of problems where more complicated constraints are composed of simpler ones using rules. On the other hand, rules enable one to encode conditions that are challenging (like representing disjunctive constraints or other basic relational operations on constraints) or not available at all (like recursive constraints) when comparing to CP or SAT.
**Problem Solving.** The ASP paradigm provides a general approach to problem solving, illustrated in Figure 1, that addresses search problems encountered in many real-world applications. To get started, the key step is to identify and formalize the problem to be solved, i.e., to work out a problem statement. Typically this consists of clarifying what the potential solutions of the problem are like and then setting the conditions that solutions should satisfy. Solving the problem means that given the data on an instance of the problem we should find one or more solutions satisfying the given conditions (see the topmost arrow in Figure 1). For illustration, we use the task of finding a seating arrangement for a dinner as the first simple example. The respective problem statement could read as formulated below.
Example 1 (Seating Arrangement Problem) A certain group of people, say persons $p_1, \ldots, p_n$, are invited for dinner. There are tables $t_1, \ldots, t_k$ with the respective capacities $c_1, \ldots, c_k$ available for seating such that $c_1 + \cdots + c_k \geq n$. The host has some prior knowledge about the relationships of the guests: there are both friends and enemies among the invitees. This information should be taken into account when designing the arrangement. A solution to this problem is a mapping $s(p_i) = t_j$ of persons $p_i$ to tables $t_j$ so that the mutual relationships are respected.
The problem statement above uses mathematical symbols to abstract the details of the problem such as the number and the identity of persons involved and the collection of tables available for seating. This reflects an important methodological feature, namely the separation of instance data from the actual problem statement. The point is that the problem can be stated without listing all details for a particular instance of the problem. In case of the seating arrangement problem, the instance data would consist of the names of invitees together with lists of tables and their capacities, and the pairs of persons who are known to be either friends or enemies. More concretely put, suppose that we have a group of 20 people: Alice, Bob, John, etc. There are four tables, seating 7, 6, 5, and 4 people, respectively. Moreover, we know that Alice likes Bob, Bob likes John and so on. Given all such pieces of information, the goal is:
- to find at least one solution that fulfills the criteria set in the problem statement of Example 1, or
- to show that no solution exists.
Given what we know so far, we can expect solutions where Alice, Bob, and John are seated together at one of the four tables available. However, if we state additionally that Alice and John dislike each other, for instance, the seating problem instance under consideration has no solutions.
**ASP Encoding.** But how do we achieve the goal stated above using ASP and get the problem solved? As suggested by Figure 1, we should formalize the problem statement by writing down a (logic) program. Before we can really do this, we should have a basic understanding of the syntax, also introduced in the article by Lifschitz (2016) in this issue. In ASP, programs consist of rules, i.e., statements of the form
$$\text{head} :- \text{body}_1, \text{body}_2, \ldots, \text{body}_n.$$
The intuitive reading of the rule above is that the head can be inferred if (and only if) the body conditions $\text{body}_1, \text{body}_2, \ldots, \text{body}_n$ have been inferred by the other rules in the program. The conditions in the rule are either atomic statements (a.k.a. atoms) like $\text{seat}(a, 1)$ for Alice being seated at Table 1, or count-bounded sets of atoms
$$l \ \{ \text{atom}_1; \ldots; \text{atom}_k \} \ u$$
where at least $l$ but at most $u$ atoms among $\text{atom}_1, \ldots, \text{atom}_k$ should be inferable. The cardinality constraint above can also be expressed in terms of a counting aggregate
$$\text{#count}\{\text{atom}_1; \ldots; \text{atom}_k\}$$
where appropriate bounds can be incorporated using the relation symbols $<$, $\leq$, $\geq$, and $>$. Atoms can also be negated using the operator not for default negation. A rule with an empty body ($n=0$) stands for a fact whose head holds unconditionally. As a further special case, a rule without a head stands for a constraint whose body $\text{body}_1, \text{body}_2, \ldots, \text{body}_n$ must not be satisfied. In this article, we do not consider extensions of rules by classical negation nor disjunctions in rule heads (Gelfond and Lifschitz 1991).
We are now ready to describe typical steps in writing down a program in ASP, resulting in an encoding given as
¹The encodings presented in this paper are directly executable using contemporary ASP grounders and solvers compatible with the ASP-core-2 language specification (Calimeri et al. 2012).
Listing 1: Encoding the Seating Problem in ASP
 1 % Instance
 2 person(a). person(b). person(j). ...
 3 likes(a,b). likes(b,j). ...
 4 dislikes(a,j). dislikes(j,a). ...
 5 tbl(1,7). tbl(2,6). tbl(3,5). tbl(4,4).
 6
 7 % Rules and constraints
 8 1 { seat(P,T) : tbl(T,C) } 1 :- person(P).
 9 :- tbl(T,C), C+1 { seat(P,T) : person(P) }.
10 :- likes(P1,P2), seat(P1,T1), seat(P2,T2),
11    person(P1), person(P2),
12    tbl(T1,_), tbl(T2,_), T1 != T2.
13 :- dislikes(P1,P2), seat(P1,T), seat(P2,T),
14    person(P1), person(P2), tbl(T,_).
Listing 1. First, we have to decide how to represent the instance data. Sometimes this requires some form of filtering in order to identify which pieces of information are relevant in view of solving the problem. This is easy for the seating problem. The persons involved are listed in line 2 using predicate symbol person/1 and constant symbols a, b, j,... as abbreviations for the names of persons in question. Predicates likes/2 and dislikes/2 are used in lines 3–4 to represent (potentially incomplete)² information concerning friendship and dislike, respectively. Finally, the identities and capacities of tables are declared by the facts listed in line 5 using predicate tbl/2. Overall, we have obtained a set of facts as the representation of instance data.
The second step concerns the actual program formalizing the problem statement. Writing down the rules is of course a creative activity, which one learns best by doing, but in ASP one can concentrate on defining the relevant concepts (relations) in terms of rules, as well as thinking about conditions on which certain relations should hold. To understand the outcome of the formalization in Listing 1, let us give the intuitive readings for the rules involved. The rule in line 8 stipulates that every person P must be seated at exactly one table T. A few constraints follow. The capacities of tables are enforced in line 9: it is unacceptable if more than C persons are seated at table T which seats at most C persons. Moreover, if person P1 likes person P2, they should not be seated at different tables T1 and T2. This constraint is expressed in lines 10–12. The other way around, if P1 does not like P2, then they should not be seated at the same table T. The respective rule is given in lines 13–14. The rules and constraints in lines 8–14 explained so far form a uniform encoding of the seating problem, as the representation is independent of any problem instance described by facts of the type in lines 2–5.
So far, we have demonstrated the modeling philosophy of ASP in terms of a simple application. The later section on locking design provides further insights into modeling and typical design decisions made. Yet further information is available in the articles of Bruynooghe et al. (2016) and Gebser and Schaub (2016) in this special issue.
**ASP Solving.** It remains to explain how the encoding from Listing 1 solves the problem instance in practice. First, the rules of the program have to be instantiated and evaluated with respect to the present facts. This means, e.g., that the rule in line 8 yields an instance
$$1 \ \{ \text{seat}(a,1); \ \text{seat}(a,2); \ \text{seat}(a,3); \ \text{seat}(a,4) \} \ 1.$$
when P is replaced by a and T ranges over the available tables 1, 2, 3, and 4. This particular instance concerns the seating of Alice. While instantiating the rules also some evaluations take place. For example, when handling the rule in line 9 for table 1 with capacity 7, the lower bound C+1 of the constraint is substituted by the value 8. The ground program, also indicated in Figure 1, is typically generated by running a dedicated tool, i.e., a grounder, on the input. After that the search for answer sets can be performed by invoking an answer set solver. Finally, the solution(s) of the original problem instance are obtained by extracting relevant part(s) from the answer set(s) found. For the encoding under consideration, this means that whenever an occurrence of $\text{seat}(P,T)$ is contained in an answer set, then person $P$ is supposed to be seated at table $T$. Using the notions from Example 1, we would have the required mapping $s: P \mapsto T$ from persons to tables. If no answer set can be found, then a problem instance has no solutions. This is actually the case for the instance described by lines 2–5 in Listing 1, since it is impossible to place Alice, Bob, and John at the same table due to their relations. However, if the facts in line 4 are removed, obtaining answer sets is still feasible, the relationships of other guests permitting.

---

²However, ASP builds on the closed world assumption (CWA): the given information is treated as complete information and the problem is solved under this assumption.
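In practice, grounding and solving are carried out by invoking the respective tools; a minimal sketch, assuming the encoding of Listing 1 is stored in a file seating.lp (the file name is an assumption) and that the Potassco tools mentioned later in this article are installed:

```
gringo seating.lp | clasp    # ground with GRINGO, then solve with CLASP
clingo seating.lp            # or use the combined grounder and solver
```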
**Beyond Basic ASP.** The basic paradigm illustrated in Figure 1 solves the problem at hand by eventually finding one or more solutions to the problem, or by showing that no solution exists. If there are multiple solutions to the problem, then it may be desirable to select the best solution among the alternatives using some criterion such as *price, capacity, etc.* This turns the problem into an *optimization problem.* In ASP, objective functions for such problems can be defined in terms of *optimization statements* like
$$\#\text{minimize}\{ w_1,1 : \text{atom}_1; \ \ldots; \ w_n,n : \text{atom}_n \}.$$
The statement above assigns weights $w_1,\ldots ,w_n$ to atoms $\text{atom}_1,\ldots ,\text{atom}_n$, respectively, and the goal is to minimize the sum of weights for atoms contained in an answer set—when evaluated over all answer sets. As regards the seating arrangement problem, the respective optimization problem could deal with obviously inconsistent settings like the one described above. Rather than satisfying all constraints resulting from the mutual relations of persons, the goal would be to satisfy as many as possible. In the preceding example, this would mean that either Alice is seated at the same table as Bob, or Bob is seated with John, but Alice and John are placed at different tables.
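As an illustration of such a relaxation, the hard dislikes constraint of Listing 1 could be replaced by a penalty that the optimization statement then minimizes; a minimal sketch, where the auxiliary predicate violated/2 is introduced here for illustration and is not part of the paper's encodings:

```
% Record, instead of forbidding, that two enemies share a table.
violated(P1,P2) :- dislikes(P1,P2), seat(P1,T), seat(P2,T),
                   person(P1), person(P2), tbl(T,_).
% Minimize the number of violated dislike relationships.
#minimize{ 1,P1,P2 : violated(P1,P2) }.
```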
Besides the optimization of solutions, there are also other *reasoning modes* of interest. It is sometimes interesting to see how much the solutions are alike. In *cautious reasoning*, the idea is to check whether a certain atom is present in all answer sets or absent from some answer set. For instance, if $\text{seat}(a,1)$ is for some reason contained in all answer sets, then Alice will be unconditionally seated at the first table, as no options remain to this end. Cautious reasoning corresponds to *basic query evaluation* over answer sets and it can be implemented by adding a constraint to the program. In the case of our example, the constraint would read `:- seat(a,1).`, indicating that we would like to find a *counter-example*, i.e., an answer set not containing $\text{seat}(a,1)$. Alternatively, cautious reasoning can be implemented by solvers as a special reasoning mode while searching for answer sets. *Brave reasoning* is the dual of cautious reasoning: then the presence in some or absence from all answer sets is required. Again, this can be implemented by adding a constraint or as a special reasoning mode.
It is also possible to *enumerate* answer sets and, hence, *count* their number. For certain applications, the number of solutions could actually be an interesting piece of information. In product configuration (see, e.g., (Soininen and Niemelä 1998)), this could be the number of variants that a production line should be able to produce. There are also complex use cases of ASP. In *incremental solving,* the idea is to compute partial solutions to a problem (or show their non-existence) by calling an ASP solver several times and by extending the instance data on the fly. Various kinds of planning problems (with an increasing plan length) typically fall into this category. The latest developments even suggest *multi-shot solving* (Gebser et al. 2014) where solver calls are freely mixed and the ground programs used upon solver calls may evolve in more complex ways.
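With the Potassco tools, these reasoning modes are directly available as command line options; a minimal sketch (the file name seating.lp is an assumption):

```
clingo 0 seating.lp                      # enumerate (and count) all answer sets
clingo --enum-mode=cautious seating.lp   # atoms contained in every answer set
clingo --enum-mode=brave seating.lp      # atoms contained in some answer set
```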
**Constraints over Infinite Domains.** Since grounding is an inherent part of the ASP work flow, the basic paradigm is based on Boolean or finite-domain variables only. However, certain applications call for variables over infinite domains such as integers and reals. For instance, there have been proposals to extend ASP rules by *linear inequalities* (Gebser, Ostrowski, and Schaub 2009; Liu, Janhunen, and Niemelä 2012; Mellarkod, Gelfond, and Zhang 2008) as well as *difference constraints* (Janhunen, Liu, and Niemelä 2011). From the modeling perspective, the goal of such extensions is to increase the expressive power of ASP suitably so that new kinds of applications become feasible. For instance, referring back to the seating problem in Listing 1, we could refine the specification for each person $P$ by introducing integer variables $e(P)$ and $l(P)$ denoting the points of time when $P$ enters and leaves the table in question. Using difference constraints, we could state a specification given as Listing 2. Intuitively, the rules in lines 1 and 2 insist that person $P$ stays at the table from 5 to 90 minutes. The constraint in lines 3–5 refines the last one from Listing 1. It is not allowed that any two persons $P1$ and $P2$ who dislike each other are seated at the same table at the same time. It is important to notice that when the constraint in line 1 is instantiated for Alice, the resulting constraint is `:- l(a)-e(a) < 5.` Thus, the infinity of the underlying domain is not reflected in the size of the resulting ground program. Naturally, the interpretation of $l(a)$ and $e(a)$ as integer variables must be dealt with by the implementation of such constraints.
Application: Locking Design
Having introduced the ASP paradigm on a general level, we now illustrate its main features in terms of an application problem where the goal is to design a locking scheme for a building. This is to be understood comprehensively, i.e., we are not just interested in locks but also anything else that can affect accessibility in a building. For simplicity, we consider a single floor. A sample floor plan of such a building is depicted in Figure 2. There are 12 rooms altogether, numbered from 1 to 12 in the figure. Given this domain, our objectives
```
Listing 2: Examples of difference constraints
1 :- l(P)-e(P) < 5, person(P).
2 :- l(P)-e(P) > 90, person(P).
3 :- l(P1)-e(P2) > 0, l(P2)-e(P1) > 0,
4    dislikes(P1,P2), person(P1), person(P2),
5    seat(P1,T), seat(P2,T), tbl(T,_).
```
are as follows. First, we describe the domain in a uniform way by selecting adequate predicates for the representation of domain information. Second, we take one concrete design goal from this domain into consideration. To this end, we concentrate on the configuration of locks installed on (potential) doors between the rooms in such a way that certain accessibility criteria are met. A particular safety requirement is that the floor can be effectively evacuated in case of an emergency. The idea is to develop ASP encodings for a design problem like this and, at the same time, illuminate the basic line of thinking and typical primitives used when modeling in ASP.
**Uniform Encoding.** The goal is to choose predicate symbols and the respective relations that are needed to represent an instance of the application problem at hand. To abstract the physical coordinates of the rooms, we rather represent the adjacency relation of rooms in terms of a predicate adj/2. For simplicity, we also assume that this relation captures the potential of installing doors between any adjacent rooms. The floor plan of Figure 2 can be represented by constants 1..12 for the rooms and the following facts:
adj(1,2). adj(1,3). adj(2,3).
adj(2,4). ... adj(11,12).
In total, there are 21 such facts and they are sufficient for the purposes of our examples to describe the interconnections of the rooms. For space efficiency, the adjacency information is represented asymmetrically, i.e., adj(X,Y) is recorded only if X < Y. In addition, the rooms having exits are reported using a unary predicate exit/1. For the running example in Figure 2, this is captured by the fact exit(5). Now, if the given floor plan were changed in one way or another, or a completely different floor plan were taken into consideration, this should be reflected in the facts describing the problem instance. The other rules describing the application problem are based on these two predicates, hence making the encoding uniform. As typical in ASP encodings, some subsidiary domain predicates are defined in order to make the description of the actual problem easier. Some domain rules for the locking design problem are collected in Listing 3 and explained below.
**Relational Operations.** The rules in lines 1–2 of Listing 3 are used to extract room information from the adjacency information by a simple *projection* operation. As a result, room(R) is true only for those values of R that actually appear in the adjacency information. In principle, a door between two rooms provides symmetric access from one room to the other. Thus, the adjacency relation is not well-suited as such for the description of accessibility, and we form the *union* of the adjacency relation with its reverse relation using the rules in lines 3–4. The relation pot/2 stands for potential access depending on instrumentation such as locks, handles, press buttons, etc.
**Defaults.** To illustrate the use of defaults in encodings, we have included the rules in lines 5–6 of Listing 3. The rule in line 5 defines the condition otherexit/0, meaning that some room other than room 1 has an exit. The rule in line 6 ensures that, by default, there is an exit at room 1. This is to hold unless another exit has been declared for the particular problem instance. There can be multiple exits. For instance, if there are two exits at rooms 1 and 5, this can be stated explicitly using the facts exit(1) and exit(5). Adding these facts overrules the default in line 6 because otherexit can be inferred by the rule in line 5.
**Defining the Search Space.** Typical ASP encodings include a part where the solution candidates for the problem being formalized are generated. This can be achieved by expressing a number of *choices* that aim at capturing the varying aspects of solutions. As regards syntax, such choices can be expressed in terms of *choice rules* whose heads are count-bounded sets of atoms. Bounds can also be omitted if an arbitrary choice is of interest. As explained above, the access from one room to another can be asymmetric due to physical constructions. In particular, this is true for emergency situations where persons try to leave the building as soon as possible but might have no keys to unlock any door. For simplicity, we introduce a two-argument predicate evac/2 that is used to express the existence of an evacuation route from a room to another. Given adjacent rooms R1 and R2, such a design choice can be made in terms of a choice rule
{ evac(R1,R2) } :- pot(R1,R2).
The intuitive reading is that if pot(R1,R2) is true, then the truth value of evac(R1,R2) is subject to a *choice*. Hence, the selection of evacuation routes between rooms is formalized. Note that the analogous normal rule
evac(R1,R2) :- pot(R1,R2).
would instead force evac(R1,R2) to hold whenever pot(R1,R2) holds, leaving no room for choice.
Figure 2: Floor plan for the rooms 1–12
Listing 3: Domain rules for locking design
1 room(R1) :- adj(R1,R2).
2 room(R2) :- adj(R1,R2).
3 pot(R1,R2) :- adj(R1,R2).
4 pot(R1,R2) :- adj(R2,R1).
5 otherexit :- exit(X), X > 1.
6 exit(1) :- not otherexit.
Listing 4: ASP Encoding of the Evacuation Plan
1 reach(R,R) :- room(R).
2 reach(R1,R2) :-
3    reach(R1,R3), evac(R3,R2),
4    room(R1), pot(R3,R2).
5
6 ok(R) :- room(R), reach(R,X), exit(X).
7 :- not ok(R), room(R).
8
9 #minimize{1,R1,R2: evac(R1,R2), pot(R1,R2)}.
Listing 5: Revised ASP Encoding of the Evacuation Plan
1 step(0..s).
2
3 reach(R,R,0) :- room(R).
4 reach(R1,R2,S+1) :-
5    reach(R1,R3,S), evac(R3,R2),
6    room(R1), pot(R3,R2), step(S), step(S+1).
7
8 ok(R) :- room(R), reach(R,X,S),
9    exit(X), step(S).
In fact, there are 22 020 such plans and further constraints can be introduced to identify the most suitable ones. It is indeed the case that the current requirements allow for very long evacuation routes through the building of Figure 2, such as
7 → 6 → 11 → 12 → 10 → 9 → 8 → 4 → 2 → 1 → 3 → 5.
Given this observation, the lengths of routes seem important. Thus, we now pay special attention to the number of evacuation steps, i.e., moves from one room to another, from the perspective of each room: the number of steps ought to be limited.
**Elaboration Tolerance.** It is straightforward to modify the recursive encoding so that the number of steps is reflected. The revised encoding is presented as Listing 5. The domain for steps is first declared by the rule in line 1, where the maximum number of steps s is determined from the command line of the grounder. The base case in line 3 simply states that each room R is reachable from itself in zero steps. The main modification in the recursive case (lines 4–6) concerns counting: the number of steps S is increased by one to S+1 whenever a further step is made. However, since both S and S+1 must be members of the domain of steps, the maximum value is effectively determined by the constant s in line 1. Given the floor plan of Figure 2 and s=2, no evacuation plans can be found. By increasing s by one, solutions with 11 connections are found again, and there are only 152 plans where the number of evacuation steps is at most three.
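With common grounders the constant s can be supplied on the command line; a minimal sketch, assuming the encoding and instance are stored in a file evac.lp (the file name is an assumption):

```
clingo -c s=3 0 evac.lp    # enumerate all plans with at most 3 evacuation steps
```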
In summary, we have now tackled one particular aspect of locking design, i.e., ensuring that an evacuation plan exists for a building. In reality further requirements are imposed on evacuation plans making the problem computationally more and more challenging. For instance, it can be shown that if we incorporate conditions which can make rooms along an evacuation route mutually exclusive, e.g., for certain security reasons, it is unlikely that we are able to find a polynomial-time algorithm for solving the problem (mathematically expressed the problem becomes NP-complete). This justifies well the use of powerful search methods like ASP for tackling the problem. For readers interested in computational complexity, we sketch the justifications of computational hardness in the sidebar.
Computing Answer Sets
So far, we have concentrated on the conceptual model of Figure 1 with an emphasis on the modeling side. As regards the actual computation of answer sets, grounding and solving were also identified as the main steps involved. Grounders are implemented either as stand-alone tools, such as the state-of-the-art grounder GRINGO³, or integrated as a front-end of the solver. Native answer-set solvers are able to handle ground logic programs directly and, hence, truly implement the search step illustrated in the figure. Typically, this step is the most demanding one from the computational perspective. A number of answer-set solvers have been developed in the history of ASP and we mention here DLV⁴, CLASP³, and WASP⁵, since they are actively maintained and developed at the moment. The article by Kaufmann et al. (2016) in this special issue gives a more detailed account of grounding and solving. If ASP is extended by constraints which cannot be directly handled by the ASP solver being used, the typical solution is to isolate the extensions from the rules themselves and to treat them by appropriate solvers externally. This leads to an architecture where two or more solvers are cooperating and interacting, in analogy to SAT modulo theories (SMT) solvers. Then each sort of constraints can be handled by native algorithms.
**Translation-Based ASP.** The other constraint-based disciplines discussed in the introduction offer similar solver technology at the user's disposal for handling, in particular, the search phase. However, they cannot be used straightforwardly, as ground programs are not directly understood by such solvers and certain kinds of transformations become indispensable. The idea of translation-based ASP is to translate (ground) logic programs into other formalisms so that a variety of solvers can be harnessed to the task of computing answer sets. Such an approach can be understood as a refinement of the search step in Figure 1. There are existing translations from ASP, e.g., to SAT (Janhunen 2004), to its extension SMT (Niemelä 2008), and to mixed integer programming (MIP) (Liu, Janhunen, and Niemelä 2012). These translations indicate the realizability of ASP in other formalisms and they have all been implemented by translators in the ASPTOOLS⁶ collection. They offer another way of implementing the search phase in ASP using off-the-shelf solvers as black boxes. This approach is already competitive in certain application problems and it can be seen as an effort to combine the expressive power of the modeling language offered by ASP with the high performance of existing solvers. Translations are also useful when implementing language extensions in a single target language. For instance, the idea of (Janhunen, Liu, and Niemelä 2011) is to translate programs enriched by difference constraints into difference logic altogether. The strength is that a single solver is sufficient for the search phase, but on the other hand, the original structure of constraints may be lost.
**Cross Translation.** The translations mentioned above are based on very similar technical ideas but yield representations of the ground program in completely different formats. Since the development of several translators brings about extra programming work, it would be highly desirable to integrate the variety of translators in a single tool—having options for different back-end formats. This is not as simple as that due to the wide variety of formats under consideration. However, this issue is partly solved by a recent translation from ASP to SAT modulo acyclicity (Gebser, Janhunen, and Rintanen 2014) where graph-based constraints are interconnected with ordinary logical constraints (i.e., clauses). The translation can be implemented by instrumenting a ground logic program with certain additional rules and meta information formalizing the underlying recursion mechanism in terms of the acyclicity constraint. This leads to a new implementation strategy for translation-based ASP: the choice of the target formalism can be postponed until the last step of translation where the constraints are output in a particular solver format. This idea is analogous to cross compilation in the context of compiling conventional programming languages and hence we coin the term cross translation for ASP. In the current implementation of this idea, a back-end translator transforms the instrumented program into other kinds of constraints understood by SMT, MIP, and pseudo-Boolean (PB) solvers, for instance. Interestingly, by implementing an additional acyclicity check inside a native ASP solver, the instrumented program can also be processed directly by the solver (Bomanson et al. 2015), which offers yet another approach to answer set computation.
**Summary and Future Prospects**
This paper provides an introduction to the ASP paradigm as well as explains its main features—first generally, but also in terms of examples. We also discuss the two mainstream approaches to implementing the search for answer sets using either native solvers, or translators combined with solver technology offered by neighboring disciplines.
**Towards Universal Modeling.** There is a clear trend in the area of constraint-based modeling where methods and techniques are being transferred from one discipline to another. Various ideas from knowledge representation, logic programming, databases, and Boolean satisfiability served as a starting point for the ASP paradigm. But there are signs of knowledge transfer in the other direction as well. For instance, ASP solvers have been integrated into logic programming systems such as XSB (Rao et al. 1997). Advanced query evaluation mechanisms of ASP (Faber, Greco, and Leone 2007) are also relevant for deductive databases. The very idea of answer sets has been brought to the context of CP by introducing so-called bound-founded variables (Aziz, Chu, and Stuckey 2013). Quite recently, the algorithms for projected answer set enumeration have been exported for model counting in the context of SAT (Aziz et al. 2015).
We foresee that the exchange and incorporation of ideas and technologies in this way is gradually leading towards a universal approach where the user may rather freely pick the right language for expressing the constraints of his or her interest. The underlying reasoning system is then supposed to (i) take care of the required translations transparently and (ii) forward the resulting constraints to a solver architecture that can realize the search for answers.
\(^3\)potassco.sourceforge.net/
\(^4\)www.dlvsystem.com/
\(^5\)github.com/alviano/wasp.git
\(^6\)research.ics.aalto.fi/software/asp/
The first attempts to define a modular framework for multi-language modeling have already been made (Järvisalo et al. 2009; Lierler and Truszczynski 2014; Tasharrofi and Ternovska 2011). However, a lot of work remains to be done in order to realize the universal modeling scenario. Our experience from integrating various kinds of tools suggests that finding a universal format for the constraints of interest is one of the key issues for tool interoperability. There are existing formats, such as the DIMACS format in SAT, the Smodels format in ASP, and the FlatZinc format in CP, that can be used as starting points for designing the universal format.
Acknowledgments. The support from the Finnish Centre of Excellence in Computational Inference Research (COIN) funded by the Academy of Finland (under grant #251170) is gratefully acknowledged. The authors thank Martin Gebser, Michael Gelfond, Torsten Schaub, and Mirek Truszczynski for their comments on a preliminary draft of this article.
**Sidebar: Locking Design Can Be Computationally Challenging**
It is not surprising that finding a locking scheme satisfying given conditions can become computationally challenging when more involved conditions need to be satisfied. Here we consider the problem of finding a locking scheme that allows an evacuation plan such that for each room there is exactly one evacuation direction, and the evacuation routes respect a given set of room conflicts, i.e., a set of pairs of rooms \((R_1, R_2)\) such that, when following the evacuation routes, if you enter room \(R_1\) then you cannot enter room \(R_2\). We show that this locking design problem is NP-complete, indicating that it is unlikely that a polynomial-time algorithm for solving it can be found. See, for example, (Papadimitriou 1994) for an introduction to computational complexity and the concepts used below.
Technically, the NP-completeness of a problem can be shown by (i) establishing a reduction, computable in polynomial time, from a known NP-complete problem to the problem at hand, and (ii) showing that it can be checked in polynomial time whether a potential solution satisfies the required conditions. As the known NP-complete problem we use Exact-3-SAT, where we are given a conjunction of 3-literal clauses and the task is to find a truth assignment that satisfies exactly one literal in each of the clauses.
**Reduction from Exact-3-SAT.** Any given instance \(C_1 \land \ldots \land C_n\) can be transformed into the floor plan illustrated in Figure 3. For each 3-literal clause \(C_i = l_{i,1} \lor l_{i,2} \lor l_{i,3}\), we introduce a corridor \(C_i\) connected to rooms \(R_{i,1}, R_{i,2},\) and \(R_{i,3}\), which are in turn connected to corridor \(C_{i+1}\). Moreover, the rooms \(R_{i,1}, R_{i,2},\) and \(R_{i,3}\) have no doors between them. The (only) exit is located next to corridor \(C_{n+1}\), which means that all corridors and rooms must eventually be evacuated through it. Each room \(R_{i,j}\) is labeled by the respective literal \(l_{i,j}\), the idea being that \(l_{i,j}\) is satisfied if \(C_i\) is evacuated via the room \(R_{i,j}\). Consequently, if two rooms are labeled by complementary literals (i.e., a Boolean variable \(x\) and its negation \(\neg x\)), then those rooms are in conflict: evacuation routes involving any pair of conflicting rooms are not feasible. It is also easy to see that the floor plan in Figure 3 and the associated set of conflicts can be computed in polynomial time.
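The construction is easy to mechanize, which also makes its polynomial-time computability evident; here is an illustrative Python sketch (ours, with hypothetical naming conventions for rooms and corridors) that produces the door connections and the conflict pairs:

```python
# Sketch of the reduction: build the floor plan and the conflict set for an
# Exact-3-SAT instance. A literal is ('x', True) for x or ('x', False) for -x.

def build_floor_plan(clauses):
    doors, room_label = [], {}
    for i, clause in enumerate(clauses, start=1):
        assert len(clause) == 3
        for j, lit in enumerate(clause, start=1):
            room = f"R_{i}_{j}"
            room_label[room] = lit
            doors.append((f"C_{i}", room))       # corridor C_i -> room
            doors.append((room, f"C_{i+1}"))     # room -> corridor C_{i+1}
    doors.append((f"C_{len(clauses)+1}", "EXIT"))
    conflicts = [(r1, r2)
                 for r1, l1 in room_label.items()
                 for r2, l2 in room_label.items()
                 if r1 < r2 and l1[0] == l2[0] and l1[1] != l2[1]]
    return doors, conflicts

# Example: (x v y v -z) and (-x v z v y)
doors, conflicts = build_floor_plan(
    [[('x', True), ('y', True), ('z', False)],
     [('x', False), ('z', True), ('y', True)]])
print(conflicts)   # pairs of rooms labeled by complementary literals
```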
It can be shown that a 3-SAT instance \(C_1 \land \ldots \land C_n\) has a satisfying truth assignment in which each clause has exactly one literal satisfied if and only if, for the corresponding floor plan, there is a locking scheme that allows an evacuation plan such that (i) for each room there is exactly one evacuation direction and (ii) the evacuation routes respect the set of room conflicts arising from the complementary literals. The key observation is that, for the corresponding floor plan, evacuation is possible only if there is a route from \(C_1\) to \(C_{n+1}\) such that, for each \(i = 1, \ldots, n\), the route visits exactly one of the rooms \(R_{i,1}\), \(R_{i,2}\), and \(R_{i,3}\) and all room conflicts are respected. A satisfying truth assignment with exactly one literal satisfied per clause directly gives such a route; and if such a route is available, it directly gives an appropriate truth assignment in which the literals corresponding to the rooms visited by the route are satisfied.
Moreover, it is clear that given a locking scheme with exactly one evacuation direction for each room, it can be checked in polynomial time that evacuation is possible and that all room conflicts are respected.
Solving scheduling problems using Petri nets and constraint logic programming
<http://www.numdam.org/item?id=RO_1998__32_2_125_0>
SOLVING SCHEDULING PROBLEMS USING PETRI NETS AND CONSTRAINT LOGIC PROGRAMMING (*)
by P. RICHARD and C. PROUST
Communicated by Philippe CHRETIENNE
Abstract. – This paper presents an approach to solving scheduling problems from a Petri net model. A timed Petri net describes the feasible sequences and schedules of operations. The net is then translated into a CHIP program. The built-in solver of the constraint logic programming language is used to solve the associated scheduling problem. The implementation of the OPTNET software and some results are described. © Elsevier, Paris
Keywords: Scheduling, Petri nets, Constraint Logic Programming.
1. INTRODUCTION TO SCHEDULING PROBLEMS
Carlier et al. (1987) say that we deal with a scheduling problem when “we must program the execution of a realisation by assigning resources to tasks and by fixing their execution dates [...]. The tasks are the common denominator of scheduling problems, their definitions are neither always immediate, nor trivial”. These problems are hard to solve; most of them are NP-hard (Lawler et al., 1989).
The traditional approach proceeds through a mathematical classification of problems, as in Baker (1974) and GOThA (1993). The notation for problems is n/m/A/B, where n is the number of jobs, m is the number of machines, A describes the flow of pieces and the internal processing rules, and B is the measure of performance. Of course, we can model those problems analytically, in a classical operational research way (see for instance, for the flowshop problem, Stafford et al. (1990)), in order to use standard resolution tools. But, on the one hand, that does not necessarily allow one to exploit the properties of the studied problem to face its resolution complexity (a specific branch and bound approach does), and on the other hand, the initial model cannot easily be extended to take new constraints into account. Efficiency is then synonymous with a lack of genericity, even if there are attempts to remedy this, as in Foure et al. (1993). Classical methods used to solve these problems are enumerative methods (branch and bound, dynamic programming, ...) and neighborhood methods (tabu search, simulated annealing, genetic algorithms, ...).
But the previous approaches failed to bridge the gap between the theory and practice of scheduling (Bauer et al., 1991). In consequence, other approaches have been tried: simulation techniques, artificial intelligence methods, and so on. The reader can refer to Bauer et al. (1991) and the special issue of IJPR (1988) for a review of these techniques. More recently, the development of constraint programming has proved interesting for the modelling of scheduling problems (Le Pape, 1994a). Constraints are added to a programming language either through an extension of the language, as in CHIP (Van Hentenryck, 1989) or Prolog III (Colmerauer, 1990), or as a library, as in the Ilog Solver C++ library (Le Pape, 1994b). In order to deal with scheduling problems, CHIP and the Ilog library introduce symbolic constraints to represent resources: Aggoun et al. (1993) with CHIP and Le Pape (1994b) with Ilog Schedule. The modelling power of these kinds of tools is considerable, but their resolution abilities are not well known. Still, those approaches reduce the gap between theory and practice.
All the approaches reviewed above are bottom-up, in the sense that a resolution tool is applied to a scheduling problem. Another category of approaches determines the set of feasible solutions of the scheduling problem without any optimization concern (Baptiste et al. (1991), Roubellat et al. (1994), Levy et al. (1994)). For example, in Baptiste et al. (1994), feasible sequences are modeled with a P-Q-R tree and then exploited with an extension of a CLP language designed by Zidoum et al. (1994), but without any optimization objective.
Our approach to solving scheduling problems is top-down. The first step is the modelling of the scheduling problem with Petri nets (Murata, 1989). The modelling is independent of any resolution tool. In a second step, among emerging resolution tools, we focus our interest on CLP.
First, we recall the basics of Petri nets. Second, we present the relation between Petri nets and scheduling theory. The third part concerns the modelling of scheduling problems with Petri nets. Without loss of generality, we take our examples from the classical flow-shop problem (Johnson, 1954) and its extensions (Proust, 1992). After deducing the properties of the model, we describe the transcription of the obtained net into a CLP program in CHIP, and the resolution step. Then we present the software implementation and some results.
2. PETRI NETS RECALLED
A Petri net (PN) is a bipartite graph in which the vertices are places and transitions. Weighted edges link places and transitions. More formally, a Petri net is a 4-tuple \(<P,T,\text{In},\text{Out}>\) where:
- \(P\) is a finite set of places (\(|P| = m\)),
- \(T\) is a finite set of transitions (\(|T| = n\)),
- \(\text{In}\) is an input function \(P \times T \rightarrow \mathbb{N}\),
- \(\text{Out}\) is an output function \(P \times T \rightarrow \mathbb{N}\),

where \(\text{In}(p, t) = 0\) if \(p\) does not precede \(t\), and otherwise equals the weight of the edge from \(p\) to \(t\); and \(\text{Out}(p, t) = 0\) if \(t\) does not precede \(p\), and otherwise equals the weight of the edge from \(t\) to \(p\).
The dynamic behaviour of a PN is given by the flow of marks (or tokens) through its places. The marking function is \(M : P \rightarrow \mathbb{N}\); \(M(p)\) is the number of marks in place \(p\). A transition \(t\) is enabled if and only if every input place of \(t\) holds a number of marks greater than or equal to the weight of the corresponding edge. Firing a sequence of transitions follows the state equation below:
\[M' = M + C\sigma\]
- \(C\) is the incidence function: \(C = \text{Out} - \text{In}\);
- \(\sigma\) is the firing vector of the (firing) sequence: an \(n\)-vector in which a \(k\) in the \(i\)-th position indicates that transition \(i\) is fired \(k\) times.

We write \(\bullet p\) (respectively \(p\bullet\)) for the set of input (respectively output) transitions of the place \(p\).
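As a concrete reading of these definitions, here is a minimal Python sketch (our illustration) of the enabling rule and of the state equation \(M' = M + C\sigma\), firing one transition at a time:

```python
# Minimal Petri net sketch: In and Out are m x n integer matrices
# (rows: places, columns: transitions); markings are length-m vectors.

class PetriNet:
    def __init__(self, In, Out, M0):
        self.In, self.Out, self.M = In, Out, list(M0)
        # incidence matrix C = Out - In
        self.C = [[o - i for o, i in zip(orow, irow)]
                  for orow, irow in zip(Out, In)]

    def enabled(self, t):
        # every input place of t must hold at least the edge weight
        return all(m >= row[t] for m, row in zip(self.M, self.In))

    def fire(self, t):
        assert self.enabled(t), "transition not enabled"
        # state equation M' = M + C * e_t, for a single firing of t
        self.M = [m + row[t] for m, row in zip(self.M, self.C)]

# Place p0 holds one token; transition t0 moves it to p1.
net = PetriNet(In=[[1], [0]], Out=[[0], [1]], M0=[1, 0])
net.fire(0)
print(net.M)   # [0, 1]
```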
An extension of the basic model adds a firing time \(d_t\) to each transition \(t\) (Timed Petri Nets: TPN); adding time to the transitions of a PN goes back to Ramchandani (1974). A transition is enabled at time \(u\) if its input places are sufficiently marked at time \(u\). The firing of a transition is decomposed into two phases: instantaneous withdrawal of tokens from the input places of \(t\) at time \(u\); instantaneous addition of tokens to the output places of \(t\) at date \(u + d_t\). Below, we deal with a particular class of TPN: timed marked graphs (TMG). In the following, we assume that no transition is firing at date \(t = 0\).
**Definition:** A timed marked graph is a 3-tuple \( G = \langle R, d, M^0 \rangle \) in which:
- \( R \) is a marked graph, *i.e.* every place has exactly one input and one output transition (\( |\bullet p| = |p\bullet| = 1 \)) and all weights are equal to one;
- \( d \) is the time mapping \( d : T \rightarrow \mathbb{N}^* \);
- \( M^0 \) is the initial marking. We write \( p_{ij} \) for the place between the transitions \( t_i \) and \( t_j \), and \( M_{ij} \) for the initial marking of \( p_{ij} \).
The firing of a transition is no longer atomic in this model, but we assume that no transition can have more than one firing in progress (this is usually ensured by associating with each transition input and output places marked with one token). The parallelism of transition firings is effective. The marking at date \( u \) is not sufficient to describe the state of the net: it is also necessary to know the firing dates of the transitions in progress (the residual firing times). Thus, the notion of firing sequence is not sufficient to determine the net evolution over a span of time. For that, Chrétienne (1983) introduces the notions of controlled execution and state. Before defining the notion of state, we must define the residual firing time \( R_t(u) \) of a transition, where \( x^n_t \) is the date of the \( n \)-th firing of transition \( t \):
\[
\begin{align*}
R_t(u) &= 0 && \text{if } 0 \leq u \leq x_t^1; \\
R_t(u) &= x_t^n + d_t - u && \text{if } x_t^n < u < x_t^n + d_t; \\
R_t(u) &= 0 && \text{if } x_t^n + d_t \leq u \leq x_t^{n+1}.
\end{align*}
\]
**Definition:** a state is a couple \((M(u), R(u))\) in which
\[
\begin{align*}
M(u) &= (M_p(u))_{p \in P} & & \text{the marking vector at date } u \\
R(u) &= (R_t(u))_{t \in T} & & \text{the residual time vector at date } u.
\end{align*}
\]
**Definition:** a controlled execution is the step function associated with the series of firing dates.
The characteristic vector \( N = (N_t)_{t \in T} \) of a controlled execution records the number of firing initiations of each transition. A controlled execution is said to be feasible if the behaviour of the net ensures that the marking remains nonnegative at every date \( u \). A controlled execution can be represented by a Gantt diagram (fig. 1).
Chrétienne (1983) has shown that constraints on firing dates in a TMG are potentials constraints:
**Theorem 1:** A controlled execution of a TMG is feasible iff the firing dates follow the inequalities
\[
\forall t_i:\ N_{t_i} > 0 \Rightarrow x_{t_i}^1 \geq 0; \qquad
\forall p_{ij} \in P,\ \forall n \geq 1:\quad
n + M_{ij} \leq N_{t_j} \;\Rightarrow\;
\left\{
\begin{array}{l}
n \leq N_{t_i} \\[2pt]
x_{t_j}^{\,n + M_{ij}} \geq x_{t_i}^{\,n} + d_{t_i}
\end{array}
\right.
\tag{1}
\]
This result is easy to understand. Consider an elementary part of a marked graph: a place \( p_{ij} \) with its two transitions \( (t_i, t_j) \); its form necessarily satisfies \( |\bullet p| = |p\bullet| = 1 \). \( t_j \) can be fired \( M_{ij} \) times at date \( t = 0 \), even before \( t_i \) has been fired once. Afterwards, every firing of \( t_j \) must be preceded by a firing of \( t_i \), which brings a mark into \( p_{ij} \) (enabling \( t_j \)).
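The inequalities of Theorem 1 can be checked mechanically on candidate firing dates; the following Python sketch (ours, assuming a finite list of firing dates per transition) does exactly that:

```python
# Check the inequalities of Theorem 1 on candidate firing dates.
# x[t] is the list of firing dates of transition t (x[t][n-1] = n-th firing);
# places maps (ti, tj) -> (M_ij, d_i) for each place p_ij.

def feasible(x, places):
    if any(dates and dates[0] < 0 for dates in x.values()):
        return False
    for (ti, tj), (Mij, di) in places.items():
        for n in range(1, len(x[tj]) - Mij + 1):    # n + M_ij <= N_tj
            if n > len(x[ti]):                       # n-th firing of ti must exist
                return False
            if x[tj][n + Mij - 1] < x[ti][n - 1] + di:
                return False
    return True

# Place p_12 with no initial token: the single firing of t2 must wait d_1 = 3.
print(feasible({1: [0], 2: [3]}, {(1, 2): (0, 3)}))   # True
print(feasible({1: [0], 2: [2]}, {(1, 2): (0, 3)}))   # False
```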
3. SCHEDULING AND PETRI NETS
Petri nets (PN) allow the modelling of the constraints of scheduling problems with a homogeneous graphical formalism, and they capture central problems of scheduling theory; for instance, TMG generalize the PERT/CPM methods.
Petri nets have been used to deal with cyclic scheduling: for instance, Ramchandani (1974), Chrétienne (1983), Hillion et al. (1989), Munier (1993), Julia et al. (1995). They have also been used to study acyclic scheduling: for instance, Chu (1993), Lee (1994), Cheng et al. (1994), Proth (1994), Richard et al. (1994). The optimized performance criterion is not modeled within the TPN itself, as it is in other models (e.g., mathematical programming). The total completion time (\(C_{\max}\)) is defined by the ending times of the transition firings:
$$C_{\text{max}} = \max_{t \in T} \left( x_t^{N_t} + d_t \right)$$
where \(N = (N_t)\) is the characteristic vector (the number of firings of each transition) to be considered over the scheduling period. In scheduling theory, a criterion is regular if it is non-decreasing in the task completion dates. \(C_{\max}\) is obviously regular, since a controlled execution is a non-decreasing step function.
In order to deal with due dates, an integer \(p_t\) can be associated with each transition, as in Richard et al. (1995a). Thus, the following criteria can be defined: lateness (\(L_{\max}\)),
$$L_{\max} = \max_{t \in T} \left( \max_{1 \le n \le N_t} \left( x_t^n + p_t - n \cdot d_t \right) \right),$$
and tardiness ($T_{\text{max}}$),
$$T_{\max} = \max_{t \in T} \left( \max_{1 \le n \le N_t} \left( x_t^n + p_t - n \cdot d_t \right),\ 0 \right).$$
These performance criteria are widely used in industrial problems (Baker, 1974; French, 1982).
Carlier et al. (1988) study the scheduling of firing sequences of TPN. In that paper, a parallel is drawn between the notions of task sequences and firing sequences on the one hand, and the notions of schedules and controlled executions on the other. Classically, in scheduling theory, the study of the performance criterion allows one to limit the search space of feasible solutions. For instance, for any regular criterion, it is only necessary to consider the schedules in which the operations execute as early as possible on each machine (semi-active schedules). That allows one to associate one schedule with each sequence. Carlier et al. (1988) say that the same reasoning is applicable to TPN when the studied criterion is regular. The set of feasible solutions is then the set of earliest controlled executions associated with the feasible firing sequences.
The generalization of scheduling algorithms is obtained by showing equivalences between performance criteria. For instance, minimizing \(C_{\max}\) is equivalent to maximizing the average number of busy machines (\(\overline{N}_p\)) during the scheduling period. We show the same result with TPN concepts.
PROPERTY 1: The following performance criteria are equivalent:

(i) \( \min C_{\max} \); (ii) \( \max \overline{R}_t \); (iii) \( \max \overline{N}_p \),

where

\[
\overline{R}_t = \frac{1}{C_{\max}} \int_0^{C_{\max}} R_t(u) \, du, \qquad
N_p(u) = \sum_{t \in T} \delta_t(u),
\]

and \( \delta_t(u) = 1 \) if \( R_t(u) > 0 \) and \( 0 \) otherwise.

Proof:

(a) \( \min C_{\max} \Leftrightarrow \max \overline{R}_t \): for every \( t \),
\[
\overline{R}_t = \frac{1}{C_{\max}} \int_0^{C_{\max}} R_t(u) \, du
= \frac{1}{C_{\max}} \sum_{i=1}^{N_t} \int_{x_t^i}^{x_t^i + d_t} (x_t^i + d_t - u) \, du
= \frac{N_t \, d_t^2}{2\, C_{\max}}.
\]

(b) \( \min C_{\max} \Leftrightarrow \max \overline{N}_p \):
\[
\overline{N}_p = \frac{1}{C_{\max}} \sum_{t \in T} \int_0^{C_{\max}} \delta_t(u) \, du
= \frac{1}{C_{\max}} \sum_{t \in T} N_t \, d_t.
\]

In both cases the numerator is a constant for a given characteristic vector, so the average is maximized exactly when \( C_{\max} \) is minimized. ∎
These results suggest that PNs constitute a homogeneous model for studying scheduling problems, from both the modelling and the theoretical point of view.
4. MODELLING OF SCHEDULING PROBLEMS – THE FLOWSHOP EXAMPLE
The modelling of scheduling problems with TPN consists in designing a net in which only the feasible schedules are reachable (feasible controlled executions). Then come two choices: using high-level PN or elementary PN. The former gives abstractions that are usually obtained with a loss of properties (DiCesare et al. 1993); for example, Valentin et al. (1994) use a high-level Petri net to study job-shop problems. Thus, the use of elementary PN is often preferred. But the model is then too large to be designed as a whole without a method. We have decided to model each constraint and extension of the basic flowshop problem (Proust, 1992) by a different net. The global net is obtained by synthesizing the set of designed nets. The modelling of the constraints of the flowshop problem family is given below.
Note that, since for any regular performance criterion it is only necessary to consider earliest controlled executions, we can add transitions with zero delay to model particular events (starting or ending of operations). The modelling of the constraints is given in Figure 2.
Two synthesis techniques for PN have been proposed: bottom-up and top-down techniques. The first merges parts of independent nets into one; the second makes a decomposition (stepwise refinement) of places and/or transitions into subnets. These principal techniques are summarized by DiCesare et al. (1993).
(a) A task i viewed as a single operation outside the shop.
(b) A task as a sequence of operations; places symbolize stocks between machines.
(c) Machines are mutual-exclusion places.
(d) In a permutation problem, the tasks pass in the same order on every machine (stocks between machines are managed FIFO). M_1..M_m, S_1..S_n are shared places.
(e) Delays between operations:
(e1) start lag D_i: the start of t_{i,j+1} cannot occur before the start time of t_{ij} plus D_i (we assume D_i ≥ d_i);
(e2) stop lag E_i: the end of t_{i,j+1} cannot occur before the end of t_{ij} plus E_i (we assume E_i ≥ d_i);
(e3) transport time a_{ij} between machines j and j+1 for task i.
(f) Limited stock capacity b_{j,j+1} between machines j and j+1: the places b_{j,j+1} are initialized with b_{j,j+1} tokens; M_j and M_{j+1} are the machine constraints.
(g) General precedence constraint: a place is inserted between the two transitions (operations) in a precedence relation.
(h) Setup time s_{ij} and removal time r_{ij} of tools, which are not sequence-dependent.

Figure 2. – The modelling of shop scheduling constraints. [net diagrams omitted]
In the presence of highly shared resources, modelling with only one of those approaches is impossible. A combination of the two techniques must be used (a hybrid approach), but then the systematic character of the modelling requires a synthesis procedure, which expresses the order and the use of the synthesis rules. We give below the rules that we are going to use (fig. 3). The origin of each rule is indicated in brackets, without the exact references. In the following, N1 N2 represents the merge of the nets N1 and N2.
**bottom-up techniques**
- R1: merging of places (Agerwala, Choed-Amphai 78)
- R2: merging of elementary paths (Beek, Krogh 86)
- R3: extension with a path (Datta, Gosh 84)

**top-down techniques**
- R4: refinement of transitions, where N2 is a well-formed block; "block" in N2 represents a subnet (Valette 79)
- R5: refinement of places, which permits the use of rule R4 afterwards (Suzuki, Murata 83)

Figure 3. – Synthesis rules. [net diagrams omitted]
**Example 1:** The necessity of using both techniques is illustrated by the modelling of limited buffer capacity between machines (fig. 4; net diagrams omitted). Since the net is timed, that modelling alone would serialize the two machines. It can be corrected if \( t_1 \) and \( t_2 \) are decomposed so as to distinguish the events: start of operation \( t_2 \) and end of operation \( t_1 \) (using the R4 rule).
The same approach must be used to model FIFO stocks. The modelling of the \( n/m/F \), constraints/\( C_{\max} \) problem follows the synthesis procedure given hereafter:
**Synthesis procedure**
step 1: model each task as in (a)
step 2: set the sequence of operations for each task as in (b)
step 3: model the resource constraints with R2 (c)
step 4: model the setup and removal times of tools with R4 (h)
step 5: model the constraints of limited stock capacity (f)
5.1. build the limitation of the stock with R2
5.2. decompose the input and output transitions of the stock with R4
step 6: model FIFO stocks (d)
6.1. use R4 to insert a transition with zero duration after \( t_1 \)
6.2. decompose the place between \( t_0 \) and \( t_1 \) to insert the stock (f) with R5
6.3. place the mutual-exclusion loops of the stock parts with R2
step 7: model the lags with R2 (e)
step 8: model the precedence constraints with R3 (g)
step 9: merge resource places and stock places with R1.
**Example 2:** modelling of \( n/3/F, b_{j,j+1}, a_{ij}/C_{\max} \) with the above synthesis procedure. For each job \( i \), we have the following steps (fig. 5).
**Property 2:** The global net, after synthesis, can be decomposed into:
- a timed marked graph,
- shared resources constrained by \( n \) processes (i.e., the structure of Figure 6).
These kinds of shared resources are also used in the modelling approach for flexible manufacturing systems of Zhou et al. (1991), where they are called “parallel mutual exclusion” resources. But those authors focus on the validation of properties such as deadlock, liveness and reversibility.
The introduction of renewable resource constraints modifies the structure of the net, i.e., it is no longer a marked graph. We show with an example that the sufficient condition of Theorem 1 is no longer true.
Figure 5. – Example using the synthesis procedure. [net diagrams for steps 1, 2–3, 5.1, 5.2 and 7 omitted; step 9 merges the places \(M_1, M_2, M_3, b_{12}, b_{23}\) of the \(n\) nets into the final net — for instance, the net obtained with two generic jobs \(i\) and \(j\).]

Figure 6. – Net structure of a shared renewable resource. [diagram omitted]
If \( t_i \) and \( t_j \) verify the linear inequalities, we have \( x^n_j \geq \nu_j \) and \( x^n_i \geq \nu_i \), where \( \nu_i \) and \( \nu_j \) are the bounds on the \( n \)-th firings of \( t_i \) and \( t_j \). If \( t_i \) is fired first, the bound \( \nu_j \) becomes \( \nu_j + d_i \); conversely, if \( t_j \) is fired first, \( \nu_i \) becomes \( \nu_i + d_j \). Therefore, the inequalities are not linear: they depend on the order in which the resource is allocated to the jobs. The necessary condition still holds, since \( \forall t_i,\ d_i \in \mathbb{N} \): in fact, resource conflicts delay firing dates.
5. SOLVING WITH CONSTRAINT LOGIC PROGRAMMING
Solving a scheduling problem consists in computing an optimal controlled execution for a given characteristic vector. Among emergent resolution techniques, we focus our interest on logic programming with constraint propagation (CLP). We use the CHIP language (Van Hentenryck, 1989; Cosytec, 1993).
CLP does not restrict itself to the manipulation of symbolic terms; it extends the mechanisms of logic programming to different domains: booleans, rationals, finite domains. A finite-domain variable is an integer variable that takes its values in a finite non-empty set. Linear terms are built from these variables and the + and \(\times\) operators. A constraint is a comparison of two linear terms with the classical arithmetic comparators. Symbolic constraints, specially designed for specific problems, are also predefined. For instance, the cumulative constraint of the CHIP language is designed for the resolution of scheduling problems (Aggoun et al., 1993).
A CLP program is classically divided into two parts. In the first one, the variables are declared and the constraints are set. In the second one, values are assigned to the variables so that the optimum is reached (with a built-in branch and bound technique). The disjunctive constraints generated by a shared resource can be set in different ways; an efficient one is the built-in cumulative constraint of the CHIP language.
The PN graph modelling the scheduling problem is represented in the CHIP language by a set of clauses. Additional clauses define the lists of places, transitions, firing durations, and the initial marking. The part of the model that is a marked graph is represented by the set of place/transition incidence relations: a clause has the form $edge(\text{rel}, p, t)$, where $\text{rel}$ takes its values in \{in, out\}. The resource part of the model is coded as $edge\_res(p, \text{treq}, \text{trel}, v)$, where $\text{treq}$ and $\text{trel}$ are the request and release transitions for the resource $p$ and $v$ its required quantity. These correspondences are summarized in Figure 7.
Other lists are added to complete the data (places, resources, durations, marking, characteristic vector).
**Example 3:** The example hereafter shows the transcription of a net into the CHIP language (Figure 8).
Chrétienne (1983) presents an algorithm to compute the completion time of \( M \) firings in a TMG. The principle is to unfold the marked graph by considering every firing of the same transition as a new vertex. In this new graph, called the developed graph, every edge represents a potential constraint between firings. It is a generalization of the conjunctive graph, as defined by Roy (1970), on which the set of minimal potentials is built. The optimum is reached by taking the critical path (the path for which the sum of the potentials is maximal). The use of CLP allows us to avoid building the developed graph, by using the inequalities of Theorem 1 directly. Moreover, we can deal with both disjunctive and conjunctive constraints (Richard *et al.*, 1995b).
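Without resource places, computing the earliest firing dates therefore amounts to a longest-path computation over the potential constraints; a compact Python sketch (ours, assuming the developed graph contains no positive-weight cycle):

```python
# Earliest firing dates in a developed graph: longest paths from a source.
# arcs: list of (u, v, w) meaning  x_v >= x_u + w  (a potential constraint).

def earliest_dates(vertices, arcs):
    date = {v: 0 for v in vertices}
    for _ in range(len(vertices) - 1):       # Bellman-Ford style relaxation
        changed = False
        for u, v, w in arcs:
            if date[u] + w > date[v]:
                date[v] = date[u] + w
                changed = True
        if not changed:
            break
    return date

# t1 (d = 3) before t2 (d = 2) before t3:
print(earliest_dates(["t1", "t2", "t3"], [("t1", "t2", 3), ("t2", "t3", 2)]))
# {'t1': 0, 't2': 3, 't3': 5}
```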
[net diagram omitted]

```
edge(out, p_i, t_i).
edge(in, p_i, t_i).
edge_res(r, t_i, t_j, k).
```
Figure 8. – Transcription of a net into a CHIP program.
The resolution of the scheduling problem uses a finite-domain variable for each firing occurrence. In the particular case of the flowshop, each transition is fired only once. Constraints on the firing dates of a marked graph are potential constraints. For every input place $p$ of $t$, we set an inequality constraint: $x_t^1 \geq 0$ if the marking of $p$ equals 1 ($m(p) = 1$), and $x_t^1 \geq x_{t'}^1 + d_{t'}$ otherwise, where $t'$ is the input transition of $p$. An algorithm that sets the potential constraints in the CHIP language for a marked graph is given by Richard (1994). Resource constraints use the built-in cumulative constraint. We noted before edge_res($p, X_t, Y_t, V_t$), where $X_t$, $Y_t$, $V_t$ are finite-domain variables. The parameters of cumulative are: Starts (the list of the dates at which the resource is requested: $X_t$), Durations (a list of non-instantiated finite-domain variables), Ends (the list of the dates at which the resource is released: $Y_t + D_t$), Quantities (the list of the quantities of resource required by the operations: $V_t$), and High (the total amount of the resource — the initial marking of the place $p$). In the above example, the constraint set will be: cumulative($[x_{t_{1,1}}, x_{t_{2,1}}], [D1, D2], [x_{t_{1,2}}, x_{t_{2,2}}], [1,1], 1$). The constraints computing $C_{\max}$ are $\forall t \in T$, $C_{\max} \geq x_t^{N_t} + d_t$. The CHIP finite-domain solver is started by assigning values to the free variables; the constraint-propagation phase then restricts as many domains as possible. The skeleton of the algorithm is given hereafter:
Skeleton of the algorithm:
step 1: build the list of firing dates (finite domain variables)
step 2: set potential constraints
set renewable resource constraints
set constraints for the $C_{max}$ computation
step 3: optimize with the built-in branch and bound procedure
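The semantics of the cumulative constraint used in step 2 can be stated independently of CHIP: at no instant may the operations in progress use more resource than the capacity High. The following Python checker illustrates that meaning only; it is not CHIP's propagation algorithm:

```python
# Meaning of cumulative(Starts, Durations, Ends, Quantities, High):
# at no instant may the running operations use more than High resources.
# Checking at the start instants suffices, because the resource profile
# is piecewise constant and only increases at a start.

def cumulative_holds(starts, durations, quantities, high):
    for u in sorted(set(starts)):
        load = sum(q for s, d, q in zip(starts, durations, quantities)
                   if s <= u < s + d)
        if load > high:
            return False
    return True

# Two unit-demand operations on a single machine (High = 1):
print(cumulative_holds([0, 5], [5, 4], [1, 1], 1))   # True: no overlap
print(cumulative_holds([0, 3], [5, 4], [1, 1], 1))   # False: overlap at u = 3
```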
We describe the algorithm that sets the potential constraints in an imperative form to ease understanding (Figure 9). The constraints to be set take the form given in Theorem 1. The number of firings, fixed by the user, allows one to build the characteristic vector of the controlled execution; \(N_i\) is the number of firings of \(t_i\).
Cavalier et al. (1995) have developed a prototype called OPTNET. We report some results hereafter.
6. IMPLEMENTATION AND RESULTS
The software developed is OPTNET (OPTimization of Petri NETs), implemented with the CHIP language on a Sun Sparc Station under Solaris 2.2.
Algorithm potential-constraints
Begin
  For i ← 1 to n do
    For all j ∈ Γ⁺(i) do   /* successors of t_i in the graph */
      k ← N_j
      While k − M_ij > 0 do
        set the constraint x_i^(k − M_ij) + d_i ≤ x_j^k
        k ← k − 1
      End while
    End for
  End for
End

Figure 9. – Algorithm which sets the potential constraints.
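A direct Python transcription of this algorithm (ours), generating the inequalities as data rather than posting them to a solver:

```python
# Transcription of the potential-constraints algorithm: for every place
# p_ij, generate the inequalities  x_i^{k - M_ij} + d_i <= x_j^k.

def potential_constraints(successors, M, d, N):
    """successors[i]: list of successor transitions of t_i;
       M[(i, j)]: initial marking of p_ij; d[i]: duration; N[j]: firings of t_j."""
    constraints = []
    for i in successors:
        for j in successors[i]:
            k = N[j]
            while k - M[(i, j)] > 0:
                constraints.append(((i, k - M[(i, j)]), d[i], (j, k)))
                k -= 1
    return constraints

# Two transitions in series, one firing each, empty place in between:
for (ti, n1), di, (tj, n2) in potential_constraints(
        {1: [2], 2: []}, {(1, 2): 0}, {1: 3, 2: 2}, {1: 1, 2: 1}):
    print(f"x_{ti}^{n1} + {di} <= x_{tj}^{n2}")     # x_1^1 + 3 <= x_2^1
```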
The net is described in textual form, as seen in the previous sections, but the net could equally be designed through a graphical interface, using the transcription rules defined earlier. The global architecture of the software is given in Figure 10.
The graphical interface is used to parametrize the net (durations associated with transitions, number of tokens in each place) and the resolution characteristics (time bound, presentation of Gantt diagrams). Gantt diagrams can be drawn with one line per transition, or one line for a set of transitions (e.g., a job), which is set with the graphical interface. During the resolution step, each solution found is drawn in the Gantt-diagram window. To avoid long resolution times, the user can provide a maximal resolution time after which the solver must stop; when the time has elapsed, the solver is stopped and the best solution found so far is returned. In general, the optimal solution is found quickly, but it takes a long time for CHIP to prove its optimality. For our tests, we use the flow-shop problem.
Figure 10. – OPTNET software architecture. [diagram omitted]
It is well known that permutation schedules (i.e., stocks between machines managed FIFO) are dominant (i.e., they form a subset of the schedules containing an optimal one) for the \( n/2/F/\)regular-criterion and \( n/3/F/C_{\max} \) problems (Baker, 1974). So most resolution techniques (optimal or heuristic) limit the search space to the permutation schedules. Optnet does not exploit this kind of dominance property and searches for a solution in the general case.
We report, in Figure 11, some results on the \( n/2/F/C_{\max} \), \( n/2/F, S_{nsd}, R_{nsd}/C_{\max} \), and \( n/m/F/C_{\max} \) (\( m \geq 3 \)) problems. The first two problems are solved optimally by the polynomial algorithms J, defined by Johnson (1954), and SH, designed by Sule et al. (1983); Cavalier et al. (1995) have implemented them in the CHIP language too.
<table>
<thead>
<tr>
<th>Polynomial Problems</th>
<th>OPTNET (ms)</th>
<th>Classical algorithms (ms)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Pb1 (4/2/F/Cmax)</td>
<td>230</td>
<td>59.7 (J)</td>
</tr>
<tr>
<td>Pb2 (5/2/F/Cmax)</td>
<td>510</td>
<td>74.4 (J)</td>
</tr>
<tr>
<td>Pb3 (6/2/F/Cmax)</td>
<td>300</td>
<td>94.8 (J)</td>
</tr>
<tr>
<td>Pb4 (2/2/F, S_{nsd}, R_{nsd}/Cmax)</td>
<td>873</td>
<td>26.2 (SH)</td>
</tr>
<tr>
<td>Pb5 (3/2/F, S_{nsd}, R_{nsd}/Cmax)</td>
<td>342285</td>
<td>39.8 (SH)</td>
</tr>
<tr>
<td>Pb6 (4/2/F, S_{nsd}, R_{nsd}/Cmax)</td>
<td>NS*</td>
<td>56.4 (SH)</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>Strongly NP-hard Problems</th>
<th>OPTNET (ms)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Pb7 (3/3/F/Cmax)</td>
<td>320</td>
</tr>
<tr>
<td>Pb8 (4/3/F/Cmax)</td>
<td>370</td>
</tr>
<tr>
<td>Pb9 (4/4/F/Cmax)</td>
<td>230</td>
</tr>
<tr>
<td>Pb10 (4/5/F/Cmax)</td>
<td>800</td>
</tr>
<tr>
<td>Pb11 (4/9/F/Cmax)</td>
<td>2680</td>
</tr>
<tr>
<td>Pb12 (5/4/F/Cmax)</td>
<td>4600</td>
</tr>
</tbody>
</table>
NS*: not solved after one day of computing
Figure 11. – Results.
We can note that the resolution time with Optnet is very large in comparison with the polynomial algorithms. But, as noted before, the polynomial algorithms exploit the dominance property of permutation schedules, which considerably reduces the search space. Optnet has difficulties solving the \( n/2/F, S_{nsd}, R_{nsd}/C_{\max} \) problems because the durations of the operations in the cumulative constraint are not known at the beginning of the resolution phase: the durations are themselves set by constraints, and in that case solving optimally with CHIP is much more costly. The resolution time increases mainly with the number of jobs. The default search strategies are not adapted to the resolution of scheduling problems. So it seems necessary to take the specific constraints of the problem into account, such as regular criteria and the properties of permutation schedules, if we want to solve problems such as \( n/m/F, S_{nsd}, R_{nsd}, r_i, \) lag times, limited buffer capacity, \( \ldots/C_{\max} \) optimally (even for small sizes). OPTNET is a flexible resolution tool with a top-down approach: Petri nets are used to specify the set of feasible schedules with a graphical tool.
7. CONCLUSION
We have presented a homogeneous approach to solving (shop) scheduling problems subject to various constraints. The advantages of PN were explained in the first part; then a modelling technique based on a hybrid synthesis procedure was presented; finally, the resolution of the modeled problem was obtained using the CHIP CLP language. However, in our experience, the default search strategies of CLP languages are in general not adapted to the resolution of NP-hard problems, and the use of constraints and solvers as black boxes precludes efficient implementations.
But in most cases the optimum is not required. As Ackoff (1977) said, “preoccupation with optimization leads to a withdrawal from reality”. Industrial problems require more flexibility than optimality. So a perspective of our work is to design a computer-aided approach that takes human decisions into account.
ACKNOWLEDGMENT
The authors are very grateful to the anonymous reviewers for their important and constructive comments.
REFERENCES
S. French, Sequencing and Scheduling: An Introduction to the Mathematics of the Job-Shop, Ellis Horwood, 1982.
JaTE: Transparent and Efficient JavaScript Confinement
Tung Tran
Stony Brook University
tunghack@gmail.com
Riccardo Pelizzi
Stony Brook University
r.pelizzi@gmail.com
R. Sekar
Stony Brook University
sekar@cs.stonybrook.edu
ABSTRACT
Inclusion of third-party scripts is a common practice, even among major sites handling sensitive data. The default browser security policies are ill-suited for securing web sites from vulnerable or malicious third-party scripts: the choice is between full privilege (<script>) and isolation (<iframe>), with nearly all use cases (advertisement, libraries, analytics, etc.) requiring the former. Previous work attempted to bridge the gap between the two alternatives, but all the solutions were plagued by one or more of the following problems: (a) lack of compatibility, causing most existing third-party scripts to fail (b) excessive performance overheads, and (c) not supporting object-level policies. For these reasons, confinement of JavaScript code suitable for widespread deployment is still an open problem. Our solution, JaTE, has none of the above shortcomings. In contrast, our approach can be deployed on today’s web sites, while imposing a relatively low overhead of about 20%, even on web pages that include about a megabyte of minified JavaScript code.
1. INTRODUCTION
A recent study [26] found that nearly 90% of web sites include third-party scripts. Unfortunately, this practice poses serious security threats to the first-party web site, threatening its integrity and confidentiality. Vulnerabilities in third-party code can expose the first-party to attacks such as cross-site scripting, or the third-party server may be outright malicious or be compromised. Major web sites such as Yahoo and New York Times [8, 6] have exposed their users to malware by including third-party content in the form of advertisements. As a result, there is a pressing need for approaches to protect web sites from third-party scripts, while preserving their functionality.
In order to protect first-party code, it is necessary to isolate third-party code from accessing (sensitive) first-party data or functions. There are two main approaches in this regard:
• Frame-based isolation: The browser’s SOP isolates code running in different frames, while providing a controlled means for communicating through the postMessage API. AdJail [34], Mashic [17] and Pivot [23] rely on this approach for isolation. MashupOS [42] also relies on frames and similar isolation mechanisms. While COWL [32] extends a browser’s SOP further to support a MAC policy, it continues to rely on frame-based isolation. The main drawback of frame-based isolation is that it rules out interactions between first- and third-party code through familiar means such as passing objects or calling another party’s functions. This limits compatibility with existing first-party and third-party code.
• Language-based isolation: This class of techniques aims at isolating individual objects, so that objects can be shared between parties, and controlled interactions can take place through function calls. However, works in this area must first address the challenge of mediating all of the numerous avenues by which JavaScript programs can interact. Early works such as Caja [20] and BrowserShield [29] resorted to rewriting the code to introduce all the necessary runtime checks. Unfortunately, because of the dynamic nature of JavaScript, most operations need to be transformed and/or checked at runtime, often slowing programs down by an order of magnitude or more. An alternative approach is to develop static analysis techniques that can eliminate the need for most (or all) runtime checks. ADSafe [11], GateKeeper [13], SES [24], JSand [9] and others [19] opt for this approach. However, full JavaScript is not amenable to static analysis, thus forcing these techniques to impose language restrictions. Among these techniques, SES and JSand place the fewest language restrictions, but these are still too severe for real-world code: we found that 80% of the Alexa’s Top 500 websites are not supported by them.
Our Goals. We seek a secure object-granularity policy enforcement infrastructure compatible with existing browsers as well as web sites, including all their first- and third-party code. Specifically, we seek:
• Transparency: The enforcement infrastructure should not change the execution semantics of benign code. Our solution achieves this goal except for a few rare corner cases, none of which could be observed on any of the Alexa Top 500 websites. (See Section 6.2.)
• Object-granularity policy: The infrastructure should allow third-party code to access any subset of objects deemed safe by a policy developer, while preventing access to others. Even on permitted objects, access to individual operations can be sandboxed.
• Deployability on existing browsers: To facilitate adoption, the approach must not require modifications to the browser (specifically, its JavaScript engine), nor can it impose unreasonable performance overheads.
Our Approach. We present JaTE, a new approach that satisfies the above requirements. Every object is associated with a principal, and this principal has direct access to the object, while the access of other principals is mediated using a wrapper object that can enforce a policy. The set of all objects belonging to a principal is held within the principal's compartment [41].

This work was supported in part by grants from NSF (CNS-0812098 and CNS-1319137) and ONR (N00014-07-1-0928).

1Note that the goal of any security policy is to change the execution semantics of code that violates the policy. Thus, it is generally infeasible to ensure transparency in the presence of a nontrivial security policy. Moreover, since malicious code can easily detect the presence of a policy framework by simply trying out operations that any sensible policy must deny, we do not attempt to be transparent to malicious code.
Many of the key challenges in JaTE, including complete mediation and the realization of a secure multi-principal compartment model, arise from the complexity and highly dynamic nature of JavaScript. We discuss these challenges in Section 2, followed by an overview and illustration of how our design overcomes them in Section 3. The design and implementation of JaTE is described in Sections 4 and 5 respectively. A detailed experimental evaluation is presented in Section 6, followed by a discussion of related work (Section 7) and concluding remarks (Section 8). Below we summarize the technical contributions of this paper.
Contributions.
- Object-capability environment for full JavaScript. Object capability ensures that only objects explicitly given to third-party code can be reached by it. It provides the basis for complete mediation. Ours is the first work to realize this feature without placing significant restrictions on the JavaScript language.
- Secure and transparent multi-principal JavaScript confinement without browser modifications. Our solution is ready for deployment on any web site because existing code does not need to be modified. It can support policies that protect mutually untrusting principals, e.g., two advertisers.
- Efficient fine-grained object-level access control.
- Large-scale experimental evaluation of compatibility, performance, and functionality. When enforcing an allow-all policy, our implementation demonstrates full compatibility with all sites from the Alexa Top 500, while incurring an average overhead of about 20%.
2. CHALLENGES
Complete mediation. To ensure complete mediation, all mechanisms for object access must be handled. This is a difficult task in JavaScript because the language supports several unusual ways to reference objects:
- Global object access. Securing global object access is critical because all other objects are reachable from it. In addition to the explicit mechanism of accessing the variable window, JavaScript provides implicit access to the global object via (a) free variables that are interpreted as property accesses on the global object, and (b) accesses to the this keyword within a function invoked without an object argument.
- Native prototype access. JavaScript relies on prototypes to support object inheritance. Prototypes of native objects are shared, thus providing a mechanism for third-party code to affect the semantics of the first-party's use of native objects. Controlling this access is complex because third-party code can rely not only on direct access (e.g., updating Object.prototype), but also on indirect access. For instance, even a seemingly "safe" access to a third-party's own object x can allow it to update Object.prototype using the expression x.__proto__ (see the sketch after this list).
- Call stack access. JavaScript allows third-party code to travel up the call stack. This access can be used by a third-party function to access sensitive first-party data such as the arguments of the first-party function that invoked it.
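To illustrate the indirect route from the second item above, a third party can poison the shared Object.prototype through one of its own objects:

```javascript
var x = {};                      // the third party's own object
x.__proto__.toString = function () { return "poisoned"; };
({}).toString();                 // "poisoned" -- every plain object is affected
```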
Dynamic code. Dynamic code poses a well-recognized challenge to security. Previous works forbade most dynamic code (ADSafe, GateKeeper), or replaced eval(s) with a safe wrapper, say, safeeval(s) (Caja, SES, JSand). Unfortunately, use of a wrapper function might change the semantics of s: the free variables occurring in s are no longer resolved in the context where the original eval occurred, possibly altering the semantics of code such as:
```javascript
var x = 0; eval("alert(x)");  // alerts 0 only if eval runs in x's scope
```
2.1 Discussion
Using an object-capability runtime is a well-established approach for achieving complete mediation [21, 24, 9, 18]. The major effort in this area is Secure ECMAScript (SES) [24], an object-capability language based on ES5. SES relies on ES5's strict mode to prevent the use of caller and implicit accesses to the global object via this. To eliminate the threat of code injection into native prototypes, it prevents their modification by freezing them all. Moreover, it replaces eval with a safe wrapper. All of these restrictions tend to break existing code, and indeed, backward compatibility was not their focus. As a result, we found that the vast majority of Alexa Top 500 web sites experience compatibility problems with SES.
JSand [9] uses the object-capability environment of SES to build a policy enforcement framework for third-party JavaScript code. JSand exposes permitted objects to third-party code using Miller’s membrane pattern [25]. In JSand, a membrane consists of policy-enforcing wrappers around these objects. If any operation on a wrapped object returns another object, the membrane is extended to wrap the returned object as well.
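To make the pattern concrete, here is a minimal membrane sketch of our own (not JSand's actual implementation):

```javascript
// Every value crossing the boundary is lazily, recursively wrapped,
// so the policy hook "check" runs on each property read and call.
function membrane(target, check) {
  if (target === null ||
      (typeof target !== "object" && typeof target !== "function"))
    return target;                              // primitives pass unwrapped
  return new Proxy(target, {
    get(t, prop) {
      check("get", t, prop);                    // may throw to deny access
      return membrane(Reflect.get(t, prop), check);
    },
    apply(t, thisArg, args) {
      check("call", t);
      return membrane(Reflect.apply(t, thisArg, args), check);
    }
  });
}
```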
A second major goal of JSand is to achieve compatibility with existing web sites. In addition to handling implicit access to window via this, JSand addresses a frequent incompatibility posed by SES: it performs a simple analysis to identify global variables in the third-party code, and transforms the code to explicitly synchronize their values with the correspondingly named attributes of window. While properties referenced statically can be synchronized this way, dynamic property accesses (e.g., window[p]) pose a challenge. Moreover, other incompatibilities posed by SES, including the remaining restrictions of strict mode, the use of an eval wrapper and the use of native prototype extensions, continue to affect JSand. We found that over 80% of Alexa Top 500 web sites fail to “compile” because of strict mode violations, while 30% and 49% violate the other two restrictions.
Instead of first denying access to the global object using SES and then partially mitigating these restrictions, JaTE is designed from the ground up with a single goal: intercept every access to protected objects, so that a policy can be applied to each of those accesses. JaTE exploits the dynamic and reflection features of JavaScript, together with a simple lexical analysis2 and transformation of third-party code, to ensure that all object accesses are mediated at runtime. It does not place any significant restrictions on JavaScript, a fact confirmed by our evaluation on the Alexa Top 500 sites. (See Section 6.2 for details.)

2Unlike JSand, JaTE does not require full parsing of JavaScript, but only a lexical analysis. All rewriting is done just-in-time and cannot be circumvented through obfuscation.

Figure 1: Example of a malicious Facebook "Like" button
An important feature of JaTE is that it supports multiple mutually-distrusting principals, which arise in web pages that integrate content from multiple sources, e.g., several advertisers.
3. OVERVIEW
This section provides a high-level overview of how the compartment model confines third-party scripts using code transformation and runtime checking. We illustrate this using an example of a first-party (also called host) page that includes sensitive content in an inline script (a sketch of which follows):
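The inline script itself is not reproduced here; a minimal version consistent with the attacks described below (the names data, secret and getSecret are from the original) would be:

```javascript
var data = {
  secret: "4111-1111-1111-1111",   // hypothetical sensitive value
  getSecret: function () { return this.secret; }
};
```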
```javascript
var stolen = data["se" + "cret"];
function s() {
  var stolen = this.data.secret;
}
s();
stolen = data.getSecret();
eval("stolen = this.data.secret;");
```
Figure 2: Malicious “Like” Script
Also assume that the page includes a Facebook “Like”-button, but Facebook’s servers have been compromised to replace the button with malicious code that attempts to steal the value of secret.
The scenario begins with an HTTP request (1) in Figure 1 for retrieving the first-party web page. The JaTE Network Module intercepts this request and modifies the page to add an object jate that contains our confinement library. This module could be implemented in one of three ways: a client-side proxy, a browser extension, or a server-side proxy. Our implementation relies on a browser extension.
In step (2), the "Like" script included in the page is fetched from Facebook. It is transformed by the network module to enable secure policy enforcement (note that a policy can decide: (a) whether code from a domain/URL will be confined, and (b) its corresponding principal). To illustrate the main elements of this rewriting step, consider the malicious "Like" script shown in Figure 2. It includes four distinct mechanisms to steal the secret:
A: through dynamic property access (line 1),
B: using this, which resolves to global object (lines 2-5),
C: using a function defined in first-party code (line 6), and
D: by executing dynamic code (line 7).
Figure 3 shows the rewritten script, with the transformations underlined; a sketch of it is reproduced after the rules below. First, we introduce a preamble to set up a scope and enclose the original script using a with statement (used to intercept free variable accesses). The script is then transformed using three simple rules:
1. a global function declaration (e.g., function s) is turned into a variable declaration and assignment, and moved to the top of the script to simulate declaration hoisting,
2. this is replaced with processThis(this), and
3. direct eval is transformed to rewrite its argument before evaluation.
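Since Figure 3 is not reproduced here, the following sketch (ours, with illustrative helper names jate.setup, jate.scopeFor and processEvalSrc) shows the shape of the rewritten script after applying the three rules:

```javascript
jate.setup("facebook.com");                        // preamble: create compartment
with (jate.scopeFor("facebook.com")) {
  var s = function s() {                           // rule 1: hoisted var
    var stolen = processThis(this).data.secret;    // rule 2: this is wrapped
  };
  var stolen = data["se" + "cret"];
  s();
  stolen = data.getSecret();
  eval(processEvalSrc("stolen = this.data.secret;"));  // rule 3: JIT rewriting
}
```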
Using these rules, JaTE is able to mediate all cross-compartment accesses, even those from dynamic code.
Step (3) in Figure 1 shows the effect of lines 2-4 from the rewritten script; this setup creates a new compartment for facebook.com. This compartment starts its life cycle with only a mediated reference to window as a global object, but, if permitted by the policy, it can obtain mediated references to objects reachable from the original window. Mediation is achieved using ECMAScript 6 Proxies, which enable transparent interception of all operations on objects.
Compartments represent trust boundaries within the same JavaScript execution environment: each party is confined within its own compartment (see Figure 1), and JaTE mediates all cross-compartment interactions. While the JaTE framework itself is general enough to support mutually-distrusting first- and third-parties, the threat model considered in this paper is more limited: our goal is to (a) protect the first party from the third party, and (b) if there are multiple (mutually distrusting) third parties, then protect them from each other. In this scenario, there is no need to transform first-party scripts, and hence the host compartment holds an unmediated reference to window. Although the host code does not run in a compartment set up by JaTE, it is helpful to think of it as running in a privileged compartment.

3Requires cross-origin resource sharing (CORS).

4The eval transformation has been simplified in this example. Section 4.3.4 describes the actual transformation used in JaTE.
Step (4) in Figure 1 depicts the effect of the variable declaration and assignment (lines 5 to 7 in Figure 3), which was originally a function declaration (lines 2 to 4 in Figure 2). Note that the access is unmediated because the variable is created in facebook.com's own compartment. Step (5) shows the effect of line 8: the policy permits obtaining a mediated reference to data, but does not allow reading the value of secret (Step 6), which is a primitive value of type String. This stops attack (A).
Line 9 is an unmediated function call. However, since our transformation has rewritten the body of s, accesses to this now return a reference to a mediated version of window. When this mediated version is dereferenced, the policy once again stops the reading of secret, thus stopping attack (B).
Line 10 obtains a mediated reference to getSecret (Step 7) and performs a mediated cross-compartment function call, which is denied by the policy, stopping attack (C).
Finally, line 11 evaluates the string after rewriting it just-in-time. Note that exactly the same technique as in step (2) is applied again, using the same lightweight rewriting based on lexical analysis. The rewritten code is:
```javascript
stolen = processThis(this).data.secret
```
This makes the attack semantically equivalent to attack (B) above, and hence attack (D) is also stopped.
4. DESIGN
This section presents the core mechanisms to implement the compartment model for multiple mutually-distrusting principals. Specifically, Sections 4.1 and 4.2 describe JaTE’s compartments, while Section 4.3 describes the handling of JavaScript’s challenging features outlined in Section 2. Finally, Section 4.4 addresses secure DOM access.
Our compartment design relies on Proxies, a feature of the recently finalized ECMAScript 6 (ES 6) standard. A proxy can be created for any object w as follows:
```javascript
pw = new Proxy(w, {get: getHandler})
```
where getHandler is a function. A read operation pw.x will invoke the function getHandler. This function can check if the access should be permitted, and if so, invoke w.x. If the policy check fails, the operation is not passed on to w, but instead, our handler raises an exception. (Alternatively, a safe default value can be returned, allowing the caller to continue normally.) Thus pw behaves like w, while transparently interposing policy checks before any access.
ES 6 defines several traps in addition to the get-trap illustrated above. These include the has trap (invoked to check if an object possesses a certain property), the set trap (invoked when a property is modified), and the call trap (invoked before calling a member function). Any subset of these trap handlers can be specified in the second argument to Proxy.
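As an illustration of these traps, a minimal policy-enforcing proxy might be built as follows (a sketch of our own; isAllowed stands in for an arbitrary policy):

```javascript
function mediate(target, isAllowed) {
  return new Proxy(target, {
    get(t, prop, receiver) {
      if (!isAllowed("get", prop)) throw new Error("denied: " + String(prop));
      return Reflect.get(t, prop, receiver);
    },
    set(t, prop, value, receiver) {
      if (!isAllowed("set", prop)) throw new Error("denied: " + String(prop));
      return Reflect.set(t, prop, value, receiver);
    }
  });
}

// Usage: deny all accesses to "secret".
var data = { secret: "s3cr3t", pub: 1 };
var pw = mediate(data, (op, prop) => prop !== "secret");
pw.pub;      // 1
pw.secret;   // throws "denied: secret"
```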
4.1 Mediating global object access
Assuming that caller, this, native prototypes and DOM are safely handled, the only way third-party code can access the global object is through free variables, which are interpreted in JavaScript as accesses to properties of the global object. We intercept all free variable accesses by exploiting JavaScript’s dynamic nature: we construct a scope object as shown in Figure 3, and enclose third-party code inside a with (scope) { } block. This causes all free variable accesses in the enclosed code to be looked up on scope.
We construct scope to be a proxy object, and define its has-trap so that it always returns true. As a result, the JavaScript runtime never looks up any variable outside the with statement, thwarting any attempt by third-party code to directly access the global object. We also define the remaining traps of scope so that it forwards these accesses to the virtual global object, which is a proxy of the global object. This enables all policy checks to be performed in the virtual global object.
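A minimal sketch of this construction (virtualGlobal denotes the policy-enforcing proxy of the global object):

```javascript
var scope = new Proxy(Object.create(null), {
  has() { return true; },                        // never look past the with
  get(t, prop) { return virtualGlobal[prop]; },  // forward to policy checks
  set(t, prop, v) { virtualGlobal[prop] = v; return true; }
});
with (scope) {
  // free variables in the enclosed third-party code now resolve on scope,
  // and hence on virtualGlobal, where the policy runs
  document.title;
}
```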
While the discussion so far has considered accesses, declarations require additional care:
- **global variable declarations** (var a): The with-statement does not prevent the enclosed code from declaring a as a property of the global object. However, note that this declaration has no effect if a is already a property of the global object. If not, it ends up declaring a new property with the value undefined. Note that any subsequent access to a will be intercepted by scope, so this declaration will not allow the enclosed code to bypass policy checks.5
- **global variable declaration with initialization** (var a = 1): This case is treated by JavaScript as if it consisted of a variable declaration, followed by an assignment. Since we have already dealt with both statements, JaTE needs to take no additional step for this case.
- **global function declarations** (function f() {}): We transform this as var f = function f() {}, and move it to the top of the script. This means function declarations get handled in the same way as variable declarations.
4.2 Mediating cross-compartment accesses
In our construction, the first-party (aka “host”) has direct access to the global object, as well as most built-in objects. We say that these objects are within the host compartment. The third-party code, as discussed in the above construction, starts its execution with just the virtual global object in its compartment.
During execution, a principal can introduce new objects into its compartment in two ways:
- It can create new objects. Ultimately, all object construction occurs using literals (e.g., [] or {}) or built-in constructors (e.g., Array). We refer to these as direct objects, i.e., the principal's accesses to these objects are not mediated. Thus, JaTE introduces no additional overheads when a principal accesses the objects it owns.
- The principal can import objects owned by other principals through interactions that get mediated in the following proxy traps:
- get: If a principal A reads a property of an object owned by principal B, and the result is an object owned by A, then a direct reference is returned. Otherwise, a proxy for that object is created and returned to A.
- set: This is handled in a similar manner, except that the direction of transfer is reversed in this case.
- call: A call can be treated as a switch from the caller's to the callee's compartment, followed by get operations to retrieve actual parameter values from the caller's compartment. When the function returns, a switch back to the caller's compartment takes place, followed by a get operation to retrieve the return value from the callee.

5There is a possibility that first-party code will behave differently based on the existence of property a, and in this case, the third party can alter the behavior of first-party code. We consider this a side channel that is unlikely to pose a security threat. A safer alternative, however, would be to simply delete any var declarations at the top level in the enclosed script.
Note that all these operations are subject to the permissions specified by the policy. In other words, the above behavior would be observed with a default "allow all" policy, while a more restrictive policy would deny some of these accesses. Also note that other traps can be handled similarly, e.g., deleteProperty can be handled like the set trap.
Tracking current context. JaTE relies on the single-threaded nature of JavaScript: the context can only switch explicitly, in two ways: either at the beginning and end of third-party code execution (handled by rewriting), or during a cross-compartment function call (handled by the call trap). In both cases, JaTE tracks the current context by updating the property jate.currentContext.
Tracking object ownership. JaTE does not need to know the owner of a direct object until it first crosses a compartment boundary. When this happens, a proxy for the object is created, and its owner is determined and stored for future use.
Cross-Compartment Exceptions. As a general rule, a principal should always have direct access to its own objects, but only have proxies to the objects owned by other principals. However, there are a few exceptions: (a) certain built-in functions are frozen and always seen as direct to improve performance, (b) certain objects such as DOM nodes are always accessed via proxies, even by their owner, and (c) for security reasons, even the host sees only proxies of built-in constructors.
4.3 Handling JavaScript challenges
4.3.1 Handling this
JaTE replaces all occurrences of the this keyword with processThis(this), where processThis returns the virtual global object if this is the global object, and this itself otherwise.
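A minimal sketch, with virtualGlobal as in Section 4.1:

```javascript
function processThis(t) {
  return t === window ? virtualGlobal : t;
}
// A method call obj.m() keeps its own this, while a bare call f()
// (where this === window in non-strict code) sees only the mediated global.
```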
4.3.2 Handling caller
Note that it is not possible to statically recognize and rewrite occurrences of caller: such an approach can be circumvented by obfuscation, e.g., f["c" + "aller"]. Moreover, since caller is non-standard, we cannot rely on its "official" semantics either. Instead, we have developed a solution that is based on how it has been implemented on major browsers, including Chrome, Firefox, Safari, and Internet Explorer. On these browsers, caller is not determined from a stack frame, but simply has a single value that records the most recent (and still active) caller of a function. As a result, if a function f is recursive, after the first recursive call, f.caller becomes f. Hence

(f.caller).caller = f.caller = f

In other words, regardless of how many times caller is invoked, it becomes impossible to get to the caller of the outermost invocation of f.
JaTE relies on the above semantics of caller to ensure that third-party code, when called by another principal X, cannot reach X's stack frames. To illustrate the approach, suppose that g is a host function that needs to call a third-party function h. Since this is a cross-compartment call, it will go through a call-trap handler, which then calls a function f defined below:
```javascript
var t = 1;
function f() {
  if (t) { t = 0; f(); }   // the recursive call makes f.caller === f,
  else   { h(); }          // so h can see no further up the stack than f
}
```
When h tries to use caller to get to functions in the call stack, it cannot get any further than f, and hence it cannot get to g.
4.3.3 Handling native prototypes
Intercepting native prototype accesses. JaTE lets principals handle direct references to the objects they create. However, a direct object contains references to the object's native prototype and its properties. Since native prototypes are shared among all principals, JaTE must ensure that third-party code does not obtain direct references to them. The most natural way to achieve this is to set a native prototype to a proxy. Unfortunately, all native prototypes are non-configurable and non-writable, and so JaTE cannot change them. For instance, Object.prototype cannot be made to point to a proxy of the real prototype. Instead, it is necessary to intercept every possible way to get to a native prototype, and at that point, return a proxy.
Native prototypes can be accessed through native constructors or __proto__. For example, the native Array prototype can be accessed using Array.prototype or x.__proto__ (where x denotes any array value). Therefore, we first replace native constructors with proxies. This is done for all native constructors such as Object and Array. For instance, Object is transformed as:
```javascript
var origObject = Object;
Object = jate.createProxy(origObject);          // all new code sees the proxy
origObject.prototype.constructor = Object;      // and so does x.constructor
```
To handle __proto__, we replace __proto__'s built-in getter. The new getter will return a proxy if the underlying value is one of the native prototype objects.
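A sketch of this getter replacement (nativeProtos and proxyFor are assumed helpers, not JaTE's actual API):

```javascript
var desc = Object.getOwnPropertyDescriptor(Object.prototype, "__proto__");
var origGetter = desc.get;
Object.defineProperty(Object.prototype, "__proto__", {
  get: function () {
    var p = origGetter.call(this);
    return nativeProtos.has(p) ? proxyFor(p) : p;  // proxy native prototypes
  },
  set: desc.set,
  enumerable: false,
  configurable: true
});
```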
Handling accesses to native prototype properties. For performance reasons, we identified and white-listed native prototype functions that can be safely called directly by any principal, e.g., prototype functions of Array, Object and String. These functions are frozen to prevent malicious principals from replacing their implementations, and then a direct reference to these functions is returned. For functions determined unsafe (for direct calls from untrusted code), only a proxy is returned. While performing this safety analysis, we found a bug in V8's Array prototype functions. This bug, which led to a leak of the global object, was reported [7] and promptly fixed by Google.
For any property p that is newly added to native prototypes, JaTE instead stores a proxy to p. When the current context is switched to a principal P, JaTE converts all properties added to native prototypes and owned by P to direct references, while the additions of other principals remain proxies. It is the sharing of native prototypes that necessitates this special handling.

6In theory, this can affect transparency, as it would break code that attempts such replacement. In practice, however, overwriting of these built-in functions does not seem to occur.

7Even though this is considered a bad practice, it seems to be fairly common: our tests have shown that nearly half of the Top 500 websites extend native prototypes, perhaps because many of them use the popular Prototype library.
4.3.4 Handling eval
ECMAScript defines four constructs to execute dynamic code: eval, Function, setTimeout and setInterval. Use of eval can be either direct or indirect, as illustrated in Figure 4 using examples. All of these instances of dynamic code, with the exception of direct eval, are to be executed in the global scope. JaTE wraps these constructs using a function, while third-party code is given a proxy to this function. At the call-trap of this proxy, we first rewrite the code contained in the string argument to perform the transformations needed to ensure its safe execution. The rewritten string is then evaluated within the compartment of the currently executing principal.

Direct eval cannot be handled this way, since it should not execute in the global scope, but in the same scope in which it appears. Therefore, we cannot use the method described above for handling indirect eval. Our evaluation shows that about 30.9% of Alexa Top 500 websites use direct eval. Almost all direct eval calls can be identified because they use the keyword eval. Our implementation transforms eval(x) into eval(processEval(x)).8 Obfuscated or unusual instances of direct eval may not be recognized by this approach. This is not a security threat, since an unrecognized direct eval will be treated as an indirect eval.
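Figure 4 is not reproduced here; in essence, the distinction it illustrates is the following (examples of our own):

```javascript
// Direct eval: invoked through the keyword itself; runs in the local scope.
function f() { var x = 1; return eval("x"); }   // 1

// Indirect eval: any other way of invoking it; runs in the global scope.
var geval = eval;
function g() { var x = 1; return geval("x"); }  // ReferenceError (no global x)
(0, eval)("this") === window;                   // true in non-strict code
```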
4.4 Supporting DOM Access
Normally, when a principal creates a JavaScript object, it receives a direct reference. However, this would not be safe for DOM nodes, since they contain read-only built-in properties through which one can reach the global object and the whole DOM tree (e.g., aNode.ownerDocument.defaultView is window, where aNode is a DOM node).
For the above reason, we ensure that only the host has direct access to DOM-nodes. Third-party code can access DOM-node creation operations only through proxies. As a result, all DOM nodes will have the host as the owner, and be accessed using proxies by third-party. Since object ownership information is no longer enough to tell who created a DOM-node, we record this information explicitly in a field, and call it DOM-ownership.
JavaScript code generated from HTML. Certain DOM operations, such as the setting of the innerHTML property and calling document.write, can generate new JavaScript code from HTML. It is necessary to parse the HTML, identify script code, and rewrite it so that it executes in the same compartment as the principal invoking the HTML operation.
Malicious third parties can attempt to confuse our HTML parser with malformed HTML, so that our parser does not recognize all the scripts that would be recognized and executed by the browser. We can rely on the solution used in Blueprint [35] for this purpose, namely, parsing the HTML, filtering the parse tree, and then converting the parse tree directly into actual DOM nodes using safe DOM API calls.
5. IMPLEMENTATION
We implemented JaTE in Firefox 33. The implementation consists of (a) a Firefox extension that implements the JaTE network module, and (b) the JaTE script, written in JavaScript. When minified, this script is about 30KB in size. The JaTE source has been released under the GPL [36].
5.1 Use of Proxy
Use of shadow objects. To provide consistent semantics, ES6 proxies enforce several invariants within each trap handler. For example, a non-configurability invariant is enforced in the get trap to ensure that the return value is consistent for a frozen property. This prevents JaTE from creating a proxy to such a property. To work around this, instead of creating a proxy to an object O, JaTE creates a proxy to a shadow object [39] S that contains a reference to O. The traps on the proxy are set so as to access O. Since S does not undergo any modification, all invariants enforced by ES6 proxies will always be satisfied.
Fixing built-in functions. Proxy is still a new concept and Firefox 33 does not yet completely conform to the ES6 specification. For example, some String prototype functions such as replace and match that take a regular expression argument don’t work if a proxy is supplied instead. To work around this problem, JaTE wraps such problematic functions to replace proxies with direct versions before calling the original function, and also creates proxies as needed for return values.
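For instance, such a wrapper might look like this (a sketch; unwrap is an assumed helper mapping a proxy back to its direct target):

```javascript
var origReplace = String.prototype.replace;
String.prototype.replace = function (pattern, repl) {
  // Firefox 33 rejects proxied RegExp arguments, so unwrap before the call.
  return origReplace.call(this, unwrap(pattern), unwrap(repl));
};
```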
5.2 JavaScript rewriting
JaTE's rewriting requires static recognition of certain keywords. We can perform this safely because all dynamic code is analyzed and rewritten just before execution. By considering all formats of JavaScript comments, our rewriting is resilient to lexer-confusion attacks [5].
Code undergoes three transformations: direct eval rewriting, this rewriting, and global function declaration rewriting.9 These rewriting steps are efficient because they only require lexical analysis, plus maintaining the current parenthesis nesting level, as opposed to more extensive transformations that require full parsing.
While rewriting, we introduce some identifiers, such as processThis, processEvalSrc, etc., to the source code. In the actual implementation, these identifiers are randomly generated with a safe length to avoid the possibility of colliding with the names used by third-party code.
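For illustration, such a generator might look like this (a sketch; not JaTE's actual code):

```javascript
// A long random suffix makes collisions with third-party identifiers
// vanishingly unlikely.
function freshId(base) {
  return base + "_" +
    Math.random().toString(36).slice(2) +
    Math.random().toString(36).slice(2);
}
var processThisName = freshId("processThis"); // e.g. "processThis_k2xq9...z7"
```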
5.3 Supporting ECMAScript 6

Even though JaTE was developed to confine ES 5 code, it can support new ES 6 features. Some require minor changes: for example, let statements need a new rewriting rule to convert them to var declarations if they are in the global scope. Other new constructs, such as Arrow Functions, Proxies, and WeakMaps, do not require changes to JaTE and are already supported.

8Due to space constraints, we have shown a simplified transformation here. The full version, with additional security checks, can be found in the extended version of this paper on our website [37].

9To support strict mode, we perform a simple global variable declaration rewriting. More details can be found in our tech report [37].
6. EVALUATION
6.1 Performance Evaluation
6.1.1 Page Load Overhead
To calculate page load overhead, we developed a test extension for Firefox. The extension loads URLs sequentially from an input list, measuring the time it takes for the browser to emit the load event. The measurement is first performed 10 times without any JaTE components, and then repeated another 10 times with JaTE enabled. To avoid problems with network and caching, the extension disables caching and discards the load time for the first request of each site.
Social Media Widgets. Since JaTE mediates all security-relevant operations, it can support any policy. Although we leave the design of a flexible policy framework as future work, we have developed a suitable policy for our evaluation. The starting point for this policy is one-way isolation [16], which allows untrusted code to read or modify any data, but the modifications are visible only to untrusted code. We then tighten this policy to enforce confidentiality: all reads of primitive types return a “null” value. Specifically, the following rules are enforced:
- **traversable objects**: cross-compartment objects can be obtained but not modified or called. (Built-in functions can be called). This allows navigating the whole object graph.
- **primitive zeroing**: reading cross-compartment primitives always returns a default value, e.g., empty string.
- **global object shadowing**: property writes on the global object do not affect other principals. The updated value is only visible to the current principal.
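A minimal sketch of the get handler implied by the rules above (wrap as in Section 4.2):

```javascript
function confidentialGet(target, prop) {
  var v = Reflect.get(target, prop);
  if (v === null) return null;
  switch (typeof v) {
    case "string":    return "";        // primitive zeroing
    case "number":    return 0;
    case "boolean":   return false;
    case "undefined": return undefined;
    default:          return wrap(v);   // objects stay traversable via proxies
  }
}
```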
We then relaxed this policy to support the functionality of Facebook's "Like"-button script. This script first creates a new global variable FB. Since this variable is not shared with other principals, the default global object shadowing policy is already permissive enough. The script then looks for two DIVs, one with id fb-root and one with class name fb-like, by looping through all DOM nodes using `document.getElementsByTagName('*')`. The default policy allows calling the built-in DOM functions and looping through the DOM nodes (traversable objects), but zeroes out their properties (primitive zeroing). Our policy relaxation avoids such zeroing and provides access to the two DIVs, which the script then writes into. Finally, the script inserts a new script tag and a new iframe, both of which are allowed by the default policy since they pose no security threats under JaTE. In summary, the default policy needs only a small change: allowing write access to the two DIVs.
We used a process similar to that described above for Facebook “Like”-button to create policies for Google+, Twitter, etc. Much like the Facebook button, they also required write access to a small set of DOM nodes.
Figure 5 shows the overhead for the confinement of each button. The interception overhead dominates because it includes rewriting these rather large scripts, while the policy checks only need to approve the creation of a handful of DOM nodes. We used a blank enclosing (i.e., first-party) page for each button, so the overhead figures represent the worst-case. (A non-empty enclosing page would reduce the overall overheads because first-party scripts are not confined — and hence not slowed down — by JaTE.)
Advertisements. In this experiment, we measured the overhead for confining advertisement scripts on Alexa’s Top 500 websites. Since interception overheads dominate, we did not develop a specific policy for advertisements, but used an “allow-all” policy. To identify which scripts on a page are related to advertisement, we relied on a popular advertisement host list [1]. These scripts were confined by JaTE, while the remaining scripts were not confined. The average page load overhead was 19.5%.
6.1.2 User Interaction Overhead
We also measured the perceived overhead of JaTE on common user interactions, such as scrolling the page and moving to the next image in a gallery. These actions trigger one or more callbacks, which might schedule asynchronous callbacks of their own (e.g. making an HTTP request and evaluating the data when it has arrived).
To estimate the interaction delay, we leveraged the single-threaded nature of JavaScript, instrumenting all mechanisms used to register callbacks (e.g., `addEventListener` and `XMLHttpRequest`) to wrap each callback in a function that records the time spent executing it.
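A sketch of this instrumentation for one registration mechanism (our own illustration, not JaTE's harness):

```javascript
var origAdd = EventTarget.prototype.addEventListener;
EventTarget.prototype.addEventListener = function (type, cb, opts) {
  var wrapped = function (ev) {
    var t0 = performance.now();
    try { return cb.call(this, ev); }
    finally {
      console.log(type + " handler: " + (performance.now() - t0) + "ms");
    }
  };
  return origAdd.call(this, type, wrapped, opts);
};
```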
6.1.3 Rewriting Overhead

We also assessed the performance of the rewriter by rewriting 6 common scripts. Figure 7 shows the time required to rewrite each script. Our rewriter is much faster than JSand's: JaTE rewrites jQuery in 58ms versus JSand's 753ms. This is because their rewriting is significantly more complex than ours. But even JaTE's smaller overhead may be deemed significant, e.g., about 100ms for the Facebook "Like" button, and hence in our future work, we plan to implement the rewriter in C.
6.1.4 Comparison With Related Work
Comparison with JSand. We compared the performance of JaTE with that of JSand, a JavaScript confinement solution based on SES, by replicating JSand's benchmarks. Figure 8 shows the overhead for opening a blank page, loading the jQuery library, loading Google Maps, and finally interacting with Google Maps. Two reasons for the difference in performance are the full parsing required by JSand during rewriting, which affects page load times, and its compatibility layer: JSand's confinement setup makes all global variables local, which requires expensive global object synchronization.
Comparison with Caja. We also compared JaTE against Caja using a subset of the demos provided by the Caja authors. The chosen subset consisted of programs that could easily be benchmarked: a canvas clock, a markdown converter and a Game of Life. We modified the code for each demo to stop after a fixed amount of computation (e.g., 200 generations in Game of Life) and measured the average time required to complete the computation with Caja, with JaTE, and without any confinement, to assess the overhead. For Caja, we tested both ES5/3 mode (compatible with ES3, uses rewriting to isolate code and a virtual DOM implementation) and ES5 mode (compatible with ES5, uses SES for isolation and the same virtual DOM implementation as ES5/3). Figure 9 shows the results: ES5/3 mode is slower than ES5 mode and JaTE because of its heavy runtime checks; Caja ES5 mode is faster than ES5/3 mode due to its use of SES (which realizes object capability without runtime checks), but still substantially slower than JaTE because of its virtual DOM implementation.
6.2 Transparency Evaluation
6.2.1 JaTE Transparency
There are three corner cases where JaTE can change the semantics of a script: (a) use of a cross-compartment `caller`, (b) special forms of direct `eval`, and (c) modification of white-listed built-in functions.
| Script                 | Size  | Time  |
|------------------------|-------|-------|
| Google AdSense         | 22kB  | 37ms  |
| Google Analytics       | 40kB  | 25ms  |
| Google Maps            | 50kB  | 47ms  |
| JQuery 2.1             | 83kB  | 58ms  |
| Twitter "Share" Button | 96kB  | 60ms  |
| Facebook "Like" Button | 160kB | 101ms |
| Total                  | 451kB | 328ms |

Figure 7: Rewriting overhead
| Test              | Type        | JaTE | JSand |
|-------------------|-------------|------|-------|
| Blank Page        | Page Load   | 16%  | 208%  |
| JQuery            | Page Load   | 21%  | 1230% |
| Google Maps       | Page Load   | 98%  | 364%  |
| Google Maps (Pan) | Interaction | 6.2% | 31%   |

Figure 8: JaTE vs JSand Overhead Comparison
To assess the prevalence of these corner cases, we undertook a large-scale evaluation involving all sites from the Alexa Top 500. Using the same extension used to calculate page load overheads, we loaded each site, waited 5 seconds after the `load` event, took a screenshot and logged JavaScript errors, both with and without JaTE. To automate the inspection of a large number of sites and minimize false negatives, we compared the error logs and the screenshots of both runs for each site. If we found different error messages in the two logs, we inspected the screenshots side-by-side for missing content. If content appeared to be missing, we confirmed the test results manually. We did not find any page that could not be loaded correctly due to a shortcoming of our approach. Thus, we conclude that JaTE achieves transparency for today’s web sites.
6.2.2 Related Work Transparency
To estimate the transparency of related work, we used the test extension again to load the same set of pages while confining all code in the `strict mode` subset used by Caja ES5 mode, SES and JSand. As shown in Figure 10, over 80% of sites use third-party scripts that break in strict mode, and hence these sites are not transparent with the aforementioned solutions.
Forcing strict mode is not their only shortcoming. For example, they also prevent the use of direct eval semantics and freeze native prototypes. To estimate the transparency impact of these two features, we ran our testing harness again and logged the use of these features in each web site, as shown in Figure 10. Both restrictions cause enough transparency problems to discourage websites from adopting these confinement solutions.
We also estimated the impact of `strict mode` on the social media buttons confined in Section 6.1.1. All the buttons failed to load.
6.3 Security Evaluation
To evaluate the security of JaTE, we tested it against a collection of attack vectors maintained by Google Caja [4], which contains 48 different attacks. 23 of these attacks are not applicable, as they rely on non-standard features and do not work on Firefox. We augmented the test suite with 5 attacks of our own. These attacks either attempt to obtain unmediated access to cross-compartment references or to introduce unconfined code into the page. For example, the `Function` constructor can be reached through the `constructor` property of the prototype of `Number`, and then used to create dynamic code, as in `(3).constructor.constructor("return window")`. We put these attacks into categories as shown in Figure 11. JaTE successfully stopped all the applicable attack vectors, mediating all accesses and confining dynamically generated code.
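For instance, two of the vectors, in essence (both are blocked under JaTE because built-in constructors are only ever reached through proxies):

```javascript
(3).constructor.constructor("return window")();          // Function via Number
[].__proto__.constructor.constructor("return this")();   // via Array.prototype
```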
7. RELATED WORK
In this section, we discuss previous related research, focusing our attention on efforts that have not already been discussed in detail.
7.1 Language-based isolation
ADsafe [11] and GateKeeper [13] define a subset of JavaScript amenable to static analysis, and enforce policies using static verification. GateKeeper [13] restricts JavaScript so as to perform static points-to analysis and reason about the unreachability of security-sensitive resources. ADsafe [11] provides controlled DOM access to third-party code by offering a narrow interface through the ADSAFE object, while imposing significant language restrictions aimed at ensuring that all DOM interaction happens through that object. For example, ADsafe prevents access to eval and the use of subscript notation. Despite these restrictions, bugs were found [28] in ADsafe, demonstrating the difficulty of realizing object-granularity access control in JavaScript.
BrowserShield [29] was one of the earliest works in this area. It avoided language restrictions by relying primarily on runtime checking, and its authors were the first to propose the idea of runtime rewriting to handle eval, which we have adopted in JaTE as well. Caja [20] also relies heavily on rewriting and runtime checking. In particular, accesses to identifiers, attributes and functions need to be checked for safety, which can lead to slowdowns by an order of magnitude or more for some programs.
7.2 Frame-based isolation
AdJail [34] isolates third-party code in an iframe and uses postMessage to transparently cooperate with the first-party page. The advantage of this approach is that it is easier to reason about complete mediation, since every communication must explicitly pass through the postMessage primitive. Specifically, it sets up a shadow iframe containing third-party code and DOM data from the real page that was explicitly shared by the first-party. Any modification to the shadow DOM by the third-party code is transmitted to the real page and subject to a policy check before it is reflected there. Treehouse [14] is a conceptually similar approach using Web Workers instead of iframes.
Instead of propagating DOM changes, Mashic [17] and Pivot [23] provide a transparent, synchronous interface for cross-domain operations on top of postMessage, to support confinement of general-purpose code. Mashic rewrites all code to continuation-passing style, while Pivot uses Generators to achieve the same goal using minor rewriting. However, they still fail to support complex interactions, such as pass-by-reference.
AdSentry’s [12] goal is not only to fully mediate access to DOM resources, but also to protect against drive-by-downloads. To meet both goals, AdSentry runs third-party code on a separate JavaScript engine secured using Native Client sandbox [43]. DOM resources are kept in the main engine, and complete mediation is achieved by forwarding all DOM accesses from the shadow engine to the main engine.
MashupOS [42] criticizes the all-or-nothing approach of the SOP and extends it to better support the trust relationships commonly found in web mashups. It identifies four modes of interaction and introduces new HTML elements and security abstractions. On the other hand, COWL [32] leverages traditional mandatory access control and tracks the secrecy labels of each frame, preventing the leakage of confidential information to unauthorized parties. However, both MashupOS and COWL still only support coarse-grained policies; they don’t tackle object-granularity access control that we seek in this paper.
The main problem with solutions in this category is that they are not able to support complex interactions involving passing object references or cross-frame function calls. As a result, to preserve functionality, web developers take the risk of running third-party code directly in their pages.
7.3 Other
BEEP [15] allows a browser to examine and approve scripts before they are executed, according to a policy provided by the website as a JavaScript function. Content Security Policies (CSPs) [31] are a mechanism developed by Mozilla to restrict the inclusion of resources such as scripts, images and frames into the web page to a specific subset of third-party servers. These works were motivated by the prevention of code injection attacks, e.g., cross-site scripting (XSS). Thus, their mechanisms are helpful for classifying entire scripts as "allowed" or "disallowed," but they do not help with the object-level isolation and access control problem addressed by JaTE. Indeed, policy enforcement is not a promising approach for blocking XSS, since the inferred origin of the malicious script would be the same as that of the first party. This is why XSS defenses are mainly focused on detecting invalid script content, such as whole-script [10] or partial-script [27] content that has been reflected from HTTP parameters.
ConScript [22] augments Internet Explorer with policy check callbacks embedded directly in the JavaScript engine. Its goal is to securely mediate the operations made by a script, and apply a user-specified policy. WebJail [38] uses an approach similar to ConScript but implemented on Firefox. Its goal is to provide a higher-level interface to express policies that impose further restrictions over the SOP, e.g., restricting access to local storage, or network operations. The new HTML 5 specification [3] includes coarse-grained support for sandboxing iframes by specifying a subset of capabilities for the contained document, such as running JavaScript code or opening pop-up windows. While all of these techniques are helpful for further restricting untrusted scripts, note that they still only provide a single security context (such as a frame) for the code. In contrast, JaTE requires distinct security contexts to be maintained for the host and third-parties, and distinct policies to be enforced on them, while allowing them all to run within the same frame.
8. CONCLUSION
This paper presented JaTE, a compartment-based solution for confining third-party JavaScript code. Although this problem is of great practical significance, previous solutions have suffered from incompatibility with existing code, excessive performance overheads, or a lack of object-granularity policies; JaTE avoids these shortcomings without browser modifications, while imposing a page load overhead of about 20%.
9. REFERENCES
Programming in Biomolecular Computation
Lars Hartmann, Neil D. Jones, Jakob Grue Simonsen
Department of Computer Science, University of Copenhagen (DIKU), Copenhagen, Denmark
Abstract
Our goal is to provide a top-down approach to biomolecular computation. In spite of widespread discussion about connections between biology and computation, one question seems notable by its absence: Where are the programs? We introduce a model of computation that is evidently programmable, by programs reminiscent of low-level computer machine code; and at the same time biologically plausible: its functioning is defined by a single and relatively small set of chemical-like reaction rules. Further properties: the model is stored-program: programs are the same as data, so programs are not only executable, but are also compilable and interpretable. It is universal: all computable functions can be computed (in natural ways and without arcane encodings of data and algorithm); it is also uniform: new “hardware” is not needed to solve new problems; and (last but not least) it is Turing complete in a strong sense: a universal algorithm exists, that is able to execute any program, and is not asymptotically inefficient. A prototype model has been implemented (for now in silico on a conventional computer). This work opens new perspectives on just how computation may be specified at the biological level.
Keywords: biomolecular, computation, programmability, universality.
1 Biochemical universality and programming
It has been known for some time that various forms of biomolecular computation are Turing complete [7,8,10,12,25,29,32,33]. The net effect is to show that any computable function can be computed, in some appropriate sense, by an instance of the biological mechanism being studied. However, the arguments for Turing universality we have seen are less than compelling from a programming perspective. This paper's purpose is to provide a better computation model where the concept of "program" is clearly visible and natural, and in which Turing completeness is not artificial, but rather a natural part of biomolecular computation. We begin by evaluating some established results on biomolecular computational completeness from a programming perspective, and then constructively provide an alternative solution. The new model seems biologically plausible, and usable for solving a variety of problems of computational as well as biological interest. It should be noted that while our model can support full parallelism (as often seen in biologically-inspired computing), that is not the focus of this paper; our foci are completeness and universality: we consider one program running on one, contiguous piece of data.
**The central question:** can program execution take place in a biological context? Evidence for "yes" includes many analogies between biological processes and the world of programs: *program-like behavior*, e.g., genes that direct protein fabrication; the "switching on" and "switching off" of processes; and reproduction.
A clarification from the start: this paper takes a *synthetic* viewpoint, concerned with building things as in the engineering and computer sciences. This is in contrast to the ubiquitous *analytic* viewpoint common to the natural sciences, concerned with finding out how naturally evolved things work.
The authors’ backgrounds lie in the semantics of programming languages, compilers, and computability and complexity theory; and admittedly not biology. We focus on the synthetic question can, rather than the usual natural scientists’ analytical question does.
**Where are the programs?** In existing biomolecular computation models it is very hard to see anything like a program that realises or directs a computational process. For instance, in cellular automata the program is expressed only in the initial cell configuration, or in the global transition function. In many biocomputation papers the authors, given a problem, cleverly devise a biomolecular system that can solve this particular problem. However, the algorithm being implemented is hidden in the details of the system’s construction, and hard to see, so the program or algorithm is in no sense a “first-class citizen”. Our purpose is to fill this gap, to establish a biologically feasible framework in which programs are first-class citizens.
2 Relation to other computational frameworks
We put our contributions in context by quickly summarising some other computational completeness frameworks. **Key dimensions:** uniformity; programmability; efficiency; simplicity; universality; and biological plausibility. (Not every model is discussed along every dimension; e.g., a model weak on a dimension early in the list need not be considered for biological plausibility.)
**Circuits, BDDs, finite automata.** While well proven in engineering practice, these models don’t satisfy our goal of computational completeness. The reason: they are *non-uniform* and so not Turing complete. Any single instance of a circuit or a BDD or a finite automaton has a control space and memory that are both finite. Consequently, any *general but unbounded* computational problem (e.g., multiplying two arbitrarily large integers) must be done by choosing one among an infinite family of circuits, BDDs or automata.
**The Turing machine.** *Strong points.* Highly successful for theoretical purposes, the Turing model is uniform; there exists a clear concept of “program”; and the “universal Turing machine” from 1936 is the seminal example of a self-interpreter. The Turing model has fruitfully been used to study computational complexity problem classes as small as $\text{PTIME}$ and $\text{LOGSPACE}$.
**Weak points.** Turing machines do not accurately model computation times small enough to be realistically interesting, e.g., near-linear time. The inbuilt “data transport” problems due to the model’s one-dimensional tape (or tapes, on a multi-tape variant) mean that naturally efficient algorithms may be difficult to program on a Turing machine: a time $O(n)$ algorithm may suffer asymptotic slowdown, for instance being forced to run in time $O(n^2)$, because of architectural limitations. A universal Turing machine has essentially the same problem: it typically runs quadratically slower than the program it is simulating. Still greater slowdowns may occur if one uses smaller Turing complete languages, for instance the counter or Minsky register machines as used in [7,8,12,22].
**Other computation models with an explicit concept of program.** Numerous alternatives to the Turing machine have been developed, e.g., the Tag systems studied by Post and Minsky, and a variety of register or counter machines. Closer to computer science are recursive functions; the $\lambda$-calculus; functional programming languages such as LISP; and machines with randomly addressable memories including the RAM and, most relevant to our work, its stored-program variant the RASP [19]. These models rate well on some of the key dimensions listed above. However they are rather complex; and were certainly not designed with biological plausibility in mind.
**Cellular automata.** John von Neumann’s groundbreaking work on cellular automata was done in the 1940s, at around the time he also invented today’s digital computer. In [29] computational completeness was established by showing that any Turing machine could be simulated by a cellular automaton. Further, it was painstakingly and convincingly argued that a cellular automaton could achieve self-reproduction. Von Neumann’s and subsequent cellular automaton models, e.g., LIFE and Wolfram’s models [15,8,32], have some shortcomings, though. Although recent advances have remedied the lack of asynchronous computations [23], a second, serious drawback is the lack of programmability: once the global transition function has been selected (e.g., there is only one such in LIFE), there is little more that the user of the system can do; the only degree of freedom remaining is to choose the initial configuration of cell states. There is no explicit concept of a program that can be devised by the user. Rather, any algorithmic ideas have to be encoded in a highly indirect manner, into either the global transition function or the initial cell state configuration. In a sense, the initial state of a universal CA represents both the program to be simulated and its input; but in the zoo of cellular automata proven to be universal, there seems to be no standard way to identify which parts of the initial state correspond to, say, a certain control structure in a program, or a specific substructure of a data structure such as a list.
**Biomolecular computation frameworks.** We will see that the Turing-typical asymptotic slowdowns can be avoided while using a biomolecular computing model. This provides an advance over both earlier work on automata-based computation models (Turing machines, counter machines, etc.), and over some other
approaches to biomolecular computing.
A number of contributions exist in this area; a non-exhaustive list:
[1,3,7,10,8,11,12,17,20,21,25,26,30,31,5,33]. The list is rather mixed: several of the articles describe concrete finite-automaton-like computations, emphasising their realisation in actual biochemical laboratory contexts. As such their emphasis is not on general computations but rather on showing the feasibility of specific computations in the laboratory. Articles [7,8,12,20,33] directly address Turing completeness, but the algorithmic or programming aspects are not easy to see.
How our approach is different: In contrast to several existing models, our atomic notion (the “blob”) carries a fixed amount of data and has a fixed number of possible interaction points with other blobs. Further, one fixed set of rules specifies how local collections of blobs are changed. In this sense, our setup resembles specific cellular automata, e.g., Conway’s Game of Life, where only the initial state may vary. Unlike cellular automata, however, both programs and data are clearly identified ensembles of blobs. Further, we use a textual representation of programs closely resembling machine code, such that each line essentially corresponds to a single blob instruction with parameters and bonds. The resulting code conforms closely to traditional low-level programming concepts, including the use of conditionals and jumps.
Outline of the paper: Section 3 introduces some notation to describe program execution. Section 4 has more discussion of computational completeness. Section 5 concerns the blob model of computation, with an explicit program component. Section 6 relates the blob model to more traditional computation models, and Section 7 concludes. Appendix A shows how a Turing machine may be simulated in the blob model – doable within a constant slowdown because of the flexibility of blobs when considered as data structures.
3 Notations: direct or interpretive program execution
What do we mean by a program (roughly)? An answer: a set of instructions that specify a series (or set) of actions on data. Actions are carried out when the instructions are executed (activated, ...). Further, a program is software, not hardware: a program should itself be a concrete data object that can be replaced to specify different actions.
Direct program execution: write \(\llbracket \text{program} \rrbracket\) to denote the meaning or net effect of running program. A program meaning is often a function from input data values to output values. Expressed symbolically:
\[
\llbracket \text{program} \rrbracket (\text{data}_{in}) = \text{data}_{out}
\]
The program is activated (run, executed) by applying the semantic function \(\llbracket \_ \rrbracket\). The task of programming is, given a desired semantic meaning, to find a program that computes it. Some mechanism is needed to execute program, i.e., to compute \(\llbracket \text{program} \rrbracket\). This can be done either by hardware or by software.
**Interpretive program execution:** Here program is a passive data object, but it is now activated by running the interpreter program. (Of course, some mechanism will be needed to run the interpreter program, e.g., hardware or software.) An equation similar to the above describes the effect of interpretive execution:
\[
\llbracket \text{interpreter} \rrbracket (\text{program}, \text{data}_{in}) = \text{data}_{out}
\]
Note that program is now used as data, and not as an active agent. Self-interpretation is possible and useful [18]; the same value \(\text{data}_{out}\) can be computed by:
\[
\llbracket \text{interpreter} \rrbracket (\text{interpreter}, (\text{program}, \text{data}_{in})) = \text{data}_{out}
\]
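To make the two execution modes concrete, here is a minimal sketch in Python (our notation, a hypothetical toy language rather than any biomolecular formalism): programs are plain data, a `run` function plays the role of the semantic function \(\llbracket \_ \rrbracket\), and an interpreter is itself just a program taking \((\text{program}, \text{data}_{in})\) as input.

```python
# A minimal sketch (toy language): a "program" is a list of instruction
# names, and run() plays the role of the semantic function [[.]].

def run(program, data):
    """Direct execution: [[program]](data_in) = data_out."""
    for op in program:
        if op == "inc":
            data += 1
        elif op == "double":
            data *= 2
        else:
            raise ValueError(f"unknown instruction {op!r}")
    return data

def interpreter(args):
    """Interpretive execution: the program is passive data here,
    [[interpreter]](program, data_in) = data_out."""
    program, data = args
    return run(program, data)

p = ["inc", "double"]             # a program is just a data object
assert run(p, 3) == 8             # direct execution
assert interpreter((p, 3)) == 8   # same result, computed interpretively
```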
4 Turing completeness of computational models
**How to show Turing completeness of a computation framework.** This is typically shown by *reduction* from another problem already known to be Turing complete. Notation: let \(L\) and \(M\) denote languages (biological, programming, whatever), and let \(\llbracket p \rrbracket^L\) denote the result of executing \(L\)-program \(p\), for example an input-output function computed by \(p\). Then we can say that language \(M\) is at least as powerful as \(L\) if
\[
\forall p \in L\text{-programs} \;\; \exists q \in M\text{-programs} \;\; (\llbracket p \rrbracket^L = \llbracket q \rrbracket^M)
\]
A popular choice is to let \(L\) be some very small Turing complete language, for instance Minsky register machines or two-counter machines (2CM). The next step is to let \(M\) be a biomolecular system of the sort being studied. The technical trick is to argue that, given any \(L\)-instance of (say) a 2CM program, it is possible to construct a biomolecular \(M\)-system that faithfully simulates the given 2CM.
Oddly enough, Turing completeness is not often used to show that certain problems *can* be solved by \(M\)-programs; but rather only to show that, say, the equivalence or termination problems of \(M\)-programs are algorithmically undecidable because they are undecidable for \(L\), and the properties are preserved under the construction. This discussion brings up a central issue:
**Simulation as opposed to interpretation.** Arguments to show Turing completeness are (as just described) usually by *simulation*: for each problem instance (say a 2CM) one somehow constructs a biomolecular system such that ... (the system in some sense solves the problem). However, in many papers for each problem instance the construction of the simulator is done by hand, e.g., by the author writing the article. In effect the existential quantifier in \(\forall p \exists q (\llbracket p \rrbracket^L = \llbracket q \rrbracket^M)\) is computed by hand. This phenomenon is clearly visible in papers on cellular computation models: completeness is shown by simulation rather than by interpretation.
In contrast, Turing’s original “Universal machine” simulates by means of *interpretation*: a stronger form of imitation, in which the existential quantifier is realised
by machine. Turing’s “Universal machine” is capable of executing an arbitrary Turing machine program, once that program has been written down on the universal machine’s tape in the correct format, and its input data has been provided. Our research follows the same line, applied in a biological context: we show that simulation can be done by general interpretation, rather than by one-problem-at-a-time constructions.
5 Programs in a biochemical world
Our goal is to express programs in a biochemical world. Programming assumptions based on silicon hardware must be radically re-examined to fit into a biochemical framework. We briefly summarize some qualitative differences.
- **There can be no pointers to data**: addresses, links, or unlimited list pointers. In order to be acted upon, a data value must be *physically adjacent* to some form of actuator. A biochemical form of adjacency: a chemical bond between program and data.
- **There can be no action at a distance**: all effects must be achieved via chains of local interactions. A biological analog: signaling.
- **There can be no nonlocal control transfer**, e.g., no analog to GOTOs or remote function/procedure calls. However some control loops are acceptable, provided the “repeat point” is (physically) near the loop end. A biological analog: a bond between different parts of the same program.
- On the other hand there exist available **biochemical resources** to tap, i.e., free energy so actions can be carried out, e.g., to construct local data, to change the program control point, or to add local bonds into an existing data structure. Biological analogs: Brownian movement, ATP, oxygen.
The above constraints suggest how to structure a biologically feasible model of computation. The main idea is to keep both program control point and the current data inspection site always close to a *focus point* where all actions occur. This can be done by continually shifting the program or the data, to keep the *active program blob* (APB) and *active data blob* (ADB) always in reach of the focus. The picture illustrates this idea for direct program execution.
**Running program** p, i.e., computing \(\llbracket p \rrbracket(d)\):

[Figure: the program p and the data d meet at a focus point for control and data, where the active program blob (APB) and the active data blob (ADB) are joined by the program-to-data bond.]
5.1 The Blob model
We take a very simplified view of a (macro-)molecule and its interactions, with abstraction level similar to the Kappa model [12,7,14]. To avoid misleading detail questions about real molecules we use the generic term “blob” for an abstract molecule. A collection of blobs in the biological “soup” may be interconnected by two-way bonds linking the individual blobs’ bond sites.
A program $p$ is (by definition) a connected assembly of blobs. A data value $d$ is (also) by definition a connected assembly of blobs. At any moment during execution, i.e., during computation of $[p](d)$ we have:
- One blob in $p$ is active, known as the active program blob or APB.
- One blob in $d$ is active, known as the active data blob or ADB.
- A bond $\ast$, between the APB and the ADB, is linked at a specially designated bond site, bond site 0, of each.
The data view of blobs: A blob has several bond sites and a few bits of local storage limited to fixed, finite domains. Specifically, our model has four bond sites, identified by the numbers 0, 1, 2, 3. At any instant during execution, each bond site can hold a bond – that is, a link to a (different) blob – or the value $\perp$, indicating that the site is unbound.
In addition each blob has 8 cargo bits of local storage containing Boolean values, and also identified by numerical positions: 0,1,2,...,7. When used as program, the cargo bits contain an instruction (described below) plus an activation bit, set to 1. When used as data, the activation bit must be 0, but the remaining 7 bits may be used as the user wishes.
A blob with 3 bond sites bound and one unbound:

[Figure: a blob with bond sites 0, 1 and 2 bound and the remaining bond site unbound ($\perp$).]
Since bonds are in essence two-way pointers, they have a “fan-in” restriction: a given bond site can contain at most one bond (if not $\perp$).
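As a data structure, a blob is easy to render in conventional code. The following Python sketch (class and function names are ours, not part of the formalism) captures the data view just described: eight one-bit cargo values, four bond sites, and two-way bonds with the fan-in restriction enforced.

```python
# A sketch of the blob data view: 4 bond sites, 8 cargo bits, two-way bonds.

class Blob:
    def __init__(self):
        self.cargo = [0] * 8       # cargo bits 0..7, each holding a Boolean
        self.bonds = [None] * 4    # bond sites 0..3; None plays the role of ⊥

def link(a, site_a, b, site_b):
    """Create a two-way bond. Each site holds at most one bond (fan-in 1)."""
    assert a.bonds[site_a] is None and b.bonds[site_b] is None, "site in use"
    a.bonds[site_a] = (b, site_b)
    b.bonds[site_b] = (a, site_a)
```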
The program view of blobs: Blob programs are sequential. There is no structural distinction between blobs used as data and blobs used as program. A single, fixed set of instructions is available for moving and rearranging the cursors, and for testing or setting a cargo bit at the data cursor. Novelties from a computer science viewpoint: there are no explicit program or data addresses, just adjacent blobs. At any moment there is only a single program cursor and a single data cursor, connected by a bond written $\ast$ above.
Instructions, in general. The blob instructions correspond roughly to “four-address code” for a von Neumann-style computer. An essential difference, though, is that a bond is a two-way link between two blobs, and is not an address at all. It is not a pointer; there exists no address space as in a conventional computer. A blob’s 4 bond sites contain links to other instructions, or to data via the APB-ADB bond.
For program execution, one of the 8 cargo bits is an “activation bit”; if 1, it marks the instruction currently being executed. The remaining 7 cargo bits are interpreted as a 7-bit instruction so there are $2^7 = 128$ possible instructions in all. An instruction has an operation code (around 15 possibilities), and 0, 1 or 2 parameters that identify single bits, or bond sites, or cargo bits in a blob. See table below for current details. For example, SCG v c has 16 different versions since v can be one of 2 values, and c can be one of 8 values.
Why exactly 4 bonds? The reason is that each instruction must have a bond to its predecessor; further, a test or “jump” instruction will have two successor bonds (true and false); and finally, there must be one bond to link the APB and the ADB, i.e., the bond * between the currently executing instruction and the currently visible data blob. The FIN instruction is a device to allow a locally limited fan-in.
A specific instruction set (a bit arbitrary). The formal semantics of instruction execution are specified precisely by means of a set of 128 biochemical reaction rules in the style of [12]. For brevity here, we just list the individual instruction formats and their informal semantics. Notation: b is a 2-bit bond site number, c is a 3-bit cargo site number, and v is a 1-bit value.
Numbering convention: the program APB and the data ADB are linked by bond * between bond sites 0 of the APB and the ADB. An instruction’s predecessor is linked to its bond site 1; bond site 2 is the instruction’s normal successor; and bond site 3 is the alternative “false” successor, used by jump instructions that test the value of a cargo bit or the presence of a bond.
| Instruction | Description | Informal semantics (:=: is a two-way interchange) |
|-------------|-------------|---------------------------------------------------|
| SCG v c | Set CarGo bit | ADB.c := v; APB := APB.2 |
| JCG c | Jump CarGo bit | if ADB.c = 0 then APB := APB.3 else APB := APB.2 |
| JB b | Jump Bond | if ADB.b = ⊥ then APB := APB.3 else APB := APB.2 |
| CHD b | CHange Data | ADB := ADB.b; APB := APB.2 |
| INS b1 b2 | INSert new bond | new.b2 :=: ADB.b1; new.b1 :=: ADB.b1.bs; APB := APB.2 |
| SWL b1 b2 | SWap Links | ADB.b1 :=: ADB.b2.b1; APB := APB.2 |
| SBS b1 b2 | SWap Bond Sites | ADB.b1 :=: ADB.b2; APB := APB.2 |
| SWP1 b1 b2 | Swap bs1 on linked | ADB.b1.1 :=: ADB.b2.1; APB := APB.2 |
| SWP3 b1 b2 | Swap bs3 on linked | ADB.b1.3 :=: ADB.b2.3; APB := APB.2 |
| JN b1 b2 | Join b1 to linked b2 | ADB.b1 :=: ADB.b1.b2; APB := APB.2 |
| DBS b | Destination bond site | cargo bits 0,1 := bond site number of destination for ADB.b |
| FIN | Fan IN | APB := APB.2 (bond site 3 is an alternative predecessor) |
| EXT | EXiT program | |
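To illustrate the operational reading of the table, here is a hedged Python sketch of one execution step for a few of the instructions (SCG, JCG, JB, CHD). It is deliberately simplified, and is not the formal reaction-rule semantics: instructions are kept symbolic rather than bit-encoded, bond sites hold direct references to neighbouring blobs, and moving the activation bit and the * bond is implicit in the returned (APB, ADB) pair.

```python
# A simplified sketch of blob instruction semantics.

class Blob:
    def __init__(self, instr=None):
        self.instr = instr         # e.g. ("SCG", 1, 5); None for pure data blobs
        self.cargo = [0] * 8       # cargo bits 0..7
        self.bonds = [None] * 4    # sites 0..3; here: direct blob references

def step(apb, adb):
    """Execute the instruction at the APB; return the new (APB, ADB) pair."""
    op = apb.instr
    succ = apb.bonds[2]            # normal successor (bond site 2)
    alt = apb.bonds[3]             # alternative "false" successor (site 3)
    if op[0] == "SCG":             # SCG v c: ADB.c := v; APB := APB.2
        _, v, c = op
        adb.cargo[c] = v
        return succ, adb
    if op[0] == "JCG":             # if ADB.c = 0 then APB.3 else APB.2
        _, c = op
        return (succ if adb.cargo[c] else alt), adb
    if op[0] == "JB":              # if ADB.b = ⊥ then APB.3 else APB.2
        _, b = op
        return (alt if adb.bonds[b] is None else succ), adb
    if op[0] == "CHD":             # ADB := ADB.b; APB := APB.2
        _, b = op
        return succ, adb.bonds[b]
    raise NotImplementedError(op[0])
```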
An example in detail: the instruction SCG 1 5, as picture and as a rewrite rule. SCG stands for “set cargo bit”. The effect of instruction SCG 1 5 is to change the 5th cargo bit of the ADB (active data blob) to 1. First, an informal picture to show its effect:

[Figure: the blobs APB, APB′ (the successor instruction) and ADB, before and after executing SCG 1 5.]
Note: the APB-ADB bond * has moved: before execution, it connected APB with ADB. After execution, it connects APB′ with ADB, where APB′ is the next instruction: the successor (via bond S) of the previous APB. Also note that the activation bit has changed: before, it was 1 at APB (indicating that the APB was about to be executed) and 0 at APB′. Afterwards, those two bit values have been interchanged.
**Syntax:** Code the above instruction as an 8-bit string: 1 100 1 101. Here activation bit \( a = 1 \) indicates that this is the current instruction (about to be executed). Operation code SCG (happens to be) encoded as 100; and binary numbers are used to express the new value: \( v = 1 \), and the number of the cargo bit to be set: \( c = 5 \).
The instruction also has four bond sites: \( *PS\perp \). Here \( * \) is the bond to the ADB, \( P \) is a bond to the predecessor of instruction SCG 1 5, \( S \) is a bond to its successor, and bond site 3 is not used. The full instruction, with 8 cargo bits and four bond sites, can be written in the form\(^3\): \( B[11001101](*PS\perp) \).
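The bit-level encoding can be checked mechanically. A small Python sketch (the opcode value 100 for SCG is taken from the example above; the exact field layout is our reading of it):

```python
# Pack SCG v c into 8 bits: [activation][opcode = 100][v][c as 3 bits].

def encode_scg(active, v, c):
    return (active << 7) | (0b100 << 4) | (v << 3) | c

assert format(encode_scg(1, 1, 5), "08b") == "11001101"   # 1 100 1 101
```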
**Semantics:** Instruction SCG 1 5 transforms the three blobs APB, APB′ and ADB as in the picture above. This can be expressed more exactly using a rewrite rule as in [12] that takes three members of the blob species into three modified ones. For brevity we write “-” at bond sites or cargo sites that are not modified by the rule. Note that the labels APB, ADB, etc. are not part of the formalism, just labels added to help the reader.
\[
\begin{aligned}
\text{APB} &\mapsto B[1\,100\,1\,101](*\,P\,S\,\perp) \\
\text{APB}' &\mapsto B[0\,\text{-}\,\text{-}\,\text{-}\,\text{-}\,\text{-}\,\text{-}\,\text{-}](\perp\,S\,\text{-}\,\text{-}) \\
\text{ADB} &\mapsto B[0\,\text{-}\,\text{-}\,\text{-}\,\text{-}\,\text{-}\,\text{-}\,\text{-}](*\,\text{-}\,\text{-}\,\text{-})
\end{aligned}
\;\Rightarrow\;
\begin{aligned}
\text{APB} &\mapsto B[0\,100\,1\,101](\perp\,P\,S\,\perp) \\
\text{APB}' &\mapsto B[1\,\text{-}\,\text{-}\,\text{-}\,\text{-}\,\text{-}\,\text{-}\,\text{-}](*\,S\,\text{-}\,\text{-}) \\
\text{ADB} &\mapsto B[0\,\text{-}\,\text{-}\,\text{-}\,\text{-}\,1\,\text{-}\,\text{-}](*\,\text{-}\,\text{-}\,\text{-})
\end{aligned}
\]
6 The blob world from a computer science perspective
First, an operational image: any well-formed blob program, while running, is a collection of program blobs adjacent to a collection of data blobs, such that there is *one* critical bond (\( * \)) that links the APB and the ADB (the active program blob and the active data blob). As the computation proceeds, the program or
\(^3\) \( B \) stands for a member of the blob “species”.
data may move about, e.g., rotate as needed to keep their contact points adjacent (the APB and the ADB). For now, we shall not worry about the thermodynamic efficiency of moving arbitrarily large program and data in this way; for most realistic programs, we assume them to be sufficiently small (on the order of thousands of blobs) that energy considerations and blob coherence are not an issue.
6.1 The blob language
It is certainly small: around 15 operation codes (for a total of 128 instructions once parameters are included). Further, the set is irredundant, in that no instruction’s effect can be achieved by a combination of other instructions: there are easy computational tasks that simply cannot be performed by any program lacking, say, SCG or FIN.
There is certainly a close analogy between blob programs and a rudimentary machine language. However a bond is not an address, but closer to a two-way pointer. On the other hand, there is no address space, and no address decoding hardware to move data to and from memory cells. An instruction has an unusual format, with 8 single bits and 4 two-way bonds. There is no fixed word size for data, there are no computed addresses, and there are no registers or indirection.
Blob programs have some similarity to LISP or SCHEME, but: there are no variables; there is no recursion; and bonds have a “fan-in” restriction.
6.2 What can be done in the blob world?
In principle the ideas presented and further directions are clearly expressible and testable in Maude or another tool for implementing term rewriting systems, or the kappa-calculus [7,9,12,14]. Current work involves programming a blob simulator. A prototype implementation has been made, with a functioning self-interpreter.
The usual programming tasks (appending two lists, copying, etc.) can be solved straightforwardly, albeit not very elegantly because of the low level of blob code. Appendix A shows how to generate blob code from a Turing machine, thus establishing Turing-completeness.
It seems possible to make an analogy between universality and self-reproduction that is tighter than that seen in the von Neumann and other cellular automaton approaches. It should now be clear that familiar Computer Science concepts such as interpreters and compilers also make sense at the biological level, and hold the promise of becoming useful operational and utilitarian tools.
6.3 Self-interpretation in the blob world
The figure of Section 5 becomes even more interesting when a program is executed interpretively, computing \([\text{interpreter}](p, d)\).
We have developed a “blob universal machine”, i.e., a self-interpreter for the blob formalism that is closely analogous to Turing’s original universal machine.
6.4 Parsimony of the instruction set
All instructions are currently in use in the self-interpreter, indeed all instructions appeared to be necessary in programming it. With the possible (but, we believe, unlikely) exception of the various swap instructions (SWL, SBS, SWP1, SWP3), we conjecture the instruction set to be parsimonious in the sense that no proper subset of the instruction set can be used to simulate the remaining instructions. A possible formal proof is being investigated.
6.5 Dimensionality limitations
The physical world imposes a dimensionality requirement we have not yet addressed: data and program code cannot be packed with a density greater than that allowed by three-dimensional Euclidean space. The idea of a biologically plausible computing model that must work in 3-space provokes several interesting questions.
In the blob model, following a chain of $k$ bonds from the active data blob (at any time in a computation) should give access to at most $O(k^3)$ blobs. This is not guaranteed by the blob model as presented above; indeed, a blob program could build a complete 3-ary tree of depth $k$ containing $3^k$ blobs at distance $k$. Such a structure could not be represented in 3-space under our restrictions while preserving the intended semantic property: that any two blobs linked by a bond be adjacent in the biological “soup”.
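The density condition can at least be tested mechanically. The sketch below (our formulation; the constant c is an arbitrary choice) counts the blobs reachable within k bonds by breadth-first search and compares the count against a c·k³ volume bound – a necessary, though not sufficient, condition for an embedding in 3-space with bonded blobs adjacent.

```python
# Check the O(k^3) reachability bound by breadth-first search.
from collections import deque

def reachable_within(adj, start, k):
    """Number of nodes within distance k of start; adj maps node -> neighbours."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, d = frontier.popleft()
        if d == k:
            continue
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return len(seen)

def respects_3d_bound(adj, start, k, c=5):
    """Necessary (not sufficient) condition for a unit-density 3D embedding."""
    return reachable_within(adj, start, k) <= c * k ** 3

# A complete 3-ary tree fails the bound for large k: level k alone has 3^k nodes.
```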
The usual Turing machine has a fixed number of 1-dimensional tapes (though $k$-dimensional versions exist, for fixed $k$). Cellular automata as in [29,8,32] have a fixed 2-dimensional architecture. Dimensionality questions are not relevant to Minsky-style machines with a fixed number of registers, e.g., the two-counter machine.
Machines that allow computed addresses and indirection, e.g., the RAM, RASP, etc., have no dimensionality limitations at all, just as in the “raw” blob model: traversing a chain of $k$ bonds from one memory cell can give access to a number of cells exponential in $k$ (or higher if indexing is allowed).
The well-known and well-developed Turing-based computational complexity theory starts by restricting programs’ running time and/or space. A possible analogy would be to limit the dimensionality of the data structures that a program may build during a computation.
Pursuing the analogy, the much-studied complexity class PTIME is quite large,
indeed so large that dimensionality makes no difference: on any traditional model where data dimensionality makes sense, it would be an easy exercise to show that \( \text{PTIME} = \text{PTIME}^{3D} \). What if instead we study the class \( \text{LINTIME} \) of problems solvable in linear time (as a function of input size)? Alas, this smaller, realistically motivated class is not very robust for Turing machines, as small differences in Turing models can give different versions of \( \text{LINTIME} \) (Sections 18, 19, 25.6 in [19]). It seems likely though that the \( \text{LINTIME} \) class for blob machines is considerably more robust.
**Conjecture:** \( \text{LINTIME}^{3D} \subsetneq \text{LINTIME} \) on the blob model.
**Another interesting question:** does self-interpretation cause a need for higher dimensionality? We conjecture that this is not so for any one fixed interpreted program; but that diagonalisation constructions can force the necessary dimensionality to increase. This appears to be an excellent direction for future work.
7 Contributions of this work
We have for the first time investigated the possibility of programmable bio-level computation. The work sketched above, in particular the functioning of blob code, can all be naturally expressed in the form of abstract biochemical reaction rules. Further, we have shown molecular computation to be universal in a very strong sense: not only can every computable function be computed by a blob program, but this can all be done using a single, fixed set of reaction rules: it is not necessary to construct new rule sets (in essence, new biochemical architectures) in order to solve new problems; it is enough to write new programs.
The new framework provides Turing-completeness efficiently and without asymptotic slowdowns. It seems possible to make a tighter analogy between universality and self-reproduction than in the von Neumann and other cellular automaton approaches.
It should be clear that familiar Computer Science concepts such as interpreters and compilers also make sense at the biological level, and hold the promise of becoming useful operational and utilitarian tools.
References
A Turing completeness of the blob model
We prove that any one-tape Turing machine with a single read/write head may be simulated by a blob program. The tape contents are always finite and enclosed between a left endmarker $\langle$ and a right endmarker $\rangle$.
A.1 Turing machine syntax
A Turing machine is a tuple $Z = (\{0, 1\}, Q, \delta, q_{\text{start}}, q_{\text{halt}})$. The tape and input alphabet are $\{0, 1\}$. (Blanks are not included, but may be encoded suitably by bits.) $Q$ is a finite set of control states including distinct start and halting states $q_{\text{start}}, q_{\text{halt}} \in Q$. The transition function has type
$$\delta : Q \times \{0, 1, \langle, \rangle\} \rightarrow \mathcal{A} \times Q$$
where an action is any $A \in \mathcal{A} = \{L, R, W0, W1\}$. Notation: we write a Turing machine instruction as
$$\delta(q, b) \rightarrow (A, r)$$
meaning “In state $q$, reading bit $b$, perform action $A$ and move to state $r$”. Actions $L, R, W0, W1$ mean informally “move Left, move Right, Write 0, Write 1”, respectively. For simplicity we assume that Turing machines may not both move and write on the tape in the same atomic step. (A “write-and-move” action may easily be implemented using two states and two steps.)
We also assume that every Turing machine satisfies the following consistency assumptions:
- If $\delta(q, \langle) \rightarrow (A, r)$ is an instruction, then $A = R$ (i.e. the machine never moves to the left of the left endmarker and cannot overwrite the endmarker).
- If $\delta(q, \rangle) \rightarrow (A, r)$ then $A \in \{L, W0, W1\}$ (i.e. the machine never moves to the right of the right endmarker, but can overwrite the endmarker).
Definition A.1 Let $Z$ be a Turing machine. The state graph of $Z$ is the directed graph whose nodes are the states of $Z$, with a directed edge from $q$ to $r$ annotated $(b, A)$ whenever there is an instruction $\delta(q, b) \rightarrow (A, r)$.
A.2 Turing machine semantics
A total state has the form
$$q \langle b_1 \ldots b_i \ldots b_n \rangle$$
where the $b_j$ are tape symbols, and $q$ is a control state. We define the tape contents of the machine to be everything enclosed between $\langle$ and $\rangle$.
The Turing machine defines a one-step transition relation between total states in the expected way (not spelled out here). Tapes may only grow to the right, not the left. We assume that if there is an instruction of the form $\delta(q, \rangle) \rightarrow (W0, r)$ or $\delta(q, \rangle) \rightarrow (W1, r)$ (i.e. the right endmarker is overwritten), then the tape is
automatically extended to the right with a new endmarker to the immediate right of the previous endmarker.
Remark: the tape contents will always be finite after a finite number of computation steps.
Input/Output: A Turing machine $Z$ computes a partial function
$$\llbracket Z \rrbracket : \{0, 1\}^* \rightarrow \{0, 1\}^*$$
- Input: The machine is in its start state with the tape head on the tape cell to the immediate right of the left endmarker $\langle$. The input is the contents of the tape.
- Output: The machine is in its halt state. The output is the contents of the tape.
A.3 Compiling a Turing machine into a blob program
We describe a way to compile any Turing machine $Z = (\{0, 1\}, Q, \delta, q_{start}, q_{halt})$ into blob program code $code(Z)$ that simulates it. Compilation of a Turing machine into blob code is as follows:
- Generate blob code for each instruction $\delta(q, b) \rightarrow (A, r)$.
- Collect blob code for all the states into a single blob program.
Before describing the compilation algorithm, we explain how the blob code realises a step-by-step simulation of the Turing machine $Z$.
A.3.1 Turing machine representation by blobs
At any time $t$ in its computation, the Turing machine’s tape $b_1 \ldots b_i \ldots b_n$ will be represented by a finite sequence $B_1 \ldots B_i \ldots B_n$ of blobs. If at time $t$ the Turing machine head is scanning tape symbol $b_i$, the active data blob will be the blob $B_i$. Arrangement: each $B_i$ is linked to its predecessor via bond site 1, and to its successor via bond site 2. The Turing machine’s control state corresponds to the active program blob in $code(Z)$.
The cargo bits of the “data blobs” are used to indicate the contents of the tape cell:
- Cargo bit 0 is unused in the simulation.
- Cargo bit 1 holds the bit contained in the tape cell (if the blob represents either $\langle$ or $\rangle$, the contents of cargo bit 1 are irrelevant).
- Cargo bit 2 is ’1’ iff the blob represents the left endmarker $\langle$.
- Cargo bit 3 is ’1’ iff the blob represents the right endmarker $\rangle$.
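This representation is straightforward to build in conventional code. A Python sketch of the encoding just listed (class and helper names are ours; cf. the Blob sketch of Section 5.1):

```python
# One blob per tape cell: site 1 = predecessor, site 2 = successor;
# cargo bit 1 = tape symbol, bits 2/3 = left/right endmarker flags.

class Blob:
    def __init__(self):
        self.cargo = [0] * 8
        self.bonds = [None] * 4

def encode_tape(bits):
    """Build the blob chain for  < b1 ... bn >  and return its cells."""
    cells = [Blob() for _ in range(len(bits) + 2)]
    cells[0].cargo[2] = 1                    # left endmarker
    cells[-1].cargo[3] = 1                   # right endmarker
    for cell, bit in zip(cells[1:-1], bits):
        cell.cargo[1] = bit                  # the tape symbol
    for prev, nxt in zip(cells, cells[1:]):  # doubly link the chain
        prev.bonds[2] = nxt
        nxt.bonds[1] = prev
    return cells

tape = encode_tape([1, 0, 1])
assert tape[2].cargo[1] == 0 and tape[-1].cargo[3] == 1
```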
A.3.2 Syntax of the generated code
We will write the generated blob target program as straightline code with labels. For every instruction, the “next” blob code instruction to be executed is the one linked to the active program blob by the latter’s “successor” bond site 2. Thus, in a code sequence such as SCG 0 5 followed by EXT, the blob corresponding to SCG 0 5 has its bond site 2 linked to the “predecessor” bond site 1 of the blob corresponding to EXT.
A.3.3 Code generation for each state
Let \( q \neq q_{\text{halt}} \) be a state. The four possible kinds of transitions on state \( q \) are:
\[
\begin{align*}
\delta(q, 0) &\rightarrow (A0, q0) \\
\delta(q, 1) &\rightarrow (A1, q1) \\
\delta(q, \langle) &\rightarrow (AL, qL) \\
\delta(q, \rangle) &\rightarrow (AR, qR)
\end{align*}
\]
where \( q0, q1, qL, qR \in Q, A0, A1 \in \{L, R, W0, W1\} \), and \( AL, AR \in \{L, W0, W1\} \).
We generate code for \( q \) as follows. For typographical reasons we write \( \text{EL} \) for \( \langle \) and \( \text{ER} \) for \( \rangle \). The action code notations \([A0]\) etc. are explained below, as is the label notation \(\langle\text{label}\rangle\). The initial FIN code may be safely ignored on a first reading.
```plaintext
Generate i-1 FIN    // Assume program port 2 is always the "next" operation.
                    // Each FIN is labeled as noted below. The last FIN is
                    // bound (on its bond site 2) to the blob labeled 'Q' below.
Q:   JCG 2 QLE      // If 1, we're at the left tape end. By convention, bond
                    // site 3 of the APB is bound to the blob labeled QLE.
     JCG 3 QRE      // If 1, we're at the right tape end.
     JCG 1 Q1       // Not at any end. If a '0' is scanned, move along
                    // (on bond site 2); otherwise a '1' is scanned and we
                    // jump to Q1 (on bond site 3).
     [A0]           // Insert code for action A0.
     FIN qA0q0      // Go to the appropriate fanin before q0 (on bond site 2).
Q1:  [A1]           // Insert code for action A1.
     FIN qA1q1      // Go to the appropriate fanin before q1 (on bond site 2).
QLE: [AL]           // Insert code for AL.
     FIN qELALqL    // Go to the appropriate fanin before qL (on bond site 2).
QRE: R[AR]          // Insert code for AR (with the R[]-function).
     FIN qERARqR    // Go to the appropriate fanin before qR (on bond site 2).
                    // End of code for q.
```
Code for \( q_{\text{halt}} \):

```plaintext
Generate i-1 FIN    // Assume program port 2 is always the "next" operation.
                    // Each FIN is labeled as noted below. The last FIN is
                    // bound (on its bond site 2) to the blob labeled 'Qh' below.
Qh:  EXT
```
The JCG instructions test the data blob \( B_i \) to see which of the four possible kinds of transitions should be applied. Codes \([A0], [A1], [AL], R[AR]\) simulate the effect of the transition, and the FIN after each in effect does a “go to” to the blob code for the Turing machine’s next state. (This is made trickier by the fan-in restrictions, see Section A.3.7 below.)
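The scheme above mechanises directly. Here is a hedged Python sketch of a generator for one state, emitting the listing as symbolic assembly text. It simplifies by using the []-function of Section A.3.5 for all four actions (eliding the R[] variant of Section A.3.6 and the fan-in blobs of Section A.3.7); FIN labels follow the q'bAq convention of Section A.3.7.

```python
# Emit the dispatch-and-act code of Section A.3.3 for one state q != q_halt.

ACTION = {                       # the []-function of Section A.3.5
    "W0": ["SCG 0 1"],
    "W1": ["SCG 1 1"],
    "L":  ["CHD 1"],
    "R":  ["CHD 2"],
}

def gen_state(q, delta):
    a0, q0 = delta[(q, "0")]
    a1, q1 = delta[(q, "1")]
    aL, qL = delta[(q, "EL")]
    aR, qR = delta[(q, "ER")]
    lines = [f"{q}:   JCG 2 {q}LE",   # at the left tape end?  (cargo bit 2)
             f"      JCG 3 {q}RE",    # at the right tape end? (cargo bit 3)
             f"      JCG 1 {q}1"]     # scanning a '1'?        (cargo bit 1)
    for label, act, tgt, sym in [("", a0, q0, "0"), (f"{q}1:", a1, q1, "1"),
                                 (f"{q}LE:", aL, qL, "EL"),
                                 (f"{q}RE:", aR, qR, "ER")]:
        if label:
            lines.append(label)
        lines += ["      " + ins for ins in ACTION[act]]
        lines.append(f"      FIN {q}{sym}{act}{tgt}")   # label q'bAq
    return lines
```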
A.3.4 Two auxiliary functions
We use two auxiliary functions to generate code:
\[
[\cdot] : \{L, R, W0, W1\} \rightarrow \text{blobcode}
\]
and
\[
R[\cdot] : \{L, W0, W1\} \rightarrow \text{blobcode}
\]
Function \([\cdot]\) is used for code generation on arbitrary tape cells, and \(R[\cdot]\) for code generation when the Turing machine head is on the right endmarker, where some housekeeping chores must be performed due to tape extension.
A.3.5 Code generation for instructions not affecting the right end of the tape
```plaintext
[W0] = SCG 0 1    // Set tape cell content to 0
[W1] = SCG 1 1    // Set tape cell content to 1
[L]  = CHD 1      // Set ADB to previous blob (move head left)
[R]  = CHD 2      // Set ADB to next blob (move head right)
```
A.3.6 Code generation for instructions that can extend the tape
R[W0]:

```plaintext
SCG 0 3    // Current blob is no longer at the right tape end
INS 2 1    // Insert new blob at bond port 2 on the ADB (new tape cell);
           // the new blob is bound at its site 1
CHD 2      // Change ADB to the new blob (move head right)
SCG 1 3    // The new blob is now the right end of the tape
CHD 1      // Change ADB back to the original blob (move head left)
SCG 0 1    // Write a '0' in the tape cell (as per W0)
```

R[W1]:

```plaintext
SCG 0 3    // Current blob is no longer at the right tape end
INS 2 1    // Insert new blob at bond port 2 on the ADB (new tape cell);
           // the new blob is bound at its site 1
CHD 2      // Change ADB to the new blob (move head right)
SCG 1 3    // The new blob is now the right end of the tape
CHD 1      // Change ADB back to the original blob (move head left)
SCG 1 1    // Write a '1' in the tape cell (as per W1)
```

Finally, \( R[L] = [L] \) (move to the left); by the consistency assumptions, a Turing machine never moves right at the right tape end, so no \( R[R] \) is needed.
A.3.7 Control flow in the generated blob code
A technical problem in code generation. We now explain the meaning of the somewhat cryptic comments such as “Go to appropriate fanin before q1” in Section A.3.3, and notations such as qA0q0.
The problem: while a pointer-oriented language allows an unbounded number of pointers into the same memory cell, this is not true for the blob structures (the reason is that a bond is intended to model a chemical connection between two molecules). This is a “fan-in” restriction on program (and data) syntax.
A consequence: blob program code may not contain more than one control transfer to a given instruction, unless this is done by a bond site different from the usual “predecessor” site 1. The purpose of the instruction FIN is to allow two entry points: one as usual by bond site 1, and a second by bond site 3.
The initial FIN code generated in Section A.3.3. This concerns the entry points into the blob code for a Turing state q. Let i be the number of directed edges into q in the state graph (i.e., the number of “go to’s” to q).
If $i \leq 1$, we generate no fanin blobs.
Otherwise, we generate $i - 1$ fanin blobs before the code generated for q; these handle the i transitions into q. The blobs bound to the fanin nodes occur in the code generated for other states (possibly including q itself). For each transition $\delta(q', b) \rightarrow (A, q)$, a blob in the code generated for $q'$ is bound to a single fanin blob for q. The fanin blob generated above, before the generated code for state q, is labeled $q'bAq$.
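This bookkeeping is simple to compute from $\delta$. A Python sketch (helper names are ours; delta maps (state, symbol) to (action, state) as in Section A.1):

```python
from collections import Counter

def fin_counts(delta):
    """i transitions into q require i - 1 FIN blobs before q's code."""
    incoming = Counter(q_dst for (_q, _b), (_a, q_dst) in delta.items())
    return {q: max(i - 1, 0) for q, i in incoming.items()}

def fanin_labels(delta):
    """Each transition delta(q', b) -> (A, q) yields the label q'bAq."""
    labels = {}
    for (q_src, b), (a, q_dst) in delta.items():
        labels.setdefault(q_dst, []).append(f"{q_src}{b}{a}{q_dst}")
    return labels
```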
Part I
Mastering Blender 3D
◆ Chapter 1: Controlling Your Environment
◆ Chapter 2: Sculpting and Retopo Workflow
◆ Chapter 3: Creating Realistic Images with UV Textures and Node-Based Materials
◆ Chapter 4: Video Compositing with Nodes
◆ Chapter 5: Working with the Video Sequence Editor
Chapter 1
Controlling Your Environment
Blender incorporates a dizzying amount of functionality in a single application, and learning to use all the tools as efficiently as possible is a daunting proposition. Even after the initial shock that every beginner feels upon seeing the buttons window, experienced users often still sense that there is a great deal of potential that they have not fully tapped into. Indeed, many Blender users use only a small fraction of its capabilities for controlling their work environments. These capabilities include options available in the User Preferences window and a variety of lesser-known techniques and workflow shortcuts. Furthermore, by gaining insight into the design principles behind the Blender interface, you can prepare for the ways that upcoming changes in the code base will help to enhance the power, flexibility, and accessibility of the interface in the future.
In this chapter, you will learn to
♦ Set the options available to you in the User Preferences window
♦ Use lesser-known methods for selecting, grouping, and organizing 3D elements to speed up your workflow
♦ Prepare for changes in the evolving Blender interface by understanding the principles behind its unique design
Getting Your Way with Blender
As I wrote in the introduction to this book, this is a book for people who want to push the envelope of their Blender abilities—people who know how to use Blender but want to know more. Likewise, this is a chapter for people who know Blender’s interface and workflow, but want to know it better, to understand it more deeply, and to learn to use it faster and more efficiently—in short, to master it.
This chapter is intended to help you get beyond simply knowing how things are done in Blender and to truly explore the way you do things in Blender. In this chapter, you’ll learn about the preferences you can set to take control of your own working environment. You’ll learn about workflow tricks and techniques to give you more options for how to get from A to B in your Blender work. This chapter is intended to give you the knowledge and the confidence to start telling Blender how you want things done.
User Preferences
When you think about options and customization for any software, the first thing that usually comes to mind is the set of user preferences available. Like most applications, Blender has a variety of user preferences that you can adjust. The User Preferences window is the “hidden” third
window in the default screen configuration shown in Figure 1.1. The bar across the top of the default screen may look similar to the menu bar that lines the top of many other applications, but in fact it is the header of the User Preferences window, which you can bring into view by left-clicking on the window border and dragging downward, as shown in Figure 1.2. Seven buttons are located along the bottom of the User Preferences area. Each of these buttons displays a different subcontext of User Preferences.
**Figure 1.1**
The default screen configuration
**Figure 1.2**
Dragging the User Preferences window into view
**VIEW & CONTROLS**
The first subcontext of the User Preferences buttons area is the View & Controls subcontext, shown in Figure 1.3.
**Figure 1.3**
The View & Controls user preferences
The Display options include six buttons that control how information is displayed throughout the interface or in the 3D viewport. Those buttons are as follows:
- **Tool Tips** enables and disables the display of tooltips when the mouse is over interface elements.
- **Object Info** displays the name of the active object in the lower-left corner of the 3D viewport.
- **Global Scene** causes the active scene to hold constant over various screens. If this option is enabled and the scene is changed in any one screen, all the screens will change scenes. If this option is disabled, a screen will continue to display the scene it last displayed, even if the scene is changed in another screen.
- **Large Cursors** enables alternate mouse cursors if they are installed in your system.
- **View Name** displays the name of the view (Front, Back, Top, Bottom, Right, Left, Orthogonal, or Perspective) in the upper-left corner of the 3D viewport.
- **Playback FPS** displays the number of frames per second in the upper-left corner of the 3D viewport when the animation is playing.
The next column of buttons and fields includes controls for Blender’s menus, toolboxes, and panels. The options you have here are as follows:
- **Open On Mouse Over** enables menus to open automatically when the mouse is held over them, without clicking. The numerical values for this option determine how long the mouse must be held over the main menu or submenus before the menus open.
- **Toolbox Click-Hold Delay** values determine how quickly the toolbox opens when the right or left mouse button is clicked and held. For immediate toolbox access, the spacebar is used.
- **Pin Floating Panels** causes floating panels such as the Transformations panel or other tool panels to be pinned to the spot in the viewport where they opened last. If this option is not activated, panels will appear at the spot where the mouse is.
- **Plain Menus** causes the ordering of the menus to remain fixed, rather than reversing depending on whether the menu opens upwards or downwards.
The next column of buttons controls snap-to-grid and other 3D navigational controls. The buttons here are as follows:
- **Grab/Move** causes snapping to the grid when objects are moved.
- **Rotate** causes snapping to the grid when objects are rotated.
- **Scale** causes snapping to the grid when objects are scaled.
- **Auto Depth** causes the rotation and zoom of the 3D space to pivot around the point directly under the mouse. This option automatically calculates the depth of the nearest object under the mouse as the pivot point.
- **Global Pivot** causes the selected pivot to be fixed over all 3D viewport windows. If this option is not selected, each 3D viewport can use a different pivot.
The next column of buttons controls the way the 3D space itself can be navigated and manipulated. The buttons here are as follows:
- **Continue** causes the view zoom to continue forward or backward as long as the left mouse button is held down and the mouse is moved above or below the center of the viewport. The distance of the mouse from the horizontal center of the viewport determines the speed with which the zoom moves forward or backward.
- **Dolly** causes the zoom to move forward when the mouse movement is downward and to move backward when the mouse movement is upward, by default.
- **Scale** causes the zoom to move forward when the mouse is pulled away from the center point of the viewport and to move backward when the mouse is pushed toward the center point of the viewport.
- **Trackball** causes the entire view to rotate freely in all directions, analogously to the motion of a trackball.
- **Turntable** causes the entire view to rotate strictly around the three spatial axes, resulting in a more constrained rotation than the Trackball option.
- **Auto Perspective** causes the view to enter Perspective view whenever it is rotated out of Front, Side, or Top views, and to enter Orthogonal view when it enters those views by means of hot keys on the number pad.
- **Around Selection** causes the view to rotate around the median point between selected elements.
The next column of buttons controls the way you can use your mouse. There are also buttons to control the display of the mini axis in the 3D viewport. These buttons are as follows:
- **Left Mouse** causes the left mouse button (LMB) to be used for selecting.
- **Right Mouse** causes the right mouse button (RMB) to be used for selecting.
- **Emulate 3 Button Mouse** enables Alt+RMB to emulate the behavior of the middle mouse button (MMB).
- **Paste On MMB** causes the middle mouse button to paste from the clipboard in the text editor.
- **Mini Axis** controls the display of the miniature axis in the lower-left corner of the 3D viewport.
The next column includes buttons and fields that control the behavior of the middle mouse button and view changes made with the number pad. These buttons include the following:
- **Rotate View** causes the middle mouse button to rotate the 3D view. With this option selected, Shift+MMB pans the view.
- **Pan View** causes the middle mouse button to pan the 3D view. With this option selected, Shift+MMB rotates the view.
- **Invert Zoom** causes the view to zoom forward when the mouse is moved upward and to pull away when the mouse is moved downward across the 3D view (as opposed to the default behavior, which is the reverse of this).
- **Smooth View** sets a time interval in milliseconds for an animated transition between number-pad views.
- **Rotation Angle** sets the degree of rotation used by the 2, 4, 6, and 8 keys on the number pad to rotate the view incrementally.
Finally, the last column includes settings for the 3D Transform Widget and object center displays, and settings for six-degrees-of-freedom (6DoF) devices such as the SpaceNavigator. These values include the following:
- **Size**, **Handle**, and **Hotspot** values control the overall size, the handle size, and the size of the clickable area (hot spot) of the 3D manipulator.
- **Object Center Size** controls the display size of object centers.
- **ndPan** and **ndRot** values control the speed with which the navigation responds to input from a 6DoF input device.
---
**Recommendations for View & Controls Settings**
Of course, everybody has their own preferences, which is why options like the ones described in this section exist. Nevertheless, a few nondefault options are particularly worth experimenting with. The Around Selection option for view rotation makes navigating around selected vertices for modeling much easier, particularly when you are working on vertices that are not positioned in the middle of the screen.
The Smooth View value is a great way to visualize the change from one view to another. For example, if you are using Blender to give instruction to students or to create video tutorials, setting this option at 500 (half a second) makes it much easier for observers to maintain their bearings as you navigate the space.
For those who use the 3D Transform Widget, increasing the size of the hot spot can make it much easier to engage the widget.
People accustomed to other 3D packages often feel more comfortable using Turntable view rotation as opposed to Trackball. However, Trackball rotation offers greater flexibility, so it’s worth getting used to. Likewise, the temptation to switch the selection button to the left mouse button (LMB) should be resisted, because it will lead to a variety of undesirable side effects. For one thing, the capability to use Alt+LMB as an alternate to the middle mouse button (MMB) is no longer available to you if you choose this option, making it out of the question for people with two-button mice.
**Edit Methods**
The Edit Methods user-preferences context is shown in Figure 1.4. The options in this window are as follows:
- **Material Linked To** controls whether materials are linked to an object itself or the object’s mesh datablock by default.
- **Add New Objects** options enable you to choose whether to switch to Edit mode automatically upon object creation, and whether newly created objects should be aligned to the view or should be placed at the 3D space origin with default orientation.
- **Transform: Drag Immediately** enables you to select and move elements with one mouse button. If you right-click to select an object and drag immediately, this option will cause the object to follow the mouse until you release the right mouse button. With this option disabled, you must release the mouse button and click again to verify the transformation.
- **Undo** options enable you to set the number of levels of Undo, the amount of memory devoted to Undo, and whether Global Undo is used. Global Undo requires more memory than regular Undo; however, regular Undo is limited in that you cannot undo edits made in Edit mode incrementally after leaving Edit mode and reentering Edit mode again. Global Undo enables you to do this.
- **Auto Keyframe** options enable you to automatically set keyframes for selected sets of Ipo curves. With this option, keyframes are set in a frame anytime an Ipo’s value is changed, making keyframing with the I key unnecessary.
- **Grease Pencil** options enable you to determine specifically how mouse movements are used to draw lines with the Grease Pencil tools. The smaller the Euclidean and Manhattan distances, the less segmented the line will appear.
- **Duplicate With Object** options enable you to select which datablocks will be duplicated when their owner objects are duplicated with Shift+D. Duplication involves a new, independent instantiation of the datablock being created. Datablocks that are not duplicated are shared between the two duplicated objects.
**Recommendations for Edit Methods**
Edit Methods options are a little less “personal” than the View & Controls options. The best options in this case are likely to depend on exactly the kind of work you do. If you typically find yourself going straight into modeling when you add a new object, you will save a step by setting the default to Switch To Edit Mode upon adding a new object. If you do a lot of animation and you are comfortable and confident working with Ipos, enabling Auto-Keying may speed up your workflow. For beginning animators, I think it’s better to set your keyframes deliberately by hand until you are sure you have the hang of it. For Auto-Keying, the Needed option is useful to keep unnecessary keyframes from being set. For the Duplicate With Object settings, if you find that you rarely want a duplicated object to share an Ipo curve with the original object, you may want to select Ipo in addition to the currently set defaults.
**Language & Font**
The Language & Font buttons context is shown in Figure 1.5. It is no secret that internationalization is an area of Blender that has been unfortunately neglected. One of the reasons for this is the difficulty of creating and incorporating language translation files for the software, which, like many things in Blender, must be done at a low level and compiled directly into the executable.
One thing you can do here is to adjust the size of the font that shows up on your buttons and menus. To do this, click International Fonts and select the size you want from the Font Size menu shown in Figure 1.6.
The Use Textured Fonts option may result in problems displaying the button labels with some hardware drivers. If you have problems seeing the button labels on your computer, deselect the Use Textured Fonts option, as shown in Figure 1.7.
You can select international font systems if you have the necessary fonts installed. In Figure 1.8, you can see how Blender looks with Japanese selected as the language and a Japanese font selected. Nevertheless, this is of limited usefulness for several reasons. First, almost all documentation and learning material is written with the assumption that Blender is in English, and second, the translations are too incomplete to warrant any other assumption, as you can see in Figure 1.9.
**Figure 1.8**
Blender in Japanese
**Figure 1.9**
The limits of internationalization
---
**Language and Font Recommendations**
It would be very welcome if internationalization were made simpler, and perhaps this will become a possibility with the upcoming recode of the event system. For the time being, however, Blender’s internationalization is superficial, incomplete, and largely outdated. The only real choice is to use Blender in English.
---
**Themes**
The Themes context, shown in Figure 1.10, enables you to create and select themes with various options for the coloring and display of interface elements. You can select the theme you want to use from the drop-down menu. In order to add a new theme to the list, click the Add button. In addition to the default theme itself, another theme is included in the default distribution of Blender, the Rounded theme, shown in Figure 1.11. The theme used in this book is a variation based on the Rounded theme.
There are too many options to set in the Themes area to describe each one individually here, but they are mostly self-explanatory. You can change the color of almost every element in Blender, and in some cases such as drop-down menus and pop-up panels, you can change the alpha value as well.
If you have a properly formatted Blender icons file, you can also change the Blender icons, but it requires a small amount of preparation. To use alternate icon sets, you must create a new directory called icons in the .blender directory of your Blender installation. In Mac OS X and Linux, the location is slightly different. For these systems, you should create a .blender directory in your home directory (~/) and put the icons directory there. Then place the alternate icons files in the icons directory. These icons will appear in the drop-down menu that’s displayed when you choose Icon File in the UI And Buttons user preferences list, as shown in Figure 1.12.
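If you prefer to script this setup, the following sketch performs the same steps on Mac OS X or Linux, using the per-user ~/.blender location described above. The icon filename is a placeholder for whatever icon set you downloaded; on Windows you would point the target at the .blender directory inside your installation instead.

```python
import os
import shutil

# Hypothetical filename; substitute the icon set you actually downloaded.
icon_source = "jendrzych_icons.png"

# Per-user location on Mac OS X and Linux, as described above.
icons_dir = os.path.join(os.path.expanduser("~"), ".blender", "icons")

if not os.path.isdir(icons_dir):
    os.makedirs(icons_dir)               # create ~/.blender/icons if it is missing

shutil.copy(icon_source, icons_dir)      # drop the icon file into place
print("Installed icon file to " + icons_dir)
```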
The icon file used throughout this book is shown in Figure 1.13 and repeated in color in this book’s color insert. It was created by BlenderArtists.org user jendrzych, and the icon set itself can be found on that website at http://blenderartists.org/forum/showthread.php?t=84971. The file is also included on the CD that accompanies the book. Although this is not the official default icon set for version 2.48, it is a nicer-looking and widely used alternative. Furthermore, it has already been adopted as the official default icon set for Blender version 2.5, so getting accustomed to it is a small and painless way to prepare for the changes of that version.
In Figure 1.14 (also repeated in the book’s color insert), you can see the default icons and the alternate icons as they appear in all the headers of the various window types in Blender. This should give you a good reference for which icons correspond to each other, in case you are using a different icon set from the one used in this book. Throughout the book, in cases where there might be some confusion, both default and alternate icons are shown in the appropriate contexts.
Numerous Blender themes are available online for download. A quick Google search on Blender themes will give you the links for several good theme repositories. The themes may be downloadable in the form of a .blend file or in the form of a Python script. In the latter case, simply open the script in a Blender text editor window and execute it with Alt+P.
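A theme script is nothing mysterious: it is plain Python that rebuilds a theme through the Blender.Window.Theme module of the 2.4x Python API. The heavily trimmed sketch below shows the general shape of such a script. The color values are arbitrary, and the attribute names should be treated as approximate rather than as an authoritative API reference; a theme exported from your own installation is the definitive template.

```python
# A trimmed-down sketch in the style of an exported 2.48 theme script.
# Attribute names follow the 2.4x Python API and may differ slightly.
import Blender
from Blender.Window import Theme

theme = Theme.New('MyDarkTheme')        # start from a copy of the default theme

ui = theme.get('ui')                    # general user-interface colors
ui.outline = [60, 60, 60, 255]          # RGBA values in the 0-255 range
ui.neutral = [150, 150, 150, 255]

v3d = theme.get('3d view')              # colors and sizes for the 3D viewport
v3d.back = [70, 70, 70, 255]
v3d.vertex = [255, 112, 255, 255]
v3d.vertex_select = [255, 255, 112, 255]
v3d.vertex_size = 4                     # larger vertices, handy for tutorials
```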
**Theme Recommendations**
Themes are a matter of taste; however, there’s a reason why the two themes included in the default installation are largely gray and muted. Bright, lively colored themes can distract attention from what you are working on and can lead to eye strain. You should have enough contrast between elements to see them clearly, but large areas of white or very bright colors can quickly tire your eyes. Other theme options worth noting are those in the 3D View menu list. If you are planning to use Blender for instructing others or for making tutorials, you can change the size at which vertices and face dots are displayed.
**Auto Save**
The Auto Save options context is shown in Figure 1.15. It enables you to set your preferences for how the autosave and backup features work. The Save Versions number enables you to select how many previously saved versions you want to keep backed up. In the past, you may have noticed the filename.blend1 files in the directory alongside filename.blend files. These are the default single-version backup files, and they represent the contents of the previously saved session. If you select a value greater than 1 (and apply it with Ctrl+U), the correspondingly numbered backup versions will appear in your directory.

The Auto Save Temp Files option enables numbered, automatically saved files to be saved to your temporary directory (the default is /tmp, so ensure that this directory exists on your system or else change the directory to wherever you want the files saved). The Minutes value is how often these files are saved. The Open Recent button will open the most recently saved file.
The Recent Files field enables you to choose how many previously saved files are listed in the Open Recent menu entry in the File menu.
**System & OpenGL**
The System & OpenGL user preferences context, shown in Figure 1.16, enables you to control a variety of display-related and miscellaneous values.
There are three possible OpenGL lights that can be used to illuminate objects in the Solid Draw mode. By default, two of these lights are activated. The first is a key light from the left, and the second is a dimmer fill light from the right. A third light is also available, which by default is set to provide highlights from the lower right, as shown in Figure 1.17. You can enable or disable each of these lights, adjust their colors, or change their angles by clicking and dragging directly on the preview spheres for the lights.
**Figure 1.16**
The System & OpenGL user preferences
**Figure 1.17**
3D view with the default two OpenGL lights activated and the same view with the third solid OpenGL light activated
Returning to the System & OpenGL user preferences (Figure 1.16), the Enabled By Default button under Auto Run Python Scripts, when enabled, will allow Python scripts to be run automatically from within a .blend file. This is convenient in some cases, but it is not recommended if you’re not sure of the source of your .blend files.
The Enable All Codecs button under Win Codecs appears on Windows machines. This option will enable the codecs you have installed on your system to be used for rendering in Blender. As the tooltip points out, this is not guaranteed to work in all cases, because support for some codecs remains experimental.
The Color Band button under Color Range For Weight Paint enables you to override the default blue-to-red coloring range for weight painting and to define your own range by using a color-band interface.
The Audio Mixing Buffer buttons enable you to select the amount of memory to devote to audio mixing.
The Emulate Numpad button enables you to use the number keys on the main keypad instead of the number keys on the number pad. This is particularly useful if you are working on a laptop that doesn’t have a separate number pad.
The System & OpenGL buttons and fields in the rightmost two columns control a variety of specific values that you can adjust to improve your performance if you are having memory problems or you are experiencing slowdowns in your 3D viewport. Disabling Mipmaps or raising the Clip Alpha value can speed up the OpenGL drawing in your viewport at the expense of some image quality.
**File Paths**
The last user preferences context is largely self-explanatory. The File Paths preferences, shown in Figure 1.18, enable you to define which directory the Blender file browser opens first when you import or save various types of assets. The default is //, which is Blender notation for the present working directory, that is, the directory you opened Blender from. For example, if you open Blender from the Windows Start menu, this will be your Blender installation directory. If you open Blender from a file, this will be the directory that the file is in. The Relative Paths Default button causes the file paths to be read as relative to the present working directory.
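To make the // notation concrete, here is a minimal sketch of how such a prefix resolves against a base directory. This is only an illustration of the convention, not Blender's own path-handling code, and the function name and example paths are invented.

```python
import os

def expand_blender_path(path, base_dir):
    """Resolve a '//'-prefixed path against the present working directory
    (the directory of the open .blend file, or the directory Blender was
    started from). Illustration only, not Blender's internal code."""
    if path.startswith("//"):
        return os.path.normpath(os.path.join(base_dir, path[2:]))
    return path

# '//textures/wood.png' next to /home/user/project/scene.blend resolves to:
print(expand_blender_path("//textures/wood.png", "/home/user/project"))
# -> /home/user/project/textures/wood.png
```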
**Other Options**
Many other options are available throughout the Blender interface, and it is worthwhile to make a note of the ones that you often find yourself adjusting, and to use Ctrl+U to set them as you prefer them once and for all. The Occlude Background Geometry option in the 3D view header is a common option to activate. This makes unseen vertices and faces unselectable when not in Wireframe Draw mode, creating a sharper distinction between selection behavior in Wireframe and Solid Draw modes. If you usually rotate, grab, and scale using the R, G, and S keys, you may want to disable the manipulators, also in the 3D view header. Render settings such as the output format and compression quality are also common places where you might want to customize your defaults.
**Saving the Changes**
After you have set all the options the way you want them, don’t forget to set the current setup as your default setup by using Ctrl+U. Remember, Ctrl+U saves the exact state of Blender at the moment you press it, so be sure you’ve put everything in place exactly the way you want to see it when you open Blender. Objects, materials, animations, and any other data in the .blend file will also be saved.
The resulting settings are stored in the .B.blend file in your .blender directory. To use these settings with another Blender installation, you can simply copy that file into the .blender directory of the Blender installation you want to use.
To save the current theme in the form of a Python script, go to the File menu and choose Export ➤ Save Current Theme. The resulting script can then be executed in another instance of Blender to import the theme.
---
**Improving Your Workflow**
Setting and saving your user preferences is the first step in optimizing your workflow. This section presents a variety of miscellaneous tips and tricks that you may find helpful for increasing your efficiency and improving your experience working with Blender.
**View Hot Keys and Properties**
The 3D viewport has a number of hot keys and properties associated with it that enable you to view your work. You are no doubt familiar with the most commonly used number pad shortcuts for Front view (number pad 1), Side view (number pad 3), and Top view (number pad 7). Pressing these keys with Ctrl will show you the reverse view; Ctrl+number pad 1 yields the rear view of the object, and so on. Number pad 5 toggles Orthogonal and Perspective view; and 2, 4, 6, and 8 rotate the 3D space by the amount determined in the Rotation Angle field in the View & Controls user preferences window.
The decimal (.) key on the number pad centers the selected object in the 3D viewport. Related keys on the main keypad include the C key, which shifts the view so that the 3D cursor is centered; the Home key, which displays and centers the median point of all the objects in the scene; and the Shift+C key combination, which does the same thing as the Home key with the addition of placing the 3D cursor at the zero point of the 3D space.
The slash key (/) on the number pad changes the display to show only the selected object. Pressing the same key again toggles back into full scene display mode. On the main keypad, the Alt+B key combination enables you to select even smaller portions of the 3D view for display. Pressing Alt+B and dragging the box to select an area results in clipping the display of everything outside of that box selection, as shown in Figure 1.19. The resulting displayed selection can be viewed from all angles.
The View Properties panel, shown in Figure 1.20, can be accessed via the View menu in the header of the 3D viewport. Here you can control the display and qualities of the background grid; the X, Y, and Z axes; and the relationship lines (dotted lines between parents and their child objects). You can toggle the drawing of textures in Solid Draw mode with the Solid Tex button, and toggle between displaying all object centers or only the selected object’s center. You can toggle the drawing of an outline around the selected object. You can change the angle of the view lens, adjust the point past which the view is clipped, and place the 3D cursor by entering coordinates by hand.
Figure 1.19
Clipping the view with Alt+B
View Locking enables you to enter an object name (and in the case of an armature, a bone name) and force the view to follow the movement of that object, holding the object in the center of the view. This can be useful when you’re animating detail on moving objects, such as when you’re animating the movement of fingers on a moving hand.
**Grouping and Selection**
Objects can be grouped by selecting the object and choosing a group from the Add To Group drop-down menu in the Object And Links panel of the Object buttons area. Objects that share a group can be appended into other .blend files in one step by appending the group. When lamps are grouped, it is possible to restrict a material’s lighting to lamps from the group by entering the group name in the GR field in the material’s Shaders tab.
Groups are one of many criteria by which you can select objects. You can select variously grouped objects by selecting a single object and pressing Shift+G to open the menu shown in Figure 1.21. You can select other objects based on their relationship with the first selected object.
You can also select objects based on linked data, by pressing Shift+L to open the menu shown in Figure 1.22 and selecting the linked datablock upon which to base the selection.
Using the Select menu in the 3D viewport in Object mode, you can directly select objects by type or by layer. It is also possible to select a random collection of objects and to inverse the current selection.
BOX, CIRCLE, AND LASSO SELECTION
Pressing the B key once initiates the Box selection state, where you can drag your mouse to select whatever falls within the rectangular area you define. Holding down the Alt key while doing this will deselect whatever falls within that area. Pressing the B key twice will enable the Circle selection state, where you can drag your mouse to select all that falls within a circular area following the mouse. Likewise, holding down the Alt key while doing this will deselect the elements.
Holding down the Ctrl key while dragging the left mouse button introduces the Lasso selection state, which enables you to define the area to be selected by moving your mouse around the area directly. This is a very fast selection method.
EDIT MODE SELECTION
Numerous little-known selection methods are available for meshes in Edit mode. The first option you have is whether to select by vertex, edge, or face. This is chosen by using the viewport header buttons shown in Figure 1.23 (both default and alternate icon sets). Vertex, Edge, and Face selection modes correspond with the buttons from left to right (the rightmost button occludes hidden geometry). You can choose more than one mode simultaneously by holding down the Shift key when you click these buttons.
SELECTING EDGES, LOOPS, AND RINGS
Many selection options are available independently of the specific selection mode you are in. Selection options that deal specifically with edges can be found by pressing Ctrl+E in Edit mode. The Region To Loop selection option in that menu enables you to choose the edge outline (strictly speaking, loop here is a misnomer) of any selected region of faces, as shown in Figure 1.24 (this image is repeated for visual clarity in the color insert of this book). The reverse, selecting a region of faces based on a selected closed edge border around the region, is possible with the Loop To Region menu entry.
Other very useful selection options include loop and ring selection using Alt+RMB and Ctrl+Alt+RMB. By holding down the Alt key and right-clicking on a single edge in Edit mode, you can select the entire edge loop that the edge belongs to. By using Ctrl+Alt+RMB, you select the perpendicular ring of faces that includes the edge you clicked on, as shown in Figure 1.25. In Edge selection mode, the behavior is similar, except that the edge ring selected with Ctrl+Alt+RMB does not include faces, as shown in Figure 1.26. In Face selection mode, there is no difference between the selections. Both hot keys select the same ring of faces, as shown in Figure 1.27. These figures are also included in the color insert of this book for visual clarity.
**Figure 1.24**
Choosing a loop from an area
**Figure 1.25**
Edge loop and ring selection in Vertex selection mode
**Figure 1.26**
Edge loop and ring selection in Edge selection mode
Another useful selection tool, Select Vertex Path, can be found in the Specials menu by pressing the W key over the 3D viewport. With exactly two vertices selected, this option will select the shortest edge path between the two vertices.
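Conceptually, this tool solves a shortest-path problem on the mesh's edge graph. The sketch below is not Blender's implementation; it is a minimal illustration of the idea using a breadth-first search over an edge list, counting every edge as one step.

```python
from collections import deque

def shortest_edge_path(edges, start, goal):
    """Breadth-first search over mesh edges, treating every edge as one step.
    Returns the vertex indices on a shortest path from start to goal,
    or None if the two vertices are not connected."""
    adjacency = {}
    for a, b in edges:
        adjacency.setdefault(a, []).append(b)
        adjacency.setdefault(b, []).append(a)

    came_from = {start: None}
    queue = deque([start])
    while queue:
        v = queue.popleft()
        if v == goal:
            path = []
            while v is not None:
                path.append(v)
                v = came_from[v]
            return path[::-1]
        for neighbor in adjacency.get(v, []):
            if neighbor not in came_from:
                came_from[neighbor] = v
                queue.append(neighbor)
    return None

# A tiny quad grid: vertices 0-1-2 on the top row, 3-4-5 on the bottom row.
edges = [(0, 1), (1, 2), (3, 4), (4, 5), (0, 3), (1, 4), (2, 5)]
print(shortest_edge_path(edges, 0, 5))   # a three-edge path such as [0, 1, 2, 5]
```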
**Selecting Similar Elements**
The Shift+G menu enables you to select all similar elements to the currently selected element, based on a variety of possible criteria.
In Vertex selection mode, Shift+G enables you to select other vertices that share the same vertex normal direction as the currently selected vertices, vertices that are members of shared vertex groups with the currently selected vertices, or vertices that are used by the same number of faces.
In Edge selection mode, the Shift+G menu enables you to select edges that are the same length, run in the same direction, or have the same number of face users as the selected edges. You can also select edges based on whether they are part of a seam or crease, or based on their sharpness value. This is an excellent method for quickly selecting all seams on an object: Simply select one seam edge, and then use this selection method to select them all.
In Face selection mode, you can select faces that share the same area, share a material, share an image, have common normal directions, or are coplanar, meaning that the faces share their normal directions and are located on a single imaginary plane in the 3D space. Finally, the Perimeter option enables you to select regions of faces that have the same size perimeter or outline as the originally selected region.
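The coplanar criterion combines two checks: the face normals must point the same way, and the faces must lie on one shared plane. The following sketch, which assumes faces given as lists of 3D points and uses a small tolerance, illustrates that test; it is not Blender's own selection code.

```python
def normal(face):
    """Unit normal of a planar face given as a list of 3D points
    (the first three vertices are used)."""
    (ax, ay, az), (bx, by, bz), (cx, cy, cz) = face[0], face[1], face[2]
    ux, uy, uz = bx - ax, by - ay, bz - az
    vx, vy, vz = cx - ax, cy - ay, cz - az
    nx, ny, nz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
    length = (nx * nx + ny * ny + nz * nz) ** 0.5
    return (nx / length, ny / length, nz / length)

def coplanar(face_a, face_b, tolerance=1e-5):
    """Two faces count as coplanar when their normals share a direction and
    a vertex of one face lies on the plane of the other."""
    na, nb = normal(face_a), normal(face_b)
    dot = na[0] * nb[0] + na[1] * nb[1] + na[2] * nb[2]
    if abs(dot - 1.0) > tolerance:
        return False                     # normals point in different directions
    # Signed distance from a vertex of face_b to the plane of face_a.
    px, py, pz = face_a[0]
    qx, qy, qz = face_b[0]
    distance = na[0] * (qx - px) + na[1] * (qy - py) + na[2] * (qz - pz)
    return abs(distance) <= tolerance

# Two unit squares in the z = 0 plane are coplanar; one lifted to z = 1 is not.
square_a = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
square_b = [(2, 0, 0), (3, 0, 0), (3, 1, 0), (2, 1, 0)]
square_c = [(2, 0, 1), (3, 0, 1), (3, 1, 1), (2, 1, 1)]
print(coplanar(square_a, square_b))   # True
print(coplanar(square_a, square_c))   # False
```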
**Object Manipulation**
The most commonly used and taught methods of translating, rotating, and scaling 3D elements are the hot keys G, R, and S. These are the easiest to control, but using other methods can increase the speed and efficiency of your workflow in some cases. Most people are aware of the existence of mouse gestures and the 3D manipulator, because both are enabled by default (mouse gestures in particular can be a real nuisance to beginners who activate them inadvertently), but fewer people understand the correct way to use them.
Mouse gestures are a way of triggering the translation, scale, or rotation state, which are analogous to pressing the G, S, or R key, respectively. This is done by holding the left mouse button and dragging the mouse in one of the three patterns shown in Figure 1.28.
Almost as important as knowing how to use mouse gestures correctly is knowing when they are being activated by accident and what to do when that happens (click the right mouse button to cancel out of the transform). Mouse gestures can be particularly useful with pen tablets. The easiest gesture to use, by far, is the translate gesture. If you spend much time using a pen tablet, it is likely that you will soon quit using the G key altogether, even without thinking about it.
The rotate and scale gestures are trickier. To be honest, although they are referred to as “mouse” gestures, I personally find it nearly impossible to consistently produce distinct rotation and scale gestures when working with a mouse. The important quality that distinguishes the rotate gesture is the smoothness of the curve. If your curve is choppy or angular, the gesture is likely to be interpreted as the scale gesture. It is much easier to do this correctly with a pen tablet, although it still requires a bit of practice.
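One way to picture the distinction is to look at how sharply the stroke changes direction from one segment to the next. The sketch below is purely illustrative (it is not Blender's gesture recognizer): it reports the largest turn angle in a stroke, which stays small for a smooth, rotation-like arc and spikes for a choppy, scale-like stroke.

```python
import math

def max_turn_angle(stroke):
    """Largest change of direction (in degrees) between consecutive segments
    of a stroke given as a list of (x, y) points."""
    largest = 0.0
    for (ax, ay), (bx, by), (cx, cy) in zip(stroke, stroke[1:], stroke[2:]):
        first = math.atan2(by - ay, bx - ax)
        second = math.atan2(cy - by, cx - bx)
        turn = abs(math.degrees(second - first))
        turn = min(turn, 360.0 - turn)          # wrap into the 0-180 range
        largest = max(largest, turn)
    return largest

# A gentle arc versus a stroke with one hard corner.
arc = [(0, 0), (2, 1), (4, 3), (5, 6), (5, 9)]
corner = [(0, 0), (3, 0), (6, 0), (6, 3), (6, 6)]
print(max_turn_angle(arc))      # small: reads as a smooth, rotation-like curve
print(max_turn_angle(corner))   # 90 degrees: reads as angular, scale-like
```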
It may come as a bit of a surprise, but the 3D manipulator widgets, shown in Figure 1.29 (and repeated in the color insert of this book), also require a little bit of practice to get a feel for using them properly. These can be enabled individually or all at once using the manipulator buttons on the 3D viewport header (to select more than one, hold down the Shift key while choosing, just as in other contexts).
The easiest way to use the manipulator widgets is to left-click on the colored manipulator hot spots (the arrows for translation, curves for rotation, and cube-shaped tips for scale) and drag. The transformations are shown in Figure 1.30 and repeated in the color insert of this book. The transformation is finalized when you release the left mouse button. To abort the transformation, either press the Esc key or click the right mouse button before releasing the left mouse button.
Another way to use the manipulators is to left-click once quickly on the appropriate hot spot. It’s important that you do not begin to drag the mouse until after you have clicked. After you click, you will enter the appropriate transformation state, and the object’s behavior will be identical to what it would have been if you had pressed G, R, or S. Right-clicking will cancel out of the transformation, and left-clicking will finalize the transformation.
The colored hot spots are not the only way to transform the object. Each manipulator has a thin, orange circle associated with it. Clicking on this will enter the corresponding unconstrained transform state: For translation and rotation, the transformation will be carried out with respect to the plane of the viewport; and for scaling, the object will be scaled along all axes.
**Figure 1.28**
Mouse gestures for (top to bottom) translation, scale, and rotation
**Figure 1.29**
Translate, rotate, and scale manipulator widgets
**Figure 1.30**
Translating, rotating, and scaling with manipulator widgets
Finally, you can scale or translate along two axes by holding down the Shift key and clicking on the hot spot of the third axis. This is analogous to the way axes are constrained by hot key. Thus, to scale along the X and Y axes as shown in Figure 1.31, hold down Shift and click on the Z axis manipulator hot spot.
**Figure 1.31**
Scaling along the X and Y axes
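In terms of the underlying transform, constraining the scale in this way simply applies the scale factor to the X and Y coordinates while leaving Z alone, as this tiny illustrative snippet shows.

```python
def scale_xy(point, factor):
    """Scale a 3D point along the X and Y axes only, leaving Z unchanged."""
    x, y, z = point
    return (x * factor, y * factor, z)

print(scale_xy((1.0, 2.0, 3.0), 2.0))   # (2.0, 4.0, 3.0): Z is untouched
```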
---
**Keeping Up with the Blender Interface**
As an open source application, Blender evolves at a more rapid pace and in a more organic way than many proprietary applications that you might be accustomed to. Releases are not timed to maximize profits or to coincide with other merchandising. Rather, they come when the developers decide that the recent developments are significant enough and stable enough to warrant the extra effort required for an official release. When this happens, resources are diverted from new functionality, and the user community and development team focus on intensive beta testing and bug fixing in preparation for the release.
By this point, many users are already familiar with the new functionality, because the code has been open and freely available all along. Several websites, such as www.graphicall.org, host regularly updated builds of Blender that are as easy to install as the official releases (although not necessarily as stable). Multiple versions of Blender can be installed side by side, so there is nothing stopping you from experimenting with Blender’s bleeding-edge functionality. The BlenderArtists.org forum is always buzzing with discussions of all the latest features, and you’re sure to find somebody to help you with even the most exotic new features.
Getting familiar with these resources is part and parcel of mastering Blender. Most readers of this book have probably already dipped into experimental, developmental Blender builds when a particularly attractive feature was introduced.
The Coming Changes
Anyone who has participated in recent online discussions about Blender has probably heard about the deep changes afoot for the upcoming Blender version 2.5, in particular as they relate to the interface. Indeed, this release has taken on an almost mythological status in some circles, and opinions (some better informed than others) have been flying thick and fast. There is excited hope that all the things that many people find annoying or counterintuitive about the Blender interface will be fixed, as well as apprehension that many idiosyncrasies that Blender users have come to love may be discarded.
Although this book is written to correspond to Blender 2.48, it is nonetheless worthwhile, in keeping with the thinking of Blender as a constantly evolving piece of software, to get a clearer idea of what direction the evolution of its interface is likely to take in the next few releases.
The 2.5 Event Recode
Blender began its life as an in-house animation tool for a commercial studio. It was developed in C by the same people who used it, a small group who knew the code inside and out and worked very closely together. Unfortunately, although the choice of C as an implementation language helped to ensure that Blender would be a fast and lean executable, the way that the development proceeded meant that many design decisions about even relatively superficial things came to be hard-coded at a low level and difficult or impossible to alter later in a simple way. This lack of modularity has been a common source of frustration to coders who are new to Blender. For years, it was accepted as a fact of life and worked around, but over time the problem became compounded by code written in an ad hoc way.
This is about to change. As I write this, the Blender Foundation’s resources have been entirely devoted to a long-postponed, ground-up recode of the Blender event-handling system. The event system manages the way in which keyboard, mouse, and other input/output (I/O) events are dealt with, and as such, it is a crucial point of interaction between the interface and the functionality. Until the recode, much of the event handling happened directly in the code implementing the functionality itself. In order to change a single hot key, for example, it might be necessary to do considerable digging into the code of the associated functionality. To add such an apparently straightforward and often-requested feature as customizable hot keys, then, was a much thornier problem than many people realized.
It was possible to put off the recode for so long in part because individual requests and features that the current code makes difficult were often fairly superficial. Customizable hot keys, for example, are a common request of users seeking to switch over from some other 3D application. But there are arguments on both sides to be made about the actual importance or wisdom of depending heavily on nonstandard hot key configurations. Combined with the intractability of implementing configurable hot keys on the old Blender codebase, this was enough to ensure that such requests went for years without being acted on. Now, with the event-system recode underway, Blender users can look forward to not only many new interface features and customizability options, but more important, a new ease with which future adaptations and modifications can be made.
DNA and RNA
Blender uses a unique internal format called DNA to store and reference 3D assets. The name is an analogy to the biological term, with the implication that DNA is a highly compact encoding of all the information necessary to re-create the contents of what Blender users know as a .blend file: scenes, objects, and all associated datablocks. DNA is a binary format, which makes it very fast to load and save. For example, the same data represented in XML may be several orders of magnitude slower to load and save, particularly in the case of large files with many scene elements. This is the main reason why .blend files are so flexible, and can be used to store large and complex scenes and even collections of scenes.
RNA is a current development that comprises an important behind-the-scenes component of the 2.5 changes. It is also loosely analogous to the biological meaning of the term. RNA will serve as a wrapper or low-level interface for accessing and setting values in DNA. In practice, RNA will be used to automatically generate interface elements and the Python API, making it easier to keep them up-to-date and consistent. The enhanced access that RNA enables will also have the effect of finally realizing the long-held dream of having everything in Blender capable of being animated!
The Evolution of the Interface
With the focus of development being on the task of implementing the new event system and porting the existing Blender functionality over to this new foundation, it is an ideal time for a review of the interface itself. In preparation for the coming interface paradigm shift, William Reynish delivered a presentation at the Blender Foundation’s 2008 annual conference in Amsterdam, outlining the latest thinking on the direction that Blender’s interface should take. A 25-page white paper containing a revised version of Reynish’s proposals is available from the official Blender website at http://download.blender.org/documentation/bc2008/evolution_of_blenders_ui.pdf.
Reynish’s paper is an excellent overview of the thinking behind the Blender interface—past, present, and future—and a good read for anybody who would like to better understand why the interface is the way it is and how it is likely to evolve. The paper describes Blender’s interface strengths, its weaknesses as of the official 2.48 release, and a number of design goals for the 2.5 release.
Strengths
Reynish outlines four main principles that have informed Blender’s interface. These are long-standing, deliberate design decisions that have made Blender extraordinarily fast to work with for experienced users. These principles are as follows:
The workflow should be as nonmodal as possible. Modality in software means that certain functions work in certain modes and not in others. Although Blender does make use of explicit modes for editing and object manipulation, the overall interface is comparatively nonmodal in its behavior. Users have the option of having almost all of Blender’s functionality laid out simultaneously before them, for immediate access at any time.
The window organization should be nonoverlapping. For regular users of Blender, this is one of the main strengths of the interface. With functionality as complex as Blender’s, overlapping windows could very quickly become a nightmare of digging around to find buried windows on the desktop. This never happens with Blender, because its windows are tidily organized in a nonoverlapping way. Users can quickly switch between Screen settings to access other nonoverlapping desktop configurations.
It should use fast, efficient, and consistent hot keys and interface conventions that are minimally dependent on their context. Hot keys, menu entries, and other interface elements should be as consistent as possible across various points in the workflow. In Blender, this is accomplished in part by having similar or intuitively analogous functionality from different modes (such as the select, rotate, or grab functionality in Object mode and Edit mode) grouped logically to appropriate hot keys.
The various tools should be highly integrated with each other. Blender has a wide variety of tools under its hood, ranging from mesh modeling and sculpting, to video editing and compositing, to scripting, game creation, and physical simulation. One of Blender’s great strengths is the way all of these various tools are so tightly integrated that the transition from one to the next is nearly seamless. For individuals and small groups, this is a significant timesaver over a less-integrated pipeline that requires numerous export and import steps.
Weaknesses
Although Blender has done a good job of adhering to the preceding well-founded principles, some areas of Blender’s interface as of 2.48 have been weak. The chaotic layout of the button areas is one key point that Reynish brings up, citing a variety of examples of highly arbitrary button placements, situations where the button type (radio, action, or toggle) is unclear, and cases where clutter is brought about by the need to maintain consistently square button tab shapes for ease of vertical and horizontal layout.
Another area that Reynish’s paper homes in on is the difficulty of dealing with multiple objects simultaneously in certain specific ways. The example he gives is one of adding the Wire extra draw type to a large number of objects. This can be done using Ctrl+C to copy settings from one object to the others, but not everything can be copied in this way, and as Reynish points out, this is a distracting extra step.
Finally, Reynish’s paper discusses the topic of customizability. Blender’s interface is notorious for its lack of customizable key bindings. However, although customizability is a popular request among new users, Reynish concludes that it is a comparatively low priority when measured next to the importance of a good, solid set of defaults. Reynish argues that customizability in itself is an overrated solution—it is sometimes perceived that a poor interface can be improved by the user if the interface allows sufficient customizability, but this is not in fact the case. Nevertheless, there are a number of reasons why customizability in key bindings and input options is regarded as desirable. Some users may wish to preserve muscle-memory habits acquired from other software. More important, customizable hot keys enable the user to have more freedom in accessing custom-made scripts or other nonstandard functionality.
GOALS AND SUGGESTIONS
Reynish’s paper outlines some key interface goals and some practical suggestions for attaining these goals. He argues that the interface should be nonmodal, nonlinear, logical, fast, flexible, innovative, and simple.
The practical suggestions are far-reaching. One of the most profound is Reynish’s recommendation for the total removal of the buttons area window as it is currently implemented. Instead, it would be replaced by a Properties Editor that would enable logical, organized access to all the properties of any selected object or group of objects. Settings for all Blender datablocks would be accessible in this area.
Reynish further advocates a reworking of tool workflow. Rather than the highly modal workflow of tools such as the loop cut or the addition of objects to the scene, in which settings must be decided upon before finalizing the tool action, the recommendation is made to increase the interactivity of tools, enabling settings to be adjusted after the tool has been used.
Further recommendations include enhanced context sensitivity to rid the interface of unnecessary clutter when it is not needed, improved consistency in button and interface widget graphics so that distinct interface component types such as radio buttons and action buttons have a distinct and intuitively recognizable look, improved feedback for when the user is required to wait for something, and a preference for vertical layouts for buttons and fields for reasons of visual clarity and efficient screen real-estate usage.
WHAT TO EXPECT
Reynish’s suggestions will not necessarily be implemented exactly as described in the report. Furthermore, the timeline for when they will be implemented is not set in stone. The 2.5 event recode will set the groundwork for making the evolution of the interface possible. Whether the most significant interface changes will be incorporated in that release or subsequently introduced remains to be seen.
Users can expect a more flexible workflow and more sensible and consistent organization of interface elements. There will likely be a preference for vertical panel configurations, rather than the horizontal panel configuration that has been the default for Blender’s buttons area in the past. Eventually, users can expect the buttons area to be radically reworked or phased out entirely.
Overall, the coming interface developments should go a long way to address many of the pet peeves that plague both new and experienced users of Blender, and help to make Blender an even more powerful and enjoyable tool to work with. As always, you should bring yourself up to speed with new developments for each release by studying the official release notes, which you can link to from the official downloads page at www.blender.org. You can learn more about the focus of the changes to come in 2.5 at http://wiki.blender.org/index.php/BlenderDev/Blender2.5/Focus.
The Developing World
As development on each Blender release intensifies, the #blendercoders IRC channel and the various development-related mailing lists are filled with developers communicating their ideas and intentions with each other. The 2.5 event recode and the huge task of porting existing Blender functionality over to the new base requires a high degree of organization and coordination, as does every release.
The smooth progress of Blender’s development is all the more remarkable considering what a truly global project Blender is. According to the open source software resource Ohloh.net, Blender’s regular committing developers are spread all over the globe—in Europe, North America, South America, Oceania, and Africa. If you count script contributions and recent coding that has not made it into the official trunk, the area is even wider, with recent code contributions beginning to come from Asia as well.
Some of the stories of Blender development around the world serve as inspiring reminders of the power of open source software. The work of Raúl Fernández Hernández (farsthary) on true volumetrics for Blender is an excellent example. As a student living in Cuba, Raúl has had limited access to many of the resources that people in other parts of the world take for granted. Nevertheless, he identified a glaring need in Blender for true volumetric simulations and took advantage of the open code to study for himself how to implement his ideas in Blender. Although he lacked regular access to an Internet connection and was unable to access the Subversion code repository directly, participate in chats, or take part in regular communication with developers, he nevertheless succeeded in creating an impressive foundation for true volumetrics. He reported about his work sporadically in his blog, http://farsthary.wordpress.com/, including some amazing renders and animations of convincing flame and smoke effects. Although initially carried out with very little interaction with others, Raúl’s work quickly began to get attention from the Blender user and developer community. After Hurricane Gustav devastated his town, leaving him without electricity for a week, the community rallied to assist him, and two core Blender developers, Matt Ebb and Daniel Genrich, became more actively involved in helping him recode the volumetric simulation to be more consistent with existing Blender code. The project is progressing very nicely, as you can see from Raúl’s blog, and the exciting new volumetric features will surely be a welcome addition to an upcoming official release.
The Bottom Line
Set the options available to you in the User Preferences window. A wide variety of often-overlooked options are available in the User Preferences window, including settings for View & Controls, Edit Methods, and Themes, among others.
Master It Create your own preferred default starting state and save it so that it will be active every time you start Blender.
Use lesser-known methods for selecting, grouping, and organizing 3D elements to speed up your workflow. There are numerous ways to select and group objects and 3D elements that can considerably increase your speed and efficiency when working.
Master It Use the selection methods described in this chapter to make the face selections as shown in the following graphic.
You should be able to make this selection using a single (modified) mouse click followed by a single hot key combination. There are several ways to do this.
Prepare for changes in the evolving Blender interface by understanding the principles behind its unique design. Blender is constantly evolving. It is in your interest to stay informed about developments, in particular at a time when the 2.5 release is promising big developments in usability.
Master It Inform yourself about the status of the 2.5 event recode and GUI update.
---
Rich Interfaces for Reading News on the Web
Earl J. Wagner
Jiahui Liu
Larry Birnbaum
Kenneth D. Forbus
Northwestern University, Evanston IL USA
ABSTRACT
Using content-specific models to guide information retrieval and extraction can provide richer interfaces to end-users for both understanding the context of news events and navigating related news articles. In this paper we discuss a system, Brussell, that uses semantic models to organize retrieval and extraction results, generating both storylines explaining how news event situations unfold and also biographical sketches of the situation participants. We generalize these models to introduce a new category of knowledge representation, an explanatory structure, that can scale up to include information from hundreds of documents, yet still provide model-based UI support to end-users. An informal survey of business news suggests the broad prevalence of news event situations indicating Brussell’s potential utility, while an evaluation quantifies its performance in finding kidnapping situations.
ACM Classification Keywords
H5.m. Information interfaces and presentation (e.g., HCI): Miscellaneous.
Author Keywords
Design, Human Factors, Explanatory Structures
INTRODUCTION
People read the news to learn about what is happening in the world. In addition to reading traditional newspapers, people access news articles on the Web through desktop computers, notebooks and even mobile phones. They arrive at articles through news aggregators such as portals and collaborative recommendation sites, or even just emails from friends. But what happens when they want to find out more about the things discussed in an article? In contrast to all of these developments, little has changed in how people explore the context of the news they read. As one reader observed in a recent ethnographic study of young news readers conducted for the Associated Press, “if you want background, it's up to you.” [4]
The Need for Background to News
News articles commonly discuss events in detail and relevant information about the people and organizations involved in the events. But a reader may not have heard about one of the individuals introduced in the article and want to see a biographical sketch. Alternately, she may see an article describe an organization as having the “second highest revenues in its industry” and wonder what those revenues are exactly, and for what goods and services. Or given that an event just happened, she may want to know what events happened before that led to its occurrence. In focusing on details that are new or have changed, however, articles often leave out contextual information like this. As another reader told AP, “news [today] is not the full story, but more like a preview—it's kind of annoying sometimes. I don't like to get bits and pieces of information.” Rather than being a shortcoming of the news format, however, we see this as an opportunity for software to offer a richer user experience for navigating the context of news.
This problem of exploring the context of information appears more broadly, as people browse the Web not only to search for specific facts, but also as part of “‘building a picture’ of an organization, topic or person.” [18] Even more than when just browsing the Web, however, the need for a “big picture” view is particularly acute when reading news. Another reader explained to AP that he “does not want to be fed bits. I want to know all the details at once.” However, the nature and specific kinds of big picture views that might provide information gatherers with “all the details at once”, and how software might be constructed to support their elaboration, has not received nearly as much attention as search more narrowly construed.
The Contexts of Situations and Participants in the News
Consider the case of a person reading about a rescue operation that freed a kidnapped Colombian politician. A typical news article covering this event provides details of the rescue and some information about how long she had been held. It mentions the kidnappers and some information about the rescue and status of other rescued hostages. Toward the end, it refers to the final negotiations preceding the rescue.
Although it mentions some previous events in the kidnapping, the article does not provide a high-level view of how it unfolded over time. Its discussion of the participants in the situation assumes that the reader is already familiar with them. To learn more about the events and participants, the reader must manually find relevant news articles or informative pages on the Web. He must pick out identifying terms, including entity names and event keywords. Then he must cut-and-paste them into a search engine and sort through search results to find relevant pages and assemble an account of what happened. These steps make for an inconvenient process familiar to anyone who reads news on the Web.
**Context from Structured Presentations of Information**
In looking for more information about a person or organization, he may arrive at a Wikipedia page with "trading card"-style infoboxes listing essential information. These infoboxes answer clusters of questions about entities. For example, for a person they provide information on:
- Who is this person?
- What positions has this person held?
- What groups has this person been associated with?
Although the same information may be found scattered among many Web pages, it is useful to see it gathered all in one place. This serves at least two important purposes. First, it provides a "gestalt" allowing him to easily take in all of the information to know what it means and how it is related. Second, it allows him to notice any details that could be helpful in making sense of the situation. A reader can't simply ask a search tool to "show me what's most relevant or interesting in making sense of this situation", but in availing himself of a structured presentation of related information in a conventional form, he can more easily orient himself.
Thus one possibility for better support for understanding the context of news articles is providing easier access to biographical sketches of event participants. However, the context of news articles involves not just the named entities, but also the events and, importantly, the causal relationships among the events. In reading about the kidnapping, the reader may want to know more about the events that preceded it, in other words, all of the events that make up "the kidnapping" of the individual. We say these events make up the kidnapping news situation, where by news situation we mean the limited sequence of causally related events covered in the news.
For example, the dismissal of a lawsuit, if it occurs, will follow the filing of that lawsuit, and both are part of a particular lawsuit-type news situation. The individual events constituting a situation are situation events, or just events, and distinct situations of the same type, "lawsuit", do not necessarily involve the same events, just as different lawsuits may have different outcomes. Within a situation type, the events may have ordering relations, and the occurrence of one event can prevent another; the settlement of a suit will not occur if it has already been dismissed.
A reader’s expectations for how a situation will unfold includes relationships like these and, as such, they contribute to an event’s situational context. This context gives rise to another cluster of questions, including:
- What happened in this situation?
- How did it start? How did it end?
- Who are the participants involved?
- What other similar and related situations have these participants been involved in?
- What happened in the other, related situations referenced in this article?
Just as Wikipedia’s biographical sketch infobox provides the essential information of an entity through a structured presentation, a storyline for the situation in terms of events can help in answering questions like these. This storyline view could organize the milestone events that make up the kidnapping from the original abduction to the current release. It could also list the participants and their roles and link to biographical sketches.
**VIEWING A SITUATION IN THE NEWS WITH BRUSSELL**
Let’s return to the case of learning more about the rescue event. Suppose the person were using Brussell, a research system we’ve developed that provides direct software support for accessing informative views of the overall kidnapping situation and its participants. Then, rather than interacting with the article at the textual level by selecting keywords to search with, he could simply right click on a phrase describing the event (see Figure 1). We call a phrase like this one a situation reference.
This reveals a context-menu with questions specific to the situation being referenced. To find out more about it, the user selects “What happened in this kidnapping?” from the context-menu, which loads in the browser a storyline for the kidnapping including milestone events (see Figure 2). With the storyline view, the user can see that the overall kidnapping situation began with the individual’s abduction in February of 2002 and continued more recently with the release of a videotape of the hostage and an appeal for the hostage’s release, both occurring in late 2007. Clicking on the release event updates the toolbar to show date and location information for the event and loads an article about the release (see Figure 3). From this article’s lead he can see that planning for the rescue began several months ago. Referring to the timeline, he realizes that this was shortly after the last appeal for release.
The article lead also mentions the kidnapper group and he’d like to find out more about it. To do so, he clicks on its name in the toolbar, which loads its biographical sketch view (see Figure 4). In addition to details about the group, this includes all of the situation events it has been involved in, and images from articles about those situations. Within this view, details and images link to the article from which they were extracted, enabling the user to verify them and learn more.
We expect that readers will access Brussell’s big picture views in two kinds of circumstances. In the first, the reader is viewing an article primarily about a situation event and wants to know more, as in the example. In the second, an article largely about one event refers to another in a single sentence. For example, an article about Microsoft’s offer for Yahoo states, “Yahoo recently acquired Zimbra”, and the user may want to find out how and why that occurred to better understand the context of the offer in the article.
Although the example shows how structured presentations can be helpful, we don’t expect that users would use Brussell to view the situation context of events in every article they read. Actual usage would depend, of course, upon whether the reader is simply skimming the news or doing in-depth reading. For example during a session of reading several articles over the course of an hour, a user might want to view the situations for many of these, perhaps one out of every four or five.
**PRESENTING INFORMATION THROUGH EXPLANATORY STRUCTURES**
In the example we saw two kinds of structured presentations. The first, a situation storyline view, resembles an ordinary timeline with a sequence of events oriented in time. The second, a biographical sketch view, presents essential details of a situation participant, the merged storylines of all of the situations involving the participant, and images of their participation in the situations. With the example suggesting these big picture views can be helpful, it is important to ask, where do they come from?
In fact, content-specific information presentations like these are automatically generated from models through information retrieval and information extraction. Systems taking a similar approach include vertical search engines. ZoomInfo presents resumes of individuals generated from employment information that it automatically extracts from pages on the Web. [21] CiteSeer provides a “product page” for computer-science publications freely available on the Web by extracting their abstracts and authorship and citation information. [6]
In this paper we detail the contribution of the Brussell system in going a step farther than these websites by not only generating content-specific views, but also enabling users to access these views within their web-browsing task context. The situation and biographical sketch models are created and presented using similar mechanisms and we call them both explanatory structures. An explanatory structure, or ES, is a content-specific template featuring semantic constraints that can guide information retrieval and extraction to provide a conventional information presentation linked to the user’s task. In addition to the kidnapping situation type and organization biographical sketch, Brussell supports situation explanatory structures for legal trials and corporate acquisitions, and biographical sketches for persons and groups of people.
Having seen an example of the kind of direct support for situations and participants these views can provide, we next turn to the features of explanatory structures and how they drive Brussell’s functioning. Then we focus on Brussell’s situation models and establish that it is reasonable to expect them to be common in the news and thus content-specific support for interacting with situations is warranted. We also quantify Brussell’s performance in extracting kidnapping situations from news articles. We conclude by looking at background work and future directions.
**TO SERVE EXPLANATORY STRUCTURES**
Beginning with the properties of explanatory structures, we see how Brussell creates ES instances to provide to users. We then see how they impose functional constraints on Brussell's architecture and drive its operation.
**Properties of Explanatory Structures**
**Centered on an aspect of a focal entity or entities**
An explanatory structure is about a specific thing, whether a conventional named-entity such as a person, organization, or product, an intellectual product such as a legal trial, legislative act, or research project or, in the case of Brussell’s situations, a sequence of events centered on a specific participant or set of participants. In the example above, the focal entity is the kidnapped individual.
Explanatory structures do not exhaustively collect all information about the entity, however, but rather present a specific and well-defined aspect such as all of a person’s research publications, or the events in a situation, or the essential biographical details of an individual or group.
**Conventional genres of content-specific information presentations**
Explanatory structures act as familiar big picture views supporting easy orientation by presenting information as a gestalt. The biographical sketch is an often-used format for presenting essential information about a person or organization. A situation storyline appears as a timeline, with all of the associated expectations of linearity, ordering, and the relevance and notability of selected events.
The slots of an explanatory structure differ from search results in working together to support understanding of a topic. The results of a search are unrelated and, in essence, disjunctive. Either one result is what the user is looking for or, if not, then perhaps another one is. In contrast, the information in an explanatory structure is conjunctive; the whole is greater than the sum of its parts and each element contributes to the overall meaning. By contextualizing the information it contains, the explanatory structure itself also contributes to the meaning of its elements, by indicating what happened before and after entries in resumes and timelines, for example.
**Support rich interaction within the user's task context**
Explanatory structures are designed to be easily accessed from the user’s current task context, including the browser as in the example. To support this access, the ES includes indicators to automatically recognize relevant references within documents the user is reading, without requiring the user to select or search for them individually.
Knowing the affordances in advance makes it possible to provide richer interaction such as inspecting situation and participant references directly and selecting a choice from a semantic menu. These techniques can even subtly provide relevant information new to the user. For example, the identity of the kidnapper appears in the context-menu, even though it may not be in the article.
Finally, they organize at a high level the entities in the user’s current task and allow for easy navigation and traversal among relevant documents.
**Authored knowledge structure types with semantic constraints and typed fields**
Explanatory structures consist of a frame structure with slots and values that fill the slots. Each slot is constrained to hold values of a certain type and quantity. Brussell’s biographical sketch ES type specifies the kinds of information to be extracted when reading entity references, including a person's age, nationality, and employer, and an organization's industry, for example. Similarly, a situation ES specifies the roles that participants may play and imposes type restrictions on these roles. For example, an organization can't be kidnapped, although a person or group of people can be.
Some slots may not be filled and thus not presented. Other slots may not be revealed because their existence conflicts with shown information, as determined by semantic constraints associated with the slots. For example, if a kidnapped individual has been released, incorrect information that he was killed would not be shown. The situation model specifies the possible milestone events and the semantic constraints holding among them, including their ordering and which events are mutually exclusive.
**Meta-information drives finding and creating new instances, and extending existing instances**
To find and extend instances of explanatory structures, Brussell uses indicators and extractors associated with the ES and its slots, respectively. The ES type specifies keywords used to retrieve relevant documents. In the case of a legal trial, this includes “trial” and “*suit”.
Brussell uses text pattern recognizers associated with the ES and slots to find references to situations and participants and extract information to populate the slots. It repurposes these recognizers to find references in the current web page.
**Record provenance of information as evidence**
Finally, as part of the process of retrieving pages and extracting information, Brussell records the sources of information as well as the information itself. This allows the user to inspect the evidence supporting the information he sees. If a detail seems unexpected to the user, or if it appears that the page might provide further interesting details, the user can access the page directly to learn more.
**How Explanatory Structures Drive Brussell's Operation**
We now turn to the question of how to support these features of explanatory structures. Several important challenges arise. First is the question of where the ES types come from. A system must possess a pre-existing library of ES types, and each must be elaborated sufficiently such that they can be instantiated and managed with little or no supervision. Creating these types automatically and populating them with extractors remains an area of future work and we discuss this more later. Second is the issue of where the ES instances come from. In order to provide anticipatory support within the user's task, Brussell runs automatically and, in reading documents, it knows when to create a new ES instance and when to merge new information with an existing instance. Third, the system must effectively reconcile erroneous and conflicting information. Finally, the system must employ techniques to limit the distraction from any incorrect information that remains.
**Knowing when to instantiate and when to merge**
In reading through source material, and finding references to situations and entities, when does the system create a new ES instance? Brussell distinguishes instances based on the focal participant, specified by the “profile” of the ES. For situations, this is the identity of the kidnapped individual, or the combination of the plaintiff and defendant in a legal trial. An unsolved issue with this approach is that multiple situations with the same profile are merged into the same instance, as with multiple lawsuits between feuding companies.
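As an illustration only (the names and structures here are ours, not Brussell's internals), this profile-based bookkeeping might look like the following sketch:

```python
situations = {}

def profile_key(situation_type, focal_participants):
    """A situation's "profile": its type plus its focal participant(s),
    e.g. ("kidnapping", {victim}) or ("lawsuit", {plaintiff, defendant})."""
    return (situation_type, frozenset(focal_participants))

def add_reference(situation_type, focal_participants, extracted_event):
    """Merge into an existing instance when the profile matches;
    otherwise instantiate a new one."""
    key = profile_key(situation_type, focal_participants)
    situations.setdefault(key, []).append(extracted_event)
```

As written, this illustrates the unsolved issue in the text: two distinct lawsuits between the same companies produce the same key and are merged.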
**Intelligently handle errors and reconcile conflicting information**
A well-known problem with building and manipulating explicitly represented models is that of handling errors and resolving conflicting information. A source may simply provide incorrect information. This can be partly ameliorated by pre-selecting the sources of information but, for example, a breaking-news article often features incorrect information that is later amended. Or information in an article may be correct, but presented idiosyncratically and, as a result, extracted incorrectly.
We regard conflicts as the main impediment to scaling knowledge representations to the Web. It is reasonable to expect that correct information will be stated more often than incorrect information, however, especially over time as consensus develops over the details of events that originally may have been hazy. So Brussell implements a voting algorithm to resolve errors due either to incorrect article information or faulty extraction. After filtering out duplicate articles and sentences from the input pages it reads, it treats every textual appearance of a fact or reference as a vote and simply counts the number of textual references to an event, event fact, or biographical detail. No votes are weighted more than others.
Voting is used to resolve conflicts among structure values as well as text values for slots:
- At the top-most level, to select which events actually occur within a situation
- For facts about events including dates, locations and monetary amounts
- Concerning biographical information about situation participants such as names, nationalities, person occupations and group sizes
Brussell uses type-specific techniques for reconciling differing structures and, further, it uses vague accounts as support for specific information. For example, in determining the date of an event, “last month” may be counted as a vote for “April 20th” but not vice versa. Similarly, the description that a kidnapper was “a group of militants” supports “Al Qaeda in Iraq” over “US troops”.
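To make the voting mechanism concrete, here is a minimal sketch with an invented compatibility table standing in for Brussell's type-specific reconciliation; every textual appearance is one unweighted vote, and a vague account lends its vote to the compatible specific value:

```python
from collections import Counter

# Hypothetical vague-to-specific support, mirroring the examples in the text.
SUPPORTS = {
    "last month": "April 20th",
    "a group of militants": "Al Qaeda in Iraq",
}

def resolve(votes):
    """votes: one extracted value per textual reference; returns the winner."""
    tally = Counter()
    for v in votes:
        tally[SUPPORTS.get(v, v)] += 1   # no vote is weighted more than another
    return tally.most_common(1)[0][0] if votes else None

print(resolve(["April 20th", "last month", "April 19th"]))  # -> "April 20th"
```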
Saving textual supports for extracted information serves an additional purpose: to justify how conflicting information has been reconciled.
**Hide incorrect information**
Since the system is extracting information with minimal supervision, it needs strategies that select correct information and eliminate, or at least hide, incorrect information. It is acceptable, and even desirable, for a user to be able to explicitly request an alternate account or "minority report", however.

Because Brussell instantiates situations and participants promiscuously, it doesn't show all references in a page, and instead reveals them only when the user moves the mouse over one. This works well when an entire invalid situation or participant can be hidden, but if a participant is involved in a valid situation, or a single event in a situation is misread, incorrect information can "piggy-back" onto a valid participant or situation, e.g. "kidnapping of President George Bush" or mentions of a spurious negotiations event in an otherwise correct situation. Further, the problem of negated and hypothetical situation events mentioned in the news remains unresolved.
**Brussell's Architecture**
Brussell consists of a Firefox browser plugin and server software, which may both run on the same computer. When the user loads a new page in the browser, the browser software retrieves any cached entity and situation references for the page. If the server hasn't already analyzed the page, it renders a button labeled "Analyze Page". A user can view references in news pages, as in the example, or can request the analysis of any web page, such as a blog post, by clicking on the button.
The back-end system requires manually created situation model types (inspired by scripts) and currently supports kidnappings, legal trials and corporate acquisitions, each of which has multiple possible outcomes and on the order of 8–12 possible events. The system runs daily to retrieve news articles from several English-language news websites via RSS feeds and store them in a Lucene index. After retrieving new articles, it queries the index with keywords associated with the situation types it supports and reads through the matching articles to create and extend situation instances. These instances range from a single reference up to several hundred if the situation is well publicized.
Using an index of saved news articles rather than searching the Web directly allows Brussell to show the source of extracted information even if the article is removed from the news website.
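A minimal sketch of this daily ingest step (the function names are ours, and a simple keyword test stands in for the Lucene query):

```python
import urllib.request
import xml.etree.ElementTree as ET

KIDNAP_STEMS = ("kidnap", "captur", "abduct", "hostage")  # stems from the paper

def fetch_item_links(feed_url):
    """Return (title, link) pairs from an RSS 2.0 feed."""
    with urllib.request.urlopen(feed_url) as resp:
        root = ET.fromstring(resp.read())
    return [(item.findtext("title", ""), item.findtext("link", ""))
            for item in root.iter("item")]

def matches_situation_type(text, stems=KIDNAP_STEMS):
    """Cheap stand-in for querying the index with situation keywords."""
    lowered = text.lower()
    return any(stem in lowered for stem in stems)
```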
Brussell uses GATE [9], a standard open-source information extraction system to extract situation information including event references, dates and locations, and entity details such as person names and occupations or organization names and nationalities. Extracting this information allows references such as “the British journalist abducted last year” to be resolved to a particular kidnapping.
**THE PREVALENCE OF NEWS SITUATIONS IN BUSINESS NEWS**
In the example we saw the support Brussell can provide in reading about kidnapping situations. It's not necessarily obvious that these sorts of stereotypical news situations are prevalent, however. Determining how often they appear would establish an upper bound on the coverage of a system. Obviously, the system would not be useful if it could only provide a richer interface for interacting with a tiny fraction of the news articles a user would read. On the other hand, the system could be useful if it potentially provides support for interacting with many news articles, for some if not many domains. To determine whether this is the case, we performed an informal investigation into the frequency of situation references in news. We selected a particular domain in which we expected them to be particularly common and thus the tool to be especially useful, business news.
**Experiment Setup**
We randomly selected 100 English-language business news articles published on the Web from April 2005 to August 2008. Articles were retrieved from nine prominent English-language news sources: ABC News, BBC News, Los Angeles Times, The New York Times, San Francisco Chronicle, San Jose Mercury News, USA Today, The Washington Post, and Yahoo! News (which features news from the AP, Reuters and other wire services). Since the hierarchical organization of news sites is often reflected in the URLs of their articles, to determine whether an article is within a business section or otherwise likely to be about business news, we looked for the word “business” in its URL. A person read the text in the article’s title, lead and content and manually annotated any situation references.
To be considered a situation reference, we required that the phrase include at least one event-related verb and one or more named entities. So “Roche offered $85 per share for Genentech” would be annotated as an “acquisition offer” reference. Because Brussell performs a simple form of situation-based anaphora resolution to merge vague references such as “Roche's offer” and “the bid for Genentech”, these would also be included, under the assumption that disambiguating references appear elsewhere in the article. A reference featuring no or minimal identification of a participant, such as “the bid for the company”, would not be, however.
In addition, we limited our focus to situations that would be “interesting” to the user. That is, we focused on cases in which the user would conceivably want to learn more about a situation by seeing its storyline view. Consider a quote in an article by an individual, “Mark Corallo, a spokesman for Coventry”. Even though a person's employment at an organization is conceivably a situation that includes events such as the person’s hiring, possible promotions, and eventual departure, we would not expect the employment of a spokesperson by a company to be something the user would be interested in learning more about. Further, it is not likely that this person’s employment would be considered “newsworthy” and covered in detail by further news articles, with the result that no situation could be created for it. So the employment of an individual who was simply quoted was not annotated.
Frequent fluctuations in quantities such as stock prices, federal reserve rates, interest rates, inflation, approval ratings and survey results cannot be accommodated within Brussell's situation models, so these were also excluded. Quarterly corporate earnings results and forecasts were also excluded because, though they could conceivably be considered situational, Brussell's current architecture cannot distinguish among them.
**Experiment Results**
One graduate student annotated the situation references in the 100 business news articles, finding that 58% had at least one situation. The other 42% had none and consisted of articles about, for example, earnings forecasts and reports and "lifestyle" issues such as the best cities for recent college graduates and how to live more environmentally consciously.
Looking at just the articles with references, the histogram in Figure 5 shows how many articles have different quantities of references with the mode being 3 references in 15% of the articles. Articles with references had a mean of 4.1 references and a median of 3 references.
The results for references broken down by the most common situation type appear in Figure 6. Events in three situation types appeared in more than 10% of the articles: 55 employment event references appeared in 19%, 51 corporate acquisition events in 16% and 33 product lifecycle events in 15%.
The most common employment event was a “hire” event, which appeared in most articles referencing employment transitions with 20 references in 11 documents. A typical hire event reference was “Mike Burbach, who became editorial editor of The Pioneer Press three weeks ago...” A typical example of an “offer” reference, the most common acquisition event with 8 references in 4 documents, is “Warner last made a formal approach earlier this year, a 2.1 billion pound offer...” Finally, the most common product lifecycle event was a “release” event reference with 8 references in 7 documents. A typical example would be “iPhone, the company's new smartphone that Jobs unveiled at its Macworld conference last week.”
From this informal survey, we can see that situations are fairly common and it is reasonable to suppose that a tool like Brussell could often be used to support direct interaction with situations.
**EVALUATING BRUSSELL’S PERFORMANCE IN EXTRACTING SITUATIONS**
Two further issues arise regarding Brussell: how well it performs in extracting situations overall and whether its performance improves as it reads more. We place greater emphasis on the second concern because we see the contribution of the system not in the quality of its extraction mechanisms per se but rather how well it can present information about prominent situations.
We can tell how well it performs overall by observing its performance in extracting the following:
- Whether a situation occurred
- The events within a situation and their dates
- Biographical details of situation participants
To determine whether the system performs better when extracting from more articles, we compare situations referenced many times with those infrequently referenced. For testing and training, we looked for definitive sources of information about situations of a particular type that consist of multiple events. Using published lists of kidnapping situations, we evaluated the performance of the system on a corpus of news articles to answer these questions.
**Experiment Setup**
We both trained and tested Brussell on collections of kidnappings of foreigners in Iraq since the beginning of the US invasion in March 2003. The training collection was published by the AP and included 35 kidnappings through October 2004. To test the system, we turned to a more recent Wikipedia page listing 164 kidnappings through August 2008. [19] For example, in the section for Australians, the entry “Douglas Wood, construction engineer, kidnapped April 30, 2005, and freed June 15, 2005,” is represented as a kidnapping situation consisting of two dated events, about a victim with a name, nationality, and occupation. Because Brussell cannot distinguish situations involving vaguely identified participants, 35 kidnappings of unnamed individuals (such as “an Iraqi translator”) and groups of individuals (such as “two French journalists”) were not used.
Brussell has an index of approximately one million articles. Nearly 70,000 of these include a kidnapping term: “kidnap*”, “captur*”, “abduct*”, or “hostage*”. To focus on the cases from the Wikipedia list, we narrowed this set to the 24,687 articles containing both the complete name of a kidnapped individual in the list and a kidnapping term.
We sought to test the system's functionality using criteria somewhat different from traditional information extraction evaluations. Because Brussell is aimed at providing a specific user experience, we sought to test the functionality it would have in a “real world” context. Assuming that the news Brussell downloads and indexes is representative of national news in general, we wanted to characterize the level of support a user can expect from the combination of Brussell and a news corpus this size. A traditional evaluation of event-extraction software might involve comparing the situations Brussell extracted from the test corpus with all of the situations, situation events and facts, and biographical details it could potentially have extracted.
Our argument for Brussell's contribution is not in the sophistication and thoroughness of the extraction it performs, however, and is rather based on quantifying the level of detail in situations a user can expect to access for a news corpus of this size. Rather than noting how completely the situations were described in the articles in the index, we assumed the situation was completely described and assessed the system's performance in extracting the complete situation if the individual appears in any kidnapping-related articles at all.
**Experiment Results**
Of the 164 Wikipedia kidnapping situations involving named individuals, 135 or 81.7% were present in the news corpus. Brussell found 101, or 74.8% of the 135 situations in the test collection. That is, Brussell found at least one situation event for 74.8% of all of the Wikipedia situations for which there was at least one article in the corpus with a complete name and kidnapping terms.
Overall, Brussell found 48.9% of the biographical details of situation participants, 62.8% of the situation events, 37.3% of the event dates and 41.0% of event locations. Because the test collection didn’t specify all of the correct events and facts that Brussell could recognize, we didn’t measure the precision and the number of false positives.
To get a better sense of how Brussell’s performance varied with the number of references to a situation, we split the results into quartiles based on the number of references (see Figure 7). The mean recall of participant details for the fourth quartile is 73.5% versus 21.6% for the first quartile. The mean recall of event occurrence for the fourth quartile is 82.7% versus 37.5% for the first quartile. The mean recall of event dates for the fourth quartile is 64.7% versus 11.5% for the first quartile. For each of these slots, the recall for the fourth quartile is statistically significantly better than that for the first (p < 0.001, t-test). These results suggest that aggregating the results of extraction from multiple references can improve performance.
**Limitations**
It is important to mention the issue of false-positives in recognition. Brussell recognized over 10,000 spurious situations in the kidnapping articles. Although this seems extremely high—a 99% false positive rate—as we argue above, careful design of the interface for revealing references can minimize the degree to which they distract the user.
**BACKGROUND WORK**
**Innovation on news websites**
Many news sites recognize entity references within article pages and provide links to further information. Often these link to pages on their own sites. On some sites, these links are to previous articles about the same topic. On others, they link to advertising for generic terms or are inserted into news article pages to optimize for search engines.
Many news sites also provide some background through “related articles”. These are either manually added or based on term frequency, but are typically not updated as new events occur. They save the user from having to take the step of searching for relevant articles prior to this one, but not the step of sorting through the articles and assembling the big picture. They also usually present articles only from the same news site.
**Software support for reading news**
Previous research has focused on extracting information from both single and multiple news articles. Some of the approaches to reading single news stories use the script conceptual formalism for story understanding, which is also the basis of our approach for modeling user expectations for situations. Brussell's situation types and instances are simplifications of Schank and Abelson's scripts. [17]
**Single News Article: Story and Event Extraction**
Early work in extracting information from single stories includes systems developed by Schank and his research group at Yale. SAM uses scripts to guide a deep analysis of a news article in order to provide a summary and answer questions about the events it covers. [8] Frump, also using scripts, performed a more shallow analysis to read through news articles rapidly. [10] Like Brussell, it was connected to an online source of news, in its case the UPI newswire.
Extracting event information using templates from single news articles is the focus of work in the Message Understanding Conferences [12].
More recent work on extracting formal knowledge from news has focused on populating the Semantic Web. SemNews [1] extracts structured representations from news retrieved via RSS feeds. Unlike Brussell, however, its emphasis is generating representations in the form of RDF triples rather than presenting views to the user.
**Multiple News Articles: News Summarization**
Techniques in text summarization have been used to merge and reduce the information in multiple documents to present the user with a natural language summary. NewsBlaster [14] and NewsInEssence [15] cluster and summarize similar articles, while NewsJunkie [11] indicates the differences in new articles.
**Multiple News Articles: Topic Detection and Tracking**
Selecting and presenting all and only the news articles associated with news topics is the focus of Topic Detection and Tracking (TDT). These research systems typically represent events as term-vectors, and classify and cluster news articles using these event representations. [2]
In contrast to both TDT and news summarization systems, presenting explanatory structures requires that a system “knows what it knows” by selecting and labeling milestone events in accordance with user expectations.
There's a deeper issue at play here, however. We argue that the context of a user’s news reading task has a structure, and supporting that context is not simply a matter of providing more information. In particular, there are patterns to the kinds of information people want, and they are reflected in the conventional presentations of information we've already seen. TDT research implicitly acknowledges this by making a semantic commitment and assigning documents to a temporal extent for an event. Without a model of events, however, TDT systems are unable to offer the rich UI that explanatory structures enable by integrating with the user’s context and presenting views in accordance with the user's expectations for how a situation unfolds.
**Query-free Information Retrieval**
Other query-free information retrieval systems for end users include Letizia [13] and Watson [7]. These systems search the Web to find documents relevant to a user: Letizia by following the links of the currently open web page, and Watson by modeling the user's current task in the browser or an open Microsoft Office document.
As we noted, vertical search engines such as ZoomInfo and CiteSeer offer content-specific presentations of information. Other websites also offer popular content-specific views, such as IMDb and DBLP. Integrating these views with the user’s browsing task context could provide a rich interface as Brussell does.
**FUTURE WORK**
The most noticeable improvement to Brussell would come with support for many more situation types. Adding a new type consists of specifying semantic constraints, retrieval keywords and extraction patterns. Authoring the patterns is the most time-consuming portion by far, though this could be automated through bootstrapping techniques. [16]
Brussell could also improve by disambiguating situations with the same focal participants, or multiple events of the same type within a situation. In some kidnappings, there have actually been multiple negotiation events, and the event as presented merges information from each of them. A clustering approach may be helpful.
In some cases, further sub-structures within an explanatory structure should be recognized and extracted, as with the multiple jobs held by an individual. These would naturally link to situation views for the job transitions.
Awareness that slots are empty or unverified could trigger goal-driven, “autonomous” search and extraction. [20]
**CONCLUSION**
Many researchers have put forward the goal of integrating the Web with high-level semantic models to provide more goal-oriented interfaces. Some, including those working as part of the Semantic Web effort, anticipate providing this user-level functionality by having authors annotate their web pages using standardized domain-specific logical annotations. [5] In other words, this effort is aimed at providing smarter interactions with web content by constructing the web out of explicit logical representations.
Rather than bringing the Web to semantics, however, we propose bringing semantics to the Web. With Brussell, we have presented a system that enables users to interact directly with entities and situations referenced in web pages in order to navigate the context of the news webpages they read. Brussell uses standard IR and IE technologies integrated with semantic models in explanatory structures to anticipate user questions and provide high-level views that match user expectations. The prevalence of situations like the ones it supports and its performance in extracting situations both point the way to further research in rich, content-specific interfaces for reading news on the web.
**ACKNOWLEDGEMENTS**
This research was supported in part by the National Science Foundation under grant no. IIS-0325315/004. We thank our colleagues Chris Riesbeck and Francisco Iacobelli.
**REFERENCES**
Searching
Figure 8.1. Perceptual change blindness in visual search: find five significant differences between these two images
Searching is the second fundamental operation we will study in this course. As with sorting, efficient searching is a critical foundation in computer science. We review $O(n)$ linear search and $O(\log n)$ binary search, then discuss more sophisticated approaches. Two of these techniques, trees and hashing, form the basis for searching very large data collections that must remain on disk.
### 8.1 Linear Search
The simplest search takes a collection of $n$ records and scans through them from start to end, looking for a record with a target key $k_t$.
Best case performance—when the target is the first record—is $O(1)$. Worst case performance—when the target is the last record or the target is not in the collection—is $O(n)$. On average, we assume we must search about $n/2$ records to find a target contained in the collection, which also runs in $O(n)$ time.
Linear search has two main uses. First, since it is very simple to implement, we sometimes use it when $n$ is small or when searches are rare. Second, it represents a hard upper bound on search performance: if a search algorithm requires $O(n)$ time (or more), we'd often be better off using a simple linear search.
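A minimal runnable sketch in Python (comparing keys directly rather than full records):

```python
def linear_search(k, records):
    """Scan from start to end for target key k; O(n) in the worst case."""
    for i, rec in enumerate(records):
        if rec == k:
            return i                   # best case O(1): target is first
    return -1                          # target not in the collection

print(linear_search(5, [9, 2, 5, 7]))  # -> 2
```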
### 8.2 Binary Search
If a collection is maintained in sorted order, we can perform a binary search.
```plaintext
binary_search(k, arr, l, r)
Input: k, target key; arr, sorted array to search; l, left endpoint; r, right endpoint
n = r-l+1
if n ≤ 0 then
return -1 // Searching empty range
end
c = l + ⌊n / 2⌋ // Center of the range arr[l..r]
if k == arr[c] then
return c // Target record found
else if k < arr[c] then
return binary_search(k, arr, l, c-1) // Search left half
else
return binary_search(k, arr, c+1, r) // Search right half
end
```
Calling `binary_search(k_t, arr, 0, n-1)` initiates a search. This compares the target key $k_t$ to the key at the center of the collection, $k_c$. If $k_t = k_c$, the target record is found. Otherwise, sorted order tells us that if $k_t < k_c$ then $k_t$ is left of the center record; otherwise $k_t > k_c$ and $k_t$ is right of the center record. Searching continues recursively until $k_t$ is found, or until the collection is exhausted.
Binary search discards half the collection ($n/2$ records) on its first comparison, then half the remaining collection ($n/4$ records) on its next comparison, and so on. Any operation that halves the size of the collection on each step runs in $O(\log n)$ time.
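The pseudocode translates directly into runnable Python; this sketch assumes a 0-based array, matching the `binary_search(k_t, arr, 0, n-1)` call above:

```python
def binary_search(k, arr, l, r):
    """Search sorted arr[l..r] for key k; returns an index or -1."""
    if l > r:
        return -1                                # searching empty range
    c = l + (r - l) // 2                         # center of the current range
    if k == arr[c]:
        return c                                 # target record found
    if k < arr[c]:
        return binary_search(k, arr, l, c - 1)   # search left half
    return binary_search(k, arr, c + 1, r)       # search right half

print(binary_search(7, [1, 3, 5, 7, 9, 11], 0, 5))  # -> 3
```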
### 8.3 Binary Search Tree
If we choose to implement binary search, we must decide what type of data structure to use to manage the sorted collection. One possibility is a sorted array. As shown above, this provides $O(\log n)$ search performance. Unfortunately, maintaining the collection is not as fast. Inserting a new record requires $O(\log n)$ time to find its correct position, but then requires $O(n)$ time to shift part of the collection to make space to hold the new record. Deletion similarly requires $O(n)$ time to fill the hole left by the old record.
There is also the practical issue of choosing a good initial array size, and the need to allocate more space if the array overflows.
A common alternative is a binary search tree, or BST. A BST is a tree structure made up of nodes, each of which hold a record and references to two (possibly empty) child subtrees (Fig. 8.2). The subtrees are normally labelled left and right. Each node in the BST satisfies the following ordering properties.
1. All records in a node’s left subtree have keys smaller than the node’s key.
2. All records in a node’s right subtree have keys larger than the node’s key.
Given this ordering, performing a binary search with a BST is very simple.
```plaintext
bst_search(k, node)
Input: k, target key; node, node to search
if node == null
return null // Searching empty tree
end
if k == node.key
return node // Target record found
else if k < node.key
return bst_search( k, node.left ) // Search left subtree
else
return bst_search( k, node.right ) // Search right subtree
end
```
The logic applied here is identical to binary search, since BSTs are designed specifically to support this search strategy.
**Insertion.** To insert a record with key \( k_t \) into a BST, we search for \( k_t \) in the tree. When we reach an empty subtree, we insert a new node containing \( k_t \)’s record. Since insertion requires a search followed by a constant time operation, insertion performance is identical to search performance.
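For concreteness, here is a runnable Python version of the search and insertion logic above (a sketch; the `Node` class is our own):

```python
class Node:
    def __init__(self, key, record=None):
        self.key, self.record = key, record
        self.left = self.right = None

def bst_search(k, node):
    if node is None:
        return None                            # searching empty tree
    if k == node.key:
        return node                            # target record found
    if k < node.key:
        return bst_search(k, node.left)        # search left subtree
    return bst_search(k, node.right)           # search right subtree

def bst_insert(k, node, record=None):
    """Search for k; create a node at the empty subtree where the search ends."""
    if node is None:
        return Node(k, record)
    if k < node.key:
        node.left = bst_insert(k, node.left, record)
    elif k > node.key:
        node.right = bst_insert(k, node.right, record)
    return node                                # returns the (sub)tree root
```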
**Deletion.** To delete a record with key \( k_i \) from a BST, we search for \( k_i \) in the tree. If a node containing \( k_i \) is found, we remove it and correct the BST based on three possible configuration cases.
1. If the node has no children, nothing needs to be done (Fig. 8.2a).
2. If the node has one subtree, promote its subtree’s root (Fig. 8.2b).
3. If the node has two subtrees (Fig. 8.2c)
a) Find the successor to \( k_i \)—the smallest value greater than \( k_i \)—in the right subtree by walking right once, then walking left as far as possible.
b) Remove the successor from the tree; since it has an empty left subtree, it must match Case 1 or Case 2 above.
c) Promote the successor to the node’s position.
Figure 8.2. Deletion from a BST: (a) deleting J, a node with no subtrees; (b) deleting D, a node with one subtree; (c) deleting M, a node with two subtrees
Again, since deletion requires a search followed by a constant time operation, deletion performance is identical to search performance.
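Continuing the sketch above, a Python version of deletion that handles the three cases (cases 1 and 2 collapse into promoting the only subtree, which may be empty):

```python
def bst_delete(k, node):
    """Delete key k from the subtree rooted at node; returns the new root."""
    if node is None:
        return None
    if k < node.key:
        node.left = bst_delete(k, node.left)
    elif k > node.key:
        node.right = bst_delete(k, node.right)
    else:
        if node.left is None:                  # cases 1 and 2
            return node.right
        if node.right is None:                 # case 2, left subtree only
            return node.left
        succ = node.right                      # case 3: successor = walk right,
        while succ.left:                       # then left as far as possible
            succ = succ.left
        node.key, node.record = succ.key, succ.record   # promote the successor
        node.right = bst_delete(succ.key, node.right)   # remove it (case 1 or 2)
    return node
```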
**Performance.** Search performance in a BST depends on its shape. Suppose the BST is balanced: for any node in the tree, the height of its left and right subtrees are about equal. For example, the left BST in Fig. 8.2a is roughly balanced, since the difference in left and right subtree heights is no more than 1 throughout the tree. A balanced BST with $n$ records has a height of about $\lg n$, producing best case search performance of $O(\lg n)$ time.
A fully unbalanced BST is one in which every internal node has one subtree empty. Here, the BST degenerates into a linked list of $n$ nodes, producing worst case search performance of $O(n)$. Unfortunately, the common situation of inserting records with keys in sorted or nearly sorted order produces this worst case.
### 8.4 Splay Tree
One way to address the worst case $O(n)$ performance of BSTs is to ensure the tree never becomes unbalanced. Variations like AVL trees and red-black trees enforce guarantees on the difference in subtree heights by applying rotations during insertion and deletion to maintain balance. Both AVL and red-black trees improve worst case performance to $O(\lg n)$.
Another type of self-adjusting tree is the splay tree, proposed by Sleator and Tarjan at AT&T Bell Labs\(^1\). A splay tree’s method of adjusting is simple, compared to the complicated rotation cases needed for AVL and red-black trees. One disadvantage of splay trees is that they only guarantee \(O(\lg n)\) amortized performance. The average cost of a sequence of searches is \(O(\lg n)\), but a single search may take up to \(O(n)\).
Implementing a splay tree requires adding a splay operation to walk a node \(N\) to the top of the tree. Three different types of splaying can occur, based on the relative positions of \(N\), \(N\)'s parent \(P\), and \(N\)'s grandparent \(G\) (if it exists).
1. **Root.** If \(P\) is the root of the tree, rotate \(N\) to replace \(P\) (Fig. 8.3a).
2. **Inline.** If \(N\) is left of \(P\) and \(P\) is left of \(G\), or vice-versa, rotate \(P\) to replace \(G\), then rotate \(N\) to replace \(P\) (Fig. 8.3b).
3. **Angle.** If \(N\) is right of \(P\) and \(P\) is left of \(G\), or vice-versa, rotate \(N\) to replace \(P\), then to replace \(G\) (Fig. 8.3c).
Searching a splay tree is identical to searching a BST, with any record found being splayed to the top of the tree. To insert a record, we first apply a BST insertion to position the record, then splay it to the top of the tree. To delete a record, we apply a BST deletion, then splay the deleted record’s parent to the top of the tree, if it exists.

\(^1\)Self-adjusting binary search trees. Sleator and Tarjan. *Journal of the ACM* 32, 3, 652–686, 1985.
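A minimal sketch of the splay operation in Python, using parent pointers and a single `rotate_up` helper; the three branches map onto the root, inline, and angle cases above (the structure is ours, not a production implementation):

```python
class SplayNode:
    def __init__(self, key):
        self.key = key
        self.left = self.right = self.parent = None

def rotate_up(n):
    """Rotate n above its parent, preserving BST order."""
    p, g = n.parent, n.parent.parent
    if p.left is n:                    # right rotation
        p.left = n.right
        if n.right: n.right.parent = p
        n.right = p
    else:                              # left rotation
        p.right = n.left
        if n.left: n.left.parent = p
        n.left = p
    p.parent = n
    n.parent = g
    if g:                              # reattach under the old grandparent
        if g.left is p: g.left = n
        else: g.right = n

def splay(n):
    """Walk n to the top; the caller treats the returned node as the new root."""
    while n.parent:
        p, g = n.parent, n.parent.parent
        if g is None:                              # root case
            rotate_up(n)
        elif (g.left is p) == (p.left is n):       # inline: rotate p, then n
            rotate_up(p)
            rotate_up(n)
        else:                                      # angle: rotate n twice
            rotate_up(n)
            rotate_up(n)
    return n
```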
**Performance.** Although it’s possible for a splay tree to have a height of $n$ (e.g., after inserting $n$ elements in increasing order), over time the tree will self-adjust to a height of $\lg n$, producing amortized $O(\lg n)$ performance.
Another important aspect of a splay tree is that commonly queried records will move near the root of the tree, meaning they can be found very quickly. If a collection has a non-uniform pattern of access, splay trees can perform better in absolute terms than other types of self-adjusting search trees.
### 8.5 k-d Tree

A k-dimensional or k-d tree is a binary tree used to subdivide a collection of records into ranges for $k$ different attributes in each record. The k-d tree was proposed by Jon Louis Bentley in 1975 to support associative, or multiattribute, searches\(^2\). For example, we could take a collection of weather reports and divide them by properties like latitude, longitude, temperature, or precipitation. We could then make queries like: “Return all records with temperature $< 0^\circ$ C and precipitation $> 4$cm.”

k-d trees are often used as a method of flexible secondary indexing, although there is no reason why primary keys cannot participate as one of the $k$ dimensions.

A k-d tree’s structure is similar to a BST, except at each level of the tree we rotate between the $k$ dimensions used to subdivide the tree’s records. For example, a 2-d tree using attributes temperature and pressure would subdivide based on temperature at the root node, subdivide based on pressure in the root node’s children, based again on temperature in the children’s children, and so on.
#### 8.5.1 k-d Tree Index
Like any binary tree, each k-d tree node contains a key value $k_c$ and two subtrees: left and right. Unlike a BST, however, records are normally not stored in the internal nodes. Instead, the target key $k_t$ is used to choose which subtree to enter: the left subtree if $k_t \leq k_c$, or the right subtree if $k_t > k_c$. Leaf nodes contain collections of records, specifically all records that satisfy the conditions along the root-to-leaf path.
Suppose we wanted to use information about Snow White and the seven dwarfs to build a k-d tree index. We will use the attributes height ($ht$) and weight ($wt$) as the two dimensions to subdivide records in the tree.

The construction algorithm works identically to a BST’s, except that we rotate between the $k = 2$ dimensions as we walk through each level of the tree.

1. Sleepy is inserted into the root of the tree, which uses $ht$ as its subdivision attribute.
\(^2\)Multidimensional binary search trees used for associative searching. Bentley. *Communications of the ACM* 18, 9, 509–517, 1975.
**Table 8.1.** Estimated heights (in inches) and weights (in pounds) of Snow White and each of the seven dwarfs

| Name | ht | wt |
|-----------|----|----|
| Sleepy | 36 | 48 |
| Happy | 34 | 52 |
| Doc | 38 | 51 |
| Dopey | 37 | 54 |
| Grumpy | 32 | 55 |
| Sneezy | 35 | 46 |
| Bashful | 33 | 50 |
| Ms. White | 65 | 98 |
**Figure 8.4.** A k-d tree split by $ht$ and $wt$, indexed using Snow White and the seven dwarfs: (a) the first three insertions, with $ht$ subdividing the root and $wt$ subdividing the second level; (b) adding a $ht$ subdivision node on the third level; (c) the final tree, with Snow White and the dwarfs inserted into the appropriate buckets.
2. Happy and Doc are inserted as children of Sleepy. Since Happy’s \( ht = 34 \leq 36 \), Happy goes to the left of the root. Doc’s \( ht = 38 > 36 \), so he goes to the right of the root (Fig. 8.4a). Both Happy and Doc use \( wt \) as their subdivision attribute.
3. Dopey is inserted next. His \( ht = 37 \) puts him to the right of the root, and his \( wt = 51 \) puts him to the left of his parent (Fig. 8.4b).
4. The remaining dwarfs and Snow White are inserted using an identical approach.
Once the k-d tree index is complete, it acts as a method to locate records based on their \( ht \) and \( wt \) attributes. Buckets are placed at each null subtree, ready to hold additional entries as they are inserted. Fig. 8.4c shows the buckets containing the initial dwarfs and Snow White.
**Interpretation.** A k-d tree index subdivides the \( k \)-dimensional space of all possible records into subspaces over a continuous range of values for each dimension. Another way to visualize a k-d tree index is as a subdivision of \( k \)-dimensional space using \((k-1)\)-dimensional cutting planes that represent each entry in the index.
The height–weight index in Fig. 8.4 can be visualized this way. Since the index uses \( k = 2 \) dimensions, we subdivide a 2D plane using 1D lines into regions that represent each bucket in the tree (Fig. 8.5).
#### 8.5.2 Search
To search for records that match attribute ranges in a k-d tree, we perform the following operations.
1. Identify all paths whose internal nodes satisfy the target attribute ranges. This may produce multiple paths.
2. Perform an in-memory search of each path’s bucket for records that match the target criteria.
Figure 8.5. A subdivision of the \( k = 2 \) dimensional plane into subspaces representing each bucket in the k-d tree
For example, suppose we search for records with $ht \leq 36$ and $wt \leq 47$.
- at the root, branch left ($ht \leq 36$),
- at the next node, branch left again ($wt \leq 49$),
- at the next node, branch left and right ($ht \leq 35$ and $ht > 35$ both fall within the target range of $ht \leq 36$),
- along the right path we reach bucket 3, and
- along the left path, branch left ($wt \leq 50$), reaching bucket 1.
The search produces two paths that identify buckets 1 and 3 as potentially containing target records. Examining either Fig. 8.4c or Fig. 8.5 shows that
- Bucket 1: $ht \leq 35$ and $wt \leq 50$
- Bucket 3: $35 < ht \leq 36$ and $wt \leq 52$
Both buckets may include records with $ht \leq 36$ and $wt \leq 47$. Moreover, no other buckets in the tree could contain such records.
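A minimal sketch of this two-step search, assuming internal nodes that hold a single split key and leaf buckets stored as plain lists of (ht, wt) records (the class and function names are ours):

```python
class Split:
    """Internal k-d node: records with value <= key go left, > key go right."""
    def __init__(self, key, left, right):
        self.key, self.left, self.right = key, left, right

def kd_search(node, ranges, depth=0, k=2):
    """Step 1: follow every path consistent with the target ranges.
    Step 2: filter each reached bucket in memory."""
    if isinstance(node, list):                        # leaf bucket
        return [r for r in node
                if all(lo <= r[d] <= hi for d, (lo, hi) in enumerate(ranges))]
    lo, hi = ranges[depth % k]                        # rotate dimensions per level
    found = []
    if lo <= node.key:                                # left subtree may match
        found += kd_search(node.left, ranges, depth + 1, k)
    if hi > node.key:                                 # right subtree may match
        found += kd_search(node.right, ranges, depth + 1, k)
    return found

# A toy tree: ht-split at 36, then a wt-split at 49 on the left.
tree = Split(36, Split(49, [(35, 46)], [(34, 52)]), [(38, 51), (65, 98)])
print(kd_search(tree, [(0, 36), (0, 47)]))            # -> [(35, 46)]
```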
#### 8.5.3 Performance
It should be clear that a k-d tree’s index has a critical impact on its performance. Ideally, the index should subdivide the data stored in the tree in a balanced way, for example, by placing all the buckets at the same level in the tree, and by storing about the same number of elements in each bucket. If the data is known a priori, median elements can be used to construct the index\(^3\).
Our k-d tree is an example of an index that is designed for a certain class of individuals: those with $ht \leq 37$ and $wt \leq 55$. If we try to store a large number of records outside this range, they will all be forced into only one or two different buckets.
For dynamic trees, maintaining balance in the index is more complicated. Here, adaptive k-d trees can be used to try to adjust the index when buckets become too full or out of balance. A simple, although potentially inefficient, suggestion is to take all the records in an out-of-balance area of the tree, then re-partition them and reconstruct the affected region of the index\(^4\).
### 8.6 Hashing
A second major class of algorithms used for efficient searching are hash algorithms. A hash function converts a key $k_i$ into a numeric value $h$ on a fixed range $0 \ldots n - 1$. $h$ is used as a location or an address for $k_i$ within a hash table $A$ of size $n$. This is analogous to indexing on an array, since we can store and retrieve $k_i$ at $A[h]$. If the hash function runs in constant time, search, insertion and deletion are $O(1)$ operations.
Unfortunately, the number of possible records $m$ is normally much larger than the number of hash values $n$, that is, $m \gg n$. Given this, three important properties distinguish hashing from using $h$ to directly index into $A$.
1. The hash value for $k_i$ should appear random.
2. Hash values should be distributed uniformly over the range $0 \ldots n - 1$.
3. Two different keys $k_i$ and $k_j$ can hash to the same $h$, producing a collision.
#### 8.6.1 Collisions
Collisions are a major issue, particularly if each location in a hash table can only hold one record. If two records both hash to the same location, what should we do?
One answer might be, “Choose a hash function that doesn’t produce collisions.” This is harder than it sounds, however. Suppose we’re storing credit card information, and we decide to use the credit card number as a key. For card numbers of the form 0000 0000 0000 0000, there are $m = 10^{16}$ possible numbers (10 quadrillion).
Clearly, it’s not possible to create an in-memory array of size $n = 10^{16}$. Of course, every possible card number isn’t being used, in part because the credit card companies haven’t issued $10^{16}$ cards, and in part because different parts of a credit card number are dependent in various ways\(^5\) (e.g., certain parts of the card number represent check-sums to ensure the card is valid, other parts define card type, bank number, and so on). Card numbers do span a reasonable part of the range from around $1 \times 10^{15}$ to $9 \times 10^{15}$, however, so an array is still not feasible.
Even if the total number and range of the keys is small, it’s still difficult to define a perfect hashing function with no collisions. For example, if we wanted to store $m = 4000$ keys in an array of size $n = 5000$, it’s estimated that only 1 in $10^{12000}$ functions will be perfect. Given this, a more tractable approach is to reduce the number of collisions, and to determine how to handle collisions when they occur.
### 8.6.2 Hash Functions
Here is a common fold-and-add hash function.
1. Convert $k_i$ to a numeric sequence.
2. Fold and add the numbers, checking for overflow.
3. Divide the result by a prime number, and return the remainder as $h$.
Consider $k_i = \text{Subramanian}$. We convert this into a numeric sequence by mapping each character to its ASCII code, then binding pairs of ASCII codes.
| S | u | b | r | a | m | a | n | i | a | n |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 83 | 117 | 98 | 114 | 97 | 109 | 97 | 110 | 105 | 97 | 110 |
Assume the largest character pair is zz, with combined ASCII codes of 122122. To manage overflow during addition, we divide by the prime number 125299, slightly larger than this maximum, after each add and keep the remainder.
\(^5\)http://www.mint.com/blog/trends/credit-card-code-01202011
\[ 83117 + 98114 = 181231 \mod 125299 = 55932 \]
\[ 55932 + 97109 = 153041 \mod 125299 = 27742 \]
\[ 27742 + 97110 = 124852 \mod 125299 = 124852 \] (8.1)
\[ 124852 + 10597 = 135449 \mod 125299 = 10150 \]
\[ 10150 + 110 = 10260 \mod 125299 = 10260 \]
We divide the result of 10260 by the size of the hash table, which itself should be prime. Here, we assume \( A \) has size \( n = 101 \), producing a final \( h \) of
\[ h = 10260 \mod 101 = 59 \] (8.2)
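The fold-and-add procedure is easy to express in code. The following Python sketch reproduces the worked example; the function name and the string-concatenation pairing are illustrative choices.

```python
def fold_and_add(key, table_size=101, overflow_prime=125299):
    # Map each character to its ASCII code.
    codes = [ord(c) for c in key]
    # Bind pairs of codes by concatenating their decimal digits,
    # e.g., (83, 117) -> 83117; a trailing single code stands alone.
    pairs = [int("".join(str(c) for c in codes[i:i + 2]))
             for i in range(0, len(codes), 2)]
    # Fold and add, dividing by a prime slightly larger than the
    # largest possible pair (zz -> 122122) after each add.
    total = pairs[0]
    for p in pairs[1:]:
        total = (total + p) % overflow_prime
    # The final hash is the remainder modulo the (prime) table size.
    return total % table_size

print(fold_and_add("Subramanian"))  # -> 59, matching Eq. 8.2
```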
Other useful hash functions exist. For example, we could convert \( k_i \) to a numeric sequence, square the sequence, and use the middle digits modulo the hash table size for \( h \). Or, we could convert the numeric sequence to a different base, and use the converted value modulo the hash table size for \( h \).
### 8.7 Hash Value Distributions
Given a hash table size of \( n \) used to hold \( r \) records, what is the likelihood that
1. No key hashes to a particular address in the table.
2. One key hashes to a particular address.
3. Two keys hash to a particular address.
and so on? Assume our hash function uniformly distributes its hash values. For any single key the probability it hashes to a given address is \( b \), and the probability that it doesn’t hash to that address (i.e., it hashes to some other address) is \( a \).
\[ b = \frac{1}{n}, \quad a = 1 - \frac{1}{n} \] (8.3)
Given \( a \) and \( b \), suppose we insert two keys into the hash table. We can compute individual cases, for example, the probability that the first key “hits” an address and the second key “misses”, or the probability that both keys hit.
\[ ba = \frac{1}{n} \left(1 - \frac{1}{n}\right) = \frac{1}{n} - \frac{1}{n^2} \]
\[ bb = \frac{1}{n} \frac{1}{n} = \frac{1}{n^2} \] (8.4)
What is the probability that \( x \) of \( r \) keys hash to a common address? First, we need to determine how many ways there are to arrange \( x \) hits in a sequence of \( r \) keys. This is the binomial coefficient, read “\( r \) choose \( x \)”.
\[ C = \binom{r}{x} = \frac{r!}{x!(r-x)!} \] (8.5)
Given \( C \), the probability of \( x \) hits in \( r \) keys at a common address is
\[
C \cdot b^x \cdot a^{r-x} = C \left( \frac{1}{n} \right)^x \left( 1 - \frac{1}{n} \right)^{r-x}
\] (8.6)
Because of the \( r! \) in its equation, \( C \) is expensive to compute. Fortunately, the Poisson distribution \( \Pr(x) \) does a good job of estimating our probability.
\[
C \cdot b^x \cdot a^{r-x} \approx \Pr(x) = \frac{(r/n)^x \cdot e^{-(r/n)}}{x!}
\] (8.7)
Since \( x \) is normally small, the \( x! \) in the denominator is not an issue. Consider an extreme case, where we want to store \( r = 1000 \) keys in a hash table of size \( n = 1000 \). Here, \( r/n = 1 \). We can use this ratio to calculate \( \Pr(0) \), the probability an address is empty, \( \Pr(1) \), the probability one key hashes to an address, \( \Pr(2) \), the probability two keys hash to an address, and so on.
\[
\Pr(0) = \frac{1^0 \cdot e^{-1}}{0!} = 0.368
\]
\[
\Pr(1) = \frac{1^1 \cdot e^{-1}}{1!} = 0.368
\]
\[
\Pr(2) = \frac{1^2 \cdot e^{-1}}{2!} = 0.184
\] (8.8)
Based on these probabilities, and given our hash table size of \( n = 1000 \), we expect about \( n \cdot \Pr(0) = 1000 \cdot 0.368 = 368 \) entries that are empty, \( n \cdot \Pr(1) = 368 \) entries holding 1 key, \( n \cdot \Pr(2) = 184 \) entries that try to hold 2 keys, and so on.
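A quick way to check these figures is to evaluate the Poisson approximation directly; the following Python snippet is a minimal sketch.

```python
import math

def poisson(x, r, n):
    """Pr(x): probability that exactly x of r keys hash to a given
    address in a table of size n, assuming uniform hashing (Eq. 8.7)."""
    lam = r / n
    return lam ** x * math.exp(-lam) / math.factorial(x)

r = n = 1000
for x in range(3):
    p = poisson(x, r, n)
    print(f"Pr({x}) = {p:.3f}, expected entries: {n * p:.0f}")
# Pr(0) = 0.368 (368 entries), Pr(1) = 0.368 (368), Pr(2) = 0.184 (184)
```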
### 8.8 Estimating Collisions
Consider our previous example with \( r = n = 1000 \). How many collisions do we expect to see in this situation? To answer this, we use the following hash table breakdown.
| 368 entries, 0 keys | 368 entries, 1 key | 264 entries, > 1 key |
| --- | --- | --- |
| 0 recs inserted | 368 recs inserted | 1000 - 368 = 632 recs hash here |
|  |  | 264 recs accepted, 632 - 264 = 368 recs collide |
\( n \cdot \Pr(0) = 368 \) entries in the table hold no keys, and \( n \cdot \Pr(1) = 368 \) entries hold exactly 1 key. This means \( 1000 - n \cdot \Pr(0) - n \cdot \Pr(1) = 264 \) entries try to hold more than 1 key. These 264 entries receive the remaining \( 1000 - 368 = 632 \) keys: 264 of them are stored, and \( 632 - 264 = 368 \) collide.
### 8.9 Managing Collisions
Table 8.2 shows that, even for very low packing densities, some collisions will still occur. Because of this, we need ways to manage a collision when it happens. We look at two common approaches: progressive overflow and multi-record buckets.
### 8.9.1 Progressive Overflow
One simple way to handle a collision on insertion is to hash a record’s key, and if the resulting address \( h \) is already occupied, to walk forward through the table until an empty position is found.
To delete a record, we find and remove it. We also mark its position as dirty to remember that, although this position is empty, it was previously occupied.
```plaintext
progressive_insert(rec, tbl, n)
Input: rec, record to insert; tbl, hash table; n, table size
num = 0 // Number of insertion attempts
h = hash( rec.key )
while num < n do
if tbl[ h ] is empty then
tbl[ h ] = rec // Store record
break
else
h = ( h + 1 ) % n // Try next table position
num++
end
end
```
```plaintext
progressive_delete(key, tbl, dirty, n)
Input: key, key to remove; tbl, hash table; dirty, dirty entry table; n, table size
h = progressive_search( key, tbl, dirty, n )
if h != false then
tbl[ h ] = empty // Set table position empty
dirty[ h ] = true // Mark table position dirty
end
```
```plaintext
progressive_search(key, tbl, dirty, n)
Input: key, key to find; tbl, hash table; dirty, dirty entry table; n, table size
num = 0 // Number of compare attempts
h = hash( key )
while num < n do
if tbl[ h ] is occupied and key == tbl[ h ].key then
return h // Target record found at position h
else if tbl[ h ] is empty and !dirty[ h ] then
return false // Search failed
else
h = ( h + 1 ) % n // Try next table position
num++
end
end
return false // Search failed
```
To search for a record, we hash its key to get \( h \), then search from position \( h \) forward. If we find the record, the search succeeds. If we search the entire table without finding the record, the search fails. If we find an empty position whose dirty bit isn’t set, the search also fails.
Why does the search stop at empty positions that aren’t dirty, but jump over empty positions that are dirty? Suppose we insert three records A, B, and C that all hash to
8.9. Managing Collisions
the same position \( h \). A and B form a run in the table, a block of records that C must step over to insert itself (Fig. 8.6a).
Next, we delete B, then search for C. The run that forced C to position \( h + 2 \) is gone (Fig. 8.6b). The search algorithm wants to follow C’s insertion path to find it. If we stopped at any empty entry, we would fail to find C. Marking position \( h + 1 \) as dirty tells the search algorithm, “Although this position is empty, it may have been part of a run when C was inserted, so keep searching.”
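The three routines above translate directly into runnable form. Below is a minimal Python sketch of progressive overflow with dirty bits; the class and helper names are our own.

```python
EMPTY = None

class ProgressiveTable:
    """Open addressing with linear probing and dirty bits, following
    the pseudocode above. hash_fn is any function key -> int."""
    def __init__(self, n, hash_fn=hash):
        self.n = n
        self.tbl = [EMPTY] * n
        self.dirty = [False] * n
        self.hash_fn = hash_fn

    def insert(self, key, rec):
        h = self.hash_fn(key) % self.n
        for _ in range(self.n):
            if self.tbl[h] is EMPTY:
                self.tbl[h] = (key, rec)     # store record
                return True
            h = (h + 1) % self.n             # try next table position
        return False                         # table is full

    def search(self, key):
        h = self.hash_fn(key) % self.n
        for _ in range(self.n):
            slot = self.tbl[h]
            if slot is not EMPTY and slot[0] == key:
                return h                     # found at position h
            if slot is EMPTY and not self.dirty[h]:
                return None                  # clean empty slot: fail
            h = (h + 1) % self.n             # step over runs and dirty slots
        return None

    def delete(self, key):
        h = self.search(key)
        if h is not None:
            self.tbl[h] = EMPTY
            self.dirty[h] = True             # remember prior occupancy
```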
Progressive overflow is simple to understand and implement, but it has a number of serious disadvantages.
1. The hash table can become full, and if it does, it’s very expensive to increase. Since the hash function divides by the table size \( n \), increasing \( n \) changes every key’s hash value. This means we must remove and re-insert every record if we resize the table.
2. Runs form as records are inserted, increasing the distance a record needs to walk from its initial hash position \( h \) during insertion.
3. Runs can merge with one another, forming very long super-runs.
Experimental analysis shows that, because of long run lengths, a table > 75% full deteriorates to \( O(n) \) linear search performance. Since deletion leaves dirty locations that a search must pass over, if a table is ever > 75% full, searches will run in \( O(n) \) time regardless of the number of records the table currently holds.
### 8.9.2 Multi-Record Buckets
Another way to reduce collisions is to store more than one record in each hash table entry. For example, each entry could be implemented as an expandable array or a linked list (a bucket) capable of holding \( b > 1 \) records. Insertion and deletion work identically to a simple hash table, except that we no longer need to worry about exceeding the capacity of a table position.
To search for key \( k \) with hash value \( h \), we load the entire bucket \( A[h] \) and scan it using linear search, binary search, or whatever strategy we’ve implemented to try to find a target record.
Do buckets really reduce collisions? That is, for a table that can hold a fixed number of records, does reorganizing it to use buckets reduce the collision rate, compared to a simple hash table that holds one record per table entry?
If we use buckets, the packing density of \( A \) is now \( r/(bn) \), where \( n \) is the table size and \( b \) is the maximum number of records each table position can hold. Suppose we try to insert \( r = 700 \) records into a simple hash table with \( n = 1000 \) entries. Table 8.2 reports a collision rate of 28.1% for a packing density of \( r/n = 700/1000 = 70\% \). Suppose we instead built a hash table with \( n = 500 \) entries, each of which can hold \( b = 2 \) records. The packing density \( r/(bn) = 700/(2 \cdot 500) = 0.7 \) is the same 70%. What is its expected collision rate?
Using the Poisson equation (Eq. 8.7), we can compute the expected number of table entries that hold 0 keys, 1 key, 2 keys, and so on. Recall that Poisson uses the ratio of keys to table entries. For the simple hash table this is \( r/n = 700/1000 = 0.7 \), and for the hash table with buckets it is \( r/n = 700/500 = 1.4 \).
For the simple hash table:

\[
\begin{align*}
\text{Pr}(0) &= \frac{0.7^0 e^{-0.7}}{0!} = 0.497 \\
\text{Pr}(1) &= \frac{0.7^1 e^{-0.7}}{1!} = 0.348 \\
\text{Pr}(2) &= \frac{0.7^2 e^{-0.7}}{2!} = 0.122
\end{align*}
\]

For the table with \( n = 500 \) buckets of size \( b = 2 \):

\[
\begin{align*}
\text{Pr}(0) &= \frac{1.4^0 e^{-1.4}}{0!} = 0.247 \\
\text{Pr}(1) &= \frac{1.4^1 e^{-1.4}}{1!} = 0.345 \\
\text{Pr}(2) &= \frac{1.4^2 e^{-1.4}}{2!} = 0.242
\end{align*}
\]

The second set of equations describes the bucket table. It has \( n \cdot \text{Pr}(0) \approx 124 \) entries that hold no keys, \( n \cdot \text{Pr}(1) \approx 172 \) entries that hold 1 key, and \( n \cdot \text{Pr}(2) \approx 121 \) entries that hold 2 keys. \( 500 - 124 - 172 - 121 = 83 \) entries try to hold more than 2 keys. \( 700 - 172 - (2 \cdot 121) = 286 \) keys hash to these positions, of which \( 2 \cdot 83 = 166 \) are stored and \( 286 - 166 = 120 \) collide, for a collision rate of \( 120/700 = 17.1\% \).
So, by simply reorganizing 1000 record slots into a table of 500 two-record buckets, we reduce the collision rate from 28.1% to 17.1%, or from 197 collisions to 120 collisions.
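The same Poisson reasoning can be packaged as a small function to compare collision rates for different bucket sizes; a sketch follows (the function name and the 200-term cutoff are arbitrary choices).

```python
import math

def collision_rate(r, n, b=1):
    """Poisson estimate of the fraction of r keys that collide when
    hashed into n table entries, each holding at most b records."""
    lam = r / n
    pr = lambda x: lam ** x * math.exp(-lam) / math.factorial(x)
    # An entry that receives x keys stores min(x, b) of them;
    # keys beyond capacity collide.
    stored = n * sum(min(x, b) * pr(x) for x in range(200))
    return (r - stored) / r

print(f"{collision_rate(700, 1000, b=1):.1%}")  # -> 28.1% (about 197 of 700 keys)
print(f"{collision_rate(700, 500,  b=2):.1%}")  # -> 17.0% (about 120 of 700 keys)
```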
Using multi-record buckets still poses problems for efficiency. In particular, as \( r \gg n \) records are added to the table, each bucket grows long, increasing the time needed for search, for insertion (to check a bucket for duplicate keys), and for deletion (to find the record to remove). We might be tempted to increase the size \( n \) of the hash table, but this has the same problem that we saw with progressive overflow: changing \( n \) changes the hash function, forcing us to remove and re-insert the table’s records if we resize it.
A FPGA-based Control-Flow Integrity Solution for Securing Bare-Metal Embedded Systems
Nicolò Maunero*, Paolo Prinetto*, Gianluca Roascio*, Antonio Varriale†
*Dipartimento di Automatica e Informatica, Politecnico di Torino, Turin, Italy
†Blu5 Labs Ltd., Malta
Cybersecurity National Laboratory, Consorzio Interuniversitario Nazionale per l’Informatica (CINI)
{nicolo.maunero, paolo.prinetto, gianluca.roascio}@polito.it
av@blu5labs.eu
Abstract—Memory corruption vulnerabilities, mainly present in C and C++ applications, may enable attackers to maliciously take control over the program running on a target machine by forcing it to execute an unintended sequence of instructions present in memory. This is the principle of modern Code-Reuse Attacks (CRAs) and of famous attack paradigms such as Return-Oriented Programming (ROP) and Jump-Oriented Programming (JOP). Control-Flow Integrity (CFI) is a promising approach to protect against such runtime attacks. Recently, many CFI-based solutions have been proposed, resorting to both hardware and software implementations. However, many of these solutions are hardly applicable to microcontroller systems, which are often very resource-limited. The paper presents a generic, portable, and lightweight CFI solution for bare-metal embedded systems, i.e., systems that execute firmware directly from their Flash memory, without any Operating System. The proposed defense mixes software and hardware instrumentation and is based on monitoring the Control-Flow Graph (CFG) with an FPGA connected to the CPU. The solution, applicable in principle to any architecture which possesses an FPGA, forces all control-flow transfers to be compliant with the CFG, and preserves the execution context from possible corruption when entering unpredictable code such as Interrupt Service Routines (ISRs).
Index Terms—security, code-reuse attacks, return-oriented programming, ROP, JOP, embedded systems, microcontrollers, firmware, bare-metal, backward edges, forward edges, interrupt
I. INTRODUCTION
Embedded devices are nowadays playing a central role in our lives, as they control most of the objects surrounding us. In addition, such systems create a network of connections that goes far beyond simple isolated LANs and links up devices all over the world. A huge amount of sensitive data is thus exchanged, and related security and privacy issues must be addressed.
In addition to communication security, a relevant aspect is the protection of the devices themselves and their resilience to unauthorised intrusions. Physical security is certainly a first step, but not enough, since vulnerabilities may be contained in the code that the systems execute. Many of these vulnerabilities derive from the widespread use of very powerful languages such as C and C++. These languages guarantee a high degree of low-level control, but at the same time they allow programmers to freely manipulate memory pointers, so that common weaknesses such as buffer overflows [1] or dangling pointers [3] arise.
These vulnerabilities open the door to a family of exploits commonly known as Code-Reuse Attacks (CRA), in which the flow of the program is redirected to portions of code already present in memory but not intended to be executed in that order. Return-Oriented Programming (ROP) [44] [13] [40] and Jump-Oriented Programming (JOP) [10] [18] are attack paradigms belonging to this category. In a paper of 2005 by Abadi et al. [7], Control-Flow Integrity (CFI) was suggested as a basic defence approach. CFI states that every control-flow transfer occurring during the execution of a program must target a valid destination, as stated in its Control-Flow Graph (CFG) computed ahead of time. Basically, the program behaviour is observed by an online monitor (software or hardware), which is able to ensure that no transfer happens out of those established in its CFG.
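Conceptually, the CFI check reduces to a membership test against the statically computed edge set of the CFG. The toy Python sketch below is ours, not the paper's, and the site labels are hypothetical.

```python
# The edge set would be extracted from the program's CFG ahead of time;
# the site labels here are invented for the example.
VALID_EDGES = {
    ("main+0x24", "parse"),       # a call site and its callee
    ("parse+0x58", "main+0x28"),  # the matching return edge
}

def check_transfer(source, target):
    # Allow the transfer only if the CFG contains this exact edge.
    if (source, target) not in VALID_EDGES:
        raise RuntimeError(f"CFI violation: {source} -> {target}")

check_transfer("main+0x24", "parse")  # passes silently
```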
In the literature, several implementations of CFI have been presented. Purely software solutions are mostly based on code instrumentation [9] [17], with additional checks on the destinations of control-flow transfers. These methods can, however, result in a considerable overhead in terms of added instructions, allocated data structures and/or execution times, often not acceptable for real-time systems with limited resources. In other cases, solutions based on multitasking have been proposed [22] [31] [54], very modular but inapplicable when the code is directly executed by the processor without the intervention of an Operating System (bare-metal machines).
Hardware-based CFI solutions [50] [23] [20] try to overcome these limits by proposing CFI monitors directly installed at the hardware level. The program executes without spending time on checks, which are performed almost transparently in a parallel and much faster way. Moreover, sensitive information is not even visible to the main execution, which cannot access it in any way.
While hardware-based CFI is advantageous for these reasons, providing support for a hardware unit that directly accesses pipeline registers [25] or the bus between the core and the instruction cache [21] is unaffordable, as it requires modifying the internal design of the microcontrollers. This can be avoided if the hardware is equipped with a reconfigurable component, e.g., an FPGA, which can be used to implement a CFI monitor without touching the internal architecture of the processors.

The goal of the present work is therefore to propose a solution for bare-metal microcontroller systems that exploits the presence of an FPGA onto which the CFI monitor can be synthesised. The solution lies halfway between software techniques and hardware techniques, with a minimal binary instrumentation based on single write machine instructions: these are used to communicate to the external reprogrammable device the information about the status of the CFG. The monitor validates the information received and stops the processor activity when a deviation is detected, via a security violation hard fault. The solution is in principle applicable to any architecture provided with an FPGA, and does not involve modifications to the internal structure of the processors.
Outline. In the remainder of this Section, we offer a brief digression on the CPU-FPGA cooperation trend, to better contextualize our work; Section II provides some technical background on Control-Flow Hijacking attacks; Section III presents the main state-of-the-art hardware-based CFI solutions; Section IV motivates our work and lists the challenges that are addressed; Section V presents our FPGA-based solution; Section VI lists the experimental results obtained from a preliminary implementation; Section VII finally concludes the paper.
A. The CPU-FPGA cooperation
According to the latest Gartner research about the future of Infrastructure and Operations [6], FPGAs will be among the top 10 technologies to drive innovation through 2024.

The most recent strategies depict a primary interest in using FPGAs in server-side hybrid chips. Nevertheless, the rise of 5G technology, the consequent spread of IoT and OT infrastructures, and the need for real-time insights and localised actions are forcing the deployment of edge-computing solutions that process data closer to the source of generation. It is expected that, over the next few years, hardware vendors will focus on delivering computing hardware to execute complex, compute-intensive functions at the edge. In this context, hybrid chips based on CPU and FPGA components will be the easiest and most power/cost-effective way to meet the new edge-computing hardware requirements. Although there are still few examples on the market, mostly provided by FPGA vendors who embed ARM or NIOS cores in their devices, hybrid CPU+FPGA chips are expected to become increasingly popular in the next years.
FPGA and CPU devices are already employed in many projects as separate components interconnected through a parallel bus and mounted on the same electronic board. In most cases, the FPGA is mapped as a memory device, whereas the CPU acts as the master of the system. However, the mobile terminals market is driving a new trend, which aims to replace the parallel bus with serial differential lines in order to reduce the final device size and, at the same time, to increase the data transfer rate. In the past decade, this process already happened in the PC world, when the PCI parallel bus was replaced by the PCIe bus based on differential lanes. In terms of architectural access, this is a migration from memory-mapped devices to port-mapped devices.

In any case, since the new serial buses affect the memory components, it is expected that CPUs will adapt their instruction sets to atomically manage memory access with a single-instruction paradigm, either mapping the LOAD/STORE instructions to the new serial buses or introducing IN/OUT instructions to manage serial memory access.
II. BACKGROUND
The IEEE Spectrum ranking of top programming languages [4] reports C and C++ as respectively the 2nd and 3rd most used languages in the embedded system domain still in 2019. The reasons may be many, but there is no doubt that one of the great advantages in their use is the availability of low-level control structures that allow deep optimisation of resource usage without losing the advantages of high-level statements. However, the direct management of data structures in memory and the free manipulation of pointers give rise to a large number of vulnerabilities. The lack of memory safety capabilities (such as strong typing, present in other modern languages) enables attackers to exploit these bugs by maliciously altering the program’s behaviour.

One of the most famous vulnerabilities of this kind is buffer overflow [1], which is caused by incrementing or decrementing a pointer without proper boundary checks. This may result in out-of-bounds writes which corrupt adjacent data on the stack, the heap or other zones. Similar problems may arise when indexing bugs are present in the code, i.e., when boundary checks over an index for a given data structure are missing or incomplete. Indexing bugs derive from programming errors collectively known as integer-related errors, such as integer overflow [2], signedness errors or wrong pointer casting.
Famous are also use-after-free vulnerabilities [3], for which a pointer is mistakenly used after the area it points to has been freed and released to the memory management system. After the free, the pointer still points to the deallocated region, which in the meanwhile may have been written with other data. The consequence is that newly allocated data in the heap may be corrupted by accessing it through these dangling pointers.

The memory vulnerabilities described above may enable attackers to maliciously take control over the program by forcing it to execute an unintended sequence of instructions. This exploit is generally called Arbitrary Code Execution (ACE). To achieve ACE, attackers tamper with the instruction pointer, which in most architectures is referred to as the Program Counter (PC). The PC stores the address of the next instruction to be executed: being able to control its content means being able to decide the next instruction to be executed.

Control over the instruction pointer can be taken, for instance, by corrupting the memory operand of an instruction that copies that value into the PC (indirect control-flow transfer instructions). RET and some formats of CALL and JMP are examples of such instructions but, in general, any instruction that treats the PC register as a destination register for a computing operation can be exploited.
The PC value is corrupted to point to the attacker’s payload. This was traditionally injected, together with the corrupted instruction pointer, into the program’s memory (Code Injection) thanks to stack memory vulnerabilities [38]. Such exploits were made practically impossible after the wide adoption of architectural countermeasures like Data Execution Prevention (DEP) [47] and the Write XOR Execute policy [48], for which a memory location cannot be both writable (W) and executable (X) at runtime. Attackers then devised a new attack paradigm, in which the payload is composed of snippets of code already present in the program memory, but not meant to be executed in that order. This was how Code-Reuse Attacks (CRA) were born. In a paper of 2007 by Shacham et al. [44], the authors theorized that “in any sufficiently large body of executable code there will exist sufficiently many useful code sequences that an attacker who controls the stack will be able [...] to cause the exploited program to undertake arbitrary computation”. The control flow can be diverted to execute a series of small sequences of instructions, each ending with an indirect control-flow transfer instruction, known as gadgets. In the large codebases present in every C application, such as libc, the number of gadgets that can be extracted is high, and the attackers achieve maximum expressiveness [49].
This is the basic idea behind a famous attack paradigm known as Return-Oriented Programming (ROP) [40]. In ROP, the attackers write their malicious code using, instead of instructions, the gadgets found in the code of the system to be attacked as basic “bricks”. These gadgets may perform any kind of general-purpose action, such as copying values between registers, loading values from memory, or performing arithmetic and/or logic operations. The common property they must have is that their last instruction must always be a RET instruction. Once the set of gadgets is identified, the attackers fill the stack with a list of fake return addresses by exploiting a memory vulnerability (Figure 1). Each of the injected addresses points to the beginning of one of the identified gadgets.
The attack starts when the function that contains the vulnerability returns: by executing the RET, the processor copies the first corrupted value into the PC, and the program flow is redirected to the first gadget of the sequence. Once the first gadget is finished, another RET is executed, that carries the flow to the second gadget, then to the third, and so on (Figure 2).
ROP was demonstrated to be effective over many different architectures [13] [27] [16] [15] [33], and then the concept was extended to non-RET-ended gadgets. Indirect formats of JMP and CALL can be used as well to reach instructions at will. Concepts like Jump-Oriented Programming (JOP) [10] [18], Call-Oriented Programming (COP) [42], and others [43] [30] were introduced.
III. RELATED WORK
The literature has been enriched with a considerable amount of CFI solutions, ranging from purely software implementations [7] [9] [17], to techniques that take advantage of features made available by Operating Systems [22] [31] [32] [54], to hardware-based solutions. Since the latter is the field of our proposed technique, an overview of the most significant examples follows. The various techniques can be classified into families.
Branch target or instruction protection. One way to prevent code-redirection attacks is to make indirect branch operations protected by a key, which the external attacker does not know. The authors of [39] propose to insert a module in the architecture that automatically encrypts the routine return address before pushing it onto the stack at call time, and that decrypts it when the RET is executed. Such an on-the-fly pointer encryption/decryption mechanism is also presented in [35]. In [36], a slightly different approach is instead adopted, which involves the encryption not of the addresses but of the indirect jump target instructions. This encryption is done at load time, when the code is loaded in memory. At runtime, every time an indirect branch is performed, the processor...
In light of the above, from our point of view it is important to propose a solution that:
- aims at protecting microcontroller-based systems even when they directly execute a firmware stored in the Flash memory without the support of an Operating System (bare-metal), being thus independent of the facilities offered by OS’s, such as multitasking or privileged execution levels;
- exploits the advantages of a hardware-based defense without requiring the design of custom microcontrollers, by mixing binary instrumentation techniques and low-level runtime monitoring based on reprogrammable hardware (FPGA);
- sets up an efficient defense mechanism that does not rely on secrets of any kind (e.g., encryption keys or secure identifiers) to be hidden by memory protection mechanisms or similar;
- cares about the strict requirements that these systems have in terms of resource occupation and execution times, and therefore aims at minimally impacting the system configuration and behavior, by properly selecting the edges to be protected;
- takes into consideration the problem of hardware interrupts, as explained in [37]: if not properly protected, the context of the program, including sensitive elements from the CFI point of view, can be corrupted with consequent loss of effectiveness of the solution.
V. Our Approach
The proposed solution aims at ensuring that (i) all branches target a valid location, and (ii) the program context is not corrupted during sudden calls to Interrupt Service Routines (ISRs). The implemented CFI monitor is a module synthesised on an FPGA connected to the CPU via a serial or parallel interface. An instrumented version of the program runs on the CPU and awakes the monitor by sending sensitive data about branches and context. In parallel, without stopping the processor activity, the monitor processes these data and interrupts the CPU only if they are not compliant with the expected ones. The CFI monitor is the only IP present on the reconfigurable hardware device. The cooperation system between CPU and FPGA is depicted in Figure 3.
Fig. 3: The CPU-FPGA cooperation system for protection.
The program is instrumented so that single OUT/STORE instructions (called hereinafter write instructions for simplicity) are added in specific points of the code to communicate to the monitor two kinds of data:
- labels to uniquely identify a position within the code (for edge protection);
- values contained in specific registers (for context protection).
Together with the data, the CPU must also communicate an opcode, which identifies the kind of data provided and instructs the monitor on the right operation to be performed.
The remainder of this Section is organised as follows: we first introduce a classification of the CFG edges to define those needing protection. Then, the problem of context corruption and why context protection is needed are explained. The two phases of the protection (offline and online) are eventually presented, followed by some remarks about the architecture and the actual implementation of the proposed solution.
**Classification and Identification of Edges**
As already mentioned, the CFG is the set of connections between the basic blocks (BB) of the program through edges that correspond to control-flow transfers. Edges can be classified depending on the transfer instruction that generates them. They can be first distinguished in forward edges and backward edges, where the latter are edges connecting a BB to another which immediately follows (in terms of static position within the code) a block visited previously. These are typically the return edges from a routine. “Forward edges” refers to all the other edges that connect a BB to another elsewhere in the code. In most cases, these are the calling edges of a routine, but they can also be jumping edges within a same routine.
We refer to the BB pointed to by an edge as the target of that edge. From this definition, we can define direct edges and indirect edges. Direct edges are edges whose target is expressed as a label encoded within the instruction itself, while indirect edges are edges whose target is expressed by the value of a program datum.
An origin tree of an edge target is a tree whose root is the location (register or memory address) used as argument of the instruction generating the edge, and which traces all the locations used to compose the value of the target back to its origin. Figure 4 shows a snippet of code in an ARM-Assembly-like language ending with the edge-generating instruction BX R3 (an indirect jump to the address stored in R3), together with the origin tree for R3. For a direct edge, the origin tree reduces to the constant target encoded in the instruction itself. We can assume the entire code is already available in a single binary stored in Flash, and that there are no modules linked at runtime. We can also assume that the code remains constant during activity. The result is that the construction of the origin tree is always possible, no matter how complex it is to build. This represents a key point for the proposed protection mechanism.
If the origin tree is always entirely reconstructable, then it is possible to list it all, from the root to the leaves. The leaves of this tree will be values that cannot further derive from other locations, i.e., they are either constant values or inputs taken from the outside. Assuming that an external input can never be used to compose a code pointer (because even in the case of a switch-case statement over an input, there is always a translation into a readable or predictable constant value, which then becomes the leaf), or alternatively imposing this as a design rule, the set of targets of an edge is always finite and enumerable, and that set is a strict subset of all possible code locations. In the direct-edge case, the cardinality of this set is 1, while it is greater than or equal to 1 in the indirect-edge case. It follows (and this is the point) that under these assumptions it is always possible to list all the destinations of all the edges of a CFG, and thus it is always possible to completely protect the integrity of the control flow.
With all these definitions introduced, it is finally possible to divide edges into insecure edges and secure edges, i.e., edges that need protection against control-flow hijacking and edges that do not. This is mainly important to reduce the number of code areas to be protected, a primary concern for embedded systems with limited resources.
We consider an edge insecure when its target has an origin tree that contains at least one node in an area at risk of corruption, i.e., the data memory (if we consider the code memory incorruptible). In other words, no matter what the leaves of the origin tree of its target are, an edge is insecure when its target is even partially composed of data coming from data memory. This immediately implies that all direct edges are secure, but so are all indirect edges composed of values that never leave the code domain (intended as the union of code memory and processor registers) for the data memory.
This approach can be considered conservative (think of the case in which a value is saved in memory and retrieved a few instructions later). To prevent this kind of false positive, it would be necessary to go further in the analysis of the code, to investigate the actual possibility of corruption between the store and the load instructions. However, this would mean taking into consideration a memory vulnerability database, and even looking only for the vulnerabilities known so far, this would not be trivial; moreover, unknown vulnerabilities would not be taken into account.
In conclusion, if an edge is insecure, then it must be instrumented so that a CFI monitor, at runtime, is able to decide whether it is actually pointing to one of its valid targets. In the case of an insecure forward edge, there is no way to say which of these targets is the right one, because this depends on the execution, so the monitor can do nothing but ensure that one of the valid targets is reached. In the case of an insecure backward edge, the monitor can instead enforce a single target because, in addition to storing all the possible destinations, it is also possible to store in the monitor the identifier of the BB to be executed at return time, so that the execution is forced to go back there.
**Interrupt Service Routines**
The assumptions made so far are valid only if one does not consider that the processor, at undefined moments of the execution, can jump to execute special routines to serve interrupt requests (Interrupt Service Routines, ISRs). As explained in [37], no static analysis can forecast in which order or where in the code these routines will be called, so they can never be part of a predefined CFG. Yet, ISRs are full-fledged routines, which operate on data and registers and which preserve the current program status by moving it into memory. The result is that the origin trees constructed by the static analysis described so far become invalid.
To preserve what has been assumed up to now, it must therefore be ensured that the execution context on entering an ISR is equal to the one on resuming the main program. To achieve this, an additional specific instrumentation is needed, based on the validation of the registers’ content, with a double check before and after the execution of the service routine.
**Protection Mechanism**
Like any other, our CFI solution comprises an offline phase and an online phase.
In the offline phase, the firmware to be protected is first compiled, then a static analysis identifies different categories of critical points in the Assembly code. Critical points are locations within the code that require the monitor’s intervention for control-flow verification in the online phase. At such points, some data must therefore be sent to the FPGA, i.e., a write instruction must be inserted. For each BB that contains a critical point, a unique identifier is produced and inserted into the code as a constant. The inserted write will therefore send the identifier of the BB, using as address a code that instructs the monitor. For edge protection, seven categories of critical points are identified:
1) Forward insecure edges with single target: the ID of the source BB is sent to the monitor before the transfer, and the ID of the target BB is sent after the transfer. Internally, the monitor combines the two IDs, and if the edge is valid, the execution can proceed, otherwise the CPU activity is immediately interrupted via a security fault using the interrupt line;
2) Backward insecure edges with single target: same as above;
3) Forward insecure edges with multiple targets: same as the case of single target, but here all target locations are instrumented;
4) Forward secure edge to a routine ending with a backward insecure edge with multiple targets: this transfer is not to be protected, but the ID of the BB to which the called routine must return is sent. In the monitor, the ID is pushed on top of a stack structure;
5) Backward insecure edges with multiple targets: same as 2), but the ID of the target BB must correspond to the ID sent as described in 4). In this regard, the top of the stack is popped and compared to the ID of the target. If a mismatch is found, the violation fault is triggered;
6) Forward insecure edge to a routine ending with a backward insecure edge with single target: again, as in 4), the return BB ID is sent, but also the ID of the target BB is sent after the transfer (to verify both caller identity at return time and validity of destination of the present call);
7) Forward insecure edge to a routine ending with a backward insecure edge with multiple targets: same as above, but here all possible return sites are instrumented.
For context protection, two categories of critical points are identified:
1) Entry point of an Interrupt Service Routine (ISR): a given number of consecutive writes are inserted as first instructions of the ISR, storing the content of registers which, upon entering an ISR, are automatically pushed by the processor architecture (e.g., in case of ARM, R0, R1, R2, R3, R12, LR, PC and the status register xPSR), plus the registers which are additionally used by that ISR. Internally, the monitor saves all these values on top of a dedicated stack structure;
2) Exit point of an Interrupt Service Routine (ISR): before leaving, the same number of writes performed at the entry point, for the same registers, are performed in reverse order. The program transfers the values from the top of its stack to the monitor, which compares the received values with the ones on top of its own dedicated stack. If a mismatch is found, a violation is notified through the interrupt line.
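To summarize the checks performed for these categories, the following Python sketch models the monitor's behaviour in software. The class, method names, and fault handling are our own invention and only approximate the hardware protocol described above.

```python
class CFIMonitor:
    """Software model of the FPGA monitor's checks (illustrative only;
    the opcode-to-method mapping is an assumption for the sketch)."""
    def __init__(self, edge_table):
        self.edges = edge_table      # set of (source_id, target_id) pairs
        self.id_stack = []           # return-site IDs (categories 4-7)
        self.reg_stack = []          # saved contexts for nested ISRs
        self.pending_source = None

    def edge_source(self, bb_id):
        # Source ID received before an insecure transfer (and the
        # strict timeout described later would start here).
        self.pending_source = bb_id

    def edge_target(self, bb_id):
        # Target ID received after the transfer: must form a CFG edge.
        if (self.pending_source, bb_id) not in self.edges:
            self.fault("transfer not compliant with the CFG")
        self.pending_source = None

    def push_return_site(self, bb_id):
        self.id_stack.append(bb_id)  # category 4/6: remember return BB

    def check_return(self, bb_id):
        # Category 5: the return must go back to the recorded site.
        if not self.id_stack or self.id_stack.pop() != bb_id:
            self.fault("return to unexpected site")

    def isr_enter(self, regs):
        self.reg_stack.append(list(regs))   # save context on ISR entry

    def isr_exit(self, regs):
        # Context must be identical on ISR exit.
        if self.reg_stack.pop() != list(regs):
            self.fault("context corrupted across ISR")

    def fault(self, why):
        # Models raising the security violation hard fault.
        raise SystemExit(f"security violation: {why}")
```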
After the instrumentation process described above, two items are available:
1) the instrumented executable binary;
2) a table containing all the instrumented edges, intended as a set of ID pairs (source BB, target BB).
The edge table is converted into a memory initialisation file (.mif) which is then used to produce a read-only memory (ROM) block to be placed inside the monitor architecture. The RT-level description of the monitor is synthesised into a bitstream used to program the FPGA.
Once all the sources are ready, as the last step of the offline phase, the programming part takes place: a secure boot loader both loads the instrumented version of the firmware and programs the FPGA, correctly setting the CPU-FPGA interface in order to allow the runtime interaction. The online phase now starts, with the FPGA acting as a monitor in response to the writes performed by the instrumented firmware.
In their communication, CPU and FPGA do not need to establish synchronization, as they share the same oscillator for the clock signal. Possibly, they may run at different frequencies, multiplying or dividing the oscillator frequency. In any case, this is decided once and for all during configuration, and both actors are aware of their relative speed.
The workflow of the analysis and instrumentation process is presented in Figure 5.
**Monitor Internal Structure**
In summary, the monitor relies on three different data structures:
- an *edge table* which encodes the information about all consented control-flow transfers, as pairs of source BB ID and target BB ID;
- a *secure ID stack*, where it pushes the identifiers received to protect backward insecure edges with multiple destinations;
- a *secure register stack*, where it pushes the context of the program upon entering an ISR and checks whether this has remained the same or has been corrupted upon exiting the ISR.
A central *control and check unit* decodes the commands coming from the CPU to generate the consequent reads and writes on these three storage blocks, and verifies, through a set of comparators, that the received data are the expected ones. As an output, the unit controls the *interrupt line*, which notifies the CPU that an attempt to redirect the control flow is in progress.
The unit also contains a *timer*, crucial for security. In fact, when protecting an edge, a very stringent timeout is started as soon as the source ID is received. To jump to any gadget in memory, the attacker must pass through one of the instrumented zones, because no trampoline remains unprotected after the instrumentation. When the attacker succeeds in tampering with the branch target and jumps to the payload, there is no instrumentation at that position, unless the jump is compliant with the CFG (but when an attack is performed, this is not the case). Therefore, the monitor assumes an attack when, at timeout, the ID has not yet been received. Since CPU and FPGA share the same clock source, the length of the timeout is just the time for the execution of a branch, plus the time required to complete the **OUT/STORE** instruction, possibly multiplied or divided according to the relative CPU-FPGA frequency.
The impossibility of accessing the FPGA during normal execution is set as a design rule to guarantee protection: the FPGA is considered a private resource unusable by the program, so any possible read or write from/to the FPGA is removed during the offline phase, in such a way that no accesses other than those provided by the protection are permitted.
The overall block diagram of the CFI monitor is depicted in Figure 6.
**Involved Overhead**
As shown, in terms of code, the defense is implemented simply by performing **write** instructions to the external device. Thus, the need to allocate memory to store CFG information is overcome, and the computational overhead necessary for validity checks is eliminated. Conceptually, the write instructions required are:
- just 1 for each instrumented location for edge protection;
- \( n \) for each instrumented location for context protection, where \( n \) is the number of registers pushed by default by the architecture upon entering an ISR, plus the registers pushed because used by the routine.
The word conceptually is key here, because to reach exactly 1 and \( n \) writes in each case, the architecture has to support specific features. In particular, additional machine instructions are needed when (i) the ISA does not support writing immediate values to immediate addresses, or (ii) mismatches are present in the width of the involved buses.
Concerning the hardware part of the defense, the overhead can be evaluated in terms of the amount of occupied area on the reprogrammable device. The proposed solution requires the CFI monitor to be the only module in the FPGA. The required resources are mostly memory resources, for the edge ROM and the two stacks for IDs and registers. These blocks must be properly dimensioned to accommodate all the edges and the maximum forecasted number of stackable IDs and registers. The additional logic, including the state machine, some comparators and some registers for intermediate data storage, occupies a marginal area, as shown in the next Section.
In terms of timing, the FPGA computation needs to be completed in the shortest time possible, in order to inform the processor about an attack as soon as possible. To achieve this, an intelligent encoding of the consented edges is adopted, which allows the table to be accessed with \( O(1) \) complexity (implementing it as a hash table) after a fast and lightweight combination of the source and target IDs.
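One possible realization of such an encoding (purely illustrative; the paper does not specify the actual bit layout) is to pack the two IDs into a single lookup key:

```python
def edge_key(source_id, target_id, id_bits=16):
    # Combine the two BB IDs into one word so the edge table can be
    # probed in O(1); the 16-bit ID width is an assumption.
    return (source_id << id_bits) | target_id

# Toy edge table: the monitor only needs a membership test.
valid_edges = {edge_key(3, 7), edge_key(7, 4)}
assert edge_key(3, 7) in valid_edges
```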
**Trading off Security and Complexity**
The features which may limit the feasibility of our solution are:
1) a too large execution time overhead due to the added instructions, so that it is no longer possible to meet some real-time constraints;
2) too much latency between the write of the sensitive data and the attack detection, so that the attacker can jump to dangerous code and perform destructive actions in that time window.
Both problems can be faced by trading off security and performance. In particular, the former problem can be tackled by the system designer, who could resort to a “partial” protection: insecure edges belonging to paths proven to be “critical” from the performance point of view could be left “unprotected”. This could be justified by a deeper analysis of code vulnerabilities or simply by accepting the risk of such a choice.
To address the latter problem, the designer should identify the code sections that, within the response time window of the monitor, may cause irreparable damage to the system functioning. These depend not only on the code, but also on the time the adopted architecture takes to execute it. If such dangerous sections are found, either the code is rewritten so that it becomes harmless, or the relative frequency between CPU and FPGA should be properly tuned, so that the monitor is faster than the attacker.
**VI. Experimental Results**
In this Section, some preliminary experimental results deriving from the implementation and testing of our solution on a real device are presented. For the evaluation, the SEcube™ Chip [5] by Blu5 Group® has been used. SEcube™ is an open security-oriented platform, implemented as a 3D SiP (System-in-Package) integrating three components:
- An STM32F4 microcontroller by STMicroelectronics™, embedding an ARM Cortex-M4 core, 2 MB of Flash and 256 KB of SRAM;
- A MachXO2 FPGA by Lattice Semiconductor™, hosting 7000 4-input look-up tables and 240 Kbits of embedded SRAM;
- An EAL5+ certified Smart Card;
The chip was designed as a secure processor that acts as a slave to offer a master (which can be the main processor of a smartphone or of a PC) cryptographic and secure storage functionalities. In this regard, it was designed to resist the most common physical attacks [26] [11]. SEcube™ was not chosen only for the presence of the ST microcontroller and the FPGA, but also because this type of embedded processor, given its typical uses, can be the victim of code-redirection attacks, as shown in [51] and [52].
Our solution has been tested on some specific benchmarks for embedded devices, made available by the MiBench platform\(^1\). On the website, there are several archives containing source code to be compiled on ARM platforms. We chose a set of 5 applications and, once the binaries were obtained, we performed the offline analysis and instrumentation process described in the previous Section. As a wrapper around the actual code, we implemented functions to start and stop the hardware timer present in the microcontroller, to measure the execution times before and after the instrumentation.
The physical implementation of the SEcube™ platform and the STM32F4 architecture required increasing the number of machine instructions for each write. As an example, the external parallel interface has a 16-bit data bus, so two accesses are required to send 32-bit values. In addition, the STR machine instruction does not support an immediate address, so this must be first copied into a register.
On the FPGA side, we were able to implement a version of the monitor with 1024 entries for each of the two stacks and 8192 entries for the ROM edge table. These dimensions were decided statically before the benchmarking process, and in any case they are much more than needed to host the critical information for each of the analyzed applications. As expected, we got an occupancy of 156 Kbits of the embedded FPGA memory (~69% of the total), which is to be attributed to the implementation of the three data structures. For the logic of our monitor, we got 185 occupied LUTs (~3% of the total), which is expected given the simplicity of the implemented functionalities.

\(^1\)http://vhosts.eecs.umich.edu/mibench/
In Table I, we report the data collected from the analysis of the benchmarks: execution times and instruction counts before and after the instrumentation process, the resulting overheads, and the size of the inputs given to each benchmark. In all experiments, the CPU ran at 180 MHz and the FPGA at 90 MHz. On SEcube™, the CPU and FPGA share the same clock oscillator, so their synchronization is natural.
Looking at the table, notice the very small amount of additional code (always less than 4%). Results for execution times are mixed, however: the first four benchmarks show less than 1% overhead, while the last one shows a very high overhead. This seems to contradict its percentage of added instructions, which remains as low as the others'. The discrepancy arises because the code contains very frequent indirect calls to functions consisting of only a few instructions, so the relative impact of the added code is much greater than in the other cases. This also shows that the execution impact depends on how the code is architected: no solution that includes even minimal instrumentation can avoid this effect, although ours greatly limits the percentage of total instructions added.
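To see why frequent indirect calls to tiny functions inflate the relative overhead, consider a hypothetical instrumented call site (monitor_write and RETURN_EVENT are illustrative helpers reused from the sketch above, not the paper's actual instrumentation):

```c
#include <stdint.h>

#define RETURN_EVENT 0xFFFFFFFFu      /* hypothetical event tag */
extern void monitor_write(uint32_t);  /* illustrative helper from above */

/* The callee does almost no work... */
static int add1(int x) { return x + 1; }

/* ...so the two notification writes around the indirect call can cost
   as much as the callee itself, even though only a handful of
   instructions were added to the binary. */
int call_indirect(int (*fp)(int), int x)
{
    monitor_write((uint32_t)(uintptr_t)fp); /* report the branch target */
    int r = fp(x);
    monitor_write(RETURN_EVENT);            /* report the return */
    return r;
}
```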
VII. CONCLUSIONS
In this paper, we presented a solution to guarantee the Control-Flow Integrity (CFI) of firmware running on bare-metal microcontrollers, which constitute a significant share of the embedded domain. The work was mainly aimed at mitigating the drawbacks of previous state-of-the-art solutions. Using a mixture of binary instrumentation and hardware-based supervision, the solution entrusts the instrumented binary with the sole task of informing a CFI monitor hosted on an FPGA about the status of the CFG, through simple additional write instructions at critical points. The hardware monitor is in charge of storing the information about the CFG and performing the computation needed for validation. As demonstrated by the experimental results, this technique greatly reduces the code overhead necessary for protection. No multitasking is required, and the protection can be implemented on very simple systems with minimal resources. In addition, the monitor is implemented on a reconfigurable hardware device, which frees the solution from the need to design custom CPU architectures to support the defense. The only constraint is the presence of reconfigurable hardware, but as explained in Subsection I-A, this is a widespread market trend.
VIII. ACKNOWLEDGMENTS
This work is supported in part by the European Union's Horizon 2020 research and innovation programme under grant agreement No. 830892, project SPARTA.
| Benchmark | Inputs | Time (no prot.) | Time (prot.) | Time overhead | # instr. (no prot.) | # instr. (prot.) | Instr. overhead |
|---|---|---|---|---|---|---|---|
| SHA | Message of 100 KB | 506849 µs | 666057 µs | < 10% | 20433 | 21413 | 3.62% |
| RIJNDAEL | Message of 100 KB | 1085568 µs | 1007854 µs | 7.01% | 25011 | 26009 | 3.49% |
| DIJKSTRA | Matrix of 100x100 int | 2880724 µs | 2894381 µs | 0.14% | 20166 | 20869 | 3.70% |
| STRINGSEARCH | 1331 strings (var. length) | 178616 µs | 180028 µs | 0.79% | 20080 | 20791 | 3.64% |
| BITCOUNT | 12800 int | 419543 µs | 1233227 µs | 193% | 20192 | 20944 | 3.72% |

TABLE I: Preliminary Experimental Results
Raising the Bar for Using GPUs in Software Packet Processing
Anuj Kalia, Dong Zhou, Michael Kaminsky∗, and David G. Andersen
Carnegie Mellon University and ∗Intel Labs
Abstract
Numerous recent research efforts have explored the use of Graphics Processing Units (GPUs) as accelerators for software-based routing and packet handling applications, typically demonstrating throughput several times higher than using legacy code on the CPU alone.
In this paper, we explore a new hypothesis about such designs: for many such applications, the benefits arise less from the GPU hardware itself than from the expression of the problem in a language such as CUDA or OpenCL that facilitates memory latency hiding and vectorization through massive concurrency. We demonstrate that in several cases, after applying a similar style of optimization to algorithm implementations, a CPU-only implementation is, in fact, more resource efficient than the version running on the GPU. To "raise the bar" for future uses of GPUs in packet processing applications, we present and evaluate a preliminary language/compiler-based framework called G-Opt that can accelerate CPU-based packet handling programs by automatically hiding memory access latency.
### 1 Introduction
The question of matching hardware architectures to networking requirements involves numerous trade-offs between flexibility, the use of off-the-shelf components, and speed and efficiency. ASIC implementations are fast, but relatively inflexible once designed, and must be produced in large quantities to offset the high development costs. Software routers are as flexible as code, but have comparatively poor performance, in packets-per-second (pps), as well as in cost (pps/$) and energy efficiency (pps/watt). Both ends of the spectrum are successful: Software-based firewalls are a popular use of the flexibility and affordability of systems up to a few gigabits per second; commodity Ethernet switches based on high-volume ASICs achieve seemingly unbeatable energy and cost efficiency.
In the last decade, several potential middle grounds emerged, from network forwarding engines such as the Intel IXP, to FPGA designs [12], and, as we focus on in this paper, to the use of commodity GPUs. Understanding the advantages of these architectures, and how to best exploit them, is important both in research (software-based implementations are far easier to experiment with) and in practice (software-based approaches are used for low-speed applications and in cases such as forwarding within virtual switches [13]).
Our goal in this paper is to advance understanding of the advantages of GPU-assisted packet processors compared to CPU-only designs. In particular, noting that several recent efforts have claimed that GPU-based designs can be faster even for simple applications such as IPv4 forwarding [23, 43, 31, 50, 35, 30], we attempt to identify the reasons for that speedup. At the outset of this work, we hypothesized that much of the advantage came from the way the GPUs were programmed, and that less of it came from the fundamental hardware advantages of GPUs (computational efficiency from having many processing units and huge memory bandwidth).
In this paper, we show that this hypothesis appears correct. Although GPU-based approaches are faster than a straightforward implementation of various forwarding algorithms, it is possible to transform the CPU implementations into a form that is more resource efficient than GPUs.
For many packet processing applications, the key advantage of a GPU is not its computational power, but that it can transparently hide the 60-200ns of latency required to retrieve data from main memory. GPUs do this by exploiting massive parallelism and using fast hardware thread switching to switch between sets of packets when one set is waiting for memory. We demonstrate that insights from code optimization techniques such as group prefetching and software pipelining [17, 51] apply to typical CPU packet handling code to boost its performance. In many cases, the CPU version is more resource efficient than the GPU, and delivers lower latency because it does not incur the additional overhead of transferring data to and from the GPU.
Finally, to make these optimizations more widely usable, both in support of practical implementations of software packet processing applications, and to give future research a stronger CPU baseline for comparison, we present a method to automatically transform data structure lookup code to overlap its memory accesses and computation. This automatically transformed code is up to 1.5-6.6x faster than the baseline code for several common
lookup patterns, and its performance is within 10% of our hand-optimized version. By applying these optimizations, we hope to "raise the bar" for future architectural comparisons against the baseline CPU-based design.
### 2 Strengths and weaknesses of GPUs for packet processing
In this section, we first provide relevant background on GPU architecture and programming, and discuss the reasons why previous research efforts have used GPUs as accelerators for packet processing applications. Then, we show how the fundamental differences between the requirements of packet processing applications and conventional graphics applications make GPUs less attractive for packet processing than people often assume. Throughout this paper, we use NVIDIA and CUDA’s terminology for GPU architecture and programming model, but we believe that our discussion and conclusions apply equally to other discrete GPUs (e.g., GPUs using OpenCL).
### 2.1 GPU strengths: vectorization and memory latency hiding
A modern CUDA-enabled GPU (Figure 1) consists of a large number of processing cores grouped into Streaming Multiprocessors (SMs). It also contains registers, a small amount of memory in a cache hierarchy, and a large global memory. The code that runs on a GPU is called a kernel, and is executed in groups of 32 threads called warps. The threads in a warp follow a SIMT (Single Instruction, Multiple Thread) model of computation: they share an instruction pointer and execute the same instructions. If the threads “diverge” (i.e., take different execution paths), the GPU selectively disables the threads as necessary to allow them to execute correctly.
Vectorization: The large number of processing cores on a GPU make it attractive as a vector processor for packets. Although network packets do have some inter-packet ordering requirements, most core networking functions such as lookups, hash computation, or encryption can be executed in parallel for multiple packets at a time. This parallelism is easily accessible to the programmer through well-established GPU-programming frameworks such as CUDA and OpenCL. The programmer writes code for a single thread; the framework automatically runs this code with multiple threads on multiple processors.
Comparison with CPUs: The AVX2 vector instruction set in the current generation of Intel processors has 256-bit registers that can process 8 32-bit integers in parallel. However, the programming language support for CPU-based vectorization is still maturing [8].
Memory latency hiding: Packet processing applications often involve lookups into large data structures kept in DRAM. Absent latency-hiding, access to these structures will stall execution while it completes (300-400 cycles for NVIDIA’s GPUs). Modern GPUs hide latency using hardware. The warp scheduler in an SM holds up to 64 warps to run on its cores. When threads in a warp access global memory, the scheduler switches to a different warp. Each SM has thousands of registers to store the warp-execution context so that this switching does not require explicitly saving and restoring registers.
Comparison with CPUs: Three architectural features in modern CPUs enable memory latency hiding. First, CPUs have a small number of hardware threads (typically two) that can run on a single core, enabling ongoing computation when one thread is stalled on memory. Unfortunately, while each core can maintain up to ten outstanding cache misses [51], hyperthreading can only provide two “for free”. Second, CPUs provide both hardware and software-managed prefetching to fetch data from DRAM into caches before it is needed. And third, after issuing a DRAM access, CPUs can continue executing independent instructions using out-of-order execution. These features, however, are less able to hide latency in unmodified code than the hardware-supported context switches on GPUs, and leave ample room for improvement using latency-hiding code optimizations (Section 3).
### 2.2 GPU weaknesses: setup overhead and random memory accesses
Although GPUs have attractive features for accelerating packet processing, two requirements of packet processing applications make GPUs a less attractive choice:
Many networking applications require low latency. For example, it is undesirable for a software router in a datacenter to add more than a few microseconds of latency [20]. In the measurement setup we use in this paper, the RTT through an unloaded CPU-based forwarder is 16µs. Recent work in high-performance packet processing reports numbers from 12 to 40µs [32, 51]. Unfortunately, merely communicating from the CPU to the GPU and back may add more latency than the total RTT of these existing systems. For example, it takes ~ 15µs to transfer one byte to and from a GPU,
and $\sim 5\mu s$ to launch the kernel [33]. Moreover, GPU-accelerated systems must assemble large batches of packets to process on the GPU in order to take advantage of their massive parallelism and amortize setup and transfer costs. This batching further increases latency.
Networking applications often require random memory accesses into data structures, but the memory subsystem in GPUs is optimized for contiguous access. Under random accesses, GPUs lose a significant fraction of their memory bandwidth advantage over CPUs.
We now discuss these two factors in more detail. Then, keeping these two fundamental factors in mind, we perform simple experiments through which we seek to answer the following question: *When is it beneficial to offload random memory accesses or computation to a GPU?*
### 2.3 Experimental Setup
We perform our measurements on three CPUs and three GPUs, representing the low, mid, and high end of the recent CPU and GPU markets. Table 1 shows their relevant hardware specifications and cost. All prices are from http://www.newegg.com as of 9/2014. The K20 connects to an AMD Opteron 6272 socket via PCIe 2.0 x16, the GTX 980 to a Xeon E5-2680 via PCIe 2.0 x16, and the GTX 650 to an i7-4770 via PCIe 3.0 x16.
### 2.4 Latency of CPU-GPU communication
We first measure the minimum time required to involve a GPU in a computation—the minimum extra latency that a GPU in a software router will add to every packet. In this experiment, the host transfers an input array with $N$ 32-bit integers to the GPU, the GPU performs negligible computations on the array, and generates an output array with the same size. To provide a fair basis for comparison with CPUs, we explored the space of possible methods for this CPU-GPU data exchange in search of the best, and present results from two methods here:
**Asynchronous CUDA functions**: This method performs memory copies and kernel launch using asynchronous functions (e.g., cudaMemcpyAsync) provided by the CUDA API. Unlike synchronous CUDA functions, these functions can reduce the total processing time by overlapping data-copying with kernel execution. Figure 2 shows the timing breakdown for the different functions. We define the time taken for an asynchronous CUDA function call as the time it takes to return control to the calling CPU thread. The extra time taken to complete all the pending asynchronous functions is shown separately.
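A minimal host-side C sketch of this asynchronous variant is shown below (error handling omitted; the kernel launch itself uses CUDA-specific syntax and is elided). Only standard CUDA runtime API calls are used; the buffer names are our own:

```c
#include <cuda_runtime.h>
#include <stdint.h>

/* Time one CPU->GPU->CPU round trip of N 32-bit integers using
   asynchronous copies on a single stream. For the copies to be truly
   asynchronous, h_in/h_out should be pinned (cudaMallocHost). */
void roundtrip(int32_t *h_in, int32_t *h_out, size_t N)
{
    int32_t *d_buf;
    cudaStream_t s;
    cudaMalloc((void **)&d_buf, N * sizeof(int32_t));
    cudaStreamCreate(&s);

    cudaMemcpyAsync(d_buf, h_in, N * sizeof(int32_t),
                    cudaMemcpyHostToDevice, s);
    /* <kernel launch on stream s would go here> */
    cudaMemcpyAsync(h_out, d_buf, N * sizeof(int32_t),
                    cudaMemcpyDeviceToHost, s);

    /* The async calls return immediately; this wait accounts for the
       "extra time to complete all pending functions" in Figure 2. */
    cudaStreamSynchronize(s);

    cudaStreamDestroy(s);
    cudaFree(d_buf);
}
```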
**Polling on mapped memory**: To avoid the overhead of CUDA functions, we tried using CUDA's mapped memory feature that allows the GPU to access the host’s memory over PCIe. We perform CPU-GPU communication using mapped memory as follows. The CPU creates the input array and a flag in the host memory and raises the flag when the input is ready. CUDA threads continuously poll the flag and read the input array when they notice a raised flag. After processing, they update the output array and start polling for the flag to be raised again. This method does not use any CUDA functions in the critical path, but all accesses to mapped memory (reading the flag, reading the input array, and writing to the output array) that come from CUDA threads lead to PCIe transactions.
Figure 3 shows the time taken for this process with different values of $N$. The solid lines show the results with polling on mapped memory, and the dotted lines use the asynchronous CUDA functions. For small values of $N$, avoiding the CUDA driver overhead significantly reduces total time. However, polling generates a linearly increasing number of PCIe transactions as $N$ increases, and becomes slower than CUDA functions for $N \sim 1000$. As GPU-offloading generally requires larger batch sizes to be efficient, we only use asynchronous CUDA functions in the rest of this work.
### 2.5 GPU random memory access speed
Although GPUs have much higher sequential memory bandwidth than CPUs (Table 1), they lose a significant fraction of their advantage when memory accesses are random, as in the data structure lookups of many packet processing applications. We quantify this loss by measuring the random access rate of CPUs and GPUs as follows. We create a 1 GB array \( L \) containing a random permutation of \( \{0, \ldots, 2^{28} - 1\} \), and an array \( H \) of \( B \) random offsets into \( L \), and pre-copy them to the GPU’s memory. In the experiment, each element of \( H \) is used to follow a chain of random locations in \( L \) by executing \( H[i] = L[H[i]] \) \( D \) times. For maximum memory parallelism, each GPU thread handles one chain, whereas each CPU core handles all the chains simultaneously. The random access rate is then \( \frac{B \cdot D}{t} \), where \( t \) is the time taken to complete the above process.
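The CPU side of this measurement can be reproduced with a short C routine (timing and the setup of the random permutation are assumed to be done by the caller; B and D are as in the text):

```c
#include <stdint.h>
#include <time.h>

#define L_SIZE (1u << 28)   /* 2^28 uint32_t entries = 1 GB */

/* Follow B independent chains of D dependent loads each and return
   the achieved random access rate (accesses per second). The inner
   loop iterates over chains so that the core always has B independent
   loads in flight, matching the description above. */
double random_access_rate(const uint32_t *L, uint32_t *H,
                          size_t B, size_t D)
{
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    for (size_t d = 0; d < D; d++)        /* dependent steps */
        for (size_t i = 0; i < B; i++)    /* independent chains */
            H[i] = L[H[i]];               /* the chased pointer */

    clock_gettime(CLOCK_MONOTONIC, &t1);
    double sec = (double)(t1.tv_sec - t0.tv_sec)
               + (double)(t1.tv_nsec - t0.tv_nsec) / 1e9;
    return (double)(B * D) / sec;
}
```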
Table 1 shows the rate achieved for different CPUs and GPUs with \( D = 10 \), and the value of \( B \) that gave the maximum rate (\( B = 16 \) for CPUs, \( B = 2^{19} \) for GPUs).\(^1\) Although the advertised memory bandwidth of a GTX 980 (224 GB/s) is 4.37x that of a Xeon E5-2680, our measured random access rate is only 2.12x higher. This reduction in GPU bandwidth is explained by the inability of its memory controller to coalesce memory accesses made by different threads in a warp. The coalescing optimization is only applied when the warp’s threads access contiguous memory, which rarely happens in our experiment.
### 2.6 When should we offload to a GPU?
Given that involving GPUs takes several microseconds, and their random memory access rate is not much higher than that of CPUs, it is intriguing to find out in which scenarios GPU-offloading is really beneficial. Here, we focus on two widely-explored tasks from prior work: random memory accesses and expensive computations. In the rest of this paper, all experiments are done on the E5-2680 machine with the GTX 980 GPU.
### 2.6.1 Offloading random memory accesses
Lookups in pointer-based data structures such as IPv4/IPv6 tries and state machines follow a chain of mostly random pointers in memory. To understand the benefit of offloading these memory accesses to GPUs, we perform the experiment in Section 2.5, but include the time taken to transfer \( H \) to and from the GPU. \( H \) represents a batch of header addresses used for lookups in packet processing. We set \( B \) (the size of the batch) to 8192—slightly higher than the number of packets arriving in 100\( \mu \)s on our 40 Gbps network. We use different values of \( D \), representing the variation in the number of pointer-dereferencing operations for different data structures.
Figure 4a plots the number of headers processed per second for the GPU and different numbers of CPU cores. As \( D \) increases, the overhead of the CUDA function calls gets amortized and the GPU outperforms an increasing number of CPU cores. However, for \( D \leq 4 \), the CPU outperforms the GPU, indicating that offloading \( \leq 4 \) dependent memory accesses (e.g., IPv4 lookups in PacketShader [23] and GALE [50]) would be slower than using the CPU only.
### 2.6.2 Offloading expensive computation
Although GPUs can provide substantially more computing power than CPUs, the gap decreases significantly when we take the communication overhead into account. To compare the computational power of GPUs and CPUs for varying amounts of offloaded computation, we perform a sequence of \( D \) dependent CityHash32 [4] operations on each element of \( H \) (\( B \) is set to 8192).
Figure 4b shows that the CPU outperforms the GPU if \( D \leq 3 \). Computing 3 CityHashes takes \( \sim 40 \)ns on one CPU core. This time frame allows for a reasonable amount of computation before it makes sense to switch to GPU offloading. For example, a CPU core can compute the cryptographically stronger Siphash [16] of a 16-byte string in \( \sim 36 \)ns.
### 3 Automatic DRAM latency hiding for CPUs
The section above showed that CPUs support respectable random memory access rates. However, achieving these rates is challenging: CPUs do not have hardware support for fast thread switching that enables latency hiding on GPUs. Furthermore, programs written for GPUs in CUDA or OpenCL start from the perspective of processing many (mostly)-independent packets, which facilitates latency hiding.
The simple experiment in the previous section saturated the CPU’s random memory access capability because of its simplicity. Our code was structured such that each core issued \( B \) independent memory accesses—one for each chain—in a tight loop. The CPU has a limited window for reordering and issuing out-of-order instructions.
---
**Table 1:** CPU and GPU specifications, and *measured* random access rate
| Name | # of cores | Memory b/w | Arch., lithography | Released | Cost | Random access rate |
|---|---|---|---|---|---|---|
| Xeon E5-2680 | 8 | 51.2 GB/s | SandyBridge, 32 nm | 2012 | $1,748 | 595 M/s |
| Xeon E5-2650 v2 | 8 | 59.7 GB/s | IvyBridge, 22 nm | 2013 | $1,169 | 464 M/s |
| i7-4770 | 4 | 25.6 GB/s | Haswell, 22 nm | 2013 | $309 | 262 M/s |
| Tesla K20 | 2,496 | 208 GB/s | Kepler, 28 nm | 2012 | $2,848 | 792 M/s |
| GTX 980 | 2,048 | 224 GB/s | Maxwell, 28 nm | 2014 | $560 | 1,260 M/s |
| GTX 650 Ti | 768 | 86.4 GB/s | Kepler, 28 nm | 2012 | $130 | 597 M/s |
\(^1\)The K20’s rate increases to 1390 M/s if \( L \) is smaller than 256 MB.
When memory accesses are independent and close in the instruction stream, the CPU can hide the latency by issuing subsequent accesses before the first completes. However, as described below, re-structuring and optimizing real-world applications in this manner is tedious or inefficient.
A typical unoptimized packet-processing program operates by getting a batch of packets from the NIC driver, and then processing the packets one by one. Memory accesses within a packet are logically dependent on each other, and the memory accesses across multiple packets are spaced far apart in the instruction stream. This reduces or eliminates the memory latency-hiding effect of out-of-order execution. Our goal, then, is to (automatically) restructure this CPU code in a way that hides memory latency.
In this section, we first discuss existing techniques for optimizing CPU programs to hide their memory access latency. As these techniques are not suited to automatically hiding DRAM latency, we present a new technique called G-Opt that achieves this goal for programs with parallel data structure lookups. Although the problem of automatic parallelization and latency hiding in general is hard, certain common patterns in packet processing applications can be handled automatically. G-Opt hides the DRAM latency for parallel lookups that observe the same constraints as their CUDA implementations: independence across lookups and read-only data structures.
### 3.1 Existing techniques for hiding memory access latency
### 3.1.1 Group prefetching
Group prefetching hides latency by processing a batch of lookups at once and by using memory prefetches instead of memory accesses. In a prefetch, the CPU issues a request to load a given memory location into cache, but does not wait for the request to complete. By intelligently scheduling independent instructions after a prefetch, useful work can be done while the prefetch completes. This “hiding” of prefetch latency behind independent instructions can increase performance significantly.
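The prefetch() used in the code figures that follow is not defined in this excerpt; on gcc it can plausibly be a thin wrapper over the compiler builtin (our assumption, not the paper's exact definition):

```c
/* Read prefetch (rw = 0) with high temporal locality (locality = 3). */
#define prefetch(p) __builtin_prefetch((const void *)(p), 0, 3)
```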
A data structure lookup often consists of a series of dependent memory accesses. Figure 5 shows a simple implementation of a batched hash table lookup function. Each invocation of the function processes a batch of B lookups. Each hash table entry contains an integer key and a pointer to a value. For simplicity, we assume for now that there are no hash collisions. There are three steps for each lookup in the batch: hash computation (line 4), accessing the hash table entry to get the value pointer (line 6), and finally accessing the value (line 9). Within a lookup, each step depends on the previous one: there are no independent instructions that can be overlapped with prefetches. However, independent instructions do exist if we consider the different lookups in the batch [17, 51].
Figure 6 is a variant of Figure 5 with the group prefetching optimization. It splits up the lookup code into three stages, delimited by the expensive memory accesses in the original code. We define an expensive memory access as a memory load operation that is likely to miss all levels of cache and hit DRAM. The optimized code does not directly access the hash table entry after computing the hash for a lookup key; it issues a prefetch for the entry and proceeds to compute the hash for the remaining lookups. By doing this, it does not stall on a memory lookup for the hash table entry and instead overlaps the prefetch with independent instructions (hash computation and prefetch instructions) from other lookups.
```c
 1 find(entry_t *h_table, key_t *K, value_t *V) {
 2   int i;
 3   for(i = 0; i < B; i++) {
 4     int entry_idx = hash(K[i]);
 5     // g_expensive(&h_table[entry_idx]);
 6     value_t *v_ptr = h_table[entry_idx].v_ptr;
 7     if(v_ptr != NULL) {
 8       // g_expensive(v_ptr);
 9       V[i] = *v_ptr;
10     } else {
11       V[i] = NOT_FOUND;
12     }
13   }
14 }
```

Figure 5: Naive batched hash table lookup.
```c
find(entry_t *h_table, key_t *K, value_t *V) {
  int entry_idx[B], i;
  value_t *v_ptr[B];

  // Stage 1: Hash computation
  for(i = 0; i < B; i++) {
    entry_idx[i] = hash(K[i]);
    prefetch(&h_table[entry_idx[i]]);
  }

  // Stage 2: Access hash table entry
  for(i = 0; i < B; i++) {
    v_ptr[i] = h_table[entry_idx[i]].v_ptr;
    prefetch(v_ptr[i]);
  }

  // Stage 3: Access value
  for(i = 0; i < B; i++) {
    if(v_ptr[i] != NULL) {
      V[i] = *v_ptr[i];
    } else {
      V[i] = NOT_FOUND;
    }
  }
}
```

Figure 6: Batched lookup with group prefetching.
Unfortunately, group prefetching does not apply trivially to general lookup code because of control divergence. It requires dividing the code linearly into stages, which is difficult for code with complicated control flow. Even if such a linear layout were possible, control divergence will require a possibly large number of masks to record the execution paths taken by different lookups. Divergence also means that fewer lookups from a batch will enter later stages, reducing the number of instructions available to overlap with prefetches.
### 3.1.2 Fast context switching
In Grappa [36], fast context switching among lightweight threads is used to hide the latency of remote memory accesses over InfiniBand. After issuing a remote memory operation, the current thread yields control in an attempt to overlap the remote operation’s execution with work from other threads. The minimum reported context switch time, 38 nanoseconds, is sufficiently small compared to remote memory accesses that take a few microseconds to complete. Importantly, this solution (like the hardware context switches on GPUs) is able to handle the control divergence of general packet processing. Unfortunately, the local DRAM accesses required in most packet processing applications take 60-100 nanoseconds, making the overhead of even highly optimized generic context switching unacceptable.
### 3.2 G-Opt
We now describe our method, called G-Opt, for automatically hiding DRAM latency in data structure lookup algorithms. Our technique borrows from both group prefetching and fast context switching. Individually, each of these techniques falls short of our goal: group prefetching can hide DRAM latency, but there is no general technique to automate it, while fast context switching is easy to automate but has a large overhead.

G-Opt is a source-to-source transformation that operates on a batched lookup function, $F$, written in C. It imposes the same constraints on the programmer that languages such as CUDA [3], OpenCL, and Intel's ISPC [8] do: the programmer must write batched code that expresses parallelism by granting the language explicit permission to run the code on multiple independent inputs. G-Opt additionally requires the programmer to annotate the expensive memory accesses that occur within $F$. To annotate the batched lookup code in Figure 5, the lines with g_expensive hints should be uncommented, indicating that the following lines (line 6 and line 9) contain an expensive memory access. g_expensive is a macro that evaluates to an empty string: it does not affect the original code, but G-Opt uses it as a directive during code generation. The input function, $F$, processes the batch of lookups one by one as in Figure 5. Applying G-Opt to $F$ yields a new function $\hat{F}$ that has the same result as $F$, but includes extra logic that tries to hide the latency of DRAM accesses. Before describing the transformation in more detail, we outline how the function $\hat{F}$ performs the lookups.

$\hat{F}$ begins by executing code for the first lookup. Instead of performing an expensive memory access for this lookup, $\hat{F}$ issues a prefetch for the access and switches to executing code for the second lookup. This continues until the second lookup encounters an expensive memory access, at which point $\hat{F}$ switches to the third lookup, or back to the first lookup if there are only two lookups in the batch. Upon returning to the first lookup, the new code accesses the memory that it had previously prefetched. In the optimal case, this memory access does not need to wait on DRAM because the data is already available in the processor's L1 cache.

We now describe the transformation in more detail by discussing its action on the batched hash table lookup code in Figure 5. The code produced by G-Opt is shown in Figure 7. The key characteristics of the transformed code are:

1. **Cheap per-lookup state maintenance**: There are two pieces of state for a lookup in $\hat{F}$. First, the function-specific state for a lookup is maintained in local arrays derived from the local variables in $F$: the local variable named $x$ in $F$ is stored in $x[I]$ for the $I^{th}$ lookup in $\hat{F}$. Second, there are two G-Opt-specific control variables for lookup $I$: g_labels[I] stores its goto target, and g_mask's $I^{th}$ bit records whether it has finished execution.
2. **Lookup-switching using gotos:** Instead of stalling on a memory access for lookup \( I \), \( \hat{F} \) issues a prefetch for the memory access, saves the goto target at the next line of code into g_labels[I], and jumps to the goto target of the next lookup. We call this procedure a "Prefetch, Save, and Switch", or PSS. It acts as a fast switching mechanism between different lookups, and is carried out using the G_PSS macro, which takes two arguments: the address to prefetch and the label to save as the goto target. G-Opt inserts a G_PSS macro and a goto target before every expensive memory access; this is achieved by using the annotations in \( F \).
3. **Extra initialization and termination code:** G-Opt automatically sets the initial goto target for all lookups to g_label_0. Because different lookups can take significantly different code paths in complex applications, they can reach the label g_end in any order. \( \hat{F} \) uses a bitmask to record which lookups have finished executing, and the function returns only after all lookups in the batch have reached g_end.
We implemented G-Opt using the ANTLR parser generator [2] framework. G-Opt performs 8 passes over the input function's Abstract Syntax Tree. It converts local variables into local arrays. It recognizes the annotations in the input function and emits labels and G_PSS macros. Finally, it deletes the top-level loop (written as a foreach loop to distinguish it from other loops) and adds the initialization and termination code based on the control variables. Our current implementation does not allow pre-processor macros in the input code, and enforces a slightly restricted subset of the ISO C grammar to avoid ambiguous cases that would normally be resolved after parsing (e.g., the original grammar can interpret \( \text{foo}(x); \) as the declaration of a variable \( x \) of type \( \text{foo} \)).
### 3.3 Evaluation of G-Opt
In this section, we evaluate G-Opt on a collection of synthetic microbenchmarks that perform random memory accesses; Section 4 discusses the usefulness of G-Opt for a full-fledged software router. We present a list of our microbenchmarks along with their possible uses in real-world applications below. For each microbenchmark, we also list the source of expensive memory accesses and computation. The speedup provided by G-Opt depends on a balance between these two factors: G-Opt is not useful for compute-intensive programs with no expensive memory accesses, and loses some of its benefit for memory-intensive programs with little computation.
**Cuckoo hashing:** Cuckoo hashing [37] is an efficient method for storing in-memory lookup tables [19, 51]. Our 2-8 cuckoo hash table (using the terminology from MemC3 [19]) maps integer keys to integer values. **Computation:** hashing a lookup key. **Memory:** reading the corresponding entries from the hash table.
```c
// Prefetch, Save label, and Switch lookup
#define G_PSS(addr, label) do {     \
  prefetch(addr);                   \
  g_labels[I] = &&label;            \
  I = (I + 1) % B;                  \
  goto *g_labels[I];                \
} while(0)

find(entry_t *h_table, key_t *K, value_t *V) {
  // Local variables from the function
  int entry_idx[B];
  value_t *v_ptr[B];

  // G-Opt control variables
  int I = 0, g_mask = 0;
  void *g_labels[B] = {[0 ... B - 1] = &&g_label_0};

g_label_0:
  entry_idx[I] = hash(K[I]);
  G_PSS(&h_table[entry_idx[I]], g_label_1);
g_label_1:
  v_ptr[I] = h_table[entry_idx[I]].v_ptr;
  if(v_ptr[I] != NULL) {
    G_PSS(v_ptr[I], g_label_2);
g_label_2:
    V[I] = *v_ptr[I];
  } else {
    V[I] = NOT_FOUND;
  }

g_end:
  g_labels[I] = &&g_end;
  g_mask = SET_BIT(g_mask, I);
  if(g_mask == (1 << B) - 1) {
    return;
  }
  I = (I + 1) % B;
  goto *g_labels[I];
}
```

Figure 7: Batched hash table lookup after the G-Opt transformation.
**Pointer chasing:** Several algorithms that operate on pointer-based data structures, such as trees, tries, and linked lists, are based on following pointers in memory and involve little computation. We simulate a pointer-based data structure with minimal computation by using the experiment in Section 2.5. We set \( D \) to 100, emulating the long chains of dependent memory accesses performed for traversing data structures such as state machines and trees. **Computation:** negligible. **Memory:** reading an integer at a random offset in L.
**IPv6 lookup:** To demonstrate the applicability of G-Opt to real-world code, we used it to accelerate Intel DPDK's batched IPv6 lookup function. Applying G-Opt to the lookup code required only minor syntactic changes and one line of annotation, whereas hand-optimization required significant changes to the code's logic. We populated DPDK's Longest Prefix Match (LPM) structure with 200,000 random IPv6 prefixes (as done in PacketShader [23]) with lengths between 48 and 64 bits, and used random samples from these prefixes to simulate a worst-case lookup workload. **Computation:** a few arithmetic and bitwise operations. **Memory:** 4 to 6 accesses to the LPM data structure.
Our microbenchmarks use 2 MB hugepages to reduce TLB misses [32]. We use gcc version 4.6.3 with -O3. The experiments in this section were performed on a Xeon E5-2680 CPU with 32 GB of RAM and 20 MB of L3 cache. We also tested G-Opt on the CPUs in Table 1 with similar results.
### 3.3.1 Speedup over baseline code
Figure 8 shows the benefit of G-Opt for our microbenchmarks. G-Opt speeds up cuckoo hashing by 2.6x, pointer chasing (with $D = 100$) by 6.6x, and IPv6 lookups by 2x. The figure also shows the speedup obtained by manually re-arranging the baseline code to perform group prefetching. There is modest room for further optimization of the generated code in the future, but G-Opt performs surprisingly well compared to hand-optimized code: the manually optimized code is up to 5% faster than G-Opt. For every expensive memory access, G-Opt issues a prefetch, saves a label, and executes a goto, whereas the hand-optimized code avoids the last two steps.
### 3.3.2 Instruction overhead of G-Opt
G-Opt's output, $\hat{F}$, has more code than the original input function $F$. The new function needs instructions to switch between different lookups, plus the initialization and termination code. G-Opt also replaces local variable accesses with array accesses. This can lead to additional load and store instructions because array locations are not register-allocated.
Although G-Opt’s code executes more instructions than the baseline code, it uses fewer cycles by reducing the number of cycles that are spent stalled on DRAM accesses. We quantify this effect in Figure 9 by measuring the total number of instructions and the instructions-per-cycle (IPC) for the baseline and with G-Opt. We use the PAPI tool [9] to access hardware counters for total retired instructions and total cycles. G-Opt offsets the increase in instruction count by an even larger increase in the IPC, leading to an overall decrease in execution time.
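This counter measurement can be reproduced with PAPI's low-level API along the following lines (a sketch with error checks omitted; the workload callback stands in for the baseline or G-Opt code):

```c
#include <papi.h>
#include <stdio.h>

void measure(void (*workload)(void))
{
    int es = PAPI_NULL;
    long long v[2];

    PAPI_library_init(PAPI_VER_CURRENT);
    PAPI_create_eventset(&es);
    PAPI_add_event(es, PAPI_TOT_INS);   /* retired instructions */
    PAPI_add_event(es, PAPI_TOT_CYC);   /* total cycles */

    PAPI_start(es);
    workload();                         /* baseline or G-Opt code */
    PAPI_stop(es, v);

    printf("instructions=%lld cycles=%lld IPC=%.2f\n",
           v[0], v[1], (double)v[0] / (double)v[1]);
}
```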
### 4 Evaluation
We evaluate four packet processing applications on CPUs and GPUs, each representing a different balance of computation, memory accesses, and overall processing required. We describe each application and list its computational and memory access requirements below. Although the CPU cycles used for packet header manipulation and transmission are an important source of computation, they are common to all evaluated applications and we therefore omit them from the per-application bullets. As described in Section 4.2, G-Opt also overlaps these computations with memory accesses.
**Echo:** To understand the limits of our hardware, we use a toy application called Echo. An Echo router forwards a packet to a uniformly random port $P$ based on a random integer $X$ in the packet’s payload ($P = X \mod 4$). In the GPU-offloaded version, we use the GPU to compute $P$ from $X$. As this application does not involve expensive memory accesses, we do not use G-Opt on it.
**IPv4 forwarding:** We use Intel DPDK’s implementation of the DIR-24-8-BASIC algorithm [22] for IPv4 lookups. It creates a 32 MB table for prefixes with length up to 24 bits and allocates 128 MB for longer prefixes. We populate the forwarding table with 527,961 prefixes from a BGP table snapshot [14], and use randomly generated IPv4 addresses in the workload. **Computation:** negligible. **Memory:** ~1 memory access on average (only 1% of our prefixes are longer than 24 bits).
**IPv6 forwarding:** As described in Section 3.3.
**Layer-2 switch:** We use the CuckooSwitch design [51]. It uses a cuckoo hash table to map MAC addresses to output ports. **Computation:** 1.5 hash-computations (on average) for determining the candidate buckets for a destination MAC address; comparing the destination MAC address with the addresses in the buckets’ slots. **Memory:** 1.5 memory accesses (on average) for reading the buckets.
**Named Data Networking:** We use the hash-based algorithm for name lookup from Wang et al. [46], but
### 4.1 Experimental setup

We conduct full-system experiments on a Xeon E5-2680 CPU (8 cores @2.70 GHz)-based server. The CPU socket has 32 GB of quad-channel DDR3-1600 DRAM in its NUMA domain, 2 dual-port Intel X520 10 GbE NICs connected via PCIe 2.0 x8, and a GTX 980 connected via PCIe 2.0 x16. To generate the workload for the server, we use two client machines equipped with Intel L5640 CPUs (6 cores @2.27 GHz) and one Intel X520 NIC. The two 10 GbE ports on these machines are connected directly to two ports on the server. The machines run Ubuntu with Linux kernel 3.11.2 with Intel DPDK 1.5 and CUDA 6.0.
### 4.2 System design
**Network I/O:** We use Intel’s DPDK [5] to access the NICs from userspace. We create as many RX and TX queues on the NIC ports as the number of active CPU cores, and ensure exclusive access to queues. Although the 40 Gbps of network bandwidth on the server machine corresponds to a maximum packet rate of 59.52 (14.88 * 4) million packets per second (Mpps) for minimum sized Ethernet frames, only 47.2 Mpps is achievable; the PCIe 2.0 x8 interface to the dual-port NIC is the bottleneck for minimum sized packets [51]. As the maximum gains from GPU acceleration come for small packets [23], we use the smallest possible packet size in all experiments.
**GPU acceleration:** We use PacketShader’s approach to GPU-based packet processing as follows. We run a dedicated master thread that communicates with the GPU, and several worker threads that receive and transmit packets from the network. Using a single thread to communicate with the GPU is necessary because the overhead of CUDA functions increases drastically when called from multiple threads or processes. The worker threads extract the essential information from the packets and pass it on to the master thread using exclusive worker-master queues. The workers also perform standard packet processing tasks like sanity checks and setting header fields. This division of labor between workers and master reduces the amount of data that the master needs to transmit to the GPU. For example, in IPv4 forwarding, the master receives only one 4-byte IPv4 address per received packet. In our implementation, each worker can have up to 4096 outstanding packets to the master.
---
Footnotes:
1. Our CPU version does not need to make these assumptions, and performs similarly with variable length URLs.
2. The server is dual socket, but we restricted experiments to a single CPU to avoid noise from cross-socket QPI traffic. Previous work on software packet processing suggests that performance will scale and our results will apply to two socket systems [32, 51, 23].
PacketShader’s master thread issues a separate CUDA memcpy for the data generated by each worker to transfer it to the GPU directly via DMA without first copying to the master’s cache. Because of the large overhead of CUDA function calls (Figure 2), we chose not to use this approach.
Using G-Opt for packet processing programs: Intel DPDK provides functions to receive and transmit batches of packets. Using batching reduces function call and PCIe transaction overheads [23, 51] and is required for achieving the peak throughput. Our baseline code works as follows. First, it calls the batched receive function to get a batch of up to 16 packets from a NIC queue. It then passes this batch to the packet processing function \( F \), which processes the packets one by one.
We then apply G-Opt on \( F \) to generate the optimized function \( \hat{F} \). Unlike the simpler benchmarks in Section 3.3, \( F \) is a full-fledged packet handler: it includes code for header manipulation and packet transmission in addition to the core data structure lookup. This gives \( \hat{F} \) freedom to overlap the prefetches from the lookups with this additional code, but also gives it permission to transmit packets in a different order than they were received. However, \( \hat{F} \) preserves the per-flow ordering if forwarding decisions are made based on packet headers only, as in all the applications above. If so, all packets from the same flow are "switched out" by \( \hat{F} \) at the same program points, ensuring that they reach the transmission code in order.
### 4.3 Workload generation
The performance of the above-mentioned packet processing applications depends significantly on two workload characteristics. The following discussion focuses on IPv4 forwarding, but similar factors exist for the other applications. First, the distribution of prefixes in the server’s forwarding table, and the IP addresses in the workload packets generated by the clients, affects the cache hit rate in the server. Second, in real-world traffic, packets with the same IP address (e.g., from the same TCP connection) arrive in bursts, increasing the cache hit rate.
Although these considerations are important, recall that our primary focus is understanding the relative advantages of GPU acceleration as presented in previous work. We therefore tried to mimic PacketShader’s experiments that measure the near worst-case performance of both CPUs and GPUs. Thus, for IPv4 forwarding, we used a real-world forwarding table and generated the IPv4 addresses in the packets with a uniform random distribution. For IPv6 forwarding, we populated the forwarding table with prefixes with randomly generated content, and chose the workload’s addresses from these prefixes using uniformly random sampling. We speculate that prior work may have favored these conditions because worst-case performance is an important factor in router design for quality of service and denial-of-service resilience. Based on results from previous studies [31, 48], we also expect that more cache-friendly (non-random) workloads are likely to improve CPU performance more than that of GPUs.
### 4.4 Throughput comparison
Figure 10 shows the throughput of CPU-only and GPU-accelerated software routers with different numbers of CPU cores. For Echo (Figure 10a), the CPU achieves \( \sim 17.5 \) Mpps of single-core throughput and needs 3 cores to saturate the 2 dual-port 10 GbE NICs. The GPU-offloaded implementation needs at least 4 worker cores, for a total of 5 CPU cores including the master thread. This happens because the overhead of communicating each request with the master reduces the single-worker throughput to 14.6 Mpps.
Figure 10b shows the graphs for IPv4 lookup. Without G-Opt, using a GPU provides some benefit: With a budget of 4 CPU cores, the GPU-accelerated version outperforms the baseline by 12.5%. After optimizing with G-Opt, the CPU version is strictly better than the GPU-accelerated version. G-Opt achieves the platform’s peak throughput with 4 CPU cores, whereas the GPU-accelerated version requires 5 CPU cores and a GPU.
With G-Opt, a single core can process 16 million IPv4 packets per second, which is 59% higher than the baseline's single-core performance and only 8.9% less than the 17.5 Mpps for Echo. When using the DIR-24-8-BASIC algorithm for IPv4 lookups, the CPU needs to perform only \( \sim 1 \) expensive memory access in addition to the work done in Echo. With G-Opt, the latency of this memory access for a packet is hidden behind independent packet-handling instructions from other packets. As GPUs also hide memory access latency, the GPU-accelerated version of IPv4 forwarding performs similarly to its Echo counterpart.
For IPv6 forwarding (Figure 10c), G-Opt increases single-core throughput by 3.8x from 2.2 Mpps to 8.4 Mpps. Interestingly, this increase is larger than G-Opt’s 2x gain in local IPv6 lookup performance (Figure 8). This counter-intuitive observation is explained by the reduction in effectiveness of the reorder buffer for the baseline code: Due to additional packet handling instructions, the independent memory accesses for different packets in a batch are spaced farther apart in the forwarding code than in the local benchmarking code. These instructions consume slots in the processor’s reorder buffer, reducing its ability to detect the inter-packet independence.
___
*For applications that also examine the packet content, the transmission code can be moved outside \( F \) for a small performance penalty.
With G-Opt, our CPU-only implementation achieves 39 Mpps with 5 cores, and the platform's peak IPv6 throughput (42 Mpps) with 6 cores. Because IPv6 lookups require relatively heavyweight processing, our GPU-based implementation indeed provides higher *per-worker* throughput: it delivers line rate with only 4 worker cores, but it requires another core for the master in addition to the GPU. Therefore, using a GPU plus 5 CPU cores can provide a 7.7% throughput increase over using just 5 CPU cores, but is equivalent to using 6 CPU cores.
For the L2 switch (Figure 10d), G-Opt increases the throughput of the baseline by 86%, delivering 9.8 Mpps of single-core throughput. This is significantly smaller than the 17.5 Mpps for Echo because of the expensive hash computation required by cuckoo hashing. Our CPU-only implementation saturates the NICs with 6 cores, and achieves 96% of the peak throughput with 5 cores. In comparison, our GPU-accelerated L2 switch requires 5 CPU cores and a GPU for peak throughput.
For Named Data Networking, G-Opt increases single-core throughput from 4.8 Mpps to 7.3 Mpps, a 1.5x increase. With a budget of 4 CPU cores, the (simplified) GPU version’s performance is 24% higher than G-Opt, but is almost identical if G-Opt is given one additional CPU core.
**Conclusion:** For all our applications, the throughput gain from adding a GPU is never larger than that from adding just one CPU core. The cost of a Xeon E5-2680 v3 [6] core (more powerful than the cores used in this paper) is $150. In comparison, the cheapest GPU used in this paper costs $130 and consumes 110 W of extra power. CPUs are therefore a more attractive and resource-efficient choice than GPUs for these applications.
### 4.5 Latency comparison
The GPU-accelerated versions of the above applications not only require more resources than their G-Opt counterparts, but also add significant latency. Each round of communication with the GPU on our server takes \(\sim 20\mu s\) (Figure 2). As the packets that arrive during a round must wait for the next round to begin, the average latency added is \(20 \times 1.5 = 30\mu s\).
Our latency experiments measure the round-trip latency at clients. Ideally, we would have liked to measure the latency \textit{added} by the server without including the latency added by the client’s NIC and network stack. This requires the use of hardware-based traffic generators [42] to which we did not have access.\(^7\)
In our experiments, clients add a timestamp to packets during transmission and use it to measure the RTT after reception. We control the load offered by clients by tuning the amount of time they sleep between packet transmissions. The large sleep time required for generating a low load, and buffered transmission at the server [32] cause our measured latency to be higher than our system’s minimum RTT of 16\(\mu s\).
For brevity, we present our latency-vs-throughput graphs only for Echo, and IPv4 and IPv6 forwarding. The CPU-only versions use G-Opt. All measurements used the minimum number of CPU cores required for saturating the network bandwidth.
Figure 11a shows that the RTT of CPU-only Echo is 29\(\mu s\) at peak throughput and 19.5\(\mu s\) at low load. The minimum RTT with GPU acceleration is 52\(\mu s\), which is close to 30\(\mu s\) larger than the CPU-only version’s minimum RTT. We observe similar numbers for IPv4 and IPv6 forwarding (Figures 11b and 11c), but the GPU version’s latency increases at high load because of the larger batch sizes required for efficient memory latency hiding.
### 5 Discussion
\begin{itemize}
\item \textbf{Other similar optimizations for CPU programs} Until now, we have discussed the benefit of an automatic DRAM latency-hiding optimization, G-Opt. We now discuss how intrusion detection systems (IDSes), an application whose working set fits in cache [41], can benefit from similar, latency-hiding optimizations.
We study the packet filtering stage of Snort [39], a popular IDS. In this stage, each packet’s payload is used to traverse one of several Aho-Corasick [15] DFAs. The DFA represents the set of malicious patterns against which this packet should be matched; Snort chooses which DFA to use based on the packet header. For our experiments, we recorded the patterns inserted by Snort v2.9.7 into its DFAs and used them to populate our simplified pattern matching engine. Our experiment uses 23,331 patterns inserted into 450 DFAs, leading to 301,857 DFA states. The workload is a \texttt{tcpdump} file from the DARPA Intrusion Detection Data Sets [11].
Our baseline implementation of packet filtering passes batches of \(B\) (\(\approx 8\)) packets to a function that returns \(B\) lists of matched patterns. This function processes packets one-by-one. We made two optimizations to this function. First, we perform a loop interchange: instead of completing one DFA traversal before beginning another, we interleave them to give the CPU more independent instructions to reorder, reducing stalls on long-latency loads from cache. Second, we collect a larger batch of packets (8192 in our implementation) and sort it—first by the packet’s DFA number and then by length. Sorting by DFA number reduces cache misses during batch traversal. Sorting by length increases the effectiveness of loop interchange—similar to minimizing control flow divergence for GPU-based traversals [41].
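The following Python sketch illustrates the loop interchange (our own illustration; the dict-based DFA representation and the helper names are assumptions, not Snort’s actual data structures):

```python
def match_batch_interleaved(dfas, packets):
    """Advance one batch of B DFA traversals in lock-step, byte by byte.

    dfas: list of (transitions, accepting) pairs, where
          transitions[(state, byte)] -> next state (missing edges go to 0)
          and accepting is a set of states.
    packets: list of (dfa_id, payload) pairs -- one batch of B packets.
    """
    matches = [[] for _ in packets]
    states = [0] * len(packets)
    max_len = max(len(p) for _, p in packets)
    for pos in range(max_len):                      # outer loop: byte position
        for i, (d, payload) in enumerate(packets):  # inner loop: the B packets
            if pos >= len(payload):
                continue
            trans, accepting = dfas[d]
            # The B transition lookups below are mutually independent, so an
            # out-of-order core can overlap their latencies instead of
            # stalling on each load in turn.
            states[i] = trans.get((states[i], payload[pos]), 0)
            if states[i] in accepting:
                matches[i].append(pos)
    return matches
```

Sorting the batch by length, as described above, makes the traversals finish at roughly the same byte position, so the inner loop stays full for most outer iterations.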
Figure 12 shows that, for a local experiment without network I/O, these optimizations increase single-core matching throughput by 2.4x or more. We believe that our optimizations also apply to the CPU-only versions of pattern matching in GPU-accelerated IDSes including Kargus [24] and Snap [43]. As we have only implemented the packet filtering stage (Snort uses a second, similar stage to discard false positives), we do not claim that CPUs can outperform GPUs for a full IDS. However, they can reduce the GPU advantage, or make CPU-only versions more cost effective. For example, in an experiment with innocent traffic, Kargus’s throughput (with network I/O) improved between 1.4x and 2.4x with GPU offloading. Our pattern matching improvements offer similar gains, and those gains should persist in this experiment: innocent traffic rarely triggers the second stage, and network I/O requires less than 10% of the CPU cycles spent in pattern matching.
**Additional applications** We have shown that CPU implementations can be competitive with (or outperform) GPUs for a wide range of applications, including lightweight (IPv4 forwarding, Layer-2 switching), mid-weight (IPv6 and NDN forwarding), and heavyweight (intrusion detection) applications. Previous work explores the applicability of GPU acceleration to a number of different applications; one particularly important class, however, is cryptographic applications.
Cryptographic applications, on the one hand, involve large amounts of computation, making them seem attractive for vector processing [25, 23]. On the other hand, encryption and hashing require copying the full packet data to the GPU (not just headers, for example). Since the publication of PacketShader, the first work in this area, Intel has implemented hardware AES encryption support in its CPUs. We therefore suspect that the 3.5x speedup observed in PacketShader for IPSec encryption would be unlikely to hold on today’s CPUs. And, indeed, 6WIND’s AES-NI-based IPSec implementation delivers 6 Gbps per core [1], 8x higher than PacketShader’s CPU-only IPSec, though on different hardware.
One cryptographic workload where GPUs still have an advantage is processing expensive, but infrequent, RSA operations as done in SSLShader, assuming that connections arrive closely enough together for their RSA setup to be batched.\(^8\) Being compute intensive, these cryptographic applications raise a second question for future work: Can automatic vectorization approaches (e.g., Intel’s ISPC [8]) be used to increase the efficiency of CPU-based cryptographic applications?
**Revising TCO estimates** In light of the speedups we have shown possible for some CPU-based packet processing applications, it bears revisiting total-cost-of-ownership calculations for such machines. The TCO of a machine includes not just the cost of the CPUs, but also the motherboard and chipset, the total system power draw, and the physical space occupied by the machine.
Although our measurements did not include power, several conclusions are obvious: Because the GPU-accelerated versions required almost as many CPU cores as the CPU-only versions, they are likely to use at least modestly more power than the CPU versions. The GTX 980 in our experiments can draw up to 165W compared to 130W for the E5-2680’s 8 cores, though we lack precise power draw measurements.
Adding GPUs requires additional PCIe slots and lanes from the CPU, in addition to the cost of the GPUs. This burden is likely small for applications that require transferring only the packet header to the GPU, such as forwarding—but those applications are also a poor match for the GPU. It can, however, be significant for high-bandwidth offload applications, such as encryption and deep packet inspection.
---
\(^8\)And perhaps HMAC-SHA1, but Intel’s next generation “Skylake” CPUs will have hardware support for SHA-1 and SHA-256.
Future GPU trends may improve the picture. Several capabilities are on the horizon: CPU-integrated GPU functions may substantially reduce the cost of data and control transfers to the GPU. Newer NVidia GPUs support “GPUDirect” [7], which allows both the CPU and certain NICs to DMA directly to the GPU. GPUDirect could thus allow complete CPU-bypass from NIC to GPU, or reduce CUDA’s overhead by letting the CPU write directly to GPU memory [29]. This technology currently has several restrictions—the software is nascent, and only expensive Tesla GPUs (over $1,700 each) and RDMA-capable NICs are supported. A more fundamental and long-term limitation of removing CPU involvement from packet processing is that it requires entire packets, not just headers, to be transferred to the GPU. The CPU’s PCIe lanes would then have to be divided almost equally between NICs and GPUs, possibly halving the network bandwidth that the system can handle.
Alternative architectures such as Tilera’s manycore designs, which place over 100 cores on a single chip with high I/O and memory bandwidth, or Intel’s Xeon Phi, are interesting and under-explored possibilities. Although our results say nothing about the relative efficiency of these architectures, we hope that our techniques will enable better comparisons between them and traditional CPUs.
**Handling updates** Currently, G-Opt works only for data structures that are not updated concurrently. This constraint also applies to GPU-accelerated routers where the CPU constructs the data structure and ships it to the GPU. It is possible to hide DRAM latency for updates using manual group prefetching [32]; if updates are relatively infrequent, they can also be handled outside the batch lookup code. Incorporating updates into G-Opt is part of future work.
### 6 Related Work
**GPU-based packet processing** Several systems have used GPUs for IPv4 lookups [18], [21], [27], [44], demonstrating substantial speedups. Our end-to-end measurements that include network I/O, however, show that there is very little room for improving IPv4 lookup performance—when IPv4 forwarding is optimized with G-Opt, the single-core throughput drops by less than 9% relative to Echo. Packet classification requires matching packet headers against a corpus of rules; the large amount of per-packet processing makes it promising for GPU acceleration [23, 27, 44]. GSwitch [44] is a recent GPU-accelerated packet classification system. We believe that the Bloom filter and hash table lookups in GSwitch’s CPU version can benefit from G-Opt’s latency hiding, reducing the GPU’s advantage.
**CPU-based packet processing** RouteBricks [18] focused on mechanisms to allocate packets to cores; its techniques are now standard for making effective use of a multicore CPU for network packet handling. User-level networking frameworks like Intel’s DPDK [5], netmap [38], and PF_RING [10] provide a modern and efficient software basis for packet forwarding, which our work and others take advantage of. Many of the insights in this paper were motivated by our prior work on hiding lookup latency in CuckooSwitch [51], an L2 switch that achieves 80 Gbps while storing a billion MAC addresses.
**Hiding DRAM latency** Hiding DRAM latency for CPU programs is important in many contexts: group prefetching and software pipelining have been used to hide DRAM latency for database hash-joins [17], a software-based L2 switch [51], in-memory trees [40, 28], and in-memory key-value stores [32, 34, 26]. These systems required manual code rewrites. To our knowledge, G-Opt is the first method to automatically hide DRAM latency for the independent lookups in these applications.
### 7 Conclusion
Our work challenges the conclusions of prior studies about the relative performance advantages of GPUs in packet processing. GPUs achieve their parallelism and performance benefits by constraining the code that programmers can write, but this very coding paradigm also enables latency-hiding CPU implementations. Our G-Opt tool provides a semi-automated way to produce such implementations. CPU-only implementations of IPv4, IPv6, NDN, and Layer-2 forwarding can thereby be more resource efficient and add lower latency than GPU implementations. We hope that enabling researchers and developers to more easily optimize their CPU-based designs will help improve future evaluation of both hardware- and software-based approaches for packet processing. Although we have examined a wide range of applications, this work is not the end of the line. Numerous other applications have been proposed for GPU-based acceleration, and we believe that these techniques may be applicable to other domains that involve read-mostly, parallelizable processing of small requests.
**Code release** The code for G-Opt and for the experiments in this paper is available at https://github.com/efficient/gopt.
**Acknowledgements** This work was supported by funding from the National Science Foundation under awards CNS-1314721, CCF-0964474, and 1345305; and by Intel via the Intel Science and Technology Center for Cloud Computing (ISTC-CC). Emulab [47] and PRObe [21] were used for some experiments. PRObe is supported in part by NSF awards CNS-1042537 and CNS-1042543 (PRObe). We thank Sangjin Han and Hyeontaek Lim for valuable comments, and David Maltz for shepherding.
References
Search through Systematic Set Enumeration
Ron Rymon
Department of Computer and Information Science
University of Pennsylvania
Philadelphia, PA 19104
Abstract
In many problem domains, solutions take the form of unordered sets. We present the Set-Enumeration (SE)-tree - a vehicle for representing sets and/or enumerating them in a best-first fashion. We demonstrate its usefulness as the basis for a unifying search-based framework for domains where minimal (maximal) elements of a power set are targeted, where minimal (maximal) partial instantiations of a set of variables are sought, or where a composite decision is not dependent on the order in which its primitive component-decisions are taken. Particular instantiations of SE-tree-based algorithms for some AI problem domains are used to demonstrate the general features of the approach. These algorithms are compared theoretically and empirically with current algorithms.
1 INTRODUCTION

Many computer science problems admit solutions which are elements of a given power-set. Typically, such sets are required to satisfy some problem-specific criterion which designates them as solutions. In many cases, such criteria either include, or are augmented with, some minimality/maximality requirement. Consider, for example, the Hitting-Set (HS) problem [Karp 72]. Given a collection of sets, solutions are required to have a non-empty intersection with each member of the collection. In applications of the HS problem, interesting solutions are typically minimal with respect to set inclusion. In a more general class of problems, solutions are partial instantiations of a set of variables. A hitting-set, for example, can also be described as a membership-based mapping from the underlying set of primitive elements to \( \{0, 1\} \); more generally, variables can be instantiated from an arbitrary domain.

Researchers in Artificial Intelligence (AI) have also made use of such abstract problems in their models. The HS problem, for example, was used by [Reiter 87] in his formalization of diagnosis. In a newer characterization, diagnoses are viewed as partial assignments of state to components [de Kleer et al. 90]. Many other AI problems are, or could be, formulated so as to admit sets as solutions.

Our goal in introducing the Set-Enumeration (SE)-tree is to provide a unified search-based framework for solving such problems, despite their problem-specific solution criteria. SE-tree-based algorithms for different problems will share their skeletal structure, but will each use additional domain-specific tactics. Furthermore, at a certain level of abstraction, even those tactics are general and can be shared across domains. General tactics identified here include pruning rules which exploit the SE-tree structure, exploration policies, and problem decomposition methods. Incremental versions of SE-tree-based algorithms can be constructed for some problem domains. In what follows, we use particular instantiations of SE-tree-based algorithms to demonstrate the general features of the approach.

*Address for correspondence: Ron Rymon, Computer and Information Science, Room 423C, 3401 Walnut Street, Philadelphia PA 19104, e-mail: rymon@fiac.cis.upenn.edu.
In Section 5, we extend the basic hitting-set algorithm of Section 3 to the newer characterization of diagnoses [de Kleer et al. 90]. Although derived from a very general search framework, the extended algorithm corresponds to a prime implicate generation algorithm proposed by [Slagle et al. 70], and is empirically shown to perform quite well compared to a recent algorithm [Ngair 92]. Unlike Slagle et al.'s algorithm, the extended SE-HS can work under diagnostic theories with multiple fault modes, can use a variety of exploration policies for focusing purposes, and has an incremental version. Furthermore, we subsequently augment it with a problem decomposition tactic, thereby obtaining an improved version of Slagle et al.'s algorithm. Finally, we briefly review potential use of the SE-tree in abductive diagnostic frameworks.
In Section 6, we contrast features of the SE-tree with decision trees in the context of learning classification rules from examples. For lack of space, the scope of this study is very limited and the reader is referred to [Rymon 92b] for a more detailed analysis and empirical evaluation.
2 THE BASIC SE-TREE
The Set-Enumeration (SE)-tree is a vehicle for representing and/or enumerating sets in a best-first fashion. The complete SE-tree systematically enumerates elements of a power-set using a pre-imposed order on the underlying set of elements. In problems where the search space is a subset of that power-set that is (or can be) closed under set-inclusion, the SE-tree induces a complete irredundant search technique. Let $E$ be the underlying set of elements. We first index $E$'s elements using a one-to-one function $\text{ind} : E \rightarrow \mathbb{N}$. Then, given any subset $S \subseteq E$, we define its SE-tree view:
Definition 2.1 A Node's View
$$\text{View}(\text{ind}, S) \overset{\Delta}{=} \{ e \in E \mid \text{ind}(e) > \max_{e' \in S} \text{ind}(e') \}$$
Definition 2.2 A Basic Set Enumeration Tree
Let $F$ be a collection of sets that is closed under $\subseteq$ (i.e., if $T \in F$ and $S \subseteq T$, then $S \in F$). $T$ is a Set Enumeration tree for $F$ iff:
1. The root of $T$ is labeled by the empty set;
2. The children of a node labeled $S$ in $T$ are
$$\{ S \cup \{e\} \in F \mid e \in \text{View}(\text{ind}, S) \}.$$
Figure 1 illustrates an SE-tree for the complete power-set of $\{1, 2, 3, 4\}$. Note that restricting a node's expansion to its View ensures that every set is uniquely explored within the tree. By itself, the idea of using an imposed order is not new; it is used for similar purposes in many specific algorithms. Our contribution is in identifying the SE-tree as a recurring search structure, thereby facilitating its use in a general framework and the sharing of particular tactics.
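To make Definitions 2.1-2.2 concrete, the following minimal Python sketch (our own illustration, not code from the paper) enumerates the complete SE-tree of Figure 1; swapping the stack for a priority queue turns the depth-first walk into best-first enumeration:

```python
def se_tree(elements):
    """Enumerate all subsets of `elements` exactly once, in SE-tree order.

    Children of a node S extend S only with elements whose index is higher
    than every index already in S (the node's View), so each subset is
    generated exactly once.
    """
    ind = {e: i for i, e in enumerate(elements)}   # the imposed order
    stack = [()]                                   # root: the empty set
    while stack:
        s = stack.pop()
        yield set(s)
        hi = ind[s[-1]] if s else -1
        # View(ind, S): elements ranked above S's highest-ranked member.
        for e in elements:
            if ind[e] > hi:
                stack.append(s + (e,))

print(list(se_tree([1, 2, 3, 4])))   # 16 subsets, each exactly once
```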
Notice also that the SE-tree can be used as a data structure for caching unordered sets, and as an effective means of checking whether a new set is subsumed by any of those already cached. [de Kleer 92] has made such use of an SE-tree and reports significant improvements in run-time. As a caching device, the SE-tree is a special case of Knuth's tree data structure [Knuth 73], originally offered for ordered sets. While we too use the SE-tree for caching solutions and for subsumption checking, our main objective in this paper is its use in a search framework.
3 AN SE-TREE-BASED HITTING-SET ALGORITHM
In this section, we demonstrate the use of the basic SE-tree structure for a hitting-set algorithm in the context of Reiter's theory of diagnosis. We open with a brief introduction of Reiter's theory, up to the point at which a hitting-set problem is formulated. An SE-tree-based algorithm (SE-HS) is then contrasted with the dag-based algorithm proposed by [Reiter 87, Greiner et al. 89] to show that a large number of calls to a subsumption checking procedure can be saved. We then use the SE-tree's systematic ordering to further improve SE-HS via a domain-specific pruning rule. Empirical comparison of the improved SE-HS with the dag-based implementation of [Greiner et al. 89] supports our claims.
3.1 REITER'S THEORY OF DIAGNOSIS
Reiter's theory of diagnosis [Reiter 87] is among the most widely referenced logic-based approaches to model-based diagnosis. For lack of space, we shall only present the concepts and theorem which Reiter uses to derive his hitting-set algorithm.
Definition 3.1 A Diagnostic Problem [Reiter 87]
A diagnostic problem is a triple $(SD, COMPS, OBS)$.
1. SD - the system description, is a set of first order sentences;
2. COMPS := \(\{c_i\}_{i=1}^m\) - the system's components, is a finite set of constants; and
3. OBS - the observations, is also a set of first order sentences.
The language in which diagnostic problems are expressed is thus first order, and is augmented with an extra \textit{AB} predicate (for abnormal).
Definition 3.2 Conflict Set
\textit{Given a diagnostic problem, a conflict is a set of components that cannot all be functioning correctly. Let CONFLICTS denote the collection of conflict sets.}
Theorem 3.3 [Reiter 87] \textit{Given a diagnostic problem, minimal diagnoses are precisely the minimal hitting sets for CONFLICTS.}
Reiter’s algorithm is an implementation of Theorem 3.3. In two steps, it first discovers conflicts, and then runs an HS algorithm on the conflicts discovered. We shall concentrate on the latter phase.
3.2 DAG-BASED APPROACH
Given a collection of conflict sets, Reiter’s algorithm grows an HS-tree in which nodes represent partial hitting sets and leaves represent complete ones. To avoid highly redundant exploration, Reiter augments this basic algorithm with a set of rules for \textit{reusing} and \textit{pruning} nodes. [Greiner et al. 89] present a correction to this algorithm which uses a directed acyclic graph (dag). It proceeds as follows:
1. Let $D$ represent a growing HS-dag. Label its root with an arbitrary $C \in \text{CONFLICTS}$;
2. Process nodes in $D$ in a breadth-first order. To process a node $n$:
a. Let $H(n)$ be the set of edge labels on the path from the root to $n$. If $H(n)$ hits all sets in CONFLICTS, mark it as a minimal hitting set. Otherwise, label $n$ with the first set of CONFLICTS which is not hit by $H(n)$.
b. If $n$ is labeled by $\Sigma$, then for each $\sigma \in \Sigma$, generate a downward arc labeled by $\sigma$.
This algorithm is augmented with three types of rules for expanding a node $n$:
1. \textit{Reusing}: If there is another node $m$ for which $H(m) = H(n) \cup \{\sigma\}$, do not expand $n$, but rather link it to $m$, labeling that link with $\sigma$.
2. \textit{Closing}: If there is a node $m$ which is marked as a hitting set, such that $H(m) \subseteq H(n)$, then close $n$, i.e. do not expand it at all.
3. \textit{Pruning}: If a set $\Sigma$ is to label a node $n$ and it has not been used previously, then try to prune $D$:
a. If there is a node $m$ which has been labeled with a set $S'$ such that $\Sigma \subseteq S'$, then relabel $m$ with $\Sigma$ and prune from $m$ all outgoing arcs labeled with elements of $S' \setminus \Sigma$.
b. Replace $S'$ with $\Sigma$ in CONFLICTS.
3.3 SE-TREE-BASED ALTERNATIVE
SE-HS (Algorithm 3.4) is an SE-tree-based hitting set algorithm. In a best-first fashion, it explores nodes in an order conforming to some predetermined priority function. For that purpose, nodes along the tree’s expanding fringe are kept in a priority queue and the next node to be expanded is accessed via the Next-Best operation. Prioritization allows implementation of various exploration policies, to be discussed shortly. Let us first assume that nodes are explored by their cardinality; i.e. breadth-first.
Algorithm 3.4 Finding Minimal Hitting Sets
Program SE-HS (CONFLICTS)
1. Let HS := \(\{\}\); OPEN-NODES := \(\{\emptyset\}\)
2. Until OPEN-NODES is empty do
3. Expand(Next-Best(OPEN-NODES))
Procedure Expand($S$)
1. Let $Window(S) := \text{View}(\text{ind}, S)$
2. For each $c \in Window(S)$ which is a member of some set from NYH($S$) do
3. Unless there is $S' \in HS$ such that $S' \subseteq S \cup \{c\}$:
4. If $S \cup \{c\}$ is a hitting set, add it to $HS$;
5. Otherwise, add it to OPEN-NODES.
The main SE-HS program simply implements a best-first search. The algorithm's functionality is embodied in its Expand procedure, where the SE-tree structure is used, and where hitting sets are identified. In choosing viable expansions for a node labeled $S$, we restrict ourselves to components within $S$'s View. Such components are also required to participate in conflicts not yet hit by $S$ (denoted NYH($S$)). Step 3 in Expand prunes away nodes subsumed by minimal hitting sets. It corresponds to the closing step in [Greiner et al. 89]. However, SE-HS avoids the redundancy for which reusing rules were devised, and does not require pruning.
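A compact executable rendering of SE-HS (our own sketch; the data representation is an illustrative choice, and the exploration policy is fixed to cardinality):

```python
import heapq

def se_hs(conflicts):
    """All minimal hitting sets of `conflicts` (a list of frozensets),
    explored by cardinality as in Algorithm 3.4 / Theorem 3.5."""
    elements = sorted(set().union(*conflicts))       # imposes the order `ind`
    ind = {e: i for i, e in enumerate(elements)}
    hs = []                                          # minimal hitting sets found
    open_nodes = [(0, ())]                           # root: the empty set
    while open_nodes:
        _, s = heapq.heappop(open_nodes)             # Next-Best by cardinality
        nyh = [c for c in conflicts if not c & set(s)]   # conflicts Not Yet Hit
        lo = ind[s[-1]] + 1 if s else 0
        for e in elements[lo:]:                      # expand only within the View
            if not any(e in c for c in nyh):
                continue                             # e hits no new conflict
            t = s + (e,)
            if any(h <= set(t) for h in hs):
                continue                             # closed: subsumed by an MHS
            if all(c & set(t) for c in conflicts):
                hs.append(frozenset(t))              # t hits every conflict
            else:
                heapq.heappush(open_nodes, (len(t), t))
    return hs

# Three of the conflicts from Example 3.9:
print(se_hs([frozenset({2, 4, 5}), frozenset({1, 2, 3}), frozenset({1, 6})]))
```

Replacing the `len(t)` priority with any monotone function of the label gives the exploration policies discussed shortly.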
Theorem 3.5 If nodes are prioritized by their label's cardinality, then SE-HS is correct (produces all and only minimal hitting sets.)
3.4 PROJECTED GAIN
The HS-dag algorithm uses three pruning rules, each of which is computationally expensive and requires numerous calls to a subsumption checking procedure. In examining the purpose of these rules, we note that (1) Reusing is aimed at avoiding redundancy in search, i.e. the phenomenon that the same part of the search space is repeatedly explored within the HS-tree. It requires comparing each new node to every previous node; (2) Closing is aimed at shutting nodes which are supersets of already-discovered minimal hitting sets. For that purpose, if the HS-dag is explored breadth-first, each node will only have to be compared against previous minimal hitting sets; finally, (3) Pruning is aimed at “correcting” the HS-dag from the effects of non-minimal conflict sets. The same effect could also be achieved \emph{a priori}, by “sorting” conflicts by cardinality.
As previously explained, while closing cannot be avoided, SE-HS requires neither reusing nor pruning. Avoiding numerous calls to a subsumption checking procedure results in a tremendous improvement in runtime (see Section 3.7).
3.5 EXPLORATION POLICIES
Due to its potentially exponential size, it may often be impossible to completely explore the space of sets. In such cases, it may be beneficial to characterize partial outputs of an SE-tree-based algorithm, given a variety of exploration policies.
Definition 3.6 Correct Exploration Policy
An exploration policy is a priority function \( \psi \), defined for each set. It is correct if whenever open nodes are so prioritized, the resulting algorithm is correct.
For the particular case of SE-HS, a variety of exploration policies are sensible.
Proposition 3.7 Any monotonic function \( \psi \) (i.e. such that for every \( S \subseteq S' \) we have \( \psi(S) \leq \psi(S') \)) is a correct exploration policy for SE-HS.
We have already seen that exploration by cardinality is correct. Simpler diagnoses are explored first using this exploration policy. Other interesting policies include exploration by probability \( \psi(S) = \text{Prob}(S \text{ is a diagnosis}) \), and by utility or some other monotonic external criterion imposed on sets.
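As a concrete illustration (ours, not the paper's), both priority functions below are monotone in the sense of Proposition 3.7 and could replace the cardinality priority in the SE-HS sketch above; the independent-faults assumption in the probability policy is our own simplification:

```python
import math

def by_cardinality(s):
    return len(s)                          # simpler diagnoses first

def by_neg_log_prob(s, fault_prob):
    # Assuming independent component faults, Prob(S) = prod_c p_c, so
    # -log Prob(S) can only grow as S grows: monotone, hence correct.
    return sum(-math.log(fault_prob[c]) for c in s)
```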
3.6 PRUNING UNPROMISING PARTS OF THE SEARCH SPACE
So far, nodes were pruned only if subsumed by known hitting-sets, thereby using the minimality requirement and the monotonicity of the SE-tree with respect to set-inclusion. We have not used the systematic ordering of nodes in the SE-tree for that purpose. That ordering provides a restriction on node labels which can occur in a given node's sub-tree. More specifically, let \( S \) be a node’s label, then the sub-tree rooted at that node will only have nodes whose labels are expansions of \( S \) with components from View(\( \text{ind}, S \)). Thus, in choosing viable expansions for \( S \), we can restrict ourselves to expansions such that every set that will not be hit by the expanded set will still contain components within its View (and thus stand the chance of being hit by any of that node’s descendants).
This is, in fact, a general feature of an SE-tree-based search program: the systematic enumeration embedded in the SE-tree structure allows us to ignore parts of the space which do not have the potential to lead to a solution.
To incorporate this pruning rule into SE-HS, it is sufficient to modify the node expansion routine.
Algorithm 3.8 Node Expansion (version 2)
Procedure \( \text{Expand}(S) \)
1. Let $Window(S) := \text{View}(\text{ind}, S) \cap \{ c \mid \text{ind}(c) \leq \min_{S' \in \text{NYH}(S)} \max_{c' \in S'} \text{ind}(c') \}$
2. For each \( c \in \text{Window}(S) \) which is a member of some set from \( \text{NYH}(S) \) do
3. Unless there is \( S' \in \text{HS} \) such that \( S' \subseteq S \cup \{c\} \):
4. If \( S \cup \{c\} \) is a hitting set, add it to \( \text{HS} \);
5. Otherwise, add it to \( \text{OPEN-NODES} \).
This algorithm is identical to Algorithm 3.4, except for the additional restriction in line 1. This change is an example of a domain-specific SE-tree-based pruning rule. The algorithm remains correct, but fewer nodes need be explored. We demonstrate this in the next section by way of an example, and via empirical experiments.
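In code, the refined window is a one-line filter on top of the View; this sketch (ours) follows the conventions of the SE-HS sketch above and assumes NYH(S) is non-empty, which holds whenever a non-hitting-set node is expanded:

```python
def window(view, ind, nyh):
    """Algorithm 3.8, step 1: keep only candidates whose index does not
    exceed the smallest of the not-yet-hit conflicts' maximum indices.
    If ind(c) exceeded that cutoff for some conflict, neither c nor any
    higher-ranked descendant could ever hit that conflict."""
    cutoff = min(max(ind[e] for e in conflict) for conflict in nyh)
    return [c for c in view if ind[c] <= cutoff]
```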
3.7 DEMONSTRATED GAIN
To demonstrate the advantages of SE-HS over the dag-based algorithm, we will first work through a complete example (taken from [Reiter 87]), and will then present the results of extensive empirical experiments.
Example 3.9 Consider the following collection of conflicts \( \{2, 4, 5\}, \{1, 2, 3\}, \{1, 3, 5\}, \{2, 4, 6\}, \{2, 4\}, \{2, 3, 5\}, \{1, 6\} \) [Reiter 87]. Figure 2 depicts the corresponding HS-dag, where O's mark hitting sets, and X's denote closed nodes. The rightmost branch from the root was pruned by the last node to be explored (itself a descendant of that branch).
4 THE EXTENDED SE-TREE
Sometimes the space being searched consists not of sets of components, but rather of sets of partially instantiated attributes (variables). We next extend the SE-tree accordingly.
Definition 4.1 Partial Descriptions
Let \( \text{ATTRS} = \{ A_i \}_{i=1}^n \) be a set of attributes, with domains \( \{ \text{Dom}(A_i) \}_{i=1}^n \). A partial description \( \pi \) is a subset of \( \text{ATTRS} \), each member of which is instantiated with one value from its domain. It is complete if all attributes are instantiated.
Consider, for example, the space defined by 3 boolean attributes. The set \( \{ A_1=T, A_2=F, A_3=F \} \) is a complete description.
As with its basic counterpart, to define the extended SE-tree we first impose an ordering (ind) on \( \text{ATTRS} \), and define a node's View as all attributes ranked higher than the highest ranked attribute participating in that node. Then,
Definition 4.2 An Extended Set Enumeration Tree
Let \( F \) be a collection of sets of attribute instantiations such that each set contains at most one value for each attribute and such that \( F \) is closed under \( \subseteq \), then \( T \) is an extended SE-tree for \( F \) iff:
1. The root of \( T \) is labeled by the empty set;
2. The children of a node \( S \) in \( T \) are
\[ \{ S \cup \{(A=v)\} \in F \mid A \in \text{View}(\text{ind}, S),\ v \in \text{Dom}(A) \} \].
Figure 6 depicts an extended SE-tree for the complete space defined by three boolean attributes. Note the use of reduced notation, where \( i \) stands for \( \{ A_i=T \} \) and \( \bar{i} \) represents \( \{ A_i=F \} \).
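The basic enumerator sketched earlier extends naturally; this variant (again our own illustration) enumerates the extended SE-tree over three boolean attributes:

```python
def extended_se_tree(domains):
    """domains: list of per-attribute value lists, ordered by `ind`.
    Yields each partial description as a dict {attribute index: value},
    visiting every one exactly once."""
    stack = [()]
    while stack:
        s = stack.pop()
        yield dict(s)
        hi = s[-1][0] if s else -1               # highest-ranked attribute in s
        for a in range(hi + 1, len(domains)):    # attributes in the View
            for v in domains[a]:
                stack.append(s + ((a, v),))

print(sum(1 for _ in extended_se_tree([[True, False]] * 3)))   # (b+1)^n = 3**3 = 27
```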
5 SE-TREE-BASED PRIME IMPLICATE ALGORITHM
In this section, we present an extension of SE-HS and demonstrate its use for the diagnostic framework of [de Kleer et al. 90]. We begin with a short description of the extended theory where kernel diagnoses are characterized as prime implicants of the (newly defined) set of conflicts. An extension of SE-HS, presented next, can be used to find those kernel diagnoses. The extended SE-HS has other useful properties: it can be flexibly focused; it can work with multiple behavioral modes; and it has an incremental version. A two-mode restriction of this algorithm corresponds to an old prime implicate generation algorithm [Slagle et al. 70]. We first demonstrate the empirical performance of this restricted version compared to a recent prime implicate generation algorithm. We then augment it with a new problem decomposition tactic, thereby obtaining an improved algorithm for prime implicate generation.
5.1 EXTENDED THEORY OF DIAGNOSIS
[de Kleer et al. 90] extended Reiter's theory with the notion of kernel diagnoses. Rather than having a diagnosis represent only faulty components (with the implicit assumption that all other components function properly), the new theory allows a diagnosis to explicitly specify working and non-working condition, without any presumption about other components' state.
Definition 5.1 AB-Clause [de Kleer et al. 90]
Let an AB-literal be $AB(c)$, or $\neg AB(c)$ for some $c \in\text{COMPS}$. An AB-clause is a disjunction of AB-literals containing no complementary pair of AB-literals. An AB-clause is positive if all its AB-literals are positive.
Definition 5.2 Conflict [de Kleer et al. 90]
A conflict is any AB-clause entailed by \( SD \cup OBS \). A conflict set is its underlying set of AB-literals.
Note that the new definition extends Reiter's original definition which, roughly speaking, allows only positive conflicts. We shall interchangeably speak about conflicts and their underlying sets.
Definition 5.3 Partial Diagnosis [de Kleer et al. 90]
A partial diagnosis is a conjunction of AB-literals \( P \) such that \( P \) is satisfiable (does not contain complementary pairs), and for any other satisfiable conjunction \( \phi \) covered by \( P \), \( SD \cup OBS \cup \{\phi\} \) is satisfiable.
In other words, not only is \( P \) consistent with the system description and the observed behavior, but any extension of \( P \) that assigns either \( AB \) or \( \neg AB \) to components not mentioned in \( P \) is also consistent.
Definition 5.4 Kernel Diagnosis [de Kleer et al. 90]
A kernel diagnosis is a partial diagnosis such that the only partial diagnosis covering it is itself.
[de Kleer et al. 90] use the notion of prime implicants to characterize kernel diagnoses:
Definition 5.5 Prime Implicant [de Kleer et al. 90]
A conjunction \( \pi \) of \( AB \)-literals, containing no complementary pairs, is an implicant of \( SD \cup OBS \) if it entails every formula in \( SD \cup OBS \). It is a prime implicant if it is not covered by any other implicant.
Theorem 5.6 [de Kleer et al. 90] The kernel diagnoses are precisely the prime implicants of \( SD \cup OBS \).
There are several early algorithms for computing prime implicants (or prime implicates), used primarily for Boolean minimization (e.g. [Tison 67, Slagle et al. 70]). Recent interest in the AI community, for tasks such as ATMS encoding and circumscription, has yielded new algorithms (e.g. [Ngair 92]) as well as improvements to old algorithms (e.g. [Kean & Tsiknis 90, de Kleer 92]). Next, an extension of SE-HS will be shown to find kernel diagnoses, and therefore to generate all prime implicants of a CNF formula.
5.2 SE-HS EXTENDED
[de Kleer et al. 90] characterize kernel diagnoses as the prime implicants of \( SD \cup OBS \). Alternatively, kernel diagnoses can be defined in terms of hitting sets.
Theorem 5.7 Kernel Diagnoses and Conflicts
Let \( CONFLICTS \) be the collection of conflict sets. The kernel diagnoses are precisely those minimal hitting sets for \( CONFLICTS \) that do not contain complementary pairs of \( AB \)-literals.
Two important implications are (a) that SE-HS can be modified to find kernel diagnoses, and (b) that the modified algorithm can also serve to find prime implicants (implicates) in other settings. The proof for an extended version of this theorem can be found in [Rymon 92a]. Algorithm 5.8 presents the extended version of SE-HS's \( EXPAND \) procedure; the main program remains as previously described.
Algorithm 5.8 Node Expansion (version 3)
Procedure \( EXPAND(S) \)
1. Let $Window(S) := \text{View}(\text{ind}, S) \cap \{ c \mid \text{ind}(c) \leq \min_{S' \in \text{NYH}(S)} \max_{c' : B(c') \in S'} \text{ind}(c') \}$
2. For each \( c \in Window(S) \) for which there exists some \( B \in \{ \neg AB, AB \} \) such that \( B(c) \) participates in some set from \( NYH(S) \) do
3. Unless there is \( S' \in \text{HS} \) such that \( S' \subseteq S \cup \{B(c)\} \):
4. If \( S \cup \{B(c)\} \) is a hitting set, add it to \( \text{HS} \);
5. Otherwise, add it to OPEN-NODES.
The new \( EXPAND \) procedure assigns state \( (AB \text{ or } \neg AB) \) to a new component, not yet in the expanded set. The algorithm's correctness is easy to verify.
Besides its simplicity, being derived from a general SE-tree-based framework, SE-HS enjoys the following features:
1. Focusing facility. Due to the possibly overwhelming number of hypothetical diagnoses, much research on ATMS-based diagnostic programs has centered on methods for focusing on the most probable solutions (e.g. [Forbus & de Kleer 88, de Kleer 91]). [Provan & Poole 91] advocate a preference criterion that is based on a diagnosis' use. Exploration policies, as in Section 3.5, can be used for that purpose.
2. Fault models. The importance of explicit models of faulty behavior has been recognized in the model-based diagnosis community (e.g. [Holzblatt 90, de Kleer & Williams 89]). In [Rymon 92a], we extend the diagnostic theory of [de Kleer et al. 90] to multiple behavioral modes and prove that kernel diagnosis in the new theory can still be characterized in terms of hitting sets. SE-HS can be easily extended to any number of behavioral modes.
3. Incrementalism. [Rymon 92a] outlines an incremental diagnostic framework that is based on a variation of SE-HS which can incrementally refine its hypothesis as conflicts arrive.
5.3 PERFORMANCE EVALUATION
We have implemented the extended SE-HS algorithm and have compared its performance to that of a PHI-based prime implicate generation algorithm [Ngair 92]. As before, the two algorithms were run on hundreds of examples that were randomly generated according to the three parameters \( (\#\text{conf}, \#\text{lit}, \#\text{comp}) \). Due to the relative strength of both algorithms, we used larger examples in this experiment. As a side note, the SE-HS implementation is general in that it can take any number of behavioral modes; this generality is not exercised in the experiment, where examples are bi-modal. Figure 7 depicts two one-way sensitivity analyses (for #conf, #lit) and one three-way analysis. Again, shadowed squares correspond to SE-HS performance, open ones to that of the PHI-based algorithm.
5.4 PROBLEM DECOMPOSITION
As so far presented, we could draw a correspondence between nodes explored by the bi-modal version of the extended SE-HS algorithm and the operation of an old prime implicant generation algorithm proposed by Slagle et al. [70]. This is important for two reasons: first, it reveals the general SE-tree-based features of Slagle et al.'s algorithm, but more importantly, our next improvement to SE-HS will result in an improved version of their algorithm.
Where feasible, problem decomposition (also referred to as divide-and-conquer) is a well known strategy to sharply reduce problem solving costs (time, space, etc.) In the context of diagnosis, such an opportunity may arise when a fault is composed of a number of unrelated, or partially related sub-faults. [Wu 90] shows tremendous gain in utilizing problem decomposition techniques in diagnosis.
In the context of multiple fault diagnosis, in addition to potential savings of time and space, decomposition may also lead to a more compact representation of a solution. In many cases, a solution can be written more compactly if it is factorized; a solution that factorizes into $n$ independent binary sub-faults, for example, expands into $2^n$ minimal diagnoses when written out explicitly. Put differently, viewed as formulae, some solutions can be represented compactly in CNF whereas others are more concise in their disjunctive form. This is, roughly, the intuition behind the following heuristic.
Theorem 5.9 Problem Decomposition
If CONFLICTS can be partitioned into two disjoint subsets $C'$ and $C''$, such that no component appears in both subsets, then the minimal hitting sets (MHS) for CONFLICTS are given by:
$$\text{MHS(CONFLICTS)} = \text{MHS}(C') \times \text{MHS}(C''),$$
where $X \times Y \overset{def}{=} \{ x \cup y \mid x \in X,\ y \in Y \}$ is the pairwise-union product.
If a partition exists, it can clearly save significant work: recursive application of SE-HS to each of the two partitions cuts one exponential search space into two smaller ones. The notion of partitioning extends to any number of partitions; taking the parts to be the equivalence classes of the component-connectivity relation makes the partitioning unique. The solution in that case is the Cartesian product of the sub-solutions.
Fortunately, if one exists, there is a simple, almost-linear, algorithm that finds a partitioning for a collection of sets (e.g., using a union-find strategy [Tarjan 83]). Moreover, even if there is no facilitating partitioning to begin with, it is possible that one exists when a node's particular view is considered. Given a node S, recall that any of S's descendants will only expand with respect to View(ind,S). Thus, it is enough to look for a partition in the restriction of NYH(S) to View(ind,S).
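A sketch of the partitioning step (our illustration; the union-find code is a generic textbook version, not the paper's):

```python
def partition_conflicts(conflicts):
    """Split `conflicts` (iterables of components) into groups such that
    no component appears in two groups, via a union-find over components."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]        # path halving
            x = parent[x]
        return x

    for conflict in conflicts:
        members = list(conflict)
        for c in members[1:]:
            parent[find(members[0])] = find(c)   # union with the first member
    groups = {}
    for conflict in conflicts:
        groups.setdefault(find(next(iter(conflict))), []).append(conflict)
    return list(groups.values())

def cross(hs_a, hs_b):
    """The pairwise-union product of Theorem 5.9."""
    return [a | b for a in hs_a for b in hs_b]
```

Running `se_hs` on each group independently and folding the per-group answers with `cross` yields exactly the product of Theorem 5.9.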
Algorithm 5.10 An Amendment to Expand
1. Let $\Gamma$ be the restriction of NYH(S) to components in View(ind,S).
2. If there is a partitioning $\Gamma = \bigcup_i \Gamma_i$ into component-disjoint groups, then
3. Run SE-HS on each of the $\Gamma_i$ independently. Let Hitting($\Gamma_i$) be the corresponding results, and merge $\{S\} \times \prod_i \text{Hitting}(\Gamma_i)$ into HS while checking for possible subsumption.
4. Otherwise, expand S as usual.
Exact prioritization is a problem in the augmented algorithm since every node in a new tree represents only part of (possibly many) solutions. For similar reasons, subsumption has to be more aggressively monitored (although this is easily done when hitting sets are cached in an SE-tree-based data structure). Before, subsumption was avoided by the subsuming solution being discovered prior to the subsumed one. Now, it is possible that a solution node in the original SE-tree will be subsumed by some but not all of the solutions in which a given node in some new tree participates. Nevertheless, problem decomposition is still attractive since it is particularly effective in problems which admit highly disjunctive solutions. Those are hardest for the original SE-HS algorithm. The following example demonstrates the effectiveness of the problem decomposition heuristic.
Example 5.11 Consider the following collection of conflicts: $\{\{\text{AB}(1), \text{AB}(3), \text{AB}(4)\}, \{\text{AB}(5), \text{AB}(6)\}, \{\text{AB}(3), \text{AB}(4)\}, \{\text{AB}(2), \text{AB}(5)\}\}$. Figure 8 illustrates the SE-tree explored by SE-HS without decomposition. As before, O's denote hitting sets, X's mark closed nodes. Exploration for the same problem with decomposition is depicted in Figure 9. There, the first step involved partitioning the collection of conflicts into two component-disjoint groups. Thereafter, two sub-problems are solved, and the solution is the cross-product of the respective results, i.e. $\{\{\text{AB}(3)\}, \{\text{AB}(4)\}\} \times \{\{\text{AB}(5)\}, \{\text{AB}(2), \text{AB}(6)\}\}$. The reductions in time and space are obvious.
5.5 ABDUCTIVE DIAGNOSTIC MODELS
In [Reggia et al. 85], diagnosis is formulated as a generalized set covering (GSC) problem. In their basic model, a diagnostic problem is represented in a bipartite graph in which symptoms and disorders form each of the respective partitions. Each disorder in the graph is linked to all of its symptoms via a causes relation. Given a set of observed symptoms, a diagnosis is defined as a minimal set of disorders which covers all symptoms.
A most-probable-first search algorithm for that problem is described in [Peng & Reggia 87]. It searches the space of sets of disorders for such sets which cover all symptoms. This algorithm, however, is redundant in that partial hypotheses may be discovered repeatedly during search. That redundancy could be avoided if an SE-tree framework were adopted.
Alternatively, the problem can be turned into a hitting-set problem. [Reiter 87] presents a transformation of a GSC representation of a diagnostic problem into his own framework. There is, in fact, a better transformation which avoids the conflict generation part of Reiter's theory by mapping the GSC problem directly into an HS one. Then, we could simply use SE-HS. Given a set of symptoms \( s_i \), we could define a "conflict set" for each symptom:
\[
\text{conflict}(s_i) \overset{def}{=} \{ d \mid d \text{ is a disorder and } d \text{ causes } s_i \}
\]
Presented with \( s_i \), the conflict asserts that it is impossible that none of its causing disorders are present. It is easy to prove that a set of disorders is a minimal set cover iff it is a minimal hitting set for such conflicts.
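The transformation is a one-liner; this sketch (ours) builds the conflicts from a causes relation encoded, as an illustrative assumption, as a disorder-to-symptoms mapping:

```python
def gsc_to_conflicts(causes, observed_symptoms):
    """causes: dict mapping each disorder to the set of symptoms it can cause.
    Returns one 'conflict' per observed symptom: the set of disorders that
    could have caused it. Minimal hitting sets of these conflicts are exactly
    the minimal set covers of the observed symptoms."""
    return [frozenset(d for d, syms in causes.items() if s in syms)
            for s in observed_symptoms]
```

Feeding the result to the `se_hs` sketch above yields the GSC diagnoses directly.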
In [Peng & Reggia 87], hypotheses are explored by their likelihood. The SE-tree-based framework allows such exploration, as well as a variety of other exploration policies. In [Peng & Reggia 87], non-minimal hypotheses are also explored. This is easily done in SE-HS by removing the subsumption requirement (\texttt{Expand}, step 3). In addition, pruning rules (cf. Section 3.6) can be used to avoid unpromising parts of the search space. Problem decomposition (cf. Section 5.4) may also be helpful in reducing time and cost. Finally, it seems that other models of diagnosis in which solutions are defined in terms of sets, e.g., [Bylander et al. 91, Poole 91, Console & Torasso 91], can also use an SE-tree-based search framework in their implementations.
6 LEARNING MINIMAL CLASSIFICATION RULES
Decision trees are an important tool, and serve as an underlying representation in many problem solving tasks. Significant research in Machine Learning has used decision trees in architectures for induction of classification knowledge from examples. Best known are ID3 [Quinlan 86] and its descendants. In [Rymon 92b], we present an SE-tree-based characterization of the induction task, contrast it from classification and search perspectives with the decision-tree-based framework, and compare the two empirically. Here, we will only contrast features of the two representations, concentrating on search aspects.
Definition 6.1 Rules
A training set (TSET) is a collection of examples. Each example is a complete description for which a correct classification (denoted $x$) is known. A rule is a partial description $R$ such that if $t, t' \in TSET$ are such that $R \subseteq t, t'$, then $x(t) = x(t')$. It is minimal if none of its subsets is a rule.
The objective of a learning system is to learn rules that can be expected to perform well not only on the training set, but also on new examples. While there is no consensus as to the precise composition of such a collection, it is fairly widely accepted that general (minimal) rules are preferable to specific ones. We shall therefore concentrate on finding minimal classification rules.
6.1 PROPOSED SOLUTION
ID3 constructs a decision tree in which internal nodes are labeled with attributes, edges with instantiations of these attributes, and leaves with a class prediction. Briefly, the tree is constructed by successively partitioning the set of training examples until all remaining examples are equally classified. Such a node becomes a leaf and is labeled with that class.

While construction of an arbitrary decision tree that correctly classifies the training data is straightforward, it is well known that the success of decision-tree-based algorithms on future data is crucially dependent on the particular order in which the attributes were chosen in the successive refinement steps [Fayyad & Irani 88, Goodman & Smyth 88]. As Quinlan notes, one can often not afford to generate all possible decision trees in order to choose the best one. Thus, ID3 (as do other algorithms) uses a heuristic to guide its choice of attributes. One prominent heuristic is based on entropy minimization, using Shannon's information-theoretic measure.
6.2 SE-TREE-BASED ALTERNATIVE
Aimed at all minimal rules, SE-Learn (Algorithm 6.3) uses an SE-tree-based framework. As before, open nodes are prioritized, facilitating various exploration policies. In the context of learning, these will be used to represent bias and will be briefly discussed at the end of this section. As before, SE-Learn exploits the systematic ordering to prune away unpromising parts of the search space (i.e., nodes which cannot lead to minimal rules).
Definition 6.2 Candidate Expansions
Let $S$ be a node, $TSET(S) \stackrel{\text{def}}{=} \{ t \in TSET \mid S \subseteq t \}$. We say that $(A=v)$ is a candidate expansion of $S$ if $A \in \text{View}(\text{ind}, S)$, $v \in \text{Dom}(A)$, and in addition $TSET(S \cup \{(A=v)\}) \neq TSET(S)$. A node $S$ will be called impartial if either (1) $TSET(S)$ is empty; or (2) there exist $t, t' \in TSET(S)$ disagreeing on their class, and only differing in their assignment to attributes not in $\text{View}(\text{ind}, S)$.
Algorithm 6.3 Induction of Minimal Rules
Program SE-Learn (TSET)
1. Let RULES := \{\}; OPEN-NODES := \{\emptyset\}
2. Until OPEN-NODES is empty do
3. Expand(Next-Best(OPEN-NODES))
Procedure Expand($S$)
1. For each candidate expansion $(A=v)$, let $R \stackrel{\text{def}}{=} S \cup \{(A=v)\}$, do
2. If $R$ is not impartial, nor is it subsumed by any $R' \in \text{RULES}$, then
3. If $R$ is a rule then add it to RULES;
4. Or else add it to OPEN-NODES.
Theorem 6.4 If open nodes are prioritized by their label's cardinality then SE-Learn is correct (produces all and only minimal rules).
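A minimal executable sketch of SE-Learn (ours; the rule test follows Definition 6.1, the impartiality test is simplified to the empty-cover case, and the toy data set is an illustrative assumption):

```python
from collections import deque

def se_learn(tset):
    """tset: list of (example, cls) pairs, each example a tuple of values.
    Returns all minimal rules as frozensets of (attribute, value) pairs."""
    n = len(tset[0][0])
    rules = []
    open_nodes = deque([()])                     # breadth-first = by cardinality
    while open_nodes:
        s = open_nodes.popleft()
        hi = s[-1][0] if s else -1
        for a in range(hi + 1, n):               # attributes in the View
            for v in {ex[a] for ex, _ in tset}:
                r = s + ((a, v),)
                cover = [(ex, c) for ex, c in tset
                         if all(ex[i] == w for i, w in r)]
                if not cover:
                    continue                     # impartial: TSET(R) is empty
                if any(rule <= set(r) for rule in rules):
                    continue                     # subsumed by a known rule
                if len({c for _, c in cover}) == 1:
                    rules.append(frozenset(r))   # R is a rule
                else:
                    open_nodes.append(r)
    return rules

# Toy training set: the class is simply the value of attribute 0.
print(se_learn([((0, 0), 0), ((0, 1), 0), ((1, 0), 1), ((1, 1), 1)]))
```

On this toy set the output consists of the two minimal rules $(A_0 = 0)$ and $(A_0 = 1)$; the uninformative attribute 1 never survives the rule test.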
Given the incompleteness of the examples with which they are presented, learning programs may often have to choose among a number of candidate classifiers, all of which are consistent with the training set. External preference criteria, also referred to as bias [Mitchell 80], may be necessary for that purpose. Within an SE-tree-based framework, exploration policies can serve in the implementation of such bias. In programs such as SE-Learn, where all rules are explored during the learning phase, an exploration policy will serve in the classification of new objects by guiding preference over possibly conflicting rules. In variants of SE-Learn in which only a subset of the rules are learned, an exploration policy will implement a preference among possible subsets. As was the case for SE-HS, any exploration policy that is monotonic will result in a correct algorithm. Important policies include (1) exploration by cardinality, where a preference is given to simpler rules; (2) by probability (using either a known distribution or frequency in the training set), resulting in preference to characterization of denser parts of the search space; (3) using Shannon’s information-theoretic measure, preferring more discriminating rules; and (4) by utility or some other monotone preference criterion.
6.3 PROJECTED GAIN AND COST
Three related problems arise when a decision tree is used as a framework for search and representation of minimal rules:
1. The minimality problem — rules will often not be discovered in their minimal form;
2. The multiplicity problem — a minimal rule may be discovered repeatedly, disguised in a number of its (non-minimal) supersets; and
3. The incompleteness problem — some minimal rules may not be discovered at all.
The minimality problem is often addressed by subsequently pruning the rules extracted from the decision tree [Quinlan 87]. The replication problem, a special case of multiplicity in which sub-trees are replicated within a single decision tree, has been addressed by several researchers, e.g. [Rivest 87, Pagallo & Haussler 90]. The more general multiplicity problem, however, may take many other forms. Incompleteness is the result of the mutual exclusiveness property of decision-tree-based rules (see [Weiss & Indurkhya 91]).
In contrast, the SE-tree-based framework does not suffer from these problems:
1. Rules are always discovered in minimal form;
2. Minimal rules are always discovered uniquely; and
3. All minimal rules are discovered.
The fact that any given decision tree may suffer from those problems suggests that none is globally optimal. The SE-tree, however, can be shown to embed many decision trees. More specifically, all decision trees in which attributes are chosen monotonically with respect to some arbitrary indexing, are topologically and semantically equivalent to a tree formed from a subset of the SE-tree’s edges.
Complexity-wise, the SE-tree’s exhaustiveness and relatively large initial branching factor are deceiving. Its complexity is fairly close to that of a single decision tree.
Theorem 6.5 SE-Tree Size
If all attributes are b-valued, then the number of nodes in a complete decision tree is \( \sum_{i=0}^{n} b^i > b^n \). In sharp contrast, the size of a super-tree in which all decision trees are embedded is significantly larger: \( b^n \cdot n! \). The size of a complete SE-tree is only \( (b + 1)^n \).
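For concreteness, take $b = 2$ and $n = 10$: a complete decision tree has $\sum_{i=0}^{10} 2^i = 2047$ nodes, the super-tree embedding all decision trees has $2^{10} \cdot 10! \approx 3.7 \times 10^9$ nodes, while the complete SE-tree has only $3^{10} = 59{,}049$ nodes.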
7 Summary
Many problems in which partial sets or partially instantiated set of variables are targeted share a common structure when viewed as search problems. We presented the Set-Enumeration (SE)-tree as a simple, complete and irredundant vehicle for representing and/or enumerating sets in a best-first fashion. As such, it can serve as the basis for a search-based framework for many such problems.
To demonstrate its usefulness and effectiveness, we presented SE-tree-based algorithms for the hitting-set problem, in the context of consistency-based diagnosis. We used the particular instantiations of these algorithms to demonstrate general features of the paradigm, and compare it with current algorithms. Throughout this process, we developed several add-on tactics including SE-tree-based pruning rules, exploration policies, and problem decomposition methods. Besides their particular incarnations in the SE-HS algorithms, those methods are general and can be shared across many problem domains. In the last part of this paper, in the context of rule induction, we compared features of an SE-tree-based representation with one that is based on decision trees.
Acknowledgements
This research was supported in part by a graduate fellowship under ARO Grant DAAL03-89-C0031PR. I thank Teow-Hin Ngair, Greg Provan, Russ Greiner, Alex Kean, and Ron Rivest for useful discussions and suggestions. I also thank Kevin Atteson, Michael Niv, Philip Resnik, Jeff Siskind, and Bonnie Webber for comments on previous drafts. Finally, I am grateful to Barbara Smith for providing the HS-dag implementation and Teow-Hin Ngair for providing the code of his prime implicate generation algorithm.
Thank you for purchasing the MEAP of *Modern Fortran: Building efficient parallel applications*. I'm excited that you chose to be part of this journey with me. This book is still in early development and I hope that you can help me make it as great as possible.
As you can probably tell from the title, this is a book about parallel programming with modern Fortran. I place emphasis on modern because Fortran has evolved substantially since 1957, when it appeared as the first high-level programming language in history. Despite the explosive growth of mobile- and web-oriented technologies in the last two decades, Fortran remains the only standardized language with a native parallel programming model – the Fortran Coarrays.
This book will teach you Fortran and parallel programming by doing. It will guide you step-by-step through the development of a complete and parallel fluid dynamics solver from scratch. We will start by learning the core features of the language, essential for any Fortran application. We will then dive into Fortran Coarrays and learn how to decompose the computational domain into parallel tiles and exchange data between them. We will dig into measuring and improving performance of parallel apps, and later some object-oriented and functional programming techniques for data abstraction and writing cleaner code. Finally, we will cover modern approaches to interfacing C applications, as well as I/O with common data formats.
I believe this book will be most useful to Fortran novices or existing Fortran programmers who need to harness the power of many parallel CPUs for their application. Whether you use parallelism to speed up your code, or to overcome the memory limitations of a single computer, this book will show you how to do it in practice. In the end, you will come out with a skill set that you can apply to your own application, be it numerical weather prediction, aerodynamics simulation, or machine learning.
Your advice is essential for my writing of this book. Please use the [Author Online Forum](https://forums.manning.com/forums/modern-fortran) to post feedback or ask questions. This will give you an opportunity to steer the writing in a certain direction. If I left something unclear, or you find a code example with a critical error, let me know! I want to create the best book possible for you and others.
—Milan Curcic
brief contents
PART 1: GETTING STARTED WITH MODERN FORTRAN
1 Introducing Fortran
2 Getting started: Minimal working app
3 Writing reusable code with procedures and modules
4 Fast math with array operators and arithmetic
PART 2: ADVANCED FORTRAN USE
5 Going parallel with Fortran coarrays
6 Using derived types to work with abstract data
7 Overloading operators and generic functions
8 Input and output: Namelists, JSON, and NetCDF
PART 3: THE FINAL STRETCH
9 Advanced parallelism with teams and events
10 Interoperability with C: Exposing your app to the web
11 Easy to use apps with rich CLIs and documentation
12 Publishing your Fortran app
APPENDIXES
A Setting up the Fortran development environment
B From calculus to code
C Glossary
This is a book about Fortran, one of the first high-level programming languages in history. It will teach you the language by guiding you step-by-step through the development of a fully-featured, parallel physics simulation app. Notice the emphasis on parallel. I will introduce the concept of parallel programming early on, and start applying it from first principles. Parallel programming allows you to break down your problem into pieces, and let multiple processors each work on only part of the problem, thus reaching the solution in less time. By the end, you will be able to recognize problems that can be parallelized, and you will be able to use modern Fortran techniques to solve them.
Modern Fortran is not a comprehensive reference manual for every Fortran feature. There are significant parts of the language that I have omitted on purpose. Instead, I focus on the most practical Fortran features that you would use to build a real-world application. As we work on our app chapter by chapter, we will apply modern Fortran features and software design techniques to make our app robust, portable, and easy to use and extend. That said, let me correct myself: this is not just a book about Fortran. This is a book about building robust, parallel software using modern Fortran.
## 1.1 What is Fortran?
"I don’t know what the language of the year 2000 will look like, but I know it will be called Fortran."
-- Tony Hoare, winner of the 1980 Turing Award
Fortran is a general-purpose, parallel programming language that excels in scientific and engineering applications. Originally called FORTRAN (FORmula TRANslation) upon its appearance in 1957, it has evolved over the decades into a robust, mature, and performance-oriented programming language. Today, Fortran keeps churning under the hood of many systems that we take for granted:
- Numerical weather, ocean, and surf prediction
- Climate science and prediction
- Computational fluid dynamics software used in mechanical and civil engineering
- Aerodynamics solvers for designing cars, airplanes, and spacecraft
- Fast linear algebra libraries used by machine learning frameworks
- Benchmarking the fastest supercomputers in the world (top500.org)
Here’s a specific example. In my work, I deal mostly with the development of numerical models for weather, ocean surface waves, and deep ocean circulation. Speaking about it over the years, I found that most people didn’t really know where weather forecasts come from. The general idea is that a group of meteorologists would gather and together come up with a chart of what the weather will be like tomorrow, in a week, or a month from now. This is only partially true. In reality, we use sophisticated numerical models that crunch a huge amount of numbers on very large computers. In layman’s terms, these models simulate the atmosphere to create an educated guess of what the weather will be like some time in the future. The results of these simulations are then used by meteorologists to create a meaningful weather map (Figure 1.1). This map shows just a sliver of all the data that is produced by the model. The output of a weather forecast like this is measured in hundreds of gigabytes.
Figure 1.1. A forecast of Hurricane Irma on September 10, 2017, computed by an operational weather prediction model written in Fortran. Shading and barbs shows surface wind speed in meters per second, and contours are isolines of sea-level pressure. A typical weather forecast is computed in parallel using hundreds of CPUs. Data provided by the NOAA National Center for Environmental Prediction (NCEP).
The most powerful Fortran applications run in parallel on hundreds or thousands of CPUs. Development of the Fortran language and its libraries has been largely driven by the need to solve extremely large computational problems in physics, engineering, and biomedicine. To access even more computational power than what the most powerful single computer at the time could offer, in the late 20th century we started connecting many computers with high-bandwidth networks, and let them each work on a piece of the problem. The result is the so-called supercomputer, a massive computer that is typically made of thousands of commodity CPUs (Figure 1.2). Supercomputers are similar to modern server farms hosted by Google or Amazon, except that the network infrastructure in supercomputers is designed to maximize bandwidth and minimize latency between the servers themselves, rather than the outside world. As a result, the CPUs in a supercomputer act like one giant processor with distributed-memory access that is almost as fast as local memory access. To this day, Fortran remains the dominant language used for such massive-scale parallel computations.
Figure 1.2. MareNostrum 4 supercomputer at the Barcelona Supercomputing Center. The computer is housed inside the Torre Girona Chapel in Barcelona, Catalonia, Spain. A high-speed network connects each cabinet one to another. With 165,888 Intel Xeon cores, MareNostrum 4 is the fastest supercomputer in Spain, and 16th fastest in the world as of November 2017 (www.top500.org/lists/2017/06/). It is used for many scientific applications, from astro- and materials physics, to climate and atmospheric dust transport prediction, to biomedicine. Image source: www.bsc.es/marenostrum/marenostrum.
## 1.2 Fortran features
"This is not your parents' Fortran."
-- Damian Rouson
In the context of programming languages, Fortran is:
- **Compiled**: You will write whole programs and pass them to the *compiler* before executing them. This is in contrast to *interpreted* programming languages like Python or Javascript which can be parsed and executed line by line. While this makes writing programs a bit more tedious, it allows the compiler to generate extremely efficient executable code. In typical use cases, Fortran programs are one or two orders of magnitude faster than equivalent Python programs.
What is a compiler?
A computer program that reads source code written in one programming language and translates it to equivalent code in another programming language. In our case, a Fortran compiler will read Fortran source code and generate equivalent assembly code and machine (binary) instructions.
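For instance, here is a complete one-statement program and one possible way to build and run it with gfortran; the file name and compiler invocation are my own illustration, and the exact commands vary by setup:

```fortran
! hello.f90 -- build and run with, for example:
!   gfortran hello.f90 -o hello
!   ./hello
program hello
  print *, 'Hello from compiled Fortran!'
end program hello
```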
- **Statically-typed**: In Fortran, you will give all variables a type upon declaration, and they will remain of that type until the end of the program:
```fortran
real :: pi
pi = 3.141592
```
1. `pi` must be declared before use
2. `pi` remains a `real` number until the program halts.
You will also need to explicitly declare the variables before their use, which is known as *manifest typing*. Finally, Fortran employs so-called *strong typing*, which means that the compiler will raise an error if it notices that a procedure is being invoked with an argument of the wrong type. While static typing helps the compiler generate efficient programs, manifest and strong typing enforce good programming hygiene and make Fortran a safe language. I find it easier to write correct Fortran programs than Python or Javascript programs, which come with many hidden caveats and "gotchas".
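As a minimal sketch of my own (the program and its identifiers are not from the standard), the following compiles and runs as written, while uncommenting the last assignment in the main program makes the compiler reject it:

```fortran
! Manifest typing (implicit none forces declarations) and strong typing
! (argument types are checked against the explicit interface).
program typing_demo
  implicit none
  real :: area
  area = circle_area(2.0)
  print *, 'area = ', area
  ! area = circle_area(2)   ! compile-time error: integer passed for a real dummy
contains
  real function circle_area(radius)
    real, intent(in) :: radius
    circle_area = 3.141592 * radius**2
  end function circle_area
end program typing_demo
```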
- **Multi-paradigm**: You can write Fortran programs in several different paradigms, or styles. These include imperative, procedural, array-oriented, object-oriented, and even functional programming. Some paradigms are more appropriate than others, depending on the problem you are trying to solve. We will explore different paradigms in more detail in later chapters.
- **Parallel**: Fortran is also a *parallel* language. This refers to the capability to split the computational problem between multiple processes that communicate through whatever network lies between them. These processes can be running on the same processing core (known as thread-based parallelism), on different cores that share RAM (shared-memory parallelism), or distributed across the network (distributed-memory parallelism). Computers working together on the same parallel program can be physically located across the room, or even across the world. The Fortran 2008 standard introduced *coarrays*, a syntax element that allows you to express parallel algorithms and remote data exchange without any external libraries. A coarray is an entity that allows you to access remote memory in the same way that you would access elements of an array. I show an example of exchanging data between *images* (a Fortran word for parallel processes) in the listing below.
```fortran
integer, codimension[*] :: a
integer :: i

a = this_image()

if (this_image() == 1) then
  do i = 1, num_images()
    write(*,*) 'Value on image', i, 'is', a[i]
  end do
end if
```
1. Each image declares a local copy of an integer `a`
2. Each image assigns its number (1, 2, 3, etc.) to `a`
3. Only image 1 will enter this `if`-block
4. Loop from 1 to the total number of images
5. For each remote image, image 1 will get the value of `a` on that image, and print it to screen
The Fortran standard itself does not dictate how the data exchange is implemented in the underlying hardware and operating system — it merely specifies the syntax and the expected behavior. This allows the compiler developers to use the optimal mechanisms available on specific hardware. Given a capable compiler and libraries, the Fortran programmer will be able to write code that will run on conventional CPUs, many-core (hybrid) CPUs like Intel MIC co-processors, or general-purpose GPUs.
- **Mature**: In 2016, we celebrated 60 years since the birth of Fortran. The language has evolved through several iterations of the standard:
- FORTRAN 66, also known as FORTRAN IV (ANSI, 1966)
- FORTRAN 77 (ANSI, 1978)
- Fortran 90 (ISO/IEC, 1991; ANSI, 1992)
- Fortran 95 (ISO/IEC, 1997)
- Fortran 2003 (ISO/IEC, 2004)
- Fortran 2008 (ISO/IEC, 2010)
- Fortran 2018 (to be published in 2018)
Fortran development and implementation in compilers has been heavily supported by the industry: IBM, CRAY, Intel, NAG, Portland Group/NVIDIA, and others. There have also been significant developments in the open source community, most notably through development of gfortran (gcc.gnu.org/wiki/GFortran), a free Fortran compiler that is part of the GNU Compiler Collection (GCC). Finally, because of its role in the early days of computer science, today we have a vast set of robust and mature libraries that have served as the computational backbone of many applications. With mature compilers and a large and trusted legacy code base, Fortran remains the language of choice for many new software projects for which computational efficiency and parallel execution is key.
- **Easy to learn**: Believe it or not, Fortran is quite easy to learn. This was my experience and the personal experience of many of my colleagues. This is partly due to Fortran's strict typing system, which allows the compiler to keep the programmer in check, and warn them at compile time when they mess up. While verbose, the syntax is clean and easy to read. However, like every other programming language or skill in general, Fortran is difficult to master. This is one of the reasons I chose to write this book.
## 1.3 Why learn Fortran?
"There were programs here that had been written five thousand years ago, before
Humankind ever left Earth. The wonder of it - the horror of it, Sura said - was that
unlike the useless wrecks of Canberra’s past, these programs still worked! And via
a million million circuitous threads of inheritance, many of the oldest programs
still ran in the bowels of the Qeng Ho system."
-- Vernor Vinge A Deepness in the Sky
Since the early 1990s, we have seen an explosion of new programming languages and frameworks, mainly driven by the widespread use of the internet and, later, mobile devices. C++ took over computer science departments, Java has been revered in the enterprise, Javascript redefined the modern web, R became the mother tongue of statisticians, and Python rose up as an all-around great programming language for most tasks. Where does Fortran fit in all this? Through steady revisions of the language, Fortran has maintained a solid footing in its niche domain, High Performance Computing (HPC). Its computational efficiency is still unparalleled, with only C and C++ coming close. However, unlike C and C++, Fortran has been designed for array-oriented calculations, and is in my opinion significantly easier to learn and program. Finally, a strong argument for Fortran is its native support for parallel programming, introduced in the 2008 revision of the standard.
What is High Performance Computing?
High Performance Computing (HPC) is the practice of combining computer resources to solve computational problems that would otherwise not be possible with a single desktop computer. HPC systems typically aggregate hundreds or thousands of servers and connect them with fast networks. Most HPC systems today run some flavor of Linux OS.
Despite being a decades-old technology, Fortran has several attractive features that make it indispensable, even compared to more recent languages:

- **Array-oriented**: Fortran 90 introduced array-oriented syntax and constructs, which greatly simplified operations that work on arrays element-wise. Consider the task of multiplying two 2-dimensional arrays element-wise:
```fortran
do j = 1, jm
  do i = 1, im
    c(i,j) = a(i,j) * b(i,j)
  end do
end do
```
Since Fortran 90, you can simply do:

```fortran
c = a * b
```
This is not only more expressive and readable, but also indicates to the compiler that it can choose the most optimal way to perform the operation. Arrays lend themselves very well to CPU architectures and computer memory because they are designed as a contiguous sequence of numbers, and in that sense, mirror the physical layout of the memory space. Fortran compilers are capable of generating extremely efficient machine code because of all the assumptions that they can safely make.
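To see both forms side by side in one complete program, here is a small sketch of my own (not taken from the book's app); the loop version and the whole-array version produce identical results:

```fortran
! Element-wise multiply: explicit loops versus whole-array syntax.
program array_ops
  implicit none
  integer, parameter :: im = 3, jm = 2
  real :: a(im, jm), b(im, jm), c(im, jm), d(im, jm)
  integer :: i, j

  a = 2.0
  b = 10.0

  do j = 1, jm          ! loop version
    do i = 1, im
      c(i, j) = a(i, j) * b(i, j)
    end do
  end do

  d = a * b             ! whole-array version; same result

  print *, 'max difference: ', maxval(abs(c - d))   ! prints 0.0
end program array_ops
```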
- **The only parallel language developed by a standards committee (ISO):** The Fortran standards committee ensures that the development of Fortran goes in the direction that supports its target audience: computational scientists and engineers.
- **Mature libraries for science, engineering, and math:** Fortran started in the 1950s as the programming language for science, engineering, and mathematics. Decades later, we have a rich legacy of robust and trusted libraries for linear algebra, numerical differentiation and integration, and others. These libraries have been used and tested by generations of programmers, to the point that they are guaranteed to be almost bug-free.
- **Growing general-purpose library ecosystem:** In the past decade, Fortran has also seen a growing ecosystem of general-purpose libraries: text parsing and manipulation, I/O libraries for many data formats, working with dates and times, collections and data structures, and so on. Someone has even built a web framework as a proof of concept ([fortran.io](https://fortran.io)). I think that any programming language is only as powerful as its libraries, and the growing number of Fortran libraries makes it more useful today than ever before.
- **Still unmatched performance:** Fortran is still about as close to the metal as it gets with high-level programming languages. This is the case both because of its array-oriented design and mature compilers that are getting increasingly better at optimizing code. If you are working on a problem that involves many mathematical operations on large arrays, few other languages get close to Fortran’s performance.
In summary, learn Fortran if you need to implement efficient and parallel numerical operations on large multi-dimensional arrays.
## 1.4 Advantages and disadvantages
Many Fortran features give it both an advantage and a disadvantage. I list some below:
- **Domain-specific language:** Despite being technically a general-purpose language, Fortran is very much a domain-specific language in the sense that it has been designed for science, engineering, and math applications. If your problem involves some arithmetic on large and structured arrays, Fortran will shine. If you want to write a web browser or low-level device drivers, Fortran is not the right tool for the task.
- **A niche language**: Fortran is extremely important to a relatively small number of people: scientists and engineers in select disciplines. As a consequence, it may often be difficult to find as many tutorials or blogs about Fortran as there are for more mainstream languages. At the time of this writing, there are a bit over 8,000 questions with the Fortran tag on Stack Overflow, a popular programming Q&A website. Contrast this with a whopping 800,000 questions with the Python tag.
- **Statically and strongly typed language**: As I mentioned above, this makes Fortran a very safe language to program in, and helps compilers generate efficient executables. On the flip-side, it makes it less flexible and more verbose, and thus not the ideal language for rapid prototyping.
- **Nothing is a pointer**: Unless you explicitly declare it a pointer. Every variable gets its own space in physical memory. In general, you wouldn’t use pointers in Fortran unless you have to. For example, implementing a linked list requires use of pointers by definition. Pointers are also the only way to create a memory leak in Fortran, making it a relatively safe language.
- **Garbage collection**: Fortran has a basic garbage collection model specified by the standard. Any non-pointer variable is automatically freed from memory once it goes out of scope. However, any memory allocated through pointers must be explicitly deallocated after use to avoid the possibility of memory leaks. There is thus some responsibility on you as the programmer to keep track of how pointers are used.
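A minimal sketch of the pointer bookkeeping this bullet describes (the program and identifiers are mine, for illustration only):

```fortran
! A non-pointer variable is freed automatically at the end of its scope;
! memory allocated through a pointer must be deallocated explicitly.
program pointer_demo
  implicit none
  real, pointer :: p(:) => null()

  allocate(p(1000))    ! heap memory, reachable only through p
  p = 1.0
  print *, sum(p)

  deallocate(p)        ! omitting this line would leak the array
end program pointer_demo
```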
The comparison of Fortran to Python that follows will help you better understand its advantages and disadvantages in the general-purpose programming context.
### 1.4.1 Side-by-side comparison with Python
How does modern Fortran compare to a more recent general-purpose programming language? Python has had the most rapidly growing ecosystem in the past few years for data analysis and light number crunching ([stackoverflow.blog/2017/09/14/python-growing-quickly](https://stackoverflow.blog/2017/09/14/python-growing-quickly)). It is used by many Fortran programmers that I know for post-processing of model output and data analysis. In fact, Python is my second favorite programming language (guess which one is my number one). Because of the application domain overlap between Fortran and Python, it is useful to summarize the main differences between these languages. If you are a Python programmer, this summary will give you an idea of what you can and cannot do with Fortran.
**Table 1.1. Comparison between Fortran and Python features. This table lists only those features available in the core implementation of each language.**

| Feature | Fortran | Python |
| --- | --- | --- |
| First appeared | 1957 | 1991 |
| Latest iteration | Fortran 2018 | 3.6.5 (2018) |
| International standard | ISO/IEC | No |
| Implementation language | C, Fortran, Assembly (compiler dependent) | C |
| Compiled vs. interpreted | Compiled | Interpreted |
| Typing discipline | Static, strong | Dynamic, strong |
| Parallel | Shared and distributed memory | Shared-memory only |
| Multidimensional arrays | Yes, up to 15 dimensions | 3rd-party library only (numpy) |
| First array index | 1 | 0 |
| Intrinsic types | character, complex, integer, logical, real | bool, bytearray, bytes, complex, dict, ellipsis, float, frozenset, int, list, set, str, tuple |
| Integer kinds | 1, 2, 4, and 8 bytes, signed only | 2, 4, and 8 bytes, signed and unsigned |
| Real / float kinds | 4, 8, and 16 bytes | 4 and 8 bytes |
| Constants | Yes | No |
| Pointers | Explicit | Implicit |
| Classes | Yes | Yes |
| Encapsulation | Yes | No |
| Inheritance | Yes | Yes |
| Polymorphism | Limited | Yes |
| Generic programming | Limited | Yes |
| Pure functions | Yes | No |
| Higher-order functions | Limited | Yes |
| Anonymous functions | No | Yes |
| Metaprogramming | Preprocessor macros only | Yes |
| Garbage collection | None | Optional |
| Interoperability with other languages | C (limited) | C (limited) |
| OS interface | Limited | Yes |
Going through Table 1.1, we notice the key differences between Fortran and Python:
- Fortran is developed by an international standards committee. New language features and programming paradigms are introduced into revisions of the Fortran standard more slowly, but the committee ensures that the usefulness of the language does not decline for its target audience - scientists and engineers.
- Fortran is compiled and statically typed, while Python is interpreted and dynamically typed. This makes Fortran a bit more verbose and slower to write programs in, but makes it easier for the compiler to generate fast binary code. This is thus a blessing and a curse - Fortran is not designed for rapid prototyping, but allows producing robust and efficient programs.
- Parallelism on both shared and distributed memory computers is native to Fortran. Shared memory parallelism is available in Python using the `multiprocessing` module, however, distributed-memory parallelism is possible only with a third party library that interfaces a message passing protocol written in another language.
- Fortran is array-oriented. Arrays are also where Fortran performs best, as they map well to the layout of elements in memory. In contrast, Python wants little to do with arrays except in special cases. The array-oriented programming model came about from the need of scientists and engineers to apply same arithmetic operations to a large number of elements, and do it fast. The need for blazing fast array operations drove the development of the SIMD (Single Instruction Multiple Data) computing architecture and vector computers in the 1970s, which dominated the supercomputer space through late 1990s. Similarly, GPUs (Graphics Processing Units) were developed with the goal to rotate and translate a large number of small matrices at once. Originally pushed by the video game industry, GPUs are coming back as an important player in general-purpose HPC applications.
- Fortran offers a minimal set of intrinsic types, and most of them are numerical. The standard library lacks common collections and data structures such as lists, dictionaries, and queues. However, it is relatively straightforward to implement these with core Fortran features, as we will learn later in this book. Because of limited types and data structures out-of-the-box, Fortran is not the ideal language for complex business and web applications that operate on unstructured user data in real time. Nevertheless, thanks to the object-oriented features introduced in Fortran 2003 and 2008, several libraries with general-purpose, reusable data structures are now available.
- While Fortran has had a powerful object-oriented programming model since Fortran 2003, it still has limited capability in terms of generic (procedures accepting arguments of any type) and functional programming. For example, while you can pass a function as an argument to another function, it is still not possible to create and return a function object programmatically. Fortran also has an advantage in terms of declaring pure functions, which allows the compiler to execute them in the most efficient way it can find. Inclusion of more advanced programming paradigms into the Fortran standard has been limited to ensure that program performance remains close to that of machine instructions or assembly code.
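As a small illustration of the pure function point above (a sketch of mine, not from the book's app), a `pure` procedure declares that it has no side effects, which the compiler verifies and can exploit:

```fortran
! A pure function may not modify global state or its arguments; all dummy
! arguments must therefore be intent(in).
program pure_demo
  implicit none
  print *, gravity_force(5.972e24, 7.348e22, 3.844e8)   ! Earth-Moon, roughly
contains
  pure real function gravity_force(m1, m2, r)
    real, intent(in) :: m1, m2, r
    real, parameter :: g = 6.674e-11
    gravity_force = g * m1 * m2 / r**2
  end function gravity_force
end program pure_demo
```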
In summary, it is difficult to use Fortran to write device drivers, graphical video games, or a web browser. However, if you need to solve a large numerical problem that can be distributed across multiple computers, Fortran is the ideal implementation language.
## 1.5 The parallel Fortran mental model
Let me take a few minutes to illustrate the kind of problem where Fortran really shines.
---
**Summer ends on old Ralph’s farm**
Farmer Ralph has two sons and two daughters, and a big farm. It’s the end of the summer and about time to cut the grass and make hay for the cattle to eat. But the pasture is big and old Ralph is weak. His children, however, are young and strong. If they all work hard and as a team, they could get it done in a day. They agree to split the work between themselves in four equal parts. Each of Ralph’s children grabs a scythe and a fork, and head to their part of the pasture. They work hard, cutting grass row by row. Every hour or so, they meet at the edges to sharpen the tools and chat about how it’s going. The work is going well and almost all of the grass is cut by mid-afternoon. Near the end of the day, they collect the hay into bales and take them to the barn. Old Ralph is happy that he has strong and hard-working children, but even more so that they make such a great team! Working together, they completed work that would take four times as long if only one of them was working.
Now you must be thinking, what the heck does old Ralph’s farm have to do with parallel Fortran programming? More than meets the eye, I can tell you! Old Ralph and his big pasture are an analogy to a slow computer and a big compute problem. Just like Ralph asked his sons and daughters to help him cut the grass, in a typical parallel problem we will divide the computational domain, or input data, into equal pieces and distribute them between CPUs. Recall that his children cut the grass row-by-row — some of the most efficient and expressive Fortran constructs are the whole-array operations and arithmetic. Periodically, they met at the edges to sharpen the tools and have a chat. In many real-world apps, you will instruct the parallel processes to exchange data between each other, and this is true for all the parallel examples that I will guide you through in this book. Finally, each parallel process will asynchronously write its data to disk. I illustrate this pattern in Figure 1.3.
---
Much like farmer Ralph, Fortran is old. This is by no means a bad thing! It is a mature, robust, and dependable language that isn’t going anywhere. While it does carry some quirks of an old programming language, it has been improved decade over decade by generations of computer scientists and programmers, and has been battle-tested in countless applications where performance is critical. The ease of parallel programming with Fortran is key for high-performance apps, which is why I chose to make it the focus of this book.
1.6 What will you learn in this book?
This book will teach you how to write modern, efficient, and parallel Fortran programs. Working through each chapter, we will build from scratch a fully-functional, parallel, fluid dynamics solver with a specific application to tsunami prediction. If you work through the book, you will come out with three distinct skill sets:
- You will be fluent with most modern Fortran features. This is a unique and sought-after skill in the robust niche market that is HPC.
- You will be able to recognize problems that are parallel in nature. You will think parallel-first, and parallel solutions to problems will seem intuitive. In contrast, a serial solution to a parallel problem will become just an edge-case scenario.
- You will get a grasp on good software design, including design patterns, unit and regression testing, documenting the code, and sharing your project with the online community. You will also be able to adapt existing Fortran libraries in your project and contribute back. This will not only make your project useful to others, but can open doors in terms of career and learning opportunities. It did for me!
In this book, I assume that you have at least some programming experience and understand basic concepts like variables, loops, and branches. Ideally, you have already coded basic scripts in Python or MATLAB. Since our running example is centered around solving a system of partial differential equations, it is helpful if you have some knowledge of calculus and linear algebra. We will also be working a lot in the terminal, so some experience with a Linux or UNIX-like shell is expected. Given the topic of the book, I expect that this book will be ideal for:
- Undergraduate and graduate students in physical science, engineering, or applied math, especially with focus on fluid dynamics
- Instructors and researchers in the above fields
- Meteorologists, oceanographers, and other fluid dynamicists working in the industry
- Serial Fortran programmers who want to step up their parallel game
- HPC system administrators
If you fit in one of the above categories, you may already know that Fortran’s main selling point is its ease of programming efficient and parallel programs for large supercomputers. This has kept it as the dominant HPC language of physical sciences and engineering. While this book will teach you Fortran from the ground up, I will also take the unconventional approach and teach it in the context of parallel programming from the get go. Rather than gaining just another technical skill as an afterthought, you will learn how to think parallel. You will recognize ways in which the workload and memory can be distributed to arrive at the solution more efficiently. With parallel thinking, you will come out with two critical advantages:
1. You will be able to solve problems in less time.
2. You will be able to solve problems that can’t fit into a single computer.
The first is a definite nice-to-have, but the second is a deal-breaker. Some problems simply can't be solved without parallel programming. The next section will give you a gentle introduction and an example of parallel programming.
## 1.7 Think parallel!
"For over a decade prophets have voiced the contention that the organization of a single computer has reached its limits and that truly significant advances can be made only by interconnection of a multiplicity of computers in such a manner as to permit cooperative solution."
-- Gene Amdahl (computer architect), in 1967
Parallel programming is only becoming more important with time. The rate of semiconductor density increase described by Moore's law, while still positive, is becoming increasingly limited. Traditionally, we went past this limit by placing more processing cores on a single die. Even the processors in most smartphones today are multicore. Beyond the shared-memory computer, we have connected many machines using sophisticated networks, and made them talk to each other to solve huge computational problems. As I mentioned earlier, the weather forecast that you saw this morning on your favorite TV channel or news website was computed on hundreds or thousands of parallel processors. Due to the practical limits of Moore's law and the current tendency toward many-core architectures, there is a sense of urgency to teach programming parallel-first.
**What is Moore's law?**
Gordon Moore, the cofounder of Intel, noticed in 1965 that the number of transistors in a CPU was doubling each year. He later revised this trend to a doubling every two years, but nevertheless, this kind of rate of increase is exponential. This trend is closely related to a continuous decrease in cost of computers. For example, a computer you buy today for $1000 is about twice as powerful as a computer you could buy for the same amount two years ago.
Similarly, when you buy a new smartphone, the OS and the apps seem smooth and fast. What happens two years later? As the apps update and get new features, they demand increasingly more CPU cycles and memory. As the hardware in your phone stays the same, eventually the apps slow down to a creep.
All parallel problems fall into two categories:
1. **Embarrassingly parallel**: Here, by "embarrassingly" we actually mean "embarrassingly easy" - it's a good thing! These are problems that can be distributed across processors with little to no effort (Figure 1.4, left). In general, any function $f(x)$ that operates element-wise on an array $x$ without need for communication or synchronization between elements is embarrassingly parallel. Because the domain decomposition of embarrassingly parallel problems is trivial, modern compilers are capable of auto-parallelizing such code in most cases. Real world examples include graphics rendering, serving static websites, or processing a large number of independent data records. A minimal sketch follows this list.
2. **Non-embarrassingly parallel**: Any parallel problem in which there is inter-dependency between processing elements, which requires communication and synchronization (Figure 1.4, right). Most partial differential equations solvers are non-embarrassingly parallel. The relative amount of communication versus computation dictates how well a parallel problem will scale. The objective for most physical solvers is thus to minimize communication and maximize computation. Real world examples include modeling fluid flows, molecular dynamics, or any other physical process that can be described by partial differential equations. This class of parallel problems is more difficult, and in my opinion, more interesting!
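To make category 1 concrete, here is a minimal coarray sketch of my own in which each image applies $f(x)$ to its own chunk of data with no communication between images at all; it assumes a coarray-capable compiler setup (for example, gfortran with `-fcoarray=single`, or OpenCoarrays):

```fortran
! Each image squares its own portion of the data independently; no syncs or
! copies between images are needed, which makes this embarrassingly parallel.
program embarrassing
  implicit none
  integer, parameter :: n = 4
  real :: x(n), y(n)
  integer :: i

  x = [(real(i + n * (this_image() - 1)), i = 1, n)]   ! this image's chunk
  y = x**2                                             ! element-wise f(x)

  print *, 'image', this_image(), 'computed', y
end program embarrassing
```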
*Figure 1.4. A schematic of an embarrassingly parallel problem (left) and a non-embarrassingly parallel problem (right). In both cases, the CPUs receive input $(x_1, x_2)$ and process it to produce output $(y_1, y_2)$. In an embarrassingly parallel problem, $x_1$ and $x_2$ can be processed independently of each other. Furthermore, both input and output data are local in memory to each CPU, indicated by solid arrows. In a non-embarrassingly parallel problem, input data is not always local in memory to each CPU and has to be distributed through the network, indicated by dashed arrows. In addition, there may be data inter-dependency between CPUs during the computation step, which requires synchronization (horizontal dashed arrow).*
Why is it called embarrassingly parallel?
It refers to overabundance, as in embarrassment of riches. It’s the kind of problem that you want to have. The term is attributed to Cleve Moler, inventor of MATLAB and one of the authors of EISPACK and LINPACK, Fortran libraries for numerical computing. LINPACK is still used to benchmark the fastest supercomputers in the world.
Because our application domain deals mainly with non-embarrassingly parallel problems, we will focus on how to implement parallel data exchange between processors in a clean, expressive, and minimal way. This will involve both distributing the input data among processors (downward dashed arrows in Figure 1.4), and exchanging the data between them whenever there is inter-dependency (horizontal arrow in Figure 1.4).
Parallel Fortran programming in the past has been done either using the OpenMP directives for shared-memory computers only, or with the Message Passing Interface (MPI) for both shared and distributed memory computers. Differences between shared-memory (SM) and distributed-memory (DM) systems are illustrated in Figure 1.5. The main advantage of SM systems is very low latency in communication between processes. However, there is a limit to the number of processing cores that can exist in an SM system. Since OpenMP was designed for SM parallel programming exclusively, we will focus on MPI for our specific example below.
Figure 1.5. Shared-memory (left) versus distributed-memory (right) system. In a shared-memory system, processors (orange) have access to common memory (RAM, purple). In a distributed-memory system, each processor has its own memory, and processors exchange data through a network, indicated by dashed lines. The distributed-memory system is most commonly composed of multicore shared-memory systems.
### OpenMP versus MPI
OpenMP is a set of directives that allow the programmer to indicate to the compiler the sections of the code that are to be parallelized. OpenMP is implemented by most Fortran compilers and does not require external libraries. However, OpenMP is limited to shared-memory machines.
Message Passing Interface (MPI) is a standardized specification for portable message passing (read: data copy) between arbitrary remote processes. This means that MPI can be used for multi-threading on a single core, multicore processing on a shared-memory machine, or distributed-memory programming across networks. MPI implementations typically provide interfaces for C, C++, and Fortran. MPI is often described as the assembly language of parallel programming, illustrating the fact that most MPI operations are low-level.
### 1.7.1 Copying an array from one processor to another
In most scientific and engineering parallel applications, there is data dependency between computational processes. Typically, a 2-d array is decomposed into tiles like a chess board, and the workload of each tile is assigned to a processor. Each tile has its own data in memory that is local to its processor. To illustrate the simplest case of parallel programming in a real-world scenario, let’s take the following meteorological situation for example. Suppose that the data consists of two variables, wind and air temperature. Wind is blowing from one tile with lower temperature (cold tile) toward another tile with higher temperature (warm tile). If we were to solve how the temperature evolves in time, the warm tile would need to know what temperature is coming in with the wind from the cold tile. Because this is not known a priori (remember that the data is local to each tile), we need to copy the data from the cold tile into the memory that belongs to the warm tile. On the lowest level, this is done by explicitly copying the data from one processor to another. When the copy is finished, the processors can continue with the remaining computations. Copying an array from one process to another is the most common operation done in parallel programming (Figure 1.6).
**Figure 1.6.** An illustration of a remote array copy between two CPUs. The numbers inside the boxes indicate initial array values. Our goal is to copy values of array from CPU 1 to CPU 2.
Since we’re barely starting, let’s focus on getting just this one operation done. Our goal is to do the following:
1. Initialize array on each process - [1, 2, 3, 4, 5] on CPU 1 and all zeros on CPU 2.
2. Copy values of array from CPU 1 to CPU 2.
3. Print the new values of array on CPU 2. These should be [1, 2, 3, 4, 5].
I will show you two examples of how to solve this problem. One is the traditional approach using an external library like MPI. Unless you’re a somewhat experienced Fortran programmer, don’t try to grok this example. I merely want to demonstrate how complicated and verbose this approach is. Then, I will show you the solution using the new Fortran Coarray approach. In contrast to MPI, with coarrays you can use an array indexing-like syntax to perform remote data exchange between parallel processes.
**MPI: The traditional way of parallel programming**
MPI has often been described as the assembly language of parallel programming, and indeed, that was its developers' original intention! The main vision of MPI was to be implemented by compiler developers to enable native parallel programming languages. However, over the past three decades, application developers were much faster at adopting MPI directly in their programs, and MPI has become, for better or for worse, a de facto standard tool for parallel programming in Fortran, C, and C++. As a result, most HPC applications today still rely on low-level MPI calls.
Below is a Fortran program that sends data from one process to another using MPI:
```fortran
program array_copy_mpi
  use mpi
  implicit none

  integer :: ierr, nproc, procsize, request
  integer, dimension(mpi_status_size) :: stat
  integer, dimension(5) :: array
  integer, parameter :: sender = 0, receiver = 1

  call mpi_init(ierr)
  call mpi_comm_rank(mpi_comm_world, nproc, ierr)
  call mpi_comm_size(mpi_comm_world, procsize, ierr)

  if (procsize /= 2) then
    call mpi_finalize(ierr)
    stop 'Error: This program must be run on 2 parallel processes'
  end if

  if (nproc == sender) then
    array = [1, 2, 3, 4, 5]
  else
    array = 0
  end if

  write(*,'(a,i1,a,5(4x,i2))') 'array on proc ', nproc, &
    ' before copy:', array

  call mpi_barrier(mpi_comm_world, ierr)

  if (nproc == sender) then
    call mpi_isend(array, size(array), mpi_integer, receiver, 1, &
      mpi_comm_world, request, ierr)
  else if (nproc == receiver) then
    call mpi_irecv(array, size(array), mpi_integer, sender, 1, &
      mpi_comm_world, request, ierr)
    call mpi_wait(request, stat, ierr)
  end if

  write(*,'(a,i1,a,5(4x,i2))') 'array on proc ', nproc, &
    ' after copy: ', array

  call mpi_finalize(ierr)

end program array_copy_mpi
```
1 Access MPI subroutines and mpi_comm_world global variable from a module
2 Initialize MPI
3 Which processor number (nproc) am I?
4 How many total processes are there?
5 Shut down MPI and stop the program if we are not running on 2 processors
6 Initialize array on sending process
7 Initialize array on receiving process
8 Print text to screen with specific formatting
9 Wait here for both processes
10 Sender posts a non-blocking send
11 Receiver posts a non-blocking receive
12 Receiver waits for the message
13 Finalize MPI at the end of the program
Running the program on 2 processors outputs the following:
**Listing 1.4. Output of array_copy_mpi program.**

```
array on proc 0 before copy:    1    2    3    4    5
array on proc 1 before copy:    0    0    0    0    0
array on proc 0 after copy:     1    2    3    4    5
array on proc 1 after copy:     1    2    3    4    5
```
The above output confirms that our program did what we wanted - copy the array values from process 0 to process 1.
Compiling and running this example
Don't worry about building and running this example yourself for the time being. In the next chapter, you will set up the complete compute environment for working with examples in this book, including this one.
Enter Fortran Coarrays
Coarray Fortran (CAF) is the native Fortran model for parallel programming. Originally developed by Robert Numrich and John Reid in the 1990s as an extension for the Cray Fortran compiler, CAF was introduced into the standard starting with the Fortran 2008 revision. Coarrays are very much like arrays, as the name implies, except that their elements are distributed along the axis of parallel processes (think cores or threads). As such, they provide an intuitive way to send and receive data between remote processes.
What follows is the coarray implementation of our array copy example:
**Listing 1.5. Copying an array from one process to another using coarrays**

```fortran
program array_copy_caf
  implicit none

  integer, dimension(5), codimension[*] :: array
  integer, parameter :: sender = 1, receiver = 2

  if (num_images() /= 2) then
    stop 'Error: This program must be run on 2 parallel processes'
  end if

  if (this_image() == sender) then
    array = [1, 2, 3, 4, 5]
  else
    array = 0
  end if

  write(*,'(a,i2,a,5(4x,i2))') 'array on proc ', this_image(), &
    ' before copy:', array

  sync all

  if (this_image() == receiver) array(:) = array(:)[sender]

  write(*,'(a,i1,a,5(4x,i2))') 'array on proc ', this_image(), &
    ' after copy: ', array

end program array_copy_caf
```
1 Declare an integer coarray
2 Throw an error if we are not running on 2 processes
3 Coarray image indices start at 1
4 Wait here for all images; equivalent to mpi_barrier()
5 Non-blocking copy from sending image to receiving image
The output of the program is the same as in the MPI variant:
**Listing 1.6. Output of array_copy_caf program.**

```
array on proc 1 before copy:    1    2    3    4    5
array on proc 2 before copy:    0    0    0    0    0
array on proc 1 after copy:     1    2    3    4    5
array on proc 2 after copy:     1    2    3    4    5
```
These two programs are thus semantically the same. Let’s look at the key differences:
- The number of lines of code (LOC) dropped from 30 in the MPI example to 17 in the coarray example. This is almost a factor of 2 decrease. However, if we look specifically for MPI-related boilerplate code, we can count 15 lines of such code. Compare this to 2 lines of coarray-related code! As debugging time is roughly proportional to the LOC, we see how Coarray Fortran will be much more cost-effective for the development of parallel Fortran applications.
- The core of the data copy in MPI example is quite verbose for such a simple operation:
```fortran
if (nproc == sender) then
  call mpi_isend(array, size(array), mpi_integer, receiver, 1, &
    mpi_comm_world, request, ierr)
else if (nproc == receiver) then
  call mpi_irecv(array, size(array), mpi_integer, sender, 1, &
    mpi_comm_world, request, ierr)
  call mpi_wait(request, stat, ierr)
end if
```
compared to the intuitive array-indexing and assignment syntax of coarrays:
```fortran
array(:)[2] = array(:)[1]
```
- Finally, MPI needs to be initialized and finalized using `mpi_init()` and `mpi_finalize()` subroutines. Coarray Fortran needs no such code. This one is minor, but a welcome improvement!
**Parallel process indexing**
Did you notice that our parallel processes were indexed 0 and 1 in the MPI example and 1 and 2 in the coarray example? This is because MPI is implemented in C, in which array indices begin at 0. In contrast, coarray images start at 1 by default.
As we saw in this example, both MPI and CAF can be used effectively to exchange data between parallel processes. However, MPI code is low-level and verbose, and would soon become tedious and error-prone as the complexity of our app increases. In contrast, CAF offers an intuitive indexing syntax that is analogous to the familiar operations with arrays. Furthermore, with MPI, you tell the compiler *what to do*; with CAF, you tell the compiler *what you want*, and let it decide the best way to do it. This approach takes a great deal of responsibility off your shoulders and lets you focus on your application. I hope this convinces you that Fortran coarrays are the way to go for an expressive and intuitive implementation of data exchange between parallel processes.
### 1.8 A Partitioned Global Address Space language
Fortran is also a Partitioned Global Address Space (PGAS) language. In a nutshell, PGAS abstracts the distributed-memory space and allows you to:
1. View the memory layout as a shared-memory space: This will give you a tremendous boost in productivity and ease of programming when designing parallel algorithms. When performing data exchange, you won't need to translate or transform array indices from one image to another. In other words, the memory spaces that belong to remote images will *appear* as if they were local, and you will be able to express your algorithm that way.
2. Exploit the locality of reference: In simpler words, design and code your parallel algorithms without needing to know in advance whether a subsection of memory is local to the current image. If it is, the compiler will use that information to its advantage. If it is not, the most efficient data exchange pattern available will be used.
For example, with Fortran Coarrays, PGAS allows you to use one image to initiate a data exchange pattern between two remote images:
**Listing 1.9. From image 1, initiate a remote copy of `array` from image 8 to image 7.**
```fortran
if (this_image() == 1) array(:)[7] = array(:)[8]
```
In this snippet, the `if` statement ensures that the assignment executes only on image 1. However, the indices inside the square brackets refer to images 7 and 8! What this means is that image 1 will asynchronously request an array copy from image 8 to image 7.
The power of PGAS is that, from the programmer’s point of view, the indices inside the square brackets can be treated just like any other array elements that are local in memory. However, in practice, these images could be mapped to different cores on the same shared-memory computer, or across the server room and connected via the local interconnect, or even across the world and connected through the internet!
Other notable PGAS languages are Chapel (chapel-lang.org) and Unified Parallel C (upc-lang.org).
### 1.9 Running example: A parallel tsunami simulator
I believe that most learning happens by doing rather than reading, especially if immersed in a longer-term project. Lessons in this book are thus framed within the context of developing your own, fully featured, parallel app.
#### 1.9.1 Why a tsunami simulator?
A *tsunami* is a series of long water waves that are triggered by a displacement of a large body of water. This typically occurs due to earthquakes, underwater volcanoes, or landslides. Once generated, a tsunami propagates radially outward and grows in height and steepness as it enters shallow water. I think a tsunami simulator is a good running example for this book because tsunamis are:
- **Fun**: Speaking strictly as a scientist here! A tsunami is a process that is fun to watch and play with in a numerical sandbox.
- **Dangerous**: Tsunamis pose a great threat to low-lying and heavily populated coastal areas. There is thus a great need to understand and predict them better.
- **Simple math**: Tsunamis can be simulated using a minimal set of equations - the so-called shallow water equations. This is important so that we don't get bogged down in the math and can focus on the implementation instead.
- **Parallelizable**: A physical process that is suitable for teaching parallel programming, especially because it is not an embarrassingly parallel problem. To get it to work in parallel, we will need to carefully design data exchange patterns between images.
To simulate tsunamis, we will write a solver for the shallow water system of equations.
#### 1.9.2 Shallow water equations
Shallow water equations (SWE) are a simple system of equations derived from the Navier-Stokes equations. They are also known as the Saint-Venant equations, after the French engineer and mathematician A. J. C. Barré de Saint-Venant, who derived the 1-d form from first principles in pursuit of his interest in hydraulic engineering and open-channel flows. SWE are powerful because they can reproduce many observed motions in the atmosphere and the ocean:
- Large-scale weather such as cyclones and anticyclones
- Western boundary currents such as the Gulf Stream in the Atlantic and the Kuroshio current in the Pacific
- Long gravity waves such as tsunamis and tidal bores
- Watershed from rainfall and snow melt over land
- Wind-generated (surf) waves
- Ripples in a pond
The SWE system consists of only a few terms:
**Figure 1.7. Shallow water equations.** The top equation is the momentum (velocity) conservation law, and the bottom is the mass (water level) conservation law. $u$ is the 2-d velocity vector, $g$ is the gravitational acceleration, $h$ is the water elevation, $H$ is the unperturbed water depth, and $t$ is time. The "nabla" symbol (upside-down triangle) is a vector differentiation operator.
\[
\begin{align*}
\text{velocity tendency} & \quad \frac{\partial u}{\partial t} + u \cdot \nabla u = -g \nabla h \\
\text{water height tendency} & \quad \frac{\partial h}{\partial t} = -\nabla \cdot (u(H + h))
\end{align*}
\]
What is the physical interpretation of the above system? The top equation states that where there is a slope along the water surface, water will accelerate and move from regions of higher level to regions of lower level due to the pressure gradient. The advection term is non-linear and causes chaotic behavior in fluids, known as turbulence. The bottom equation states that if there is an area where velocity is converging (coming together), there will be an increase in water level, because the water has to go somewhere - this is why we call it conservation of mass. Similarly, if the velocity is diverging (moving apart), there will be a decrease in water level in response.
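To make these two tendency terms concrete, here is a minimal 1-d sketch of how they could be discretized with centered differences and explicit time stepping. This is an illustration only, not the scheme we will build in the book; the grid size, time step, and mean depth are arbitrary assumptions of the sketch:

```fortran
program swe_sketch
  implicit none
  integer, parameter :: n = 100      ! number of grid points (assumed)
  real, parameter :: dt = 0.01       ! time step [s] (assumed)
  real, parameter :: dx = 1.0        ! grid spacing [m] (assumed)
  real, parameter :: g = 9.8         ! gravitational acceleration [m/s^2]
  real, parameter :: hmean = 10.0    ! unperturbed water depth H [m] (assumed)
  real :: u(n), h(n), du(n), dh(n)
  integer :: i, step

  u = 0
  h = 0
  h(n / 2) = 1.0   ! an initial bump in the water level

  do step = 1, 100
    du = 0
    dh = 0
    do i = 2, n - 1
      ! velocity tendency: advection plus pressure gradient force
      du(i) = - u(i) * (u(i+1) - u(i-1)) / (2 * dx) &
              - g * (h(i+1) - h(i-1)) / (2 * dx)
      ! water height tendency: divergence of the flux u * (H + h)
      dh(i) = - (u(i+1) * (hmean + h(i+1)) - u(i-1) * (hmean + h(i-1))) &
              / (2 * dx)
    end do
    u = u + dt * du
    h = h + dt * dh
  end do

  print *, 'max water height after 100 steps: ', maxval(h)
end program swe_sketch
```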
**Comfortable with math?**
If you’re experienced with calculus and partial differential equations, great! There is more for you in Appendix B. Otherwise, don’t worry! This book won’t dwell on math much more than this, and will focus instead on implementation and Fortran programming.
Shallow water equations are dear to me because I first learned Fortran programming by modeling them in my undergraduate meteorology program at the University of Belgrade. Despite my Fortran code looking (and working) very differently now than it did back then, I still find this system of equations an ideal case for teaching parallel Fortran programming. I hope you enjoy the process as much as I do!
#### 1.9.3 What we want our app to do
Let’s decide on some requirements for the features of our Tsunami Simulator:
- **Parallel**: The model can scale to hundreds of processors with nothing but pure Fortran code. This is not only important for speeding up the program and reducing run-time, but also for enabling very large simulations that otherwise would not fit into the memory of a single computer. With almost all modern laptops having at least 2 processing cores, most readers should be able to enjoy the fruits of their (parallel programming) labor.
- **Extensible**: Physics terms can be easily formulated and added to the solver. This is important for the general usability of the model. If we can design our computational kernel in the form of reusable classes and functions, new mathematical terms can be easily added as functional, parallel operators, following the approach by Damian Rouson (www.lanl.gov/conferences/salishan/salishan2014/rouson.pdf). We could code our equations from Figure 1.7 as:
- Momentum balance: `du/dt = -u .dot. (.grad. u) - g * (.grad. h)`
- Mass balance: `dh/dt = -.div. (u * (h + h_mean))`
In the above snippets, parallel decomposition and data exchange would be implemented inside the operators `.dot.`, `.grad.`, and `.div.`, which correspond to the dot product, gradient, and divergence operators, respectively. This way, the technical implementation is encapsulated inside these functions, and at a high level we would be able to code our equations much like we would write them on a blackboard (a minimal sketch of how such an operator can be declared follows this list).
- **General**: Can be run in idealized experiments, for example with flat bottom and periodic boundary conditions, as well as on realistic domains with ocean bathymetry from input data.
- **Easy to use**: The model can be configured via command line parameters, like common Linux tools:
```
$ tsunami --amplitude=3 --duration=12 --output=netcdf
```
- **Software library**: Provides a reusable set of classes and functions that can be used to build other parallel models.
- **Useful code documentation**: All software should be useful, and no software user should have to guess what the original author of the program intended. We will write our app in such a way that great code documentation can be auto-generated. We often call this self-documenting code.
- **Discoverable online**: Writing a program just for yourself is great for learning and discovery. However, software becomes really useful once you can share it with others who can use it to solve their problems. I will teach you how to put your app out there in the wild, make it easy to discover, and make it attractive to other contributors. Other people fixing my bugs and implementing features from my todo list? Yes, please!
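As promised above under the Extensible requirement, here is a minimal sketch of how a user-defined operator such as `.grad.` can be declared in Fortran. Everything in the body is a simplifying assumption of this sketch - centered differences, unit grid spacing, one-sided edges - and not the implementation we will actually develop:

```fortran
module operators
  implicit none
  ! Declare .grad. as a user-defined unary operator.
  interface operator(.grad.)
    module procedure grad_centered
  end interface
contains
  pure function grad_centered(f) result(dfdx)
    real, intent(in) :: f(:)
    real :: dfdx(size(f))
    integer :: n
    n = size(f)
    ! Centered differences in the interior, one-sided at the edges;
    ! a grid spacing of 1 is assumed for simplicity.
    dfdx(2:n-1) = 0.5 * (f(3:n) - f(1:n-2))
    dfdx(1) = f(2) - f(1)
    dfdx(n) = f(n) - f(n-1)
  end function grad_centered
end module operators
```

With such a module in scope, an expression like `g * (.grad. h)` is an ordinary function call under the hood, so its internals are free to perform data exchange between images without cluttering the high-level physics code.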
If you haven’t already, I encourage you to go ahead and check out the code for the running example from Github:
```
git clone https://github.com/modern-fortran/tsunami
```
Once you have it, take a look around, explore, and peek inside the source files. It's the project that we will build together. We start the next chapter by setting up the development environment so you can compile and run the minimal working version of our app.
### 1.10 Summary
In this chapter you learned that Fortran is:
- One of the first high-level programming languages in history.
- Still the dominant technology for many applications in science and engineering.
- The only standardized language with a native model for parallel programming.
- In the concrete example of array copy from one parallel process to another, you learned that Fortran Coarrays are ideal for clean and expressive implementation of parallel algorithms.
- Robust, efficient, and easy to program.
Fortran is not a language for everybody. It is definitely not a systems programming language, nor a web development language. Programming a graphical video game or a web browser in Fortran is possible, but extremely difficult. However, if you are working on computationally intensive problems in science or engineering, it may be exactly what you need. Modern Fortran will take you on a journey through core Fortran features from a parallel-first perspective. Where applicable, you will also apply object-oriented and functional techniques, as well as adapt existing Fortran libraries into your application. By working through this book chapter by chapter, you will gain the experience of developing a fully featured parallel app from scratch. If it's your first software project, I hope it excites your inner software developer and inspires you to go make something on your own.
Model Based Testing of an Interactive Music System
Clément Poncelet, Florent Jacquemard
HAL Id: hal-01097345
https://hal.archives-ouvertes.fr/hal-01097345v2
Submitted on 16 Jan 2015
Model Based Testing of an Interactive Music System
Clément Poncelet
DGA/INRIA and Ircam (UMR SMTS - CNRS/UPMC)
clement.poncelet@ircam.fr
Florent Jacquemard
INRIA and Ircam (UMR SMTS - CNRS/UPMC)
florent.jacquemard@inria.fr
December 2014
Abstract
The role of an interactive music system (IMS) is to accompany musicians during live performances, like a real musician. It reacts in realtime to audio signals from musicians, according to a timed specification called a mixed score, written in a domain specific language. Such goals imply strong requirements of temporal reliability and robustness to unforeseen errors in input, which are not yet much studied in the computer music community.
We present the application of model-based testing techniques and tools to a state-of-the-art IMS, including the following tasks: generation of relevant input data for testing (including timing values) following coverage criteria, computation of the corresponding expected output according to the semantics of a given mixed score, black-box execution of the test data, and verdict. Our method is based on formal models compiled directly from mixed scores and passed, after conversion to timed automata, to the model-checker Uppaal. This fully automatic approach has been applied to real mixed scores used in concerts, and the results obtained have made it possible to identify bugs in the target IMS.
1 Introduction
Score-based interactive music systems (IMS) [13] are involved in live music performances and aim to act as an electronic musician playing with other human musicians. Such a system requires a mixed score describing the parts of the human musicians (input) together with the electronic parts (output). During a performance, it aligns, in real time, the performance of the human musicians to the score, handling possible errors, detects the current tempo, and plays the electronic part. Playing is done by passing messages to an external audio environment such as MAX [12]. A popular example of this scenario is automatic accompaniment [4].
An IMS is therefore a reactive system, interacting with the outside environment (the musicians) under strong timing constraints: the output messages must indeed be emitted at the right moment, not too late but also not too early. It is important to be able to assess the behavior of an IMS on a given score before its real use in a concert. A traditional approach is to rehearse with musicians, trying to detect potential problems manually, i.e. by audition. This tedious method offers no real guarantee since it is not precise, not complete (it covers only one or a few particular musician's performances), and error prone (it relies on a subjective view of the expected behavior instead of a formal specification).
Several works like [6] and [8] implement different MBT techniques using Uppaal model checker features. The test problems are reduced into reachability or safety constraints delegated to Uppaal. Our case study presents important originalities compared to other MBT applications to realtime systems. On the one hand, the time model supports several time units, including the wall clock time, measured in seconds, and the time of music scores, measured in numbers of beats relative to a tempo. This situation raised several new problems for the generation of test suites and their execution. On the other hand, mixed scores specify completely the expected timed behavior of the IMS, based on the DSL semantics implemented in the compiler described in Section 3.2. Hence, the formal specification of this behavior is produced automatically from the score (instead of being written manually). This enables a fully automatic test scenario fitting well in a music authoring workflow where scores in preparation are constantly evolving.
2 Preliminaries
We first introduce the IMS Antescofo, its domain specific language (DSL) for mixed scores and our MBT framework.
2.1 The score-based IMS Antescofo
Figure 2 describes roughly the architecture of Antescofo, which is made of two main modules. A listening machine (LM) decodes an audio or MIDI stream incoming from a musician and infers in realtime: (i) the musician’s position in the given mixed score, (ii) the musician’s instantaneous pace (tempo, in beats per minute) [3]. These values are sent to a reactive engine (RE) which schedules the electronic actions to be played, as specified in the mixed score. The actions are messages emitted on time to an audio environment. Therefore, the information exchanged between LM and RE as well as between RE and the output environment of the system is made of discrete events.
The mixed scores of Antescofo are written in a textual reactive synchronous language enabling the description of the electronic accompaniment in reaction to the detected instrumental events. We give here a simplified abstract syntax corresponding to a fragment of this language, in order to illustrate our test framework (see [7] for more complete descriptions). Let \( O \) be a set of output messages (also called action symbols and denoted \( a \)) which can be emitted by the system, and let \( I \) be a set of event symbols (denoted \( e \)) to be detected by the LM (i.e. positions in the score). An action is a term \( \text{act}(d, s, \alpha) \) where \( d \) is the delay before starting the action, \( s \) is either an atom in \( O \) or a finite sequence of actions (such a sequence is called a group), and \( \alpha \) is a list of attributes. A mixed score is a finite sequence of input events of the form \( \text{evt}(e, d, s) \) where \( e \in I \), \( d \) is a duration and \( s \) is the top-level group triggered by \( e \). Sequences are denoted with square brackets \([\,]\) and the empty sequence is \([\ ]\).
We consider here two time units for expressing delays and durations \( d \) : (i) the number of beats (default unit): a logical time unit traditionally used in music scores that we call relative time, and (ii) milliseconds (ms), referred to as physical time. The reconciliation of the relative and physical times is done through the detected tempo values.
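For instance, at a detected tempo of 120 BPM, one beat lasts \( 60/120 = 0.5 \) s, so a delay of \( d \) beats spans \( 500\,d \) ms; in general, a delay of \( d \) beats at tempo \( p \) BPM corresponds to \( 60000\,d / p \) ms of physical time.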
Example 1 Figure 3 displays a small extract of a mixed score, in abstract syntax and traditional musical notation. The top stave is the musician part; it contains three quarter notes (whose duration is one beat): \( \text{D}_4 \), \( \text{B}_3 \) and \( \text{E}_4 \). The bottom stave is the electronic part, with output messages \( a_0, \ldots, a_7 \). Note that actions are triggered only by the first event, which fires the top group \( s_1 \) when detected. \( s_1 \) fires concurrently and simultaneously (0 delay) the atomic action \( a_0 \) and the loose global group \( s_2 \). A delay of \( \frac{1}{2} \) (action \( a_2 \)) corresponds to an eighth note, i.e. half of a beat.
The high-level attributes attached to an action \( \text{act}(d, s, \alpha) \) are indications regarding musical expressiveness [4]. We consider here four attributes for illustration purposes (their interpretation will be defined formally in Section 3.2): two attributes express the synchronization of the group \( s \) with the musician's part, loose (synchronization on tempo) and tight (synchronization on events), and two attributes describe strategies for handling errors in input, local (skip the actions) and global (play the actions immediately at the detection of an error). An error is in particular an event of the score missing during the performance, either because the musician did not play it or because it was not detected by the LM.
Example 2 The possible interpretations of the actions in our running example, according to the 4 combinations of strategies for err and sync, are depicted in Figure 4.
2.2 Model-Based Testing
We consider a black-box conformance testing approach for Antescofo, based on timed trace comparisons. We assume a given mixed score \( M \) with a default tempo value.
A timed trace is a sequence of pairs \( \langle a, t \rangle \) made of a symbol \( a \in I \cup O \) and a timestamp \( t \in \mathbb{R}^+ \), either in physical or relative time. A trace containing symbols exclusively in \( I \) (resp. \( O \)) is called an input trace (resp. an output trace). We denote below by \( T_{in} \) (resp. \( T_{out} \)) the set of input (resp. output) traces with relative timestamps. The ideal trace is the input trace consisting of the projection of all events in \( M \) with their durations.
Example 3 The ideal trace for the score in Figure 3 is the following: \( \langle e_1, 1 \rangle \cdot \langle e_2, 2 \rangle \cdot \langle e_3, 3 \rangle \).
By definition of music performance, traces of real executions can be arbitrarily far from ideal traces: the tempo and delays can diverge from the written values (the musician adding her/his own expressiveness), and moreover there can be errors during a performance (missing notes).
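For instance, \( \langle e_1, 0.9 \rangle \cdot \langle e_3, 2.2 \rangle \) is a possible performance of the score of Figure 3 in which \( e_2 \) is missed and the timings of the remaining events are slightly distorted.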
We shall, in Section 3, associate to the model \( M \) two formal models: a specification \( E \) of the possible behaviors of the environment (i.e. the events of the musician playing \( M \), as detected by the LM) and a specification \( S \) of the behavior of the system on \( M \). In our case, \( E \) can be seen as a subset of \( T_{in} \) and \( S \) as a function from \( T_{in} \) into \( T_{out} \). This dissymmetry between \( E \) and \( S \) reflects our case study, with on one side the musician and the LM, and on the other side the RE (see Figure 2).
A test case is a pair \( \langle t_{in}, t_{out} \rangle \in T_{in} \times T_{out} \) where \( t_{in} \in E \) and \( t_{out} = S(t_{in}) \). Two complementary approaches for the offline generation of \( t_{in} \) are presented in Section 4.
The execution of a test case \( \langle t_{in}, t_{out} \rangle \) consists in several tasks summarized in the following definition of conformance. First, we pass the events of \( t_{in} \) to the implementation under test (IUT), i.e. the IMS Antescofo, respecting the timestamps. Second, we monitor the outcome of the IUT in an output trace \( t'_{out} = IUT(t_{in}) \). Finally, we compare \( t'_{out} \) to \( t_{out} = S(t_{in}) \). We define the conformance of the IUT to \( S \) wrt \( E \) as: \( \forall t_{in} \in E, IUT(t_{in}) = S(t_{in}) \). This is a particular case of the relation rtioco considered in [6, 8].
This definition makes sense only if the timestamps of \( t_{in}, t_{out} \) and \( t'_{out} \) are in the same time unit. We will show how this important issue is addressed in practice in Section 4, with several options for the conversion of all traces into physical time (thanks to the addition of tempo values).
3 Models
We present a procedure for compiling a mixed score into an intermediate representation (IR) used as a model for test case generation in our MBT framework.
3.1 Intermediate Representation
The IR has the form of an executable code modeling the expected behavior of Antescofo on a given score. We present here a simplified version of Antescofo’s IR suitable for our presentation, leaving features such as thread creation, conditional branching, and variable handling outside of the scope of this paper.
Syntax. An IR is a finite sequence (called a network) of FSMs of the form \( A = (\Sigma, Q, \ell_0, \Delta_0, \Delta_1) \) where \( \Sigma \) is an alphabet partitioned into \( \Sigma = \Sigma_{in} \cup \Sigma_{out} \cup \Sigma_{sig} \) (respectively the sets of input symbols, output symbols, and internal signals), \( Q \) is a finite set of locations, partitioned into \( Q = Q_0 \cup Q_1 \), \( \ell_0 \in Q \) is the initial location, \( \Delta_0 \subseteq Q_0 \times (\Sigma_{out} \cup \Sigma_{sig}) \times Q \) is a finite set of synchronous transitions (denoted \( \ell \xrightarrow{\sigma} \ell' \)), and \( \Delta_1 \subseteq Q_1 \times \big((\Sigma_{in} \cup \Sigma_{sig} \cup \mathbb{R}^+) \times Q\big)^+ \) is a finite set of asynchronous transitions. We write \( \ell \xrightarrow{\tau_1, \ldots, \tau_p} \ell_1, \ldots, \ell_p \) for \( \langle \ell, (\tau_1, \ell_1), \ldots, (\tau_p, \ell_p) \rangle \in \Delta_1 \), where \( p \geq 1 \), \( \ell \in Q_1 \), \( \ell_1, \ldots, \ell_p \in Q \), and \( \tau_1, \ldots, \tau_p \in \Sigma_{in} \cup \Sigma_{sig} \cup \mathbb{R}^+ \). Moreover, there must be at most one delay \( d \in \mathbb{R}^+ \) and at most one input event in \( \Sigma_{in} \) amongst \( \tau_1, \ldots, \tau_p \).
Informally, every synchronous transition \( \ell \xrightarrow{\sigma} \ell' \in \Delta_0 \) describes the emission, while in source location \( \ell \), of the output message or signal \( \sigma \), followed by the change of location to the target location \( \ell' \), and every asynchronous transition \( \ell \xrightarrow{\tau_1, \ldots, \tau_p} \ell_1, \ldots, \ell_p \in \Delta_1 \) describes the concurrent wait, in source location \( \ell \), of \( \tau_1, \ldots, \tau_p \). The first of the events \( \tau_i \) to occur provokes a change of location to the corresponding target \( \ell_i \). Moreover, every location can be the source of at most one transition in \( \Delta_0 \cup \Delta_1 \).
Semantics. In the above partition of \( \Sigma, \Sigma_{in} \) and \( \Sigma_{out} \) are sets of symbols used for communication with the environment, and \( \Sigma_{sig} \) is an auxiliary set of internal signals used for communication between FSMs of a network.
We consider a model of superdense time [10, 11] with timestamps \( (t, n) \in \mathbb{R}^+ \times \mathbb{N} \), where \( t \) is a date in relative time called the logical instant. The logical time can flow only in locations of \( Q_1 \) (sources of asynchronous transitions). In locations of \( Q_0 \), \( t \) is fixed and the execution of synchronous transitions will increase the second component \( n \).
Let \( \mathcal{A} = A_1 \parallel \cdots \parallel A_k \) be a network of \( k \) FSMs composed in parallel, with \( \mathcal{A}_i = (\Sigma, Q^i, \ell_0^i, \Delta_0^i, \Delta_1^i) \) for all \( 1 \leq i \leq k \). A state of \( \mathcal{A} \) is a tuple of the form \((t, n, [\ell_1, \ldots, \ell_k], \Theta, \omega)\) where \((t, n)\) is a timestamp, \([\ell_1, \ldots, \ell_k] \in Q^1 \times \cdots \times Q^k\) is the array of current locations, \( \Theta \subseteq \Sigma_{\text{sig}} \) is a finite set of internal signals and \( \omega \in \Sigma_{\text{out}} \cup \{ \perp \} \). The initial state of \( \mathcal{A} \) is \( s_0 = (0, 0, [\ell_0^1, \ldots, \ell_0^k], \emptyset, \perp) \). An external event \( \tau \) can be the arrival of an input in \( \Sigma_{in} \) or the expiration of a delay (with the necessary (re)conversion of delays into physical time using (updated) tempo values, see [7]). We assume that a given input trace \( t_{in} \in T_{in} \) specifies the arrivals of inputs in \( \Sigma_{in} \).
The moves of \( \mathcal{A} \) (between states) are defined as follows.
\[
(t, n, [\ell_1, \ldots, \ell_k], \Theta, \omega) \rightarrow (t, n + 1, [\ell'_1, \ldots, \ell'_k], \Theta \cup \{ \sigma \}, \bot) \qquad \text{(es), (em)}
\]
where the locations \( \ell'_1, \ldots, \ell'_k \) are defined as follows. Let \( i \) be the smallest index (in the array of current locations) such that \( \ell_i \in Q_0 \) and there exists \( \ell_i \xrightarrow{\sigma} \ell''_i \in \Delta_0 \), with \( \sigma \in \Sigma_{sig} \) for (es) and \( \sigma \in \Sigma_{out} \) for (em). Then \( \ell'_i = \ell''_i \) and \( \ell'_j = \ell_j \) for all \( j \neq i \). Here, a synchronous transition is executed and the signal (resp. message) \( \sigma \) emitted is added to \( \Theta \) (resp. stored in \( \omega \)).
\[
(t, n, [\ell_1, \ldots, \ell_k], \Theta, \omega) \rightarrow (t, n + 1, [\ell'_1, \ldots, \ell'_k], \emptyset, \bot) \qquad \text{(rs)}
\]
where \( \ell_1, \ldots, \ell_k \in Q_1 \), and for all \( i \), \( 1 \leq i \leq k \): if there exists \( \ell_i \xrightarrow{\tau_i} \ell''_i \in \Delta_1 \) with \( \tau_i \in \Theta \cap \Sigma_{\text{sig}} \), then \( \ell'_i = \ell''_i \); otherwise \( \ell'_i = \ell_i \). Here, the signals expected and present in \( \Theta \) are all received at once, and \( \Theta \) and \( \omega \) are flushed.
\[
(t, n, [\ell_1, \ldots, \ell_k], \Theta, \omega) \rightarrow (t', 0, [\ell'_1, \ldots, \ell'_k], \emptyset, \bot)
\]
where \( \ell_1, \ldots, \ell_k \in Q_1 \), and for all \( i \), \( 1 \leq i \leq k \), there exists \( \ell_i \xrightarrow{\tau_{i,1}, \ldots, \tau_{i,p_i}} \ell_{i,1}, \ldots, \ell_{i,p_i} \in \Delta_1 \), and moreover, letting \( T^i = \{ \tau_{i,1}, \ldots, \tau_{i,p_i} \} \), it holds that \( T^i \cap \Theta = \emptyset \) (i.e. (rs) cannot be fired). Then \( t' \geq t \) is the first date (after \( t \)) of occurrence of an external event \( \tau \in \bigcup_i T^i \), with a priority for delays (in \( \mathbb{R}^+ \)) over inputs (in \( \Sigma_{in} \)). For all \( 1 \leq i \leq k \), if \( \tau_{i,j} = \tau \) for some \( 1 \leq j \leq p_i \), then \( \ell'_i = \ell_{i,j} \); otherwise \( \ell'_i = \ell_i \). Here, \( t' \) is either the date of the first event \( \tau \) after \( t \) in \( t_{in} \) or the expiration date of a delay \( d \in \mathbb{R}^+ \); in the latter case, \( t' = t + d \).
Note that with the above definitions, the moves are mutually exclusive and performed in the given priority order. In particular, the steps (es) and (em) are repeated as long as there exist executable synchronous transitions (i.e. locations of \( Q_0 \) in the state). Note also that the steps (es) and (rs) may loop in the same logical instant.
A run \( \rho \) is a sequence of states \( s_0, s_1, \ldots \) such that for all \( i \geq 0 \), there is a move between \( s_i \) and \( s_{i+1} \).
3.2 Compiling mixed scores into IR
An IR is constructed directly from a given mixed score \( sc \), during the parsing of the latter.
For the sake of conciseness, we do not give the inferences defining the recursive constructions but, instead, a graphical representation of the IR obtained for our running example. The environment IR (\( \mathcal{E} \)) is built with a single pass through the score events. There are several options for constructing \( \mathcal{E} \) regarding missed notes (see Section 4). In Figure 5, the musician modeled by \( \mathcal{E} \), when in \( \ell_0 \), can miss the first note \( e_1 \) (upper edge to location \( \ell_2 \)) or both \( e_1 \) and \( e_2 \) (\( \ell_3 \)). In \( \ell_1 \), he can miss \( e_2 \) (going to \( \ell_3 \)).
The proxy IR (\( \mathcal{P} \)) in Figure 6 provides a definition of errors. We assume one internal signal in \( \Sigma_{\text{sig}} \) for each input event \( e_i \in I \). The proxy IR emits this signal when \( e_i \) is detected as missing, because an event \( e_j \) with \( j > i \) was received. The proxy IR will simplify the complex task of specifying error management in the other IRs.
Figures 7 and 8 show the IR obtained from the running example, for two different synchronization strategies (resp. loose and tight). Those models of the IUT behavior are constructed by iteratively traversing the sequence of actions in a group; the parts built at each step are framed and annotated in the figures (e.g. as \( T_g \)).
4 Implementation and experiments
Let us now present the implementation and results of our MBT approach, applied to the score-based IMS Antescofo.
**Compiling mixed scores into IR**
Compiling mixed scores into IR has been implemented as a command line tool, written in C++ on top of the original Antescofo parser. The parsing produces an abstract syntax tree which is traversed using a visitor pattern in order to build the IR, following the approach presented in Section 3.2. Several options are offered for producing the IR corresponding to the environment \( E \), in particular regarding the number of possible successive errors (missed notes). The most general case (any note can be missed) results in a model \( E \) with a quadratic (in the score's size) number of transitions and an exponential number of input traces. The explosion can be controlled by choosing appropriate hypotheses on the environment \( E \).
**Translation of IR to Timed Automata**
The IR is then translated into a network of timed automata (TA), in a format that can be handled by tools of the Uppaal suite for MBT. Some graphical coordinates are computed during compilation and used for a nice display of the score models under Uppaal, providing composers with useful visual feedback on the low-level control flow in their mixed scores.
The translation of the IR, in the simplified version presented in Section 3.1, into equivalent TA is possible whenever all the delays are expressed in relative time. Indeed, in TA, all the clock values are expressed in a unique abstract model time unit (mtu). For the TA associated with the environment IR model \( E \), several options are offered for adding lower and upper bounds on the duration of each event, in order to limit the state exploration for the generation of \( t_{in} \).
Some care has to be taken for the simulation of the move rule (rs) in the semantics of the IR. In fact, in the states of IR runs, signals are stored in an unordered set at each logical instant, and hence can be received in an arbitrary order. This is not the case with TA models. In order to sort out the interleaving between signals and external input events, an auxiliary step is performed, possibly modifying the IR structure (e.g. with an urgent state in the proxy, and early signals for groups' triggers).
We have chosen to use an IR instead of directly translating mixed scores into TA [7] because there is a clear correspondence between this ad hoc model and the semantics of Antescofo's DSL, and because the general IR format is used for other purposes and includes features not supported by TA (such as variables and dynamic thread creation).
**Model-based generation of covering suites of test cases**
We use the Uppaal extension called CoVer [8] to automatically generate suites of test cases, under a given \( E \), that cover the possible behaviors of the specification \( S \) according to some coverage criteria. These criteria are defined by a finite state automaton \( Obs \), called the observer, monitoring the parallel execution of \( A_E \) and \( A_S \), the TA associated with the IRs \( E \) and \( S \). Every transition of \( Obs \) is labeled by a predicate checking whether a transition of \( A_E \) or \( A_S \) is fired. The model checker Uppaal is used by CoVer to generate the set of input traces \( t_{in} \in T_{in} \) resulting from an execution of the Cartesian product of \( A_E \times A_S \) with \( Obs \) reaching a final state of \( Obs \).
For loop-free IR \( S \) and \( E \), with an observer checking that all transitions of \( A_E \) and \( A_S \) are fired, CoVer will return a test suite \( T \) complete for non-conformance: if there exists an input trace \( t_{in} \in E \) such that \( IUT(t_{in}) \) and \( S(t_{in}) \) differ, then \( T \) will contain such an input trace. Note that the IR produced from the fragment of the DSL of Section 2.1, using the procedure of Section 3.2, are loop-free. However, this is not true for the general DSL, which allows e.g. jump-to-label instructions.
In practice, we avoid state explosion with appropriate restrictions on \( E \) (number of missed events, see above) and on the associated TA \( A_E \) (bounds on events' durations).
**Test case generation by fuzzing the ideal trace**
An alternative method for the generation of relevant test cases is to start with the ideal trace associated to a mixed score and add deformations of several kinds. *Time-warps* [5] and variants like *time maps* (Jaffe, 1985) and *time deformations* (Anderson and Kuivila, 1990) are continuous and monotonically increasing functions used to define either variations of tempo or variations of the duration of individual notes wrt the written score events (*time-shifts*). Some models of performance [9, 5] are defined by a combination of these two transformations, defined independently. We consider a discrete version of such models, with extended input traces \( t_{in} \) made of triples \( (a, t, p) \), where \( a \) and \( t \) are as in Section 2.2 and \( p \) is a tempo value in beats per minute (BPM). The time-shifts are applied to the timestamps \( t \) (they are expressed in relative time), and the tempi \( p \) are values on a tempo curve. An important difference with [9, 5] is the possibility to include missed notes in input traces.
**Tests execution and verdicts**
We have developed several scenarios for the execution of a test case $(t_{in}, t_{out})$, corresponding to several boundaries for the black box tested inside the whole system – see Figure 2.
In a first scenario, \( t_{in} \) contains triples like in the above paragraph. The tempo values are either values on a curve, in the case of traces generated by fuzzing, or a fixed value in the case of traces generated by CoVer. This scenario is performed on a standalone version of Antescofo equipped with an internal test adapter module. The adapter iteratively reads one element \( (e, d, p) \) from a file containing \( t_{in} \), converts \( d \) into a physical time value \( d' \) (remember that delays are expressed in relative time in \( t_{in} \)), and waits \( d' \) seconds before sending \( e \) and \( p \) to the RE. More precisely, it does not physically wait, but instead notifies a virtual clock in the RE that \( d' \) seconds have elapsed. This way the test need not be executed in realtime but can be done in fast-forward mode. This is very important for batch execution of huge suites of test cases. The timestamps in the expected trace \( t_{out} \) are converted from relative to physical time using the tempo values in \( t_{in} \), in order to be compared to the monitored trace \( t'_{out} \). Here, the blackbox is the RE (the LM is idle).
In a second scenario, the tempo values are not read in $t_{in}$ but detected by the LM. The rest of the scenario follows the first case. Here, the blackbox is the RE plus the part of the LM in charge of tempo inference.
A third scenario is executed in a version of Antescofo embedded into MAX (as a MAX patch). In this case, the blackbox is the whole IMS Antescofo, and instead of sending discrete events to the IUT (as in scenarios 1 and 2), we generate an audio stream with a MIDI synthesizer (in MAX), using the events in \( t_{in} \) as MIDI events.
The verdicts are produced offline by a tool comparing the expected and monitored traces \( t_{out} \) and \( t'_{out} \) up to an acceptable latency (about 0.1 ms). The comparison is not totally obvious since we have no clue a priori about missed or added actions/events in the traces, nor about the order of items.
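For instance, with this tolerance, an action expected at timestamp 2.0000 s in \( t_{out} \) and observed at 2.00005 s in \( t'_{out} \) is accepted as matching, whereas the same action observed at 2.001 s (1 ms late) is reported as a timing error.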
**Experiments**
Two case studies are reported: \( B \), a benchmark made of hundreds of small mixed scores covering many features of the IUT's DSL, and EIN, a real mixed score of the piece *Einspielung* by Emmanuel Nunes.
The first benchmark can be useful for the development (debugging and regression tests) of further versions of the Antescofo system. It aims at covering the functionality of the IUT's DSL and checking the reactions of the system. The second is a long, real test case for evaluating the scalability of our test method. It is composed of two extracts: the first 4 bars (22 events and 112 actions) and the first 14 bars (72 events and 389 messages).
Each case study is processed three times, with different numbers of possible consecutive missing events (0, 3 and 6 events) and a bound of 5% for the variation of the duration of each event in the interpretation. A script creates the IR and TA models, generates test suites (using CoVer), executes them according to the first scenario presented above, and compares the outcome to the test cases. Table 1 summarizes the results for the benchmark \( B \), reporting the number of traces generated by CoVer and the time taken by the whole test.
**Table 1: Results of experiments**

|       | c. miss | nb score | nb trace | time (s) |
|-------|---------|----------|----------|----------|
| $B$   | 0       | 582      | 1843     | 140      |
|       | 3       | 582      | 5718     | 334      |
|       | 6       | 582      | 6387     | 405      |
|       | c. miss | locations | nb trace | time (s) |
|-------|---------|-----------|----------|----------|
| $EIN$ | 0       | 400/1394  | 7/35     | 1/24     |
|       | 3       | 518/1812  | 36/50    | 3/198    |
|       | 6       | 771/2815  | 67/NA    | 97/400   |
1 http://brahms.ircam.fr/works/work/32409/
Note the increase in the number of traces with the number of possible consecutive missing events (the length of the tested scores is generally between 3 and 6 events). The second table summarizes the results for *Einspielung*, with the number of IR locations, traces, and testing times for each extract. CoVer did not succeed in generating the input traces for the 14-bar extract in the case of 6 possible missed events.
Despite CoVer's scalability limits (which can be bypassed with other scenarios), the generated suites of traces are relevant and test the IUT against an exhaustive set of possible performances.
A problem encountered with CoVer is that it generates only time-optimal test suites, i.e. input traces with minimum time-delay satisfying a given reachability property. This is not well suited to our case study. Indeed, since the trace \( t_{in} \) is stamped in relative time, a time-optimal \( t_{in} \) will result in a geometric progression of the tempo.
5 Conclusion and further work
Thanks to an ad’hoc intermediate representation for mixed scores, and conversion into timed automata, we have developed a fully automatic offline model-based testing procedure dedicated to an interactive music system. An advantage of this case study for MBT is the possibility to generate the formal specifications (as IR) automatically from the given scores. A drawback is the necessity to deal with different time units, in particular relative time. This latter problem prevented us from using the online testing tool Tron [8] (roughly, Tron can deal with several clocks but they must all be defined as a factor of the wall clock).
Our method is designed to test the behavior of the IMS on one given score, by generating a covering set of input traces describing a range of musical performances of the score. This approach is advantageous both for IMS debugging, thanks to coverage criteria, and for assisting authors of mixed scores, using fuzzing based on models of musical performance. A more general perspective could be to test the behavior of the IMS on any score. This would require a complete specification of the IMS (written manually), e.g. as a hybrid system, and the automatic generation, as test input, of a covering set of "extreme" scores and covering sets of performance traces for these scores.
Acknowledgments
The authors wish to thank the members of the teams developing Uppaal and Antescofo for their help.
EXCERPT FROM THE PROCEEDINGS OF THE
SEVENTH ANNUAL ACQUISITION RESEARCH SYMPOSIUM
WEDNESDAY SESSIONS, VOLUME I

Acquisition Research: Creating Synergy for Informed Change
May 12-13, 2010
Published: 30 April 2010
Approved for public release, distribution unlimited.
Prepared for: Naval Postgraduate School, Monterey, California 93943
The research presented at the symposium was supported by the Acquisition Chair of the Graduate School of Business & Public Policy at the Naval Postgraduate School.
To request Defense Acquisition Research or to become a research sponsor, please contact:
NPS Acquisition Research Program
Attn: James B. Greene, RADM, USN, (Ret.)
Acquisition Chair
Graduate School of Business and Public Policy
Naval Postgraduate School
555 Dyer Road, Room 332
Monterey, CA 93943-5103
Tel: (831) 656-2092
Fax: (831) 656-2253
E-mail: jbgreene@nps.edu
Copies of the Acquisition Sponsored Research Reports may be printed from our website www.acquisitionresearch.net
On Open and Collaborative Software Development in the DoD
Scott Hissam—Scott Hissam is a Senior Member of the Technical Staff for the Carnegie Mellon Software Engineering Institute, where he conducts research on component-based software engineering, open source software, and multicore. Mr. Hissam is a founding member and secretary of the International Federation for Information Processing (IFIP) Working Group 2.13 on Open Source Software and co-organizer of its annual conference. His publications include two books (Building Systems from Commercial Components and Perspectives on Free and Open Source Software), papers published in international journals, and numerous technical reports. He has a BS in Computer Science from West Virginia University.
Scott A. Hissam
Carnegie Mellon Software Engineering Institute
4500 5th Avenue
Pittsburgh, PA, 15213 USA
+1.412.268.6526
shissam@sei.cmu.edu
Charles B. Weinstock—Charles B. Weinstock is in the Research, Technology, and System Solutions Program at the Software Engineering Institute. His main interest is in dependable computing. For the last several years, he has been developing assurance case technology. He is also active in the open source software community. Previously, Weinstock worked at Tartan Laboratories and SRI International. Weinstock has a PhD in Computer Science, an MS in Industrial Administration (MBA), and a BS in Mathematics, all from Carnegie Mellon. He is a Senior Member of the IEEE and a member of IFIP Working Group 10.4 on Dependable Computing and Fault Tolerance.
Charles B. Weinstock
Carnegie Mellon Software Engineering Institute
4500 5th Avenue
Pittsburgh, PA, 15213 USA
+1.412.268.7719
weinstock@sei.cmu.edu
Len Bass—Len Bass is a Senior Member of the Technical Staff at the Software Engineering Institute. He has authored two award-winning books in software architecture and several other books and papers in various computer science and software engineering areas. He has been a keynote speaker or a distinguished lecturer on six continents. He is currently working on techniques for the methodical design of software architectures, on supporting usability through software architecture, and on understanding the relationship between software architecture and global software development practices. He has worked on the development of numerous software systems spanning a multitude of domains.
Len Bass
Carnegie Mellon Software Engineering Institute
4500 5th Avenue
Pittsburgh, PA, 15213 USA
+1.412.268.6763
ljb@sei.cmu.edu
Abstract
The US Department of Defense (specifically, but not limited to, the DoD CIO's Clarifying Guidance Regarding Open Source Software, DISA's launch of Forge.mil and OSD's Open Technology Development Roadmap Plan) has called for increased use of open source software and the adoption of best practices from the free/open source software (F/OSS) community to foster greater reuse and innovation between programs in the DoD. In our paper, we examine some key aspects of open and collaborative software development inspired by the success of the F/OSS movement as it might manifest itself within the US DoD. This examination is made from two perspectives: the reuse potential among DoD programs sharing software and the incentives, strategies and policies that will be required to foster a culture of collaboration needed to achieve the benefits indicative of F/OSS. Our conclusion is that to achieve predictable and expected reuse, not only are technical infrastructures needed, but also a shift to the business practices in the software development and delivery pattern seen in the traditional acquisition lifecycle is needed. Thus, there is potential to overcome the challenges discussed within this paper to engender a culture of openness and community collaboration to support the DoD mission.
Keywords: Open source software, software engineering, reuse, collaborative development
Introduction
Free and open source software (F/OSS) has been available, in one form or another, for several decades. Successful F/OSS projects benefit from the efforts of a large, usually diverse set of developers. For such projects, the software developed is often as good as or better than the best commercially available software. An even larger community is able to make use of and reap the benefits of this software. The DoD (US Department of Defense) would like to capitalize on this success and adopt an F/OSS model to exploit both reuse among DoD programs and collaboration to improve quality, spark innovation, and reduce time and cost.
The Open Technology Development (OTD) Roadmap Plan prepared for Ms. Sue Payton, Deputy Under Secretary for Defense, Advanced Systems and Concepts, identified the following advantages sought from adopting OSS development methodologies (Herz, Lucas & Scott, 2006):
- Encourages software re-use [sic],
- Can increase code quality and security,
- Potentially subject to scrutiny by many eyes,
- Decreases vendor lock-in,
- Reduces cost of acquisition,
- Increases customizability, and
- Meritocratic community.
Most recently, Dan Risacher, Office of the Assistant Secretary of Defense (ASD), Networks and Information Integration (NII), was quoted by Government Computer News (Jackson, 2008) regarding the benefits of F/OSS as it might apply to defense agencies:
By using open-source software, the services can update their software as soon as a vulnerability is found or an update is needed, rather than wait for the vendor to supply a patch. Open source also promises faster prototyping of systems, and lower barriers to exit. And if a government-written application is released into open source, outside developers could work to fix the problem, lowering maintenance costs of software.
This office is in the process of updating the Stenbit memorandum clarifying the use of F/OSS in DoD programs (Stenbit, 2003).
What is important about these two data points is that they illustrate the level of expectation that is driving the push for the adoption of the F/OSS model of open and collaborative software development in the DoD software community.
This paper explores the idea of adapting the F/OSS model to the DoD software community. While there are a number of other significant concerns, this paper concentrates on addressing two that are of interest. The first is reasoning about how an open and collaborative approach would need to operate in the DoD community, assuming that community was motivated to behave in the same manner as the public F/OSS community. The second focuses on that assumption and reasons about how to incentivize the DoD community to make use of, and contribute to, such a resource.
The remainder of this paper is laid out as follows: Section 2 looks at the progressive movement towards F/OSS and some of the software reuse repositories (and their challenges) that preceded today’s F/OSS movement. Section 3 takes an abstract view of a project’s operation in SourceForge.net as a means for understanding how such resources support the F/OSS community, and what they do not do, illustrating a gap that must be filled to support reuse across the DoD community. Section 3 then instantiates this abstract view for use in the DoD to consider the ways in which a DoD-specific resource would compare to that seen in the F/OSS community. Section 4 addresses the prior assumption about behavior expected of the DoD community to consider the incentives necessary to create a healthy and collaborative DoD OSS community. Sections 5 and 6 provide final thoughts on points not yet addressed (perhaps motivating further discussion) and summarize the positions stated in this paper.
The following closely related and relevant topics are beyond the scope of this immediate paper: data rights/licensing issues (commercial, F/OSS, or otherwise); security classifications; various software lifecycle stages beyond IOC (initial operational capability), i.e., pre-RFP (request for proposal) tensions; maintenance of fielded system; field upgrade (new capability); and new systems reusing or proposing to reuse from prior systems.
**History of Collaboration and Reuse**
There are a number of papers, articles, and publications on the history of F/OSS, some tracing their beginnings to SHARE and the SHARE library in 1955, “to help scientific users grapple with the problems of IBM’s first major commercial mainframe” (Gardner, 2005). Others trace to the earlier PACT (Project for the Advancement of Coding Techniques) initiative in 1953, a collaboration between the military and aviation industries (Melahn, 1956; Feller & Fitzgerald, 2001). Feller and Fitzgerald’s book provides a nice treatise on the history of F/OSS from these beginnings through the Berkeley Software Distribution, TEX, the creation of the Free Software Foundation (FSF) and GNU (GNU is Not Unix) and, eventually, to the creation of the Open Source Initiative (OSI). With the advent of
the ARPANET during these emerging beginnings of the modern F/OSS movement, general software repositories began to appear; the most popular included SIMTEL20, originally hosted at MIT (Granoff, 2002), as well as tools to aid in searching these repositories, such as Archie and gopher (Howe, 2009).
With the ever-growing availability of F/OSS, the benefits of software reuse were also gaining traction within the DoD. In the late 80s (particularly with the DoD’s adoption of the Ada programming language) and early 90s, various software reuse efforts within the DoD emerged, including STARS, STARS SCAI, ASSET, CARDS, PRISM, DSRS, ELSA, DSSA ADAGE, and RICC (Department of the United States Air Force [USAF], 1996). Although differences did exist among these repositories with respect to artifact management philosophies, some adopted a generally common theme centered on repositories of reusable software artifacts (code, documentation, etc.) having domain- and/or application-specific classifications, taxonomies, and software architectures, all supported by techniques and methods embracing reuse in software development—essentially advocating the concepts that are among the underpinnings of software product lines (SPL) (Clements & Northrop, 2001).
Many of these repositories listed above are no longer in existence, even though their concepts are (in the authors’ opinion) sound. Although a case study to completely understand why these efforts ceased would be nice—not the purpose of this paper—we will briefly touch on some of the technical challenges that faced some of the efforts. These include:
- **Quality Arbitration:** The administrative function of deciding what is and what is not included in the repository. This ranges from accepting everything (perhaps resulting in a junk yard or flea market) to a decisive selection (an inventory of few precious selections). Deciding which is the most appropriate is challenging. For the latter, repository customers have higher confidence in artifacts extracted at a higher cost of upfront qualification and an administrative bottleneck in populating the repository. This philosophical difference resulted in two camps: managed and unmanaged repositories.
- **Search and Browse:** At the time of these repositories, free-text search and retrieval was a serious resource and computational problem. Free text was not practical; search was a matter of defining a well-crafted database schema, typically relational. There were two approaches: in one, a general-purpose schema was defined; in another, domain analysis was used to identify domain-specific concepts and terminology. Frakes demonstrated, however, that there was no substantial gain in user search performance from the extra cost and effort of domain analysis (Frakes & Nejmeh, 1987). With time and advances, such free-text search capabilities are now commonplace (e.g., Google) and no longer present a major hurdle; the sketch after this list contrasts the two styles of retrieval.
- **Beyond Search and Browse:** Some argued that critiquing domain analysis with respect to retrieval of single reuse items missed the point. Capturing the (sometimes complex) relationships among domain concepts, spanning requirements, algorithms, architecture, code, test, and other artifacts, was what mattered. The CARDS repository (Wallnau, 1992), for example, used the KL-ONE (Brachman & Schmolze, 1985) semantic network formalism to capture these relations and used them to support reuse of large-scale domain structures. Today’s work in Web Ontologies also uses a descendant of KL-ONE, and for much the same purpose.
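To make the contrast concrete, the following minimal sketch (ours; the artifact names and facet values are hypothetical) shows the two retrieval styles side by side: a faceted-classification lookup in the spirit of ASSET, and the naive free-text search that hardware and algorithmic advances later made effortless.

```python
from dataclasses import dataclass, field

@dataclass
class Artifact:
    name: str
    description: str
    # Facets in the style of a faceted classification schema: each
    # artifact is indexed by a small, fixed set of attribute/value pairs.
    facets: dict = field(default_factory=dict)

CATALOG = [
    Artifact("track_correlator", "correlates incoming sensor tracks",
             {"domain": "command-center", "function": "correlation", "language": "Ada"}),
    Artifact("msg_router", "routes formatted messages between nodes",
             {"domain": "communications", "function": "routing", "language": "C"}),
]

def faceted_search(catalog, **facet_query):
    """Return artifacts whose facets match every requested facet value."""
    return [a for a in catalog
            if all(a.facets.get(k) == v for k, v in facet_query.items())]

def free_text_search(catalog, term):
    """Naive free-text scan over names and descriptions."""
    term = term.lower()
    return [a for a in catalog
            if term in a.name.lower() or term in a.description.lower()]

print([a.name for a in faceted_search(CATALOG, domain="command-center")])
print([a.name for a in free_text_search(CATALOG, "tracks")])
```

The faceted lookup pays off only if someone first performs the domain analysis to define the facets and populate them consistently, which is exactly the cost Frakes questioned.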
Altogether, this history lesson is worth remembering. In comparison, we believe that the infrastructures supporting the F/OSS community are superior for collaborative development for the projects they service—something that past reuse repositories never imagined. For the larger F/OSS community, these infrastructures are similar to past unmanaged reuse repositories capable of great (seemingly effortless) free text search suitable for opportunistic reuse. We examine this position in more detail below.
Infrastructures for Reuse and Collaboration
There are a number of resources available to the F/OSS community for hosting F/OSS projects, including SourceForge.net, RubyForge, JavaForge, Tigris.org, and freshmeat.net, to name only a few. An abstract view of SourceForge.net is created here to show what such resources commonly do to support the F/OSS community and, just as important, what they do not do; this illustrates the gaps that must be filled to support reuse across the DoD community and what would be needed in the DoD to support open and collaborative software development.
SourceForge.net®
SourceForge.net, owned and operated by SourceForge, Inc. (SourceForge, 2009a), is by all accounts one of the most successful source code repositories of the last decade, now boasting over 180,000 projects and nearly 2 million registered users (SourceForge, 2009b). However, simply referring to SourceForge.net as a (software reuse) repository is a great misnomer. Yes, SourceForge.net contains software source code (some of which is reused every day), but SourceForge.net also provides a wealth of other IT-related (hosting and backup) services to the F/OSS community, as well as collaborative software engineering and project management tools.
Figure 1. Abstract View of a SourceForge.net Project’s Operation
SourceForge.net can best be thought of as a collection of self-contained projects. Each project is administered and owned by a project owner(s) who arbitrates (and delegates) ultimate control over what is committed into the project’s code (or artifact) base, what software features are added or removed (over time), and the priorities upon which work progresses. The project’s ownership determines the degree of control that is asserted over the project. The project owner is depicted as a crown in Figure 1 as a means to connote the “power” those arbitrators have over the project.
As work progresses, those arbitrators are continuously making collaborative decisions about what is to be done next. For simplicity, the focus of this discussion is on changes offered from the project-specific community (on the left of Figure 1) to the project’s artifacts. By balancing their priorities and plans, the arbitrators make decisions on how to merge the interests of this community and the larger F/OSS communities to make changes (and commit those changes) to the artifact base. This churning effect (represented by the cyclic, thick arrows in Figure 1) is an important and vital aspect of F/OSS collaborative software development. Succinctly, it is this churning and frequent updating (i.e., “release early, release often”) of the artifacts that spark innovation through incremental improvements to early and emerging design and source code artifacts, given that such updates are open and observable by all in the F/OSS community (Goldman & Gabriel, 2005). This is a continuous, open, and insightful process that is not driven by some external calendar, fiscal boundaries, or legal/acquisition milestones.
Lastly, others are free to download software artifacts from the project’s repository codebase. This group (in the lower right of Figure 1) is separated from the project-specific community to the left as a means to indicate others¹ who have tangentially “stumbled” upon the project (by whatever means—by search, by reputation, etc.). This group serves a useful purpose in this paper to illustrate another crucial point—Eric Raymond’s caution in The Cathedral and the Bazaar, caveat emptor, “let the buyer beware” (Raymond, 2001). This is represented by the large measuring tape in Figure 1:
Like the earlier users of SIMTEL20, Archie, and gopher, the onus is on this group to determine the degree of fit between artifacts retrieved from the project’s codebase and their own needs. One aspect of this determination is the need to ascertain whether a search actually returned a relevant hit; that is, did the search terms find that which was sought? This was recognized early, and many of the DoD software reuse repositories tried to address it with various approaches to classifications and data definitions: for instance, ASSET’s approach was a faceted classification schema (USAF, 1996; Kempe, 1998), whereas CARDS’s approach was a domain-specific repository (software for a specific application domain, e.g., command centers). SourceForge.net’s classification scheme for projects themselves is limited to broad project categories (for example, Games/Entertainment, Scientific/Engineering, and Security) and subcategories, as well as filters allowing other search criteria such as language, operating system, and even licensing. SourceForge.net also provides mechanisms to search across projects (limited to free-text searches of projects’ names and descriptions), to conduct searches within a project (for example, within its documentation, forums, bugs, mailing lists, and configured download packages), and to find published files (but not within CVS or SVN—two popular version control systems).

¹ Such individuals may become part of the F/OSS community for a project through a variety of actions, including reporting bugs, fixing bugs, helping others, porting, and contributing ideas, code, etc.
Another important aspect is determining the quality of the artifacts found. If quality is assumed by reputation (e.g., Apache, MySQL, and a host of other reputable F/OSS offerings), this may be no more difficult than in the past with the reputable software of that era (e.g., wuftp, X, and many of the popular GNU offerings). However, putting reputation aside, the quality of a software artifact is at the sole discretion of the project owner—and this has to be discovered through effort expended by the “buyer” in learning, inspection, trial, and testing.
Perhaps the most important aspect is determining whether the artifact can actually be reused in the context of the “buyer’s” need. The software found may be relevant, and it may be of high quality (by reputation), but it may be architected and designed with assumptions that are inconsistent with the context in which it is intended to be reused. One example the authors experienced: a highly relevant and reputable MP3 encoder/decoder library could not be reused because the decoder, unlike the encoder, was not implemented in a thread-safe manner. This resulted in an architectural mismatch that prevented reuse in this case. The CARDS and STARS SCAI (USAF, 1996) were some of the earliest DoD software reuse repositories that recognized the need to minimize this mismatch by adopting architecture-centric approaches as a means of qualifying software for reuse within a specific domain.
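The mismatch is easy to reproduce in miniature. The sketch below is ours, not the library in question: a decoder that keeps its working state in module-level globals, so a reusing architecture that assumes one independent decoder per worker thread must either serialize every call behind a single lock or abandon the component.

```python
import threading

# Hypothetical decoder modeled on the situation described above: its
# working state lives in module-level globals, so concurrent calls from
# different threads corrupt each other's state.
_decode_state = {"bit_reservoir": b"", "frame": 0}

def decode_frame(data: bytes) -> bytes:
    _decode_state["bit_reservoir"] += data   # shared, unguarded mutable state
    _decode_state["frame"] += 1
    return _decode_state["bit_reservoir"][-4:]

# The only safe integration into a multithreaded design is to serialize
# all calls behind one lock, which defeats that design.
_decoder_lock = threading.Lock()

def decode_frame_serialized(data: bytes) -> bytes:
    with _decoder_lock:
        return decode_frame(data)

print(decode_frame_serialized(b"abcd"))  # works, but one stream at a time
```

No amount of searching or reputation checking surfaces this property; it is discovered only by the kind of inspection and trial described above.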
To summarize key points taken from this abstract view:
- These F/OSS resources (such as SourceForge.net) are for IT-related services housing F/OSS projects and their artifacts with facilities supporting open and collaborative development.
- Project artifacts themselves are managed by a project owner(s) having sole arbitration over the entire project.
- Artifacts are frequently updated and churned over by the F/OSS community, resulting in better quality and innovation.
- It is up to others expending real effort to find, inspect, and assess project artifacts for reuse within their context.
**DoDSF**
The idea of creating a “SourceForge.net” within the US Government or US Department of Defense, i.e., a “SourceForge.mil,” was not invented by us. We credit Schaefer (2005) for the name. Furthermore, the OTD Roadmap called for “an internal DoD collaborative code repository” (Herz et al., 2006). So rather than conflate our analysis with any intent others may have for this idea (past, present, or future), we instantiate our thinking by using the term “DoDSF” (a DoD SourceForge).
Like SourceForge.net, DoDSF could also provide IT-related (hosting and backup) services to the DoD community, as well as the collaborative software engineering and project management tools, but cast in the setting of a DoD program acquisition.² Using Figure 1 as a basis for DoDSF, Figure 2 illustrates a number of similarities and differences that can immediately be teased out.
Working left to right in Figure 2, the project-specific community is the first difference. In this case, the project-specific community is not identical to the wider F/OSS community served by F/OSS collaborative resources on the Internet. In the case of DoDSF, it is likely and expected that DoDSF will be gated in some manner, thus losing the 'F/O' as in F/OSS. The reality is that there will be classified software that the DoD hopes and expects to be reused and to evolve in a collaborative sense. Therefore, the openness assumed and intended for DoDSF will be as open as it can be for those in the gated community. This is not unprecedented; over the last decade, many private corporations—also wanting to reap the benefits of open and collaborative software development—have adapted F/OSS ideals. Such initiatives have been labeled using the terms corporate source (Dinkelacker & Garg, 2001), progressive open source (Melian, 2007), and inner corporate source (Wesselius, 2006).
Figure 2. Abstract View of a DoDSF Project’s Operation
The other difference in this community is its mix (as denoted by the shading of some of the characters in Figure 2). Some from the community will likely be employees of private companies under contract to the DoD and under the oversight of a government program office; it is not assumed that these are the same private companies, contracts, or government offices, only that they share common needs and concerns. This, too, is not unprecedented: in the F/OSS community, an increasing number of private companies allocate resources to F/OSS projects, and some companies even sponsor F/OSS projects, for example, MySQL, IBM for Eclipse, and Sun Microsystems for OpenOffice.org.

---

² This is not intended to be narrow, as we recognize that post-deployment maintenance and long-term support would also have to benefit from open, collaborative, and continuous software development. The description here is suitable for our discussion.
Moving further to the right in Figure 2, the next significant difference is the introduction of an additional commit and arbitration step and a second crown. This abstraction is added to our DoDSF as a means to rectify weaknesses in the SourceForge.net abstraction discussed earlier regarding caveat emptor and the burden placed on the larger community having to assess a project artifact's degree of fit. As in F/OSS projects, it is expected that projects will continue to have “project owner(s)” that arbitrate (and delegate) ultimate control over what is committed into the project’s code (or artifact) base, what software features are added or removed (over time), and the priorities upon which work progresses.
What is different with the introduction of the additional step is that these project owners are no longer the sole arbitrators of what (specifically) from the project’s codebase is actually committed to DoDSF. This additional arbitration step is needed to ensure that what is being submitted to DoDSF is consistent with the domain- or application-specific nature reflected onto DoDSF—in other words, that the project’s artifact is consistent with the architecture and variation mechanisms expected and needed for effective reuse of artifacts contained within DoDSF (Bachmann & Clements, 2005). How that additional arbitration is conducted, and by whom, would certainly need to be addressed. Some software reuse repositories discussed earlier, specifically STARS SCAI and CARDS, used domain engineering approaches (i.e., domain managers) reflective of software product lines (i.e., product line managers) to oversee such consistency (USAF, 1996; Clements & Northrop, 2001). This, in effect, would empower the administrators or arbitrators (the second crown) of DoDSF with a role in quality arbitration not seen in SourceForge.net and reminiscent of earlier software reuse repositories, thereby affording the opportunity for a software product line approach.3
Given this additional step, the intent would be to reduce the real effort expended by others who find and assess artifacts downloaded from DoDSF for fitness for use, and to increase the likelihood that those artifacts can be reused within their context (denoted by the smaller size of the measuring tape in Figure 2). This represents a fundamental shift from the F/OSS community's model of caveat emptor, with the onus on the “buyer,” to caveat venditor, or “let the seller beware,” as the onus would shift to the product line managers to ensure that the artifacts committed to DoDSF are fit for (re-)use.
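The shape of such a gate is easiest to see in miniature. The following sketch is ours, with invented interface and variation-point names; it is meant only to suggest the mechanical part of the second arbitration step, a conformance check against the product-line architecture that runs before a product line manager decides.

```python
# Hypothetical product-line constraints for one domain within DoDSF.
REQUIRED_INTERFACES = {"init", "shutdown", "process_track"}
PERMITTED_VARIATION_POINTS = {"sensor_type", "coordinate_system"}

def conformance_violations(artifact: dict) -> list:
    """Return a list of violations; an empty list means the artifact may
    proceed to the product line manager for commitment to DoDSF."""
    violations = []
    missing = REQUIRED_INTERFACES - set(artifact.get("interfaces", []))
    if missing:
        violations.append("missing required interfaces: %s" % sorted(missing))
    extra = set(artifact.get("variation_points", [])) - PERMITTED_VARIATION_POINTS
    if extra:
        violations.append("unrecognized variation points: %s" % sorted(extra))
    return violations

candidate = {"interfaces": ["init", "process_track"],
             "variation_points": ["sensor_type"]}
for v in conformance_violations(candidate):
    print("REJECT:", v)  # fails: no "shutdown" interface
```

The real arbitration is of course a human and organizational function; the point of the sketch is only that a product-line architecture gives that function something concrete to check against.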
Continuing on the journey around Figure 2, the next visual cue, in the lower right, depicts a group separate from the project-specific group. This group is the same as that served in the F/OSS abstraction discussed earlier—a group that has come to DoDSF to find and reuse artifacts suitable for their context. However, this group has the foreknowledge that artifacts within DoDSF have been developed following product line practices. That would mean that DoDSF could have domain- and/or application-specific classifications, taxonomies, and software architectures that are meaningful to the DoD community, along with commonality across similar projects.
---
3 Additional opportunities for collaboration are possible with the “project owners,” including the supplier, users, and others in the DoD community with the arbitrators working this additional step.
Like Figure 1, Figure 2 also includes cyclic, thick arrows to represent, in this case, a need for frequent updates to artifacts contained within DoDSF. Like the F/OSS community, the DoD community should also be continuous in its endeavor to improve the quality of its software through open and collaborative development. And, like its F/OSS counterpart, updates of artifacts to DoDSF should not be bound exclusively by fixed or planned milestones, as traditionally thought in contracted software acquisition. Rather, here, updates are driven by the DoD community.
Without this cyclic churning (if, for example, a project artifact is submitted to DoDSF only at or near the “completion” of a project), there is no opportunity for DoD community feedback and participation in the open and collaborative process that is expected to improve quality or spark innovation. Inclusion of this cyclic churning is a significant break from the software development delivery pattern seen in the traditional DoD software acquisition lifecycle. To summarize key points taken from this DoDSF view:
- Like SourceForge.net, DoDSF would be a resource for IT-related services housing artifacts from DoD projects supporting open and collaborative development.
- Although the “project owner” has purview over the DoD project itself, the artifacts that are committed to DoDSF are arbitrated in a manner that is consistent with a product line approach.
- The DoD community here is a gated community similar to the F/OSS collaborative model adapted by private companies.
- The mantra of “release early, release often,” indicative of F/OSS, is necessary to stimulate collaboration and spark innovation, as it does in the F/OSS community.
Throughout this discussion of DoDSF, it was assumed that the DoD community was motivated to behave in a manner that was consistent with the behavior often exhibited by the F/OSS community. We now turn our attention to this assumption.
**Incentivizing a Culture of Collaboration, Innovation and Reuse**
There is one final visual element in Figure 2 to be discussed: the overarching “umbrella” of culture, incentives, policies, and strategies that must exist to lead the DoD community to behave in a manner indicative of openness and collaboration. The intent of this “umbrella” is to achieve the goals of reuse, quality, and innovation coveted of the F/OSS community. Returning again to the OTD Roadmap: it recognized that its plan “entails a parallel shift in acquisition methodologies and corporate attitude to facilitate discovery and re-use of software code across DoD.” The Roadmap goes on to explain that by treating “DoD-developed software code as a physical good, DoD is limiting and restricting the ability of the market to compete for the provision of new and innovative solutions and capabilities.” So any reformulation of today’s acquisition model will fundamentally have to change the laws, policies, and even the thinking about software code: not so much as a product, but more as a means to mission capabilities and perhaps services. This is understandably a daunting task (white paper or not).
**F/OSS Collaboration, Innovation and Reuse**
Raymond’s comprehensive insight into the motivation of the F/OSS community is foundational (Raymond, 2001). For some, necessity is the only impetus—a simple need for something. And, fortunately, many in the F/OSS community have the ability to fulfill that need through coding. When their ability is outstripped by the realities of the problem, they create an F/OSS project and hope that others having the skills join (the birth of a project community). Such people who lend their helping hands often do so with the greatest of intentions, perhaps motivated by the same need, or simply by the desire to do some technically interesting work (i.e., “scratch an itch” in Figure 3).
Sometimes that “need” can already be satisfied by product offerings from the commercial marketplace (i.e., the Cathedral), but the desire is to make a better alternative to that offering, one that is free and open to all. Many F/OSS projects started this way.
Figure 3. Culture of Collaboration in the F/OSS Community
As touched upon briefly in Section 3, there is precedent for business models based on F/OSS projects. Many new projects have come, and are coming, into existence through software contributions en masse (e.g., Netscape’s Mozilla, Sun’s Java, IBM’s Eclipse, MySQL) as business opportunities arise from ancillary services around these contributed codebases and their use. However, this in and of itself is not an answer, but it certainly presents evidence of the behavior that is desirable in the DoD community. The Ultra-Large-Scale Systems (ULS) study called for research in Social and Economic Foundations for Non-Competitive Social Collaboration as inspired, in part, by the F/OSS movement; “as pure self-interest is supplanted by altruistic motivations and the desire to be perceived as productive and intelligent” while at the same time recognizing the need for incentive structures encouraging the community to cooperate (Feiler et al., 2006).
It is also important to recognize those who are motivated to voluntarily offer their time and contribute to F/OSS projects. Some of the motivations just discussed apply to these individuals as well (i.e., altruism, itching, etc.), but further extend to the meritocratic—that is, to rise (socially and in governance) in the community they serve. Further, some see F/OSS projects as venues to show off their prowess, to develop skills that make them more employable, or to network with others (a social phenomenon). And, practically, others need (not just want) to see their modifications, enhancements, and features find their way back into the mainstream product; otherwise, if the F/OSS community does not accept such changes, the only recourse is to reincorporate those changes into all future versions (i.e., rework) (Hissam & Weinstock, 2001).
Reasoning about DoDSF (Section 3) based on resources like SourceForge.net shows that DoDSF must differ if there is to be effective reuse for the DoD. For one, a DoD project is not likely to be incorporated in its entirety within some other DoD project; the projects are simply too big. However, there are certainly subsystems or modules of those overall projects that lend themselves to the DoDSF model. An example might be a subsystem that develops a common operational picture from a series of incoming tracks. To be able to reuse such a subsystem will require commonality at many levels, including mission needs, requirements, software architecture, design, data- and function-interdependencies, and other software artifacts.
Practically all of the Linux distributions (Debian, Fedora, Ubuntu, etc.) reuse the Linux kernel (www.kernel.org), which has itself been ported to a wide variety of hardware architectures. Those distributions also include other F/OSS applications (a list simply too long to even begin to enumerate). At the same time, there are other POSIX-based distributions that are Linux-free, for example, Apple’s Mac OS X, which is based on the Berkeley Software Distribution (BSD) of The Open Group’s Unix; and those same applications available to the Linux distributions are mostly available to Mac OS X. For the F/OSS community, the reasons for this are obvious: the underlying operating system, its architecture, interfaces (both for applications and device drivers), and interdependencies are openly specified, architected, and, when necessary, debated. This leads to a shared understanding and context.
Baldwin and Clark (2006) argued that the architecture of F/OSS projects is a critical factor in the open and collaborative software development process: it is the modularity of those architectures, and the option values stemming from such modular architectures, that contribute to collaboration and innovation. They noted that codebases that are more modular have more option value, thus attracting volunteers. That is, the more modular and option-rich the codebase, the more active and larger the innovator community is likely to be. Furthermore, it is these innovators who are incentivized to form voluntary, collective groups for the purpose of sharing and improving ideas. This, in and of itself, increases the likelihood of future variations and experimentation. Finally, the ULS report identified modularity as key to managing the complexity of software and to producing software systems amenable to change and to concurrent development—something that is clearly indicative of F/OSS collaborative development.
Looking again at some of the F/OSS “poster children,” specifically Linux, Apache, and now Firefox (a direct descendant of Netscape), those projects did not start out with wonderfully modular architectures. They became modular only after the complexity of features, project management, and distributed development became overwhelming and the projects had to adapt. Chastek, McGregor, and Northrop (2007) identified Eclipse’s plug-in (modular) architecture as one of that project’s most valuable core assets, providing for multiple forms of variation, including extension points of various types, and (in the authors’ opinion) reflecting lessons learned from past F/OSS projects.
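A minimal sketch of the extension-point idea (ours; the decorator and point names are invented, and this is far simpler than Eclipse's actual mechanism) shows why modularity creates option value: the host declares a point of variation, and independently developed contributions register against it without the host's code changing.

```python
# Registry of named extension points and the contributions plugged into them.
_extension_points = {}

def extension(point):
    """Decorator: register a contribution against a named extension point."""
    def register(func):
        _extension_points.setdefault(point, []).append(func)
        return func
    return register

# Two independently authored plug-ins contribute to the same point.
@extension("exporters")
def export_csv(data):
    return ",".join(map(str, data))

@extension("exporters")
def export_json(data):
    import json
    return json.dumps(data)

# The host iterates over whatever happens to be plugged in; adding a third
# exporter later requires no change here -- that is the option value.
for exporter in _extension_points["exporters"]:
    print(exporter([1, 2, 3]))
```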
To summarize key points taken from this F/OSS view:
- Some of the incentives that motivate individuals, groups, and companies to participate and collaborate in the F/OSS community can be explained, but more study is warranted.
- Some private companies have moved from treating software source code as a physical good and have found market opportunity in services from the use of the software.
- Modularity of an architecture not only promotes reuse, but is a key factor in spurring innovation in collaborative communities.
- Like F/OSS projects, software emanating from DoD projects will have to have architectures and interfaces that promote modularity and option value.
**DoDSF Collaboration, Innovation and Reuse**
Taking the key points from the previous sections, the “big money” question is how these map into the gated DoD community established back in Section 3 (recall: civilian and military personnel, along with employees of private companies under contract to the DoD and under the oversight of a government program office, all having common needs and concerns). Furthermore, what needs to be done to change acquisition policy and strategy and to establish the incentives that will enable a culture and behavior similar to that seen in the F/OSS community?
As daunting as these questions may be, we humbly offer a few suggestions.
**Recognize Product Line Practices are Not Free**
Creating modularized subsystems and components that are consistent with the architecture and variability expected and needed for effective reuse will cost development dollars, with a payoff that may not be realized until the reuse of the component can be amortized. **Strategically, this should be expected and not avoided.** Furthermore, before new components are created (or existing components are refactored), resources will have to be expended to identify product-line-wide architectures that are suitable for DoDSF and against which project artifacts are assessed before commitment to DoDSF. Such activities will likely require planning and development that are beyond any one project, yet are necessary for the projects themselves. Such planning includes mission objectives, product strategies, requirements analysis, architecture and design modifications, extra documentation, and packaging. Incentivizing the program managers who oversee these projects would require some combination of providing extra funding and making performance evaluation dependent on contributions to DoDSF.
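A back-of-envelope calculation makes the amortization point concrete. The numbers below are purely illustrative (ours, not drawn from any program):

```python
# Illustrative, dimensionless costs.
one_off_cost = 1.0       # build the component for a single project
reusable_premium = 0.5   # extra cost to generalize it for the product line
reuse_cost = 0.1         # cost for a later project to adopt and tailor it

def total_cost(n_projects, with_product_line):
    if with_product_line:
        return one_off_cost + reusable_premium + (n_projects - 1) * reuse_cost
    return n_projects * one_off_cost  # every project builds its own

for n in (1, 2, 3, 5):
    print(n, total_cost(n, False), round(total_cost(n, True), 2))
# With these numbers the premium is recovered as soon as one other project
# reuses the component; with no reuse at all, the product line only costs more.
```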
**Incentivize the “Churn”**
If effort is to be expended to create a product-line-wide architecture for the DoDSF, and individuals across the DoD-wide enterprise are empowered as product line managers, the DoDSF has to be more than a “field of dreams” accompanied by the oft-quoted mantra “If you build it, they will come.” Recognize that reuse is not free and that reuse does not come easily or by happenstance (Tracz, 1995). If the desired behavior of the DoD community is to use the DoDSF for finding project artifacts, then those artifacts have to be meaningful, relevant, and, by reputation, sound. Recall, the desire is to unburden the “buyer” from assessing the component’s degree of fit, as expected in software product lines. By reducing this burden as a significant barrier to reuse, incentives may be necessary to bootstrap or kick-start reciprocating contributions, feedback, improvements, and otherwise collaborative behaviors. Observations from the F/OSS community would lead to the belief that such incentives would not be necessary, but this is not entirely clear in the gated DoD community. Talented, willing, and able civilian and military personnel may be more likely to behave in this manner. Employees of private companies, while on contract, might also behave in this manner. Again, there is precedent in the F/OSS community for private companies to commit resources to F/OSS projects. Following this model, perhaps there are incentives for contracting companies that are successful in getting subsystems and components into DoDSF: namely, negotiated service contracts allowing for continued involvement servicing the DoD community.
There are good reasons (perhaps un-incentivized) that a new DoD project would prefer to see bidders propose using proven artifacts from DoDSF. These include less risk to the project (a subsystem taken from DoDSF is already a known quantity) and lower development costs, allowing valuable program dollars to be used elsewhere in the program. A possible disincentive (or opportunity; perspective is everything) is that Congress may take the view that the project should be built for less money because it uses subsystem(s) from DoDSF; the program office may then be given less money to get the job done, which some may view as a negative outcome.
A supplier bidding on a project really has only two incentives to use an artifact contained in DoDSF. If the program office has indicated that the use of such artifacts will be a determining factor in a successful proposal, then there is a strong incentive to do so. In the absence of such a requirement, the supplier may be incentivized to reuse an artifact to enable it to be the lowest bidder.
**Incentivize Software as a Non-Rivalrous Good**
Treating source code as if it were a physical good is a mentality that inhibits collaboration. Rivalry should be encouraged between competing subsystems or components for the same role in a product-line-wide architecture (i.e., let the stronger or better prevail). But the source code itself should serve as the source of inspiration, innovation, and improvements for that “better” subsystem, rather than as an opaque enigma requiring resources to be expended to re-engineer it from scratch (or worse, reverse-engineer it because the source code is long forgotten and lost).
**Last Thoughts**
**Governance**
Reminiscent of reuse repositories discussed in Section 2, great care has to be given in governance of DoDSF. The DoD must have a vested interest in seeing that the artifacts in DoDSF can be reused in subsequent projects. It has invested in them and would like to see a payback in terms of reduced development time, risk, and cost in the future. Thus, there is an upfront quality requirement for items to be placed into DoDSF. For SourceForge.net, the evaluation is ultimately done by the F/OSS community (using or not using) the project. For DoDSF there is presumably a contractual requirement regarding the subsystem. Someone has to evaluate the subsystem and its suitability for reuse, which needs to be a part of the original development contract. Otherwise there is every incentive for the supplier to place something into DoDSF that is ultimately unusable by anyone other than the original supplier.
Who does this evaluation? In the body of this paper, we placed the onus on the “seller” (*caveat venditor*), which, in this case, was tagged as the product line manager or the “second crown.” In reality, that role will come down to real people in the DoD community.
Determining just who exactly those individuals are is beyond the scope of this white paper, but it is certainly something that will have to be decided.
**Security**
In this white paper, we acknowledge that classification of project artifacts in DoDSF is a reality. This presents a challenge for DoDSF. If an artifact is from a top-secret project, then it may be difficult to declassify it for contribution to a DoDSF that does not respect security issues. But allowing DoDSF to embrace a multi-level security model raises concerns. Here is one example: Is a top-secret project able to use an artifact classified at a lower level? If so, how does it trust it? If it makes modifications (even a bug fix), what happens to the security classification of the artifact when the modification is given back to DoDSF? Does this result in a security-level fork? There are many such questions that could be raised, but a further discussion of this is beyond the scope of this paper.
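To make the lattice question concrete, here is a toy illustration (ours; real policy involves compartments, caveats, and adjudication far beyond a three-level lattice):

```python
# Toy three-level classification lattice.
LEVELS = {"UNCLASSIFIED": 0, "SECRET": 1, "TOP SECRET": 2}

def may_reuse(project_level, artifact_level):
    """Reading down: a project may use artifacts at or below its own level."""
    return LEVELS[project_level] >= LEVELS[artifact_level]

def contribution_level(project_level, artifact_level):
    """A modification made inside a higher-level project is, by default,
    classified at the higher level -- the 'security-level fork' above."""
    return max(project_level, artifact_level, key=LEVELS.get)

assert may_reuse("TOP SECRET", "SECRET")
print(contribution_level("TOP SECRET", "SECRET"))
# Prints TOP SECRET: by default the bug fix cannot flow back down to the
# SECRET artifact in DoDSF without a declassification decision.
```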
**Summary**
The number of references that were used in the preparation of this white paper was far more than any of the authors expected. This simply illustrates, in our opinion, the tip of a very large iceberg on the topic of reuse and F/OSS openness and collaboration coming from various disciplines.
Perhaps the most relevant reference that we came across for this paper was the Open Technology Development Roadmap Plan (Herz et al., 2006). Those interested in following up on some of the discussion covered in this paper should consider getting the latest progress on the actions called for within that Roadmap Plan. That plan called for very specific actions with respect to changing the traditional acquisition lifecycle. Most interesting was the recommendation: “Evaluate the potential use of the Defense Acquisition Challenge (DAC) program to demonstrate Open Technology alternatives to projects or programs that have implementation issues; e.g., make application of open source based products or development methodologies a specific interest item for DAC.”
On the topic of product lines, it is worth noting that there are case studies showing how product line approaches can be effective and successful in industry and government ventures (USAF, 1996; Clements & Northrop, 2001; Jensen, 2007; Mebane & Ohta, 2007). Furthermore, there are efforts and thinking happening now to merge F/OSS models with software product lines (Chastek et al., 2007; van Gurp, Prehofer & Bosch, 2010), along with three international workshops on Open Source Software and Product Lines (specifically OSSPL 2006, OSSPL 2007, and OSSPL 2008).
F/OSS works today because of the culture, environment, and motivation touched upon in this white paper. It is important to note that this F/OSS culture was not planned at all, but is founded on a loose set of principles and rules (some of which are formalized through F/OSS licenses) that guide behavior to achieve freely available, lightly controlled software developed in a collaborative manner. This behavior is informed by centuries of human populations and communities creating new knowledge and building off each other’s work.
The question readers should ask themselves (and we would not have done our job if you didn’t ask) is what such principles and rules would look like in a gated DoD community, a community itself informed by approximately 200 years of contracting, procurement, and competition. Additionally, what is needed to foster the behavior the DoD wants to engender? What can the DoD control, and what control must the DoD relinquish?
Acknowledgements
We would like to thank Gary Chastek, Terry Dailey, Bob Gobeille, Guy Martin, Catherina Melian, Linda Northrop, Robert Vietmeyer, and Kurt Wallnau for their thoughtful review and suggestions and with a special thanks to Nickolas Guertin whose curiosity, energy, and interest in the topic inspired this paper.
References
2003 - 2010 Sponsored Research Topics
**Acquisition Management**
- Acquiring Combat Capability via Public-Private Partnerships (PPPs)
- BCA: Contractor vs. Organic Growth
- Defense Industry Consolidation
- EU-US Defense Industrial Relationships
- Knowledge Value Added (KVA) + Real Options (RO) Applied to Shipyard Planning Processes
- Managing the Services Supply Chain
- MOSA Contracting Implications
- Portfolio Optimization via KVA + RO
- Private Military Sector
- Software Requirements for OA
- Spiral Development
- Strategy for Defense Acquisition Research
- The Software, Hardware Asset Reuse Enterprise (SHARE) repository
**Contract Management**
- Commodity Sourcing Strategies
- Contracting Government Procurement Functions
- Contractors in 21st-century Combat Zone
- Joint Contingency Contracting
- Model for Optimizing Contingency Contracting, Planning and Execution
- Navy Contract Writing Guide
- Past Performance in Source Selection
- Strategic Contingency Contracting
- Transforming DoD Contract Closeout
- USAF Energy Savings Performance Contracts
- USAF IT Commodity Council
- USMC Contingency Contracting
**Financial Management**
- Acquisitions via Leasing: MPS case
- Budget Scoring
- Budgeting for Capabilities-based Planning
- Capital Budgeting for the DoD
- Energy Saving Contracts/DoD Mobile Assets
- Financing DoD Budget via PPPs
- Lessons from Private Sector Capital Budgeting for DoD Acquisition Budgeting Reform
- PPPs and Government Financing
- ROI of Information Warfare Systems
- Special Termination Liability in MDAPs
- Strategic Sourcing
- Transaction Cost Economics (TCE) to Improve Cost Estimates
**Human Resources**
- Indefinite Reenlistment
- Individual Augmentation
- Learning Management Systems
- Moral Conduct Waivers and First-term Attrition
- Retention
- The Navy’s Selective Reenlistment Bonus (SRB) Management System
- Tuition Assistance
**Logistics Management**
- Analysis of LAV Depot Maintenance
- Army LOG MOD
- ASDS Product Support Analysis
- Cold-chain Logistics
- Contractors Supporting Military Operations
- Diffusion/Variability on Vendor Performance Evaluation
- Evolutionary Acquisition
- Lean Six Sigma to Reduce Costs and Improve Readiness
- Naval Aviation Maintenance and Process Improvement (2)
- Optimizing CIWS Lifecycle Support (LCS)
- Outsourcing the Pearl Harbor MK-48 Intermediate Maintenance Activity
- Pallet Management System
- PBL (4)
- Privatization-NOSL/NAWCI
- RFID (6)
- Risk Analysis for Performance-based Logistics
- R-TOC AEGIS Microwave Power Tubes
- Sense-and-Respond Logistics Network
- Strategic Sourcing
**Program Management**
- Building Collaborative Capacity
- Business Process Reengineering (BPR) for LCS Mission Module Acquisition
- Collaborative IT Tools Leveraging Competence
- Contractor vs. Organic Support
- Knowledge, Responsibilities and Decision Rights in MDAPs
- KVA Applied to AEGIS and SSDS
- Managing the Service Supply Chain
- Measuring Uncertainty in Earned Value
- Organizational Modeling and Simulation
- Public-Private Partnership
- Terminating Your Own Program
- Utilizing Collaborative and Three-dimensional Imaging Technology
A complete listing and electronic copies of published research are available on our website: www.acquisitionresearch.org
On Open and Collaborative Software Development in the DoD
Software Engineering Institute
Carnegie Mellon University
Pittsburgh, PA 15213
Scott Hissam
12 May 2010
Agenda
Open Source Software in the Department of Defense
Collaboration in the OSS Community
Drivers for Collaboration and Innovation
Collaboration in the DoD Community: Making Lightning Strike Twice
Open Source Software in the Supply Chain
From various memorandums, OSS is recognized as a viable source for software that can be used in the US government and military systems
- OSS is COTS (per FAR)—DoNCIO
- Acquisition policies and guidelines still apply to OSS;
- OSS, like any software, must ensure that support is adequate;
- Use of OSS means adhering to OSS licensing—seek counsel;
- Addresses obligation to licensing (“Wennergren” subparagraph 2.e.);
- Suggest use of DoD Forges (“Wennergren” 2.f.);
- and (with clarifications†) use of OSS does not mean everything must be distributed back to community;
Onus is *still* on the user *(buyer beware)* to assess the “fitness for use” of OSS within the mission context
- E.g., National Security Telecommunications and Information Systems Security Policy (NSTISSP) No. 11
Open Source Software (beyond “Cheap” software)
Dan Risacher (ASD/NII, “open source evangelist to DoD”) in gcn.com, October 8, 2008 re: use of the term “open source software” applies to:
- Body of code of the software program—freely available
- Licensing—rules for lightly-controlled creation and usage
- Development methodology—encourages volunteers to help write the code
“He says…” | “We (the OSS community) hear…”
--- | ---
Freely available | open (for all to see), available (no barriers to access)
Lightly-controlled | promote continuous innovation and improvement and place no restrictions on how the software is used or by whom.
Encourages Volunteers | collaborative environment conducive to open debate, contributions, and (peer) support
†http://gcn.com/articles/2008/10/08/pentagon-open-source-good-to-go.aspx
Why Shift to an OSS Development in the DoD?
Improve quality
Reduce time and cost
Reduce (or eliminate) restrictions on use
Encourage collaboration
Spark innovation
Others…
Not about:
Opening up DoD software code to the world (practical restrictions on “freely available”)
Reaping the Benefits of an OSS-style Approach
Improving quality
- Many “eyes”—better code, but more importantly better designs
- Open and observable code and design changes that are continually visible
Encouraging collaboration and innovation
- Culture
- Infrastructure
- Incentives
Achieving (broad) reuse
- Reduce onus on the code consumer to assess degree of fit (plagues OSS)
- Not “Not Invented Here” should be rewarded and expected
Blasts from the Past
Software Reuse Libraries and Repositories
- PACT (Project for the Advancement of Coding Techniques)
- SHARE, IBM Users Group
- SIMTEL20
- BSD, GNU
- DoD efforts: STARS, STARS SCAI, ASSET, CARDS, PRISM, DSRS, ELSA, DSSA ADAGE, and RICC
- Proprietary “software component marketplace”
Lessons
- Quality arbitration: flea market to few precious selections
- Search/browse: no longer a resource and computational problem: Google!
- Context and Semantics: relationship between problem domain and artifacts
- Underpinnings of Software Product Lines (SPL)
New, Initial Steps…
Based on TeamForge from Collab.NET
Modeled after ‘Internal Forge’ industry concept
- Inner source or corporate source software
Community is primarily DoD employees and its suppliers
- Not quite 100% “open”
Project in Software Forge.mil:
- Open source or DoD Community source license
- Not a ‘fork’ of an open source project or duplicate of an existing Forge project
From “Introduction to Forge.mil”, 26 MAR 2009, online CollabNET and Carahsoft web seminar hosted by Rob Vietmeyer and Guy Martin
Culture of Collaboration in the OSS Community

| (self) motivation | community | product(s) |
| --- | --- | --- |
| simple need | innovators | useful |
| “scratch an itch” | developers | intended purpose |
| altruism | users | other purposes |
| competition | free riders, others… | |
Evolving Motivations and Incentives

| | Generation Zero | FSF Generation | OSI Generation | eScience Generation |
| --- | --- | --- | --- | --- |
| Examples | PACT, SHARE, various publications | GCC, Linux | open source, openSUSE, Mozilla, MySQL | OSGeo, WDC |
| Motivation(s) | Scientific collaboration and sharing; altruism | Altruism; scratch an “itch”; solve hard problems | Marketplace dominance; competition; meritocracy | Solve hard problems |
Collaboration “Fusion”
- Here all elements are open:
- Architectures
- Standards
- Tools (incl. code)
- Data
- Results
- Software engineering processes
Collaboration fusion occurs when the right catalyst (incentive) is used to initiate and sustain an open and collaborative community sharing of all artifacts.
Infrastructures Supporting OSS Communities
[Diagram: a project-specific community contributes code; the project owner arbitrates, merges, and commits to an infrastructure such as SourceForge; others download. Arbiter of good taste: benevolent dictator. Onus is on the community: which to “buy,” how to “use” (caveat emptor). Frequent updates given “sufficient” material and catalyst in an environment conducive to open source.]
Infrastructures Supporting DoD Communities
Culture, Incentives, Policies, and Strategies
[Diagram: a project-specific community in which the Project Owner arbitrates and merges contributions, members report/discuss (collaborate), and reuse is predictable and expected.]
- Arbiter of good taste: *strategically-thinking dictatorial board(s)*
- Lessened onus on the community: easy to “find”; easy to “use” (*caveat venditor*, “let the seller beware”)
- Frequent updates given “sufficient” material and catalyst in an environment conducive to open source
Making Lightning Strike Twice
OSS works today because of the culture, environment, and motivation
- Founded on a loose set of principles and rules (formalized through licenses today) which guide behavior to achieve freely available, lightly-controlled software developed in a collaborative manner
- Itself informed by centuries of communities creating new knowledge and building off each other's work
What would such principles and rules look like in a “gated” DoD OSS community?
- Itself informed by 200 years of contracting, procurement, and competition?
What is needed to foster the behavior the DoD wants to engender?
- What can “it” control?
- What control must “it” relinquish?
“Open source is not for everyone, but if you have the right attitude then it can be a major success factor for your project. You must be willing to give up control and share decision making with your community. Working together you can create something much better than you could by working alone. Good luck!”
—Goldman & Gabriel, *Innovation Happens Elsewhere*
Contact Information
Scott A. Hissam
Senior member of the technical staff
RTSS
Telephone: +1 412-268-6526
Email: shissam@sei.cmu.edu
Web: www.sei.cmu.edu | http://www.sei.cmu.edu/contact.cfm
Customer Relations
Email: info@sei.cmu.edu
Telephone: +1 412-268-5800
SEI Phone: +1 412-268-5800
SEI Fax: +1 412-268-6257
NO WARRANTY
THIS CARNEGIE MELLON UNIVERSITY AND SOFTWARE ENGINEERING INSTITUTE MATERIAL IS FURNISHED ON AN “AS-IS” BASIS. CARNEGIE MELLON UNIVERSITY MAKES NO WARRANTIES OF ANY KIND, EITHER EXPRESSED OR IMPLIED, AS TO ANY MATTER INCLUDING, BUT NOT LIMITED TO, WARRANTY OF FITNESS FOR PURPOSE OR MERCHANTABILITY, EXCLUSIVITY, OR RESULTS OBTAINED FROM USE OF THE MATERIAL. CARNEGIE MELLON UNIVERSITY DOES NOT MAKE ANY WARRANTY OF ANY KIND WITH RESPECT TO FREEDOM FROM PATENT, TRADEMARK, OR COPYRIGHT INFRINGEMENT.
Use of any trademarks in this presentation is not intended in any way to infringe on the rights of the trademark holder.
This Presentation may be reproduced in its entirety, without modification, and freely distributed in written or electronic form without requesting formal permission. Permission is required for any other use. Requests for permission should be directed to the Software Engineering Institute at permission@sei.cmu.edu.
This work was created in the performance of Federal Government Contract Number FA8721-05-C-0003 with Carnegie Mellon University for the operation of the Software Engineering Institute, a federally funded research and development center. The Government of the United States has a royalty-free government-purpose license to use, duplicate, or disclose the work, in whole or in part and in any manner, and to have or permit others to do so, for government purposes pursuant to the copyright license under the clause at 252.227-7013.
<table>
<thead>
<tr>
<th>Title</th>
<th>Dynamic Querying of Mass-Storage RDF Data with Rule-Based Entailment Regimes</th>
</tr>
</thead>
<tbody>
<tr>
<td>Author(s)</td>
<td>Krennwallner, Thomas; Martello, Alessandra; Polleres, Axel</td>
</tr>
<tr>
<td>Publication Date</td>
<td>2009</td>
</tr>
<tr>
<td>Publisher</td>
<td>Springer</td>
</tr>
<tr>
<td>Link to publisher's version</td>
<td><a href="http://dx.doi.org/10.1007/978-3-642-04930-9_20">http://dx.doi.org/10.1007/978-3-642-04930-9_20</a></td>
</tr>
<tr>
<td>Item record</td>
<td><a href="http://hdl.handle.net/10379/456">http://hdl.handle.net/10379/456</a></td>
</tr>
</tbody>
</table>
Some rights reserved. For more information, please see the item record link above.
Dynamic Querying of Mass-Storage RDF Data with Rule-Based Entailment Regimes*
Giovambattista Ianni¹, Thomas Krennwallner², Alessandra Martello¹, and Axel Polleres³
¹ Dipartimento di Matematica, Università della Calabria, I-87036 Rende (CS), Italy
{ianni,a.martello}@mat.unical.it
² Institut für Informationssysteme 184/3, Technische Universität Wien, Austria
tkren@kr.tuwien.ac.at
³ Digital Enterprise Research Institute, National University of Ireland, Galway
axel.polleres@deri.org
Abstract. RDF Schema (RDFS) as a lightweight ontology language is gaining popularity and, consequently, tools for scalable RDFS inference and querying are needed. SPARQL has recently become a W3C standard for querying RDF data, but it mostly provides means for querying simple RDF graphs only, whereas querying with respect to RDFS or other entailment regimes is left outside the current specification. In this paper, we show that SPARQL faces certain unwanted ramifications when querying ontologies in conjunction with RDF datasets that comprise multiple named graphs, and we provide an extension for SPARQL that remedies these effects. Moreover, since RDFS inference has a close relationship with logic rules, we generalize our approach to select a custom ruleset for specifying inferences to be taken into account in a SPARQL query. We show that our extensions are technically feasible by providing benchmark results for RDFS querying in our prototype system GiaBATA, which uses Datalog coupled with a persistent relational database as a back-end for implementing SPARQL with dynamic rule-based inference. By employing different optimization techniques like magic set rewriting, our system remains competitive with state-of-the-art RDFS querying systems.
1 Introduction
Thanks to initiatives such as DBPedia or the Linked Open Data project,⁴ a huge amount of machine-readable RDF [1] data is available, accompanying pervasive ontologies describing this data such as FOAF [2], SIOC [3], or YAGO [4].
A vast amount of Semantic Web data uses rather small and lightweight ontologies that can be dealt with rule-based RDFS and OWL reasoning [5–7], in contrast to the full power of expressive description logic reasoning. However, even if many practical use cases do not require complete reasoning on the terminological level provided by DL-reasoners, the following tasks become of utter importance. First, a Semantic Web system should be able to handle and evaluate (possibly complex) queries on large amounts of RDF instance data. Second, it should be able to take into account implicit knowledge found by ontological inferences as well as by additional custom rules involving built-ins
* This work has been partially supported by the Italian Research Ministry (MIUR) project Interlink II04CG8AGG, the Austrian Science Fund (FWF) project P20841, by Science Foundation Ireland under Grant No. SFI/08/CE/I1380 (Lion-2).
⁴http://dbpedia.org/ and http://linkeddata.org/
or even nonmonotonicity. The latter features are necessary, e.g., for modeling complex mappings [8] between different RDF vocabularies. As a third point, joining the first and the second task, if we want the Semantic Web to be a solution to – as Ora Lassila formulated it – those problems and situations that we are yet to define; we need triple stores that allow dynamic querying of different data graphs, ontologies, and (mapping) rules harvested from the Web. The notion of dynamic querying is in opposition to static querying, meaning that the same dataset, depending on context, reference ontology and entailment regime, might give different answers to the same query. Indeed, there are many situations in which the dataset at hand and its supporting class hierarchy cannot be assumed to be known upfront: think of distributed querying of remotely exported RDF data.
Concerning the first point, traditional RDF processors like Jena (using the default configuration) are designed for handling large RDF graphs in memory, thus reaching their limits very early when dealing with large graphs retrieved from the Web. Current RDF stores, such as YARS [9], Sesame, Jena TDB, ThreeStore, AllegroGraph, or OpenLink Virtuoso, provide roughly the same functionality as traditional relational database systems do for relational data. They offer query facilities and allow importing large amounts of RDF data into their persistent storage, and typically support SPARQL [10], the W3C standard RDF query language. SPARQL has the same expressive power as non-recursive Datalog [11, 12] and includes a set of built-in predicates in so-called filter expressions.
However, as for the second and third point, current RDF stores only offer limited support. OWL or RDF(S) inference, let alone custom rules, are typically fixed in combination with SPARQL querying (cf. Section 2). Usually, dynamically assigning different ontologies or rulesets to data for querying is neither supported by the SPARQL specification nor by existing systems. Use cases for such dynamic querying involve, e.g., querying data with different versions of ontologies or queries over data expressed in related ontologies adding custom mappings (using rules or “bridging” ontologies).
To this end, we propose an extension to SPARQL which caters for knowledge-intensive applications on top of Semantic Web data, combining SPARQL querying with dynamic, rule-based inference. In this framework, we overcome some of the above-mentioned limitations of SPARQL and existing RDF stores. Moreover, our approach is easily extensible by allowing the addition of features such as aggregates and arbitrary built-in predicates to SPARQL (see [8, 14]), as well as custom inference and mapping rules. The contributions of our paper are summarized as follows:
- We introduce two additional language constructs to the normative SPARQL language. First, the directive using ontology for dynamically coupling a dataset with an arbitrary RDFS ontology, and second, extended dataset clauses, which allow specifying datasets with named graphs in a flexible way. The using ruleset directive can be exploited for adding proper rulesets to the query at hand, which might be used for a variety of applications, such as encoding mappings between entities or encoding custom entailment rules, such as RDFS or different rule-based OWL fragments.
- We present the GiaBATA system [15], which demonstrates how the above extensions can be implemented on a middle-ware layer translating SPARQL to Datalog and SQL. Namely, the system is based on known translations of SPARQL to Datalog rules. Arbitrary, possibly recursive rules can be added flexibly to model arbitrary ontological inference regimes, vocabulary mappings, or the like. The resulting program is compiled to SQL where possible, such that only the recursive parts are evaluated by a native Datalog implementation. This hybrid approach allows to benefit from efficient algorithms of deductive database systems for custom rule evaluation, and from native features such as query plan optimization techniques or rich built-in functions (which are, for instance, needed to implement complex filter expressions in SPARQL) of common database systems.
- We compare our GiaBATA prototype to well-known RDF(S) systems and provide experimental results for the LUBM [16] benchmark. Our approach proves to be competitive on both RDF and dynamic RDFS querying without the need to pre-materialize inferences.
In the remainder of this paper we first introduce SPARQL along with RDF(S) and partial OWL inference by means of some motivating example queries, which existing systems partly cannot handle in a reasonable manner (Section 2). Section 3 sketches how the SPARQL language can be enhanced with custom ruleset specifications and arbitrary graph merging specifications. We then briefly introduce our approach to translate SPARQL and rules to Datalog in Section 4, and how this is applied to a persistent storage system. We evaluate our approach with respect to existing RDF stores in Section 5, and then conclusions are drawn in Section 6.
2 SPARQL and some Motivating Examples
Similar in spirit to structured query languages like SQL, which allow to extract, combine and filter data from relational database tables, SPARQL allows to extract, combine and filter data from RDF graphs. The semantics and implementation of SPARQL involves, compared to SQL, several peculiarities, which we do not focus on in this paper, cf. [10, 18, 11, 19] for details. Instead, let us just start right away with some illustrating example motivating our proposed extensions of SPARQL; we assume two data graphs describing data about our well-known friends Bob and Alice shown in Fig. 1(b)+(c). Both graphs refer to terms in a combined ontology defining the FOAF and Relationship\(^7\) vocabularies, see Fig. 1(a) for an excerpt.
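Fig. 1 itself is not reproduced here; for concreteness, the kind of statements in the combined ontology $G_M$ that the examples below rely on might look as follows (a minimal sketch; the exact triples are assumptions, not a quote of the figure):

```
rel:friendOf  rdfs:subPropertyOf  foaf:knows .
foaf:name     rdfs:domain         foaf:Person .
```

The first triple licenses the subproperty inferences discussed next; the second licenses the domain inferences that type name-bearers as foaf:Person.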
On this data the SPARQL query (1) intends to extract names of persons mentioned in those graphs that belong to friends of Bob. We assume that, by means of rdfs:seeAlso statements, Bob provides links to the graphs associated to the persons he is friend with.
    select ?N
    from <http://example.org/myOnt.rdfs>
    from <http://bob.org>
    from named <http://alice.org>
    where { <http://bob.org#me> foaf:knows ?X . ?X rdfs:seeAlso ?G .
            graph ?G { ?P rdf:type foaf:Person; foaf:name ?N } }            (1)
\(^7\)http://vocab.org/relationship/
Here, the from and from named clauses specify an RDF dataset. In general, the dataset $DS = (G, N)$ of a SPARQL query is defined by (i) a default graph $G$ obtained by the RDF merge [20] of all graphs mentioned in from clauses, and (ii) a set $N = \{(u_1, G_1), \ldots, (u_k, G_k)\}$ of named graphs, where each pair $(u_i, G_i)$ consists of an IRI $u_i$, given in a from named clause, paired with its corresponding graph $G_i$. For instance, the dataset of query (1) would be $DS_1 = (G_M \uplus G_B, \{(\langle http://alice.org\rangle, G_A)\})$, where $\uplus$ denotes merging of graphs according to the normative specifications.
Now, let us have a look at the answers to query (1). Answers to SPARQL select queries are defined in terms of multisets of partial variable substitutions. In fact the answer to query (1) is empty when – as typical for current SPARQL engines – only simple RDF entailment is taken into account, and query answering then boils down to simple graph matching. Since neither of the graphs in the default graph contain any triple matching the pattern $\langle http://bob.org#me \rangle \text{ foaf:knows} ?X$ in the where clause, the result of (1) is empty. When taking subproperty inference by the statements of the ontology in $G_M$ into account, however, one would expect to obtain three substitutions for the variable $?N$: {?N/"Alice", ?N/"Bob", ?N/"Charles"}. We will explain in the following why this is not the case in standard SPARQL.
In order to obtain the expected answer, firstly SPARQL's basic graph pattern matching needs to be extended, see [10, Section 12.6]. In theory, this means that the graph patterns in the where clause need to be matched against an enlarged version of the original graphs in the dataset (which we will call the deductive closure $Cl(\cdot)$ of a given entailment regime). Generic extensions for SPARQL to entailment regimes other than simple RDF entailment are still an open research problem, due to various problems: (i) for (non-simple) RDF entailment regimes, such as full RDFS entailment, $Cl(G)$ is infinite, and thus SPARQL queries over an empty graph $G$ might already have infinite answers, and (ii) it is not yet clear which should be the intuitive answers to queries over inconsistent graphs, e.g., in OWL entailment, etc. In fact, SPARQL restricts extensions of basic graph pattern matching to retain finite answers. Not surprisingly, many existing implementations implement finite approximations of higher entailment regimes such as RDFS and OWL [6, 5, 21]. E.g., the RDF Semantics document [20] contains an informative set of entailment rules, a subset of which (such as the one presented in Section 3.2 below) is implemented by most available RDF stores. These rule-based approximations, which we focus on in this paper, are typically expressible by means of Datalog-style rules. These latter model how to infer a finite closure of a given RDF graph that covers sound but not necessarily complete RDF(S) and OWL inferences. It is worth noting that rule-based entailment can be implemented in different ways: rules could either be dynamically evaluated at query time, or the closure wrt. ruleset $R$, $Cl_R(G)$, could be materialized when graph $G$ is loaded into a store. Materialization of inferred triples at loading time allows faster query responses, yet it has drawbacks: it is time and space expensive and it has to be performed once and statically. In this setting, it must be decided upfront
(a) which ontology should be taken into account for which data graph, and
(b) to which graph(s) the inferred triples “belong”, which particularly complicates the querying of named graphs.
As for exemplifying (a), assume that a user agent wants to issue another query on graph $G_B$ with only the FOAF ontology in mind, since she does not trust the Relationship ontology. In the realm of FOAF alone, rel:friendOf has nothing to do with foaf:knows.
However, when materializing all inferences upon loading $G_M$ and $G_B$ into the store, $\text{bob:me foaf:knows \_:a}$ would be inferred from $G_M \uplus G_B$ and would contribute to such a different query. Current RDF stores prevent dynamically parameterizing inference with an ontology of choice at query time, since indeed typically all inferences are computed at loading time *once and for all.*
As for (b), queries upon datasets including named graphs are even more problematic. Query (1) uses $G_B$ in order to find the IRI identifiers for persons that Bob knows by following rdfs:seeAlso links and looks for persons and their names in the named RDF graphs found at these links. Even if rule-based inference was supported, the answer to query (1) over dataset $DS_1$ is just {?N/"Alice"}, as “Alice” is the only (explicitly) asserted foaf:Person in $G_A$. Subproperty, domain and range inferences over the $G_M$ ontology do not propagate to $G_A$, since $G_M$ is normatively prescribed to be merged into the default graph, but not into the named graph. Thus, there is no way to infer that "Bob" and "Charles" are actually names of foaf:Persons within the named graph $G_A$. Indeed, SPARQL does not allow to merge, on demand, graphs into the named graphs, thus there is no way of combining $G_M$ with the named graph $G_A$.
To remedy these deficiencies, we suggest an extension of the SPARQL syntax, in order to allow the specification of datasets more flexibly: it is possible to group graphs to be merged in parentheses in *from* and *from named* clauses. The modified query, obtaining a dataset $DS_2 = (G_M \uplus G_B, \{(\text{http://alice.org}, G_M \uplus G_A)\})$, looks as follows:
```
select ?N
from (<http://example.org/myOnt.rdfs> <http://bob.org/>)
from named <http://alice.org/> (<http://example.org/myOnt.rdfs> <http://alice.org/>)
where { bob:me foaf:knows ?X . ?X rdfs:seeAlso ?G .
        graph ?G { ?P rdf:type foaf:Person; foaf:name ?N } }                (2)
```
For ontologies which should apply to the whole query, i.e., graphs to be merged into the default graph as well as any specified named graph, we suggest a more convenient shortcut notation by adding the keyword *using ontology* in the SPARQL syntax:
```
select ?N
using ontology <http://example.org/myOnt.rdfs>
from <http://bob.org/>
from named <http://alice.org/>
where { bob:me foaf:knows ?X . ?X rdfs:seeAlso ?G .
        graph ?G { ?P rdf:type foaf:Person; foaf:name ?N } }                (3)
```
Hence, the *using ontology* construct allows for coupling the entire given dataset with the terminological knowledge in the *myOnt* data schema. As our investigation of currently available RDF stores (see Section 5) shows, none of these systems easily allows merging ontologies into named graphs or dynamically specifying the dataset of choice.
In addition to parameterizing queries with ontologies in the dataset clauses, we also allow to parameterize the ruleset which models the entailment regime at hand. By default, our framework supports a standard ruleset that “emulates” (a finite subset of) the RDFS semantics. This standard ruleset is outlined in Section 3 below. Alternatively, different rule-based entailment regimes, e.g., rulesets covering parts of the OWL semantics à la ter Horst [5], de Bruijn [22, Section 9.3], OWL2 RL [17] or other custom rulesets can be referenced with the *using ruleset* keyword. For instance, the following query returns the solution {?X/<http://alice.org#me>, ?Y/<http://bob.org#me>}, by doing equality reasoning over inverse functional properties such as foaf:homepage when the
FOAF ontology is being considered:
```
select ?X ?Y
using ontology <http://example.org/myOnt.rdfs>
using ruleset <http://www.example.com/owl-horst>
from <http://bob.org/>
from <http://alice.org/>
where { ?X foaf:knows ?Y }                                                  (4)
```
Query (4) uses the built-in RDFS rules for the usual subproperty inference, plus a ruleset implementing ter Horst’s inference rules, which might be available at URL http://www.example.com/owl-horst. Among others, this ruleset contains rules for equality reasoning over owl:sameAs and inverse functional properties (owl:ifFP\(^9\)), which will “equate” the blank node used in $G_A$ for “Bob” with <http://bob.org#me>:
```
?P rdf:type owl:ifFP . ?S1 ?P ?O . ?S2 ?P ?O . → ?S1 owl:sameAs ?S2.
?X owl:sameAs ?Y . → ?Y owl:sameAs ?X.
?X owl:sameAs ?Y . ?X ?P ?O . → ?Y ?P ?O.                                   (5)
```
3 A Framework for Using Ontologies and Rules in SPARQL
In the following, we will provide a formal framework for the SPARQL extensions outlined above. In a sense, the notion of dynamic querying is formalized in terms of the dependence of BGP pattern answers on a variable ontology $O$ and ruleset $R$. For our exposition, we rely on well-known definitions of RDF datasets and SPARQL. Due to space limitations, we restrict ourselves to the bare minimum and just highlight some standard notation used in this paper.
**Preliminaries.** Let $I$, $B$, and $L$ denote pairwise disjoint infinite sets of IRIs, blank nodes, and RDF literals, respectively. A *term* is an element from $I \cup B \cup L$. An *RDF graph* $G$ (or simply *graph*) is defined as a set of *triples* from $(I \cup B) \times (I \cup B) \times (I \cup B \cup L)$ (cf. [18, 12]); by $\text{blank}(G)$ we denote the set of blank nodes of $G$.\(^{10}\)
A *blank node renaming* $\theta$ is a mapping $I \cup B \cup L \rightarrow I \cup B \cup L$. We denote by $t\theta$ the application of $\theta$ to a term $t$. If $t \in I \cup L$ then $t\theta = t$, and if $t \in B$ then $t\theta \in B$. If $(s, p, o)$ is a triple, then $(s, p, o)\theta$ is the triple $(s\theta, p\theta, o\theta)$. Given a graph $G$, we denote by $G\theta$ the set of all triples $\{ t\theta \mid t \in G \}$. Let $G$ and $H$ be graphs. Let $\theta_{G,H}$ be an arbitrary blank node renaming such that $\text{blank}(G) \cap \text{blank}(H \theta_{G,H}) = \emptyset$. The *merge* of $G$ by $H$, denoted $G \uplus H$, is defined as $G \cup H\theta_{G,H}$.
An RDF dataset $D = (G_0, N)$ is a pair consisting of exactly one unnamed graph, the so-called default graph $G_0$, and a set $N = \{ (u_1, G_1), \ldots, (u_n, G_n) \}$ of named graphs, coupled with their identifying URIs. The following conditions hold: (i) each $G_i$ ($0 \leq i \leq n$) is a graph, (ii) each $u_j$ ($1 \leq j \leq n$) is from $I$, and (iii) for all $i \neq j$, $(u_i, G_i), (u_j, G_j) \in N$ implies $u_i \neq u_j$ and $\text{blank}(G_i) \cap \text{blank}(G_j) = \emptyset$.
The syntax and semantics of SPARQL can now be defined as usual, cf. [10, 18, 12] for details. For the sake of this paper, we restrict ourselves to select queries as shown in the example queries (1)–(4) and just provide an overview of the necessary concepts. A query in SPARQL can be viewed as a tuple $Q = (V, D, P)$, where $V$ is the set of variables mentioned in the select clause, $D$ is an RDF dataset, defined by means of from and from named clauses, and $P$ is a graph pattern, defined in the where clause.
\(^9\) We use owl:ifFP as shortcut for owl:inverseFunctionalProperty.
\(^{10}\) Note that we allow generalized RDF graphs that may have blank nodes in property position.
Graph patterns are in the simplest case sets of RDF triples \((s, p, o)\), where terms and variables from an infinite set of variables \(\text{Var}\) are allowed, also called basic graph patterns (BGP). More complex graph patterns can be defined recursively, i.e., if \(P_1\) and \(P_2\) are graph patterns, \(g \in I \cup \text{Var}\) and \(R\) is a filter expression, then \(P_1\text{ union } P_2\), \(P_1\text{ optional } P_2\), \(P_1\text{ filter } R\), and graph \(g\) \(P_1\) are graph patterns.
**Graph pattern matching.** Queries are evaluated by matching graph patterns against graphs in the dataset. In order to determine a query’s solution, in the simplest case BGP’s are matched against the active graph of the query, which is one particular graph in the dataset, identified as shown next.
Solutions of BGP matching consist of multisets of bindings for the variables mentioned in the pattern to terms in the active graph. Partial solutions of each subpattern are joined according to an algebra defining the optional, union and filter operators, cf. \([10, 18, 12]\). For what we are concerned with here, the most interesting operator though is the graph operator, since it changes the active graph. Initially, the active graph is the default graph.
The default graph is obtained by merging the graphs specified via several from clauses – as shown, e.g., in query (1) – whereas each from named clause adds a single, separate named graph to the dataset. That is, graph patterns will always be matched against a separate graph only. To generalize this towards dynamic construction of groups of merged named graphs, we introduce the notion of an extended dataset, which can be specified by enlarging the syntax of SPARQL with two additional dataset clauses:
- For \(i, i_1, \ldots, i_m\) distinct IRIs \((m \geq 1)\), the statement “from named \(i(i_1 \ldots i_m)\)“ is called extended dataset clause. Intuitively, \(i_1 \ldots i_m\) constitute a group of graphs to be merged: the merged graph is given \(i\) as identifying IRI.
- For \(o \in I\) we call the statement “using ontology \(o\)” an ontological dataset clause.
Intuitively, \(o\) stands for a graph that will be merged with all graphs in a given query.
**Extended RDF datasets** are thus defined as follows. A graph collection $\mathcal{G}$ is a set of RDF graphs. An extended RDF dataset $\mathcal{D}$ is a pair $(\mathcal{G}_0, \{\langle u_1, \mathcal{G}_1 \rangle, \ldots, \langle u_n, \mathcal{G}_n \rangle\})$ satisfying the following conditions: (i) each $\mathcal{G}_i$ is a nonempty graph collection (note that $\{\emptyset\}$ is a valid nonempty graph collection), (ii) each $u_j$ is from $I$, and (iii) for all $i \neq j$, $\langle u_i, \mathcal{G}_i \rangle, \langle u_j, \mathcal{G}_j \rangle \in \mathcal{D}$ implies $u_i \neq u_j$ and, for $G \in \mathcal{G}_i$ and $H \in \mathcal{G}_j$, $\text{blank}(G) \cap \text{blank}(H) = \emptyset$. We denote $\mathcal{G}_0$ by $dg(\mathcal{D})$, the default graph collection of $\mathcal{D}$.
Let \(\mathcal{D}\) and \(\mathcal{O}\) be an extended dataset and a graph collection, resp. The ordinary RDF dataset obtained from \(\mathcal{D}\) and \(\mathcal{O}\), denoted \(D(\mathcal{D}, \mathcal{O})\), is defined as
\[
D(\mathcal{D}, \mathcal{O}) = \Bigl( \biguplus_{g \,\in\, dg(\mathcal{D}) \cup \mathcal{O}} g,\ \bigl\{ \bigl\langle u, \textstyle\biguplus_{g \in \mathcal{G} \cup \mathcal{O}} g \bigr\rangle \bigm| \langle u, \mathcal{G} \rangle \in \mathcal{D} \bigr\} \Bigr).
\]
We can now define the semantics of extended and ontological dataset clauses as follows. Let $F$ be a set of ordinary and extended dataset clauses, and $O$ be a set of ontological dataset clauses. Let $\text{graph}(g)$ be the graph associated to the IRI $g$: the extended RDF dataset obtained from $F$, denoted $\text{edataset}(F)$, is composed of:
1. $G_0 = \{\text{graph}(g) \mid \text{"from g"} \in F\}$. If there is no $\text{from}$ clause, then $G_0 = \emptyset$.
2. A named graph collection $\langle u, \{\text{graph}(u)\} \rangle$ for each “$\text{from named u}$” in $F$.
3. A named graph collection $\langle i, \{\text{graph}(i_1), \ldots, \text{graph}(i_m)\} \rangle$ for each “$\text{from named i}(i_1 \ldots i_m)$” in $F$.
The graph collection obtained from $O$, denoted $\mathit{ocollection}(O)$, is the set $\{\text{graph}(o) \mid \text{“using ontology o”} \in O\}$. The ordinary dataset of $F$ and $O$, denoted $\text{dataset}(F, O)$, is $D(\text{edataset}(F), \mathit{ocollection}(O))$.
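As a worked instance of these definitions, consider the dataset clauses $F$ of query (2): $\text{edataset}(F) = (\{G_M, G_B\}, \{\langle \text{http://alice.org}, \{G_M, G_A\} \rangle\})$, and, since query (2) contains no ontological clauses, $\text{dataset}(F, \emptyset) = (G_M \uplus G_B, \{\langle \text{http://alice.org}, G_M \uplus G_A \rangle\})$, which is exactly the dataset $DS_2$ of Section 2.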
Let $D$ and $O$ be as above. The evaluation of a graph pattern $P$ over $D$ and $O$ having active graph collection $G$, denoted $[P]_{G}^{D, O}$, is the evaluation of $P$ over $D(D, O)$ having active graph $G' = \bigcup_{g \in G} g$, that is, $[P]_{G}^{D, O} = [P]_{G'}^{D(D, O)}$.
Note that the semantics of extended datasets is defined in terms of ordinary RDF datasets. This allows to define the semantics of SPARQL with extended and ontological dataset clauses by means of the standard SPARQL semantics. Also note that our extension is conservative, i.e., the semantics coincides with the standard SPARQL semantics whenever no ontological clauses and extended dataset clauses are specified.
### 3.2 SPARQL with Arbitrary Rulesets
Extended dataset clauses give the possibility of merging arbitrary ontologies into any graph in the dataset. The second extension presented here enables dynamically deploying and specifying rule-based entailment regimes on a per query basis. To this end, we define a generic $R$-entailment, that is, RDF entailment associated with a parametric ruleset $R$ which is taken into account when evaluating queries. For each such $R$-entailment regime we straightforwardly extend BGP matching, in accordance with the conditions for such extensions as defined in [10, Section 12.6].
We define an RDF inference rule $r$ as the pair $(\text{Ante}, \text{Con})$, where the antecedent $\text{Ante}$ and the consequent $\text{Con}$ are basic graph patterns such that $\mathcal{V}(\text{Con})$ and $\mathcal{V}(\text{Ante})$ are non-empty, $\mathcal{V}(\text{Con}) \subseteq \mathcal{V}(\text{Ante})$ and $\text{Con}$ does not contain blank nodes.\footnote{Unlike some other rule languages for RDF, the most prominent of which being CONSTRUCT statements in SPARQL itself, we forbid blank nodes; i.e., existential variables in rule consequents which require the “invention” of new blank nodes, typically causing termination issues.} As in Example (5) above, we typically write RDF inference rules as
$$\text{Ante} \rightarrow \text{Con}.$$ (6)
We call sets of inference rules RDF inference rulesets, or rulesets for short.
**Rule Application and Closure.** We define RDF rule application in terms of the immediate consequences of a rule $r$ or a ruleset $R$ on a graph $G$. Given a BGP $P$, we denote as $\mu(P)$ a pattern obtained by substituting variables in $P$ with elements of $I \cup B \cup L$. Let $r$ be a rule of the form (6) and $G$ be a set of RDF triples, then:
$$T_{r}(G) = \{\mu(\text{Con}) \mid \exists \mu \text{ such that } \mu(\text{Ante}) \subseteq G\}.$$
Accordingly, let $T_R(G) = \bigcup_{r \in R} T_r(G)$. Also, let $G_0 = G$ and $G_{i+1} = G_i \cup T_R(G_i)$ for $i \geq 0$. It can be easily shown that there exists the smallest $n$ such that $G_{n+1} = G_n$; we then call $Cl_R(G) = G_n$ the closure of $G$ with respect to ruleset $R$.
We can now further define $R$-entailment between two graphs $G_1$ and $G_2$, written $G_1 \models_R G_2$, as $Cl_R(G_1) \supseteq G_2$. Obviously, for any finite graph $G$, $Cl_R(G)$ is finite. In order to define the semantics of a SPARQL query wrt. $R$-entailment, we now extend graph pattern matching in $[P]_G^D$ towards respecting $R$.
**Definition 1** (extended basic graph pattern matching for $R$-entailment). Let $D$ be a dataset and $G$ be an active graph. The solution of a BGP $P$ wrt. $R$-entailment, denoted $[P]_G^{D,R}$, is $[P]_{Cl_R(G)}^{D}$.
The solution $[P]_G^{D,R}$ naturally extends to more complex patterns according to the SPARQL algebra. In the following we will assume that $[P]_G^{D,R}$ is used for graph pattern matching. Our extension of basic graph pattern matching is in accordance with the conditions for extending BGP matching in [10, Section 12.6]. Basically, these conditions say that any extension needs to guarantee finiteness of the answers, and define some conditions about a “scoping graph.” Intuitively, for our extension, the scoping graph is just equivalent to $Cl_R(G)$. We refer to [10, Section 12.6] for the details.
To account for this generic SPARQL BGP matching extension parameterized by an RDF inference ruleset $R_Q$ per SPARQL query $Q$, we introduce another novel language construct for SPARQL:
- For $r \in I$ we call “using ruleset $r$” a ruleset clause.
Analogously to IRIs denoting graphs, we now assume that an IRI $r \in I$ may not only refer to graphs but also to rulesets, and denote the corresponding ruleset by $\text{ruleset}(r)$. Each query $Q$ may contain zero or more ruleset clauses, and we define the query ruleset $R_Q = \bigcup_{r \in R} \text{ruleset}(r)$, where $R$ is the set of all ruleset clauses in $Q$.
The definitions of solutions of a query and the evaluation of a pattern in this query on active graph $G$ is now defined just as above, with the only difference that answer to a pattern $P$ are given by $[P]_G^{D,R_Q}$.
We observe that whenever $R = \emptyset$, then $R$-entailment boils down to simple RDF entailment. Thus, a query without ruleset clauses will just be evaluated using standard BGP matching. In general, our extension preserves full backward compatibility.
**Proposition 1.** For $R = \emptyset$ and RDF graph $G$, $[P]_G^{D,R} = [P]_G^D$.
Analogously, one might use $R$-entailment as the basis for RDFS entailment as follows. We consider here the $\rho$DF fragment of RDFS entailment [6]. Let $R_{\text{RDFS}}$ denote the ruleset corresponding to the minimal set of entailment rules (2)–(4) from [6]:
$$
\begin{array}{l}
\text{?P rdfs:subPropertyOf ?Q . ?Q rdfs:subPropertyOf ?R .} \rightarrow \text{?P rdfs:subPropertyOf ?R.}\\
\text{?P rdfs:subPropertyOf ?Q . ?S ?P ?O .} \rightarrow \text{?S ?Q ?O.}\\
\text{?C rdfs:subClassOf ?D . ?D rdfs:subClassOf ?E .} \rightarrow \text{?C rdfs:subClassOf ?E.}\\
\text{?C rdfs:subClassOf ?D . ?S rdf:type ?C .} \rightarrow \text{?S rdf:type ?D.}\\
\text{?P rdfs:domain ?C . ?S ?P ?O .} \rightarrow \text{?S rdf:type ?C.}\\
\text{?P rdfs:range ?C . ?S ?P ?O .} \rightarrow \text{?O rdf:type ?C.}
\end{array}
$$
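Under the translation sketched in Section 4 below, each of these rules corresponds one-to-one to a Datalog rule over the 4-ary "triple" predicate used there, and evaluating the resulting recursive program to its least fixpoint computes exactly $Cl_{R_{\text{RDFS}}}(G)$. A sketch for the first two rules (the graph argument G is simply carried along unchanged; this rendering is illustrative, not GiaBATA's verbatim internal encoding):

```datalog
% transitivity of rdfs:subPropertyOf
"triple"(P,"rdfs:subPropertyOf",R,G) :- "triple"(P,"rdfs:subPropertyOf",Q,G),
                                        "triple"(Q,"rdfs:subPropertyOf",R,G).
% property inheritance along rdfs:subPropertyOf
"triple"(S,Q,O,G) :- "triple"(P,"rdfs:subPropertyOf",Q,G),
                     "triple"(S,P,O,G).
```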
Since obviously $G \models_{\text{RDFS}} Cl_{R_{\text{RDFS}}}(G)$, and hence $Cl_{R_{\text{RDFS}}}(G)$ may be viewed as a finite approximation of RDFS-entailment, we can obtain a reasonable definition of a BGP matching extension for RDFS by simply defining $[P]_G^{D,\text{RDFS}} = [P]_G^{D,R_{\text{RDFS}}}$. We allow the special ruleset clause using ruleset rdfs to conveniently refer to this
particular ruleset. Other rulesets may be published under a Web dereferenceable URI, e.g., using an appropriate RIF [23] syntax.
Note, eventually, that our rulesets consist of positive rules, and as such enjoy a natural monotonicity property.
**Proposition 2.** For rulesets \( \mathcal{R} \) and \( \mathcal{R}' \) such that \( \mathcal{R} \subseteq \mathcal{R}' \), and graphs \( G_1 \) and \( G_2 \): if \( G_1 \models_{\mathcal{R}} G_2 \) then \( G_1 \models_{\mathcal{R}'} G_2 \).
Entailment regimes modeled using rulesets can thus be enlarged without retracting former inferences. This would, for instance, allow introducing tighter RDFS-entailment approximations by extending \( R_{\text{RDFS}} \) with further axioms, yet preserving inferred triples.
### 4 Translating SPARQL into Datalog and SQL
Our extensions have been implemented by reducing queries, datasets, and rulesets to a common ground which allows arbitrary interoperability between the three realms. This common ground is Datalog, wherein rulesets naturally fit and to which SPARQL queries can be reduced. Subsequently, the resulting combined Datalog programs can be evaluated over an efficient SQL interface to an underlying relational DBMS that works as triple store.
**From SPARQL to Datalog.** A SPARQL query \( Q \) is transformed into a corresponding Datalog program \( D_Q \). The principle is to break \( Q \) down to a series of Datalog rules, whose body is a conjunction of atoms encoding a graph pattern. \( D_Q \) is mostly a plain Datalog program in \( dlvhex \) [24] input format, i.e. Datalog with external predicates in the \( dlvhex \) language. These are explained along with a full account of the translation in [11, 19]. Main challenges in the transformation from SPARQL to Datalog are (i) faithful treatment of the semantics of joins over possibly unbound variables [11], (ii) the multiset semantics of SPARQL [19], and also (iii) the necessity of Skolemization of blank nodes in construct queries [8]. Treatment of optional statements is carried out by means of an appropriate encoding which exploits negation as failure. Special external predicates of \( dlvhex \) are used for supporting some features of the SPARQL language: in particular, importing RDF data is achieved using the external \&rdf predicate, which can be seen as a built-in referring to external data. Moreover, SPARQL filter expressions are implemented using the \( dlvhex \) external \&eval predicate in \( D_Q \).
Let us illustrate this transformation step by an example: the following query \( A \) asking for persons who are not named “Alice” and optionally their email addresses
```sparql
select * from <http://alice.org/>
where {
?X a foaf:Person.
?X foaf:name ?N.
filter ( ?N != "Alice")
optional { ?X foaf:mbox ?M }
}
```
is translated to the program \( D_A \) as follows:
```datalog
(r1) "triple"(S,P,0,default) :- &rdf[ "alice.org" ](S,P,0).
(r2) answer1(X_N,X_X,default) :- "triple"(X_X,"rdf:type","foaf:Person",default),
"triple"(X_X,"foaf:name",X_N,default),
&eval[ " ?N != 'Alice' ","N",X_N ](true).
(r3) answer2(X_M,X_X,default) :- "triple"(X_X,"foaf:mbox",X_M,default).
(r4) answer_b_join_1(X_M,X_X,N_X,X_X,default) :- answer1(X_N,X_X,default),
answer2(X_M,X_X,default),
answer_b_join_1(X_M,X_X,N_X,X_X,default).
(r5) answer_b_join_1(null,X_X,N_X,X_X,default) :- answer1(X_N,X_X,default),
not answer2_prime(X_X,default),
answer_b_join_1(X_M,X_N,X_X,X_X)
(r6) answer2_prime(X_X) :- answer1(X_N,X_X,default),
answer2(X_M,X_X,default).
(r7) answer(X_M,X_N,X_X) :- answer_b_join1(X_M,X_N,X_X,X_X).
```
where the first rule (r1) computes the predicate "triple" taking values from the built-in predicate &rdf. The latter is generally used to import RDF statements from the specified URI. The following rules (r2) and (r3) compute the solutions for the filtered basic graph patterns \{ ?X a foaf:Person. ?X foaf:name ?N. filter (?N != "Alice") \} and \{ ?X foaf:mbox ?M \}. In particular, note here that the evaluation of filter expressions is "outsourced" to the built-in predicate &eval, which takes a filter expression and an encoding of variable bindings as arguments, and returns the evaluation value (true, false or error, following the SPARQL semantics). In order to emulate SPARQL's optional patterns, a combination of join and set-difference operations is used, which is established by rules (r4)–(r6). Set difference is simulated by using both null values and negation as failure. According to the semantics of SPARQL, one has to take particular care of variables which are joined and possibly unbound (i.e., set to the null value) in the course of this translation for the general case. Finally, the dedicated predicate answer in rule (r7) collects the answer substitutions for \( A \).
**From Datalog to SQL.** For this step we rely on the system DLV\textsubscript{DB} [25] that implements Datalog under stable model semantics on top of a DBMS of choice. DLV\textsubscript{DB} is able to translate Datalog programs in a corresponding SQL query plan to be issued to the underlying DBMS. RDF Datasets are simply stored in a database \( D \), but the native dlvhex &rdf and &eval predicates in \( D_Q \) cannot be processed by DLV\textsubscript{DB} directly over \( D \). So, \( D_Q \) needs to be post-processed before it can be converted into suitable SQL statements.
Rule (r1) corresponds to loading persistent data into \( D \), instead of loading triples via the &rdf built-in predicate. In practice, the predicate "triple" occurring in program \( D_A \) is directly associated to a database table TRIPLE in \( D \). This operation is done off-line by a loader module which populates the TRIPLE table accordingly, while (r1) is removed from the program. The &eval predicate calls are recursively broken down into WHERE conditions in SQL statements, as sketched below when we discuss the implementation of filter statements.
After post-processing, we obtain a program \( D'_Q \), which DLV\textsubscript{DB} allows to be executed on a DBMS by translating it to corresponding SQL statements. \( D'_Q \) is coupled with a mapping file which defines the correspondences between predicate names appearing in \( D'_Q \) and corresponding table and view names stored in the DBMS \( D \).
For instance, rule (r4) of \( D_A \) results in the following SQL statement issued to the RDBMS by DLV\textsubscript{DB}:

    INSERT INTO answer_b_join_1
    SELECT DISTINCT answer2_p2.a1, answer1_p1.a1, answer1_p1.a2, 'default'
    FROM answer1 answer1_p1, answer2 answer2_p2
    WHERE (answer1_p1.a2 = answer2_p2.a2)
      AND (answer1_p1.a3 = 'default')
      AND (answer2_p2.a3 = 'default')
    EXCEPT (SELECT * FROM answer_b_join_1)
Whenever possible, the predicates for computing intermediate results such as answer1, answer2, answer_b_join_1, ... are mapped to SQL views rather than materialized tables, enabling dynamic evaluation of predicate contents on the DBMS side.\textsuperscript{12}
**Schema rewriting.** Our system allows for customizing the scheme in which triples are stored. It is known and debated [26] that in choosing the data scheme of \(D\) several aspects have to be considered, which affect performance and scalability when handling large-scale RDF data.
\textsuperscript{12}For instance, recursive predicates require to be associated with permanent tables, while remaining predicates are normally associated to views.
A widely adopted solution is to exploit a single table storing quadruples of the form
\((s, p, o, c)\) where \(s, p, o\) and \(c\) are, respectively, the triple subject, predicate, object and
context the triple belongs to. This straightforward representation is easily improved [27]
by avoiding the explicit storage of string values referring to URIs and literals. Instead, such
values are replaced with a corresponding hash value.
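A minimal sketch of such a hashed quadruple scheme in SQL (table, column, and index names here are illustrative assumptions, not necessarily GiaBATA's actual schema):

```sql
-- one row per triple-in-context; all four positions store term hashes
CREATE TABLE triple (s BIGINT, p BIGINT, o BIGINT, c BIGINT);
-- dictionary for translating hashes back to the original lexical forms
CREATE TABLE symbol (hash BIGINT PRIMARY KEY, value TEXT NOT NULL);
CREATE INDEX triple_spoc ON triple (s, p, o, c);
```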
Other approaches suggest alternative data structures, e.g., property tables [27, 26].
These aim at denormalizing RDF graphs by storing them in a flattened representation,
trying to encode triples according to the hidden “schema” of RDF data. Similarly to a
traditional relational schema, in this approach \(D\) contains a table for each known property
name (and often also per class, splitting up the \(\text{rdf:type}\) table).
Our system gives sufficient flexibility in order to program different storage schemes:
while on higher levels of abstraction data are accessible via the 4-ary triple predicate,
a schema rewriter module is introduced in order to match \(D'_Q\) to the current database
scheme. This module currently adapts \(D'_Q\) by replacing constant IRIs and literals with
their corresponding hash value, and introducing further rules which translate answers,
converting hash values back to their original string representation.
**Magic sets.** Notably, DLV\textsubscript{DB} can post-process \(D'_Q\) using the magic sets technique, an
optimization method well-known in the database field [28]. The optimized program \(mD'_Q\)
tailors the data to be queried to an extent significantly smaller than the original \(D'_Q\).
The application of magic sets allows, e.g., to apply entailment rules \(R_{RDFS}\) only on
triples which might affect the answer to \(Q\), preventing thus the full computation and/or
materialization of inferred data.
**Implementation of filter statements.** Evaluation of SPARQL filter statements is
pushed down to the underlying database \(D\) by translating filter expressions to appropriate
SQL views. This allows to dynamically evaluate filter expressions on the DBMS side. For
instance, given a rule \(r \in D_Q\) of the form
    h(X,Y) :- b(X,Y), &eval[f_Y](true).
where the \&eval atom encodes the filter statement (\(f_Y\) representing the filter expression), then \(r\) is translated to
    h(X,Y) :- b'(X,Y).
where \(b'\) is a fresh predicate associated via the mapping file to a database view. Such a
view defines the SQL code to be used for the computation of \(f_Y\), like
    CREATE VIEW B' AS ( SELECT X, Y FROM B WHERE f'_Y )
where \(f'_Y\) is an appropriate translation of the SPARQL filter expression \(f_Y\) at hand to an SQL Boolean condition,\(^{13}\) while \(B\) is the DBMS counterpart table of the predicate \(b\).
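For instance, for the filter ?N != "Alice" of query \(A\) above, the view condition boils down to a plain SQL inequality over the column holding the binding of ?N; the concrete view and table names below are assumptions for illustration:

```sql
CREATE VIEW answer1_filtered AS
  ( SELECT a1, a2, a3 FROM answer1_unfiltered WHERE a1 <> 'Alice' );
```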
## 5 Experiments
In order to illustrate that our approach is practically feasible, we present a quantitative
performance comparison between our prototype system, GiaBATA, which implements the
approach outlined before, and some state-of-the-art triple stores. The tests were done on an
Intel P4 3GHz machine with 1.5GB RAM under Linux 2.6.24. Let us briefly outline the
main features and versions of the triple stores we used in our comparison.
\(^{13}\) A version of this translation can be found in [29].
AllegroGraph works as a database and application framework for building Semantic Web applications. The system assures persistent storage and RDFS++ reasoning, a semantic extension including the RDF and RDFS constructs and some OWL constructs (owl:sameAs, owl:inverseOf, owl:TransitiveProperty, owl:hasValue). We tested the free Java edition of AllegroGraph 3.2 with its native persistence mechanism.\textsuperscript{14}
ARQ is a query engine implementing SPARQL under the Jena framework.\textsuperscript{15} It can be deployed on several persistent storage layers, like filesystem or RDBMS, and it includes a rule-based inference engine. Being based on the Jena library, it provides inferencing models and enables (incomplete) OWL reasoning. Also, the system comes with support for custom rules. We used ARQ 2.6 with RDBMS backend connected to PostgreSQL 8.3.
GiaBATA [15] is our prototype system implementing the SPARQL extensions described above. GiaBATA is based on a combination of the DLV\textsuperscript{DB} [25] and dlvhex [24] systems, and caters for persistent storage of both data and ontology graphs. The former system is a variant of DLV [13] with built-in database support. The latter is a solver for HEX-programs [24], which features an extensible plugin system which we used for developing a rewriter-plugin able to translate SPARQL queries to HEX-programs. The tests were done using development versions of the above systems connected to PostgreSQL 8.3.
Sesame is an open source RDF database with support for querying and reasoning.\textsuperscript{16} In addition to its in-memory database engine it can be coupled with relational databases or deployed on top of file systems. Sesame supports RDFS inference and other entailment regimes such as OWL-Horst [5] by coupling with external reasoners. Sesame provides an infrastructure for defining custom inference rules. Our tests have been done using Sesame 2.3 with persistence support given by the native store.
First of all, it is worth noting that all systems allow persistent storage on RDBMS. All systems, with the exception of ours, implement also direct filesystem storage. All cover RDFS (actually, disregarding axiomatic triples) and partial or non-standard OWL fragments. Although all the systems feature some form of persistence, both reasoning and query evaluation are usually performed in main memory. All the systems, except AllegroGraph and ours, adopt a persistent materialization approach for inferring data. All systems – along with basic inference – support named graph querying, but, with the exception of GiaBATA, combining both features results in incomplete behavior as described in Section 2. Inference is properly handled as long as the query ranges over the whole dataset, whereas it fails in case of querying explicit default or named graphs. That makes querying of named graphs involving inference impossible with standard systems.
For performance comparison we rely on the LUBM benchmark suite [16]. Our tests involve the test datasets LUBM\textsuperscript{n} for \( n \in \{1, 5, 10, 30\} \), with LUBM30 having roughly four million triples (exact numbers are reported in [16]). In order to test the additional performance cost of our extensions, we opted for showing how the performance figures change when queries which require RDFS entailment rules (LUBM Q4–Q7) are considered, w.r.t. queries on which rules do not have an impact (LUBM Q1–Q3; see the Appendix of [16] for the SPARQL encodings of Q1–Q7). These experiments are enough for comparing performance trends, so we did not consider larger instances of LUBM at this stage. Note that evaluation times include the data loading times.
\textsuperscript{14} System available at http://agraph.franz.com/allegrograph/.
\textsuperscript{15} Distributed at https://jena.svn.sourceforge.net/svnroot/jena/ARQ/.
\textsuperscript{16} System available at http://www.openrdf.org/.
Indeed, while former performance benchmarks do not take this aspect into account, from the semantic point of view pre-materialization-at-loading computes the inferences needed for complete query answering under the entailment regime of choice. Dynamic querying of RDFS instead moves inference from this materialization to the query step, which would result in an apparent advantage for systems that rely on pre-materialization for RDFS data. Also, the setting of this paper assumes materialization cannot be performed \textit{una tantum}, since inferred information depends on the entailment regime of choice and on the dataset at hand, on a per query basis. We set a 120min query timeout limit for all test runs.
Our test runs include the following system setups: (i) “Allegro (native)” and “Allegro (ordered)”; (ii) “ARQ”; (iii) “GiaBATA (native)” and “GiaBATA (ordered)”; and (iv) “Sesame”. For (i) and (iii), which apply dynamic inference mechanisms, we use “(native)” and “(ordered)” to distinguish between executions of queries in LUBM’s native ordering and in an optimized, reordered version, respectively. The GiaBATA test runs both
use Magic Sets optimization. To appreciate the cost of RDFS reasoning for queries $Q4$–$Q7$, the test runs for (i)–(iv) also include the loading time of the datasets, i.e., the time needed in order to perform RDFS data materialization or to simply store the raw RDF data.
The detailed test results are summarized in Fig. 2. For the RDF test queries $Q1$–$Q3$, GiaBATA is competitive on $Q1$ and $Q3$. ARQ and Sesame turned out to be competitive on $Q2$, having the best query response times, while Allegro (native) scored worst. For queries involving inference ($Q4$–$Q7$), Allegro shows better results. Interestingly, for the systems applying dynamic inference, namely Allegro and GiaBATA, query pattern reordering plays a crucial role in preserving performance and in assuring scalability; without reordering, the queries simply time out. In particular, Allegro is well-suited for queries ranging over several properties of a single class, whereas if the number of classes and properties increases ($Q7$), GiaBATA exhibits better scalability. Finally, we do not further distinguish between systems relying on DBMS support and systems using native structures; since the figures (in logarithmic scale) depict overall loading and querying time, this penalizes, in specific cases, the systems that use a DBMS.
6 Future Work and Conclusion
We presented a framework for dynamic querying of RDFS data, which extends SPARQL by two language constructs: **using ontology** and **using ruleset**. The former is geared towards dynamically creating the dataset, whereas the latter adapts the entailment regime of the query. We have shown that our extension conservatively extends the standard SPARQL language and that, by selecting appropriate rules in **using ruleset**, we may choose varying rule-based entailment regimes at query time. We illustrated how such extended SPARQL queries can be translated to Datalog and SQL, thus providing entry points for implementation and well-known optimization techniques. Our initial experiments have shown that although dynamic querying does more computation at query time, it is still competitive for use cases that need on-the-fly construction of datasets and entailment regimes. Especially here, query optimization techniques play a crucial role, and our results suggest focusing further research in this direction. Furthermore, we aim at conducting a proper computational analysis, as has been done for Hypothetical Datalog [30], in which the truth of atoms is conditioned by hypothetical additions to the dataset at hand. Likewise, our framework allows ontological knowledge and rules to be added to datasets before querying; note, however, that in the spirit of [31], our framework allows for hypotheses (also called “premises”) on a per-query basis rather than a per-atom basis.
Practical rc.d scripting in BSD
Abstract
Beginners may find it difficult to relate the facts from the formal documentation on the BSD rc.d framework with the practical tasks of rc.d scripting. In this article, we consider a few typical cases of increasing complexity, show rc.d features suited for each case, and discuss how they work. Such an examination should provide reference points for further study of the design and efficient application of rc.d.
Table of Contents
1. Introduction
2. Outlining the task
3. A dummy script
4. A configurable dummy script
5. Startup and shutdown of a simple daemon
6. Startup and shutdown of an advanced daemon
7. Connecting a script to the rc.d framework
8. Giving more flexibility to an rc.d script
9. Further reading
1. Introduction
The historical BSD had a monolithic startup script, /etc/rc. It was invoked by init(8) at system boot time and performed all userland tasks required for multi-user operation: checking and mounting file systems, setting up the network, starting daemons, and so on. The precise list of tasks was not the same in every system; admins needed to customize it. With few exceptions, /etc/rc had to be modified, and true hackers liked it.
The real problem with the monolithic approach was that it provided no control over the individual components started from /etc/rc. For instance, /etc/rc could not restart a single daemon. The system admin had to find the daemon process by hand, kill it, wait until it actually exited, then browse through /etc/rc for the flags, and finally type the full command line to start the daemon again. The task would become even more difficult and prone to errors if the service to restart consisted of more than one daemon or demanded additional actions. In a few words, the single script failed to fulfil what scripts are for: to make the system admin’s life easier.
Later there was an attempt to split out some parts of /etc/rc for the sake of starting the most important subsystems separately. The notorious example was /etc/netstart to bring up networking. It did allow for accessing the network from single-user mode, but it did not integrate well into the automatic startup process because parts of its code needed to interleave with actions essentially
unrelated to networking. That was why /etc/netstart mutated into /etc/rc.network. The latter was no longer an ordinary script; it comprised large, tangled sh(1) functions called from /etc/rc at different stages of system startup. However, as the startup tasks grew diverse and sophisticated, the "quasi-modular" approach became even more of a drag than the monolithic /etc/rc had been.
Without a clean and well-designed framework, the startup scripts had to bend over backwards to satisfy the needs of rapidly developing BSD-based operating systems. It became obvious at last that more steps are necessary on the way to a fine-grained and extensible rc system. Thus BSD rc.d was born. Its acknowledged fathers were Luke Mewburn and the NetBSD community. Later it was imported into FreeBSD. Its name refers to the location of system scripts for individual services, which is in /etc/rc.d. Soon we will learn about more components of the rc.d system and see how the individual scripts are invoked.
The basic ideas behind BSD rc.d are fine modularity and code reuse. Fine modularity means that each basic “service” such as a system daemon or primitive startup task gets its own sh(1) script able to start the service, stop it, reload it, check its status. A particular action is chosen by the command-line argument to the script. The /etc/rc script still drives system startup, but now it merely invokes the smaller scripts one by one with the start argument. It is easy to perform shutdown tasks as well by running the same set of scripts with the stop argument, which is done by /etc/rc.shutdown. Note how closely this follows the Unix way of having a set of small specialized tools, each fulfilling its task as well as possible. Code reuse means that common operations are implemented as sh(1) functions and collected in /etc/rc.subr. Now a typical script can be just a few lines’ worth of sh(1) code. Finally, an important part of the rc.d framework is rcorder(8), which helps /etc/rc to run the small scripts orderly with respect to dependencies between them. It can help /etc/rc.shutdown, too, because the proper order for the shutdown sequence is opposite to that of startup.
The BSD rc.d design is described in the original article by Luke Mewburn, and the rc.d components are documented in great detail in the respective manual pages. However, it might not appear obvious to an rc.d newbie how to tie the numerous bits and pieces together to create a well-styled script for a particular task. Therefore this article will try a different approach to describe rc.d. It will show which features should be used in a number of typical cases, and why. Note that this is not a how-to document because our aim is not at giving ready-made recipes, but at showing a few easy entrances into the rc.d realm. Neither is this article a replacement for the relevant manual pages. Do not hesitate to refer to them for more formal and complete documentation while reading this article.
There are prerequisites to understanding this article. First of all, you should be familiar with the sh(1) scripting language to master rc.d. In addition, you should know how the system performs userland startup and shutdown tasks, which is described in rc(8).
This article focuses on the FreeBSD branch of rc.d. Nevertheless, it may be useful to NetBSD developers, too, because the two branches of BSD rc.d not only share the same design but also stay similar in their aspects visible to script authors.
2. Outlining the task
A little consideration before starting $EDITOR will not hurt. To write a well-tempered rc.d script for a system service, we should be able to answer the following questions first:
• Is the service mandatory or optional?
• Will the script serve a single program, e.g., a daemon, or perform more complex actions?
• Which other services will our service depend on, and vice versa?
From the examples that follow we will see why it is important to know the answers to these questions.
3. A dummy script
The following script just emits a message each time the system boots up:
```bash
#!/bin/sh

. /etc/rc.subr

name="dummy"
start_cmd="${name}_start"
stop_cmd=":"

dummy_start()
{
	echo "Nothing started."
}

load_rc_config $name
run_rc_command "$1"
```
Things to note are:
- An interpreted script should begin with the magic "shebang" line. That line specifies the interpreter program for the script. Due to the shebang line, the script can be invoked exactly like a binary program provided that it has the execute bit set. (See `chmod(1)`.) For example, a system admin can run our script manually, from the command line:
```bash
# /etc/rc.d/dummy start
```
To be properly managed by the rc.d framework, its scripts need to be written in the `sh(1)` language. If you have a service or port that uses a binary control utility or a startup routine written in another language, install that element in `/usr/sbin` (for the system) or `/usr/local/sbin` (for ports) and call it from a `sh(1)` script in the appropriate rc.d directory.
If you would like to learn the details of why rc.d scripts must be written in the `sh(1)` language, see how `/etc/rc` invokes them by means of `run_rc_script`, then study the implementation of `run_rc_script` in `/etc/rc.subr`.
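For concreteness, here is a minimal sketch of such a wrapper, assuming a hypothetical binary control utility installed as `/usr/local/sbin/fooctl` (the name and its `start`/`stop` subcommands are illustrative, not part of any real port):

```bash
#!/bin/sh

. /etc/rc.subr

name="foo"
start_cmd="${name}_start"
stop_cmd="${name}_stop"

# Delegate the real work to the hypothetical binary control utility.
foo_start()
{
	/usr/local/sbin/fooctl start
}

foo_stop()
{
	/usr/local/sbin/fooctl stop
}

load_rc_config $name
run_rc_command "$1"
```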
In /etc/rc.subr, a number of sh(1) functions are defined for an rc.d script to use. The functions are documented in rc.subr(8). While it is theoretically possible to write an rc.d script without ever using rc.subr(8), its functions prove extremely handy and make the job an order of magnitude easier. So it is no surprise that everybody resorts to rc.subr(8) in rc.d scripts. We are not going to be an exception.
An rc.d script must "source"/etc/rc.subr (include it using ".") before it calls rc.subr(8) functions so that sh(1) has an opportunity to learn the functions. The preferred style is to source /etc/rc.subr first of all.
Some useful functions related to networking are provided by another include file, /etc/network.subr.
The mandatory variable name specifies the name of our script. It is required by rc.subr(8). That is, each rc.d script must set name before it calls rc.subr(8) functions.
Now it is the right time to choose a unique name for our script once and for all. We will use it in a number of places while developing the script. For a start, let us give the same name to the script file, too.
The current style of rc.d scripting is to enclose values assigned to variables in double quotes. Keep in mind that it is just a style issue that may not always be applicable. You can safely omit quotes from around simple words without sh(1) metacharacters in them, while in certain cases you will need single quotes to prevent any interpretation of the value by sh(1). A programmer should be able to tell the language syntax from style conventions and use both of them wisely.
The main idea behind rc.subr(8) is that an rc.d script provides handlers, or methods, for rc.subr(8) to invoke. In particular, start, stop, and other arguments to an rc.d script are handled this way. A method is a sh(1) expression stored in a variable named argument_cmd, where argument corresponds to what can be specified on the script's command line. We will see later how rc.subr(8) provides default methods for the standard arguments.
To make the code in rc.d more uniform, it is common to use ${name} wherever appropriate. Thus a number of lines can be just copied from one script to another.
We should keep in mind that rc.subr(8) provides default methods for the standard arguments. Consequently, we must override a standard method with a no-op sh(1) expression if we want it to do nothing.
The body of a sophisticated method can be implemented as a function. It is a good idea to make the function name meaningful.
It is strongly recommended to add the prefix ${name} to the names of all functions defined in our script so they never clash with the functions from rc.subr(8) or another common include file.
This call to rc.subr(8) loads rc.conf(5) variables. Our script makes no use of them yet, but it still is
recommended to load `rc.conf(5)` because there can be `rc.conf(5)` variables controlling `rc.subr(8)` itself.
Usually this is the last command in an `rc.d` script. It invokes the `rc.subr(8)` machinery to perform the requested action using the variables and methods our script has provided.
### 4. A configurable dummy script
Now let us add some controls to our dummy script. As you may know, `rc.d` scripts are controlled with `rc.conf(5)`. Fortunately, `rc.subr(8)` hides all the complications from us. The following script uses `rc.conf(5)` via `rc.subr(8)` to see whether it is enabled in the first place, and to fetch a message to show at boot time. These two tasks in fact are independent. On the one hand, an `rc.d` script can just support enabling and disabling its service. On the other hand, a mandatory `rc.d` script can have configuration variables. We will do both things in the same script though:
```
#!/bin/sh
. /etc/rc.subr
name=dummy
rcvar=dummy_enable ①
start_cmd="${name}_start"
stop_cmd=":
load_rc_config $name ②
: ${dummy_enable:=no} ③
: ${dummy_msg="Nothing started."} ④
dummy_start()
{
echo "$dummy_msg" ⑤
}
run_rc_command "$1"
```
What changed in this example?
- The variable `rcvar` specifies the name of the ON/OFF knob variable.
- Now `load_rc_config` is invoked earlier in the script, before any `rc.conf(5)` variables are accessed.
While examining `rc.d` scripts, keep in mind that `sh(1)` defers the evaluation of expressions in a function until the latter is called. Therefore it is not an error to invoke `load_rc_config` as late as just before `run_rc_command` and still access `rc.conf(5)` variables from the method functions exported to `run_rc_command`. This is because the method functions are to be called by `run_rc_command`, which is invoked after `load_rc_config`.
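As a minimal sketch of this point (not a recommended style), the following hypothetical variant accesses an `rc.conf(5)` variable from a method function even though `load_rc_config` runs only at the very end, just before `run_rc_command`:

```bash
#!/bin/sh

. /etc/rc.subr

name="dummy"
start_cmd="${name}_start"
stop_cmd=":"

dummy_start()
{
	# This is evaluated only when run_rc_command invokes the method,
	# i.e., after load_rc_config below has already run.
	echo "$dummy_msg"
}

load_rc_config $name
: ${dummy_msg="Nothing started."}
run_rc_command "$1"
```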
A warning will be emitted by `run_rc_command` if `rcvar` itself is set, but the indicated knob variable is unset. If your rc.d script is for the base system, you should add a default setting for the knob to `/etc/defaults/rc.conf` and document it in `rc.conf(5)`. Otherwise it is your script that should provide a default setting for the knob. The canonical approach to the latter case is shown in the example.
You can make `rc.subr(8)` act as though the knob is set to ON, irrespective of its current setting, by prefixing the argument to the script with `one` or `force`, as in `onestart` or `forcestop`. Keep in mind though that `force` has other dangerous effects we will touch upon below, while `one` just overrides the ON/OFF knob. E.g., assume that `dummy_enable` is OFF. The following command will run the `start` method in spite of the setting:
```
# /etc/rc.d/dummy onestart
```
Now the message to be shown at boot time is no longer hard-coded in the script. It is specified by an `rc.conf(5)` variable named `dummy_msg`. This is a trivial example of how `rc.conf(5)` variables can control an rc.d script.
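For instance, the admin could put the following hypothetical lines into `/etc/rc.conf` to enable the script and customize its message:

```bash
dummy_enable="YES"
dummy_msg="Hello from rc.conf!"
```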
The names of all `rc.conf(5)` variables used exclusively by our script must have the same prefix: `${name}_`. For example: `dummy_mode`, `dummy_state_file`, and so on.
While it is possible to use a shorter name internally, e.g., just `msg`, adding the unique prefix `${name}_` to all global names introduced by our script will save us from possible collisions with the `rc.subr(8)` namespace.
As a rule, rc.d scripts of the base system need not provide defaults for their `rc.conf(5)` variables because the defaults should be set in `/etc/defaults/rc.conf` instead. On the other hand, rc.d scripts for ports should provide the defaults as shown in the example.
Here we use `dummy_msg` to actually control our script, i.e., to emit a variable message. Use of a shell function is overkill here, since it only runs a single command; an equally valid alternative is:
```
start_cmd="echo \"$dummy_msg\""
```
5. Startup and shutdown of a simple daemon
We said earlier that `rc.subr(8)` could provide default methods. Obviously, such defaults cannot be too general. They are suited for the common case of starting and shutting down a simple daemon program. Let us assume now that we need to write an rc.d script for such a daemon called `mumbled`. Here it is:
```
#!/bin/sh

. /etc/rc.subr

name=mumbled
rcvar=mumbled_enable

command="/usr/sbin/${name}"

load_rc_config $name
run_rc_command "$1"
```
Pleasingly simple, isn’t it? Let us examine our little script. The only new thing to note is as follows:
- The `command` variable is meaningful to `rc.subr(8)`. If it is set, `rc.subr(8)` will act according to the scenario of serving a conventional daemon. In particular, the default methods will be provided for such arguments: `start`, `stop`, `restart`, `poll`, and `status`.
The daemon will be started by running `$command` with command-line flags specified by `$mumbled_flags`. Thus all the input data for the default `start` method are available in the variables set by our script. Unlike `start`, other methods may require additional information about the process started. For instance, `stop` must know the PID of the process to terminate it. In the present case, `rc.subr(8)` will scan through the list of all processes, looking for a process with its name equal to `procname`. The latter is another variable of meaning to `rc.subr(8)`, and its value defaults to that of `command`. In other words, when we set `command`, `procname` is effectively set to the same value. This enables our script to kill the daemon and to check if it is running in the first place.
Some programs are in fact executable scripts. The system runs such a script by starting its interpreter and passing the name of the script to it as a command-line argument. This is reflected in the list of processes, which can confuse `rc.subr(8)`. You should additionally set `command_interpreter` to let `rc.subr(8)` know the actual name of the process if `$command` is a script.
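A short sketch of this, assuming a hypothetical daemon that is really a Perl script, could look as follows:

```bash
name=mumbled
rcvar=mumbled_enable

# Hypothetical: the daemon is a Perl script, so the process table shows
# the interpreter rather than /usr/sbin/mumbled itself; tell rc.subr(8)
# about it so that stop and status can still find the process.
command="/usr/sbin/${name}"
command_interpreter="/usr/local/bin/perl"
```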
For each `rc.d` script, there is an optional `rc.conf(5)` variable that takes precedence over `command`. Its name is constructed as follows: `${name}_program`, where `name` is the mandatory variable we discussed earlier. E.g., in this case it will be `mumbled_program`. It is `rc.subr(8)` that arranges `${name}_program` to override `command`.
Of course, `sh(1)` will permit you to set `${name}_program` from `rc.conf(5)` or the script itself even if `command` is unset. In that case, the special properties of `${name}_program` are lost, and it becomes an ordinary variable your script can use for its own purposes. However, the sole use of `${name}_program` is discouraged because using it together with `command` became an idiom of `rc.d` scripting.
For more detailed information on default methods, refer to `rc.subr(8)`.
6. Startup and shutdown of an advanced daemon
Let us add some meat onto the bones of the previous script and make it more complex and featureful. The default methods can do a good job for us, but we may need some of their aspects tweaked. Now we will learn how to tune the default methods to our needs.
```bash
#!/bin/sh
. /etc/rc.subr
name=mumbled
rcvar=mumbled_enable
command="/usr/sbin/${name}" ①
command_args="mock arguments > /dev/null 2>&1"
pidfile="/var/run/${name}.pid" ②
required_files="/etc/${name}.conf /usr/share/misc/${name}.rules" ③
sig_reload="USR1" ④
start_precmd="${name}_prestart" ⑤
stop_postcmd="echo Bye-bye" ⑥
extra_commands="reload plugh xyzzy" ⑦
plugh_cmd="mumbled_plugh" ⑧
xyzzy_cmd="echo 'Nothing happens.'"
mumbled_prestart()
{
	if checkyesno mumbled_smart; then ⑨
		rc_flags="-o smart ${rc_flags}" ⑩
	fi
	case "$mumbled_mode" in
	foo)
		rc_flags="-frotz ${rc_flags}"
		;;
	bar)
		rc_flags="-baz ${rc_flags}"
		;;
	*)
		warn "Invalid value for mumbled_mode" ⑪
		return 1 ⑫
		;;
	esac
	run_rc_command xyzzy
	return 0
}

mumbled_plugh()
{
	echo 'A hollow voice says "plugh".'
}

load_rc_config $name
run_rc_command "$1"
```
Additional arguments to $command can be passed in command_args. They will be added to the command line after $mumbled_flags. Since the final command line is passed to eval for its actual execution, input and output redirections can be specified in command_args.
Never include dashed options, like -X or --foo, in command_args. The contents of command_args will appear at the end of the final command line, hence they are likely to follow arguments present in ${name}_flags; but most commands will not recognize dashed options after ordinary arguments. A better way of passing additional options to $command is to add them to the beginning of ${name}_flags. Another way is to modify rc_flags as shown later.
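To make the distinction concrete, here is a hedged sketch: a hypothetical dashed option -v travels through mumbled_flags (e.g., set in rc.conf(5)), while command_args carries only ordinary arguments and redirections:

```bash
# In /etc/rc.conf (hypothetical): dashed options belong in ${name}_flags,
# where they precede anything listed in command_args.
mumbled_flags="-v"

# In the script itself: ordinary arguments and redirections only.
command_args="mock arguments > /dev/null 2>&1"
```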
A good-mannered daemon should create a pidfile so that its process can be found more easily and reliably. The variable pidfile, if set, tells rc.subr(8) where it can find the pidfile for its default methods to use.
In fact, rc.subr(8) will also use the pidfile to see if the daemon is already running before starting it. This check can be skipped by using the faststart argument.
If the daemon cannot run unless certain files exist, just list them in required_files, and rc.subr(8) will check that those files do exist before starting the daemon. There also are required_dirs and required_vars for directories and environment variables, respectively. They all are described in detail in rc.subr(8).
The default method from rc.subr(8) can be forced to skip the prerequisite checks by using forcestart as the argument to the script.
We can customize signals to send to the daemon in case they differ from the well-known ones. In particular, sig_reload specifies the signal that makes the daemon reload its configuration; it is SIGHUP by default. Another signal is sent to stop the daemon process; the default is SIGTERM, but this can be changed by setting sig_stop appropriately.
The signal names should be specified to rc.subr(8) without the SIG prefix, as it is shown in the example. The FreeBSD version of kill(1) can recognize the SIG prefix, but the versions from other OS types may not.
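For example, a hypothetical daemon that prefers SIGINT for a clean shutdown could declare:

```bash
# Signal names go without the SIG prefix.
sig_stop="INT"
sig_reload="USR1"
```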
Performing additional tasks before or after the default methods is easy. For each command-
argument supported by our script, we can define `argument_precmd` and `argument_postcmd`. These `sh(1)` commands are invoked before and after the respective method, as it is evident from their names.
Overriding a default method with a custom `argument_cmd` still does not prevent us from making use of `argument_precmd` or `argument_postcmd` if we need to. In particular, the former is good for checking custom, sophisticated conditions that should be met before performing the command itself. Using `argument_precmd` along with `argument_cmd` lets us logically separate the checks from the action.
Do not forget that you can cram any valid `sh(1)` expressions into the methods, pre-, and post-commands you define. Just invoking a function that makes the real job is a good style in most cases, but never let style limit your understanding of what is going on behind the curtain.
If we would like to implement custom arguments, which can also be thought of as commands to our script, we need to list them in `extra_commands` and provide methods to handle them.
The `reload` command is special. On the one hand, it has a preset method in `rc.subr(8)`. On the other hand, `reload` is not offered by default. The reason is that not all daemons use the same reload mechanism and some have nothing to reload at all. So we need to ask explicitly that the built-in functionality be provided. We can do so via `extra_commands`.
What do we get from the default method for `reload`? Quite often daemons reload their configuration upon reception of a signal—typically, SIGHUP. Therefore `rc.subr(8)` attempts to reload the daemon by sending a signal to it. The signal is preset to SIGHUP but can be customized via `sig_reload` if necessary.
Our script supports two non-standard commands, `plugh` and `xyzzy`. We saw them listed in `extra_commands`, and now it is time to provide methods for them. The method for `xyzzy` is just inlined while that for `plugh` is implemented as the `mumbled_plugh` function.
Non-standard commands are not invoked during startup or shutdown. Usually they are for the system admin’s convenience. They can also be used from other subsystems, e.g., `devd(8)` if specified in `devd.conf(5)`.
The full list of available commands can be found in the usage line printed by `rc.subr(8)` when the script is invoked without arguments. For example, here is the usage line from the script under study:
```
# /etc/rc.d/mumbled
Usage: /etc/rc.d/mumbled [fast|force|one]
(start|stop|restart|rcvar|reload|plugh|xyzzy|status|poll)
```
A script can invoke its own standard or non-standard commands if needed. This may look similar to calling functions, but we know that commands and shell functions are not always the same thing. For instance, `xyzzy` is not implemented as a function here. In addition, there can be a pre-command and post-command, which should be invoked orderly. So the proper way for a script to run its own
command is by means of `rc.subr(8)`, as shown in the example.
- A handy function named *checkyesno* is provided by `rc.subr(8)`. It takes a variable name as its argument and returns a zero exit code if and only if the variable is set to *YES*, or *TRUE*, or *ON*, or *1*, case insensitive; a non-zero exit code is returned otherwise. In the latter case, the function tests the variable for being set to *NO*, *FALSE*, *OFF*, or *0*, case insensitive; it prints a warning message if the variable contains anything else, i.e., junk.
Keep in mind that for *sh(1)* a zero exit code means true and a non-zero exit code means false.
The *checkyesno* function takes a *variable name*. Do not pass the expanded *value* of a variable to it; it will not work as expected.
The following is the correct usage of *checkyesno*:
```bash
if checkyesno mumbled_enable; then
foo
fi
```
On the contrary, calling *checkyesno* as shown below will not work - at least not as expected:
```bash
if checkyesno "${mumbled_enable}"; then
foo
fi
```
- We can affect the flags to be passed to `$command` by modifying *rc_flags* in `$start_precmd`.
- In certain cases we may need to emit an important message that should go to *syslog* as well. This can be done easily with the following `rc.subr(8)` functions: *debug*, *info*, *warn*, and *err*. The *err* function additionally exits the script with the specified code.
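A small sketch of their use in a pre-command (the binary path and the messages are made up for illustration):

```bash
mumbled_prestart()
{
	if [ ! -x /usr/sbin/mumbled ]; then
		# err logs the message and exits with the given code.
		err 1 "/usr/sbin/mumbled is missing or not executable"
	fi
	# warn merely logs the message; the script continues.
	warn "running with the default configuration"
	return 0
}
```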
- The exit codes from methods and their pre-commands are not just ignored by default. If *argument_precmd* returns a non-zero exit code, the main method will not be performed. In turn, *argument_postcmd* will not be invoked unless the main method returns a zero exit code.
However, `rc.subr(8)` can be instructed from the command line to ignore those exit codes and invoke all commands anyway by prefixing an argument with *force*, as in `forcestart`.
### 7. Connecting a script to the rc.d framework
After a script has been written, it needs to be integrated into rc.d. The crucial step is to install the script in `/etc/rc.d` (for the base system) or `/usr/local/etc/rc.d` (for ports). Both bsd.prog.mk and bsd.port.mk provide convenient hooks for that, and usually you do not have to worry about the proper ownership and mode. System scripts should be installed from src/libexec/rc/rc.d through the
Makefile found there. Port scripts can be installed using `USE_RC_SUBR` as described in the Porter’s Handbook.
However, we should consider beforehand the place of our script in the system startup sequence. The service handled by our script is likely to depend on other services. For instance, a network daemon cannot function without the network interfaces and routing up and running. Even if a service seems to demand nothing, it can hardly start before the basic filesystems have been checked and mounted.
We mentioned `rcorder(8)` already. Now it is time to have a close look at it. In a nutshell, `rcorder(8)` takes a set of files, examines their contents, and prints a dependency-ordered list of files from the set to `stdout`. The point is to keep dependency information inside the files so that each file can speak for itself only. A file can specify the following information:
- the names of the "conditions" (which means services to us) it provides;
- the names of the "conditions" it requires;
- the names of the "conditions" this file should run before;
- additional keywords that can be used to select a subset from the whole set of files (`rcorder(8)` can be instructed via options to include or omit the files having particular keywords listed.)
It is no surprise that `rcorder(8)` can handle only text files with a syntax close to that of `sh(1)`. That is, special lines understood by `rcorder(8)` look like `sh(1)` comments. The syntax of such special lines is rather rigid to simplify their processing. See `rcorder(8)` for details.
Besides using `rcorder(8)` special lines, a script can insist on its dependency upon another service by just starting it forcibly. This can be needed when the other service is optional and will not start by itself because the system admin has disabled it mistakenly in `rc.conf(5)`.
With this general knowledge in mind, let us consider the simple daemon script enhanced with dependency stuff:
```bash
#!/bin/sh
# PROVIDE: mumbled oldmumble ①
# REQUIRE: DAEMON cleanvar frotz ②
# BEFORE: LOGIN ③
# KEYWORD: nojail shutdown ④
. /etc/rc.subr

name=mumbled
rcvar=mumbled_enable

command="/usr/sbin/${name}"
start_precmd="${name}_prestart"

mumbled_prestart()
{
	if ! checkyesno frotz_enable && \
	    ! /etc/rc.d/frotz forcestatus 1>/dev/null 2>&1; then
		force_depend frotz || return 1
	fi
	return 0
}

load_rc_config $name
run_rc_command "$1"
```
As before, detailed analysis follows:
That line declares the names of "conditions" our script provides. Now other scripts can record a dependency on our script by those names.
Usually a script specifies a single condition provided. However, nothing prevents us from listing several conditions there, e.g., for compatibility reasons.
In any case, the name of the main, or the only, PROVIDE: condition should be the same as ${name}.
So our script indicates which "conditions" provided by other scripts it depends on. According to the lines, our script asks rcorder(8) to put it after the script(s) providing DAEMON and cleanvar, but before that providing LOGIN.
The BEFORE: line should not be abused to work around an incomplete dependency list in the other script. The appropriate case for using BEFORE: is when the other script does not care about ours, but our script can do its task better if run before the other one. A typical real-life example is the network interfaces vs. the firewall: While the interfaces do not depend on the firewall in doing their job, the system security will benefit from the firewall being ready before there is any network traffic.
Besides conditions corresponding to a single service each, there are meta-conditions and their "placeholder" scripts used to ensure that certain groups of operations are performed before others. These are denoted by UPPERCASE names. Their list and purposes can be found in rc(8).
Keep in mind that putting a service name in the REQUIRE: line does not guarantee that the service will actually be running by the time our script starts. The required service may fail to start or just be disabled in rc.conf(5). Obviously, rcorder(8) cannot track such details, and rc(8) will not do that either. Consequently, the application started by our script should be able to cope with any required services being unavailable. In certain cases, we can help it as discussed below.
As we remember from the above text, rcorder(8) keywords can be used to select or leave out some scripts. Namely any rcorder(8) consumer can specify through -k and -s options which keywords are on the "keep list" and "skip list", respectively. From all the files to be dependency sorted, rcorder(8) will pick only those having a keyword from the keep list (unless empty) and not having a keyword from the skip list.
In FreeBSD, `rcorder(8)` is used by `/etc/rc` and `/etc/rc.shutdown`. These two scripts define the standard list of FreeBSD rc.d keywords and their meanings as follows:
**nojail**
The service is not for `jail(8)` environment. The automatic startup and shutdown procedures will ignore the script if inside a jail.
**nostart**
The service is to be started manually or not started at all. The automatic startup procedure will ignore the script. In conjunction with the shutdown keyword, this can be used to write scripts that do something only at system shutdown.
**shutdown**
This keyword is to be listed explicitly if the service needs to be stopped before system shutdown.
> When the system is going to shut down, `/etc/rc.shutdown` runs. It assumes that most rc.d scripts have nothing to do at that time. Therefore `/etc/rc.shutdown` selectively invokes rc.d scripts with the shutdown keyword, effectively ignoring the rest of the scripts. For even faster shutdown, `/etc/rc.shutdown` passes the faststop command to the scripts it runs so that they skip preliminary checks, e.g., the pidfile check. As dependent services should be stopped before their prerequisites, `/etc/rc.shutdown` runs the scripts in reverse dependency order. If writing a real rc.d script, you should consider whether it is relevant at system shutdown time. E.g., if your script does its work in response to the start command only, then you need not include this keyword. However, if your script manages a service, it is probably a good idea to stop it before the system proceeds to the final stage of its shutdown sequence described in `halt(8)`. In particular, a service should be stopped explicitly if it needs considerable time or special actions to shut down cleanly. A typical example of such a service is a database engine.
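As a rough illustration of the keep and skip lists (approximating what `/etc/rc` and `/etc/rc.shutdown` do; the exact invocations differ), one can run `rcorder(8)` by hand:

```bash
# Startup order, skipping scripts tagged "nostart" (add -s nojail
# when inside a jail):
rcorder -s nostart /etc/rc.d/* 2>/dev/null

# Scripts relevant at shutdown only; /etc/rc.shutdown runs these
# in reverse order:
rcorder -k shutdown /etc/rc.d/* 2>/dev/null
```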
---
To begin with, `force_depend` should be used with much care. It is generally better to revise the hierarchy of configuration variables for your rc.d scripts if they are interdependent.
If you still cannot do without `force_depend`, the example offers an idiom of how to invoke it conditionally. In the example, our `mumbled` daemon requires that another one, `frotz`, be started in advance. However, `frotz` is optional, too; and `rcorder(8)` knows nothing about such details. Fortunately, our script has access to all `rc.conf(5)` variables. If `frotz_enable` is true, we hope for the best and rely on rc.d to have started `frotz`. Otherwise we forcibly check the status of `frotz`. Finally, we enforce our dependency on `frotz` if it is found to be not running. A warning message will be emitted by `force_depend` because it should be invoked only if a misconfiguration has been detected.
**8. Giving more flexibility to an rc.d script**
When invoked during startup or shutdown, an rc.d script is supposed to act on the entire subsystem it is responsible for. E.g., `/etc/rc.d/netif` should start or stop all network interfaces described by `rc.conf(5)`. Either task can be uniquely indicated by a single command argument such as `start` or `stop`. Between startup and shutdown, rc.d scripts help the admin to control the running system, and
it is when the need for more flexibility and precision arises. For instance, the admin may want to add the settings of a new network interface to `rc.conf(5)` and then to start it without interfering with the operation of the existing interfaces. Next time the admin may need to shut down a single network interface. In the spirit of the command line, the respective `rc.d` script calls for an extra argument, the interface name.

Fortunately, `rc.subr(8)` allows for passing any number of arguments to a script's methods (within the system limits). Due to that, the changes in the script itself can be minimal.

How can `rc.subr(8)` gain access to the extra command-line arguments? Should it just grab them directly? Not by any means. Firstly, an `sh(1)` function has no access to the positional parameters of its caller, but `rc.subr(8)` is just a sack of such functions. Secondly, the good manner of `rc.d` dictates that it is for the main script to decide which arguments are to be passed to its methods.

So the approach adopted by `rc.subr(8)` is as follows: `run_rc_command` passes on all its arguments but the first one to the respective method verbatim. The first, omitted, argument is the name of the method itself: `start`, `stop`, etc. It will be shifted out by `run_rc_command`, so what is `$2` in the original command line will be presented as `$1` to the method, and so on.
To illustrate this opportunity, let us modify the primitive dummy script so that its messages depend on the additional arguments supplied. Here we go:
```bash
#!/bin/sh
. /etc/rc.subr
name="dummy"
start_cmd="${name}_start"
stop_cmd=":
kiss_cmd="${name}_kiss"
extra_commands="kiss"
dummy_start()
{
if [ $# -gt 0 ]; then ①
echo "Greeting message: $*
else
echo "Nothing started."
fi
}
dummy_kiss()
{
echo -n "A ghost gives you a kiss"
if [ $# -gt 0 ]; then ②
echo -n " and whispers: $*
fi
case "$*" in
*[!?.])
echo
```
What essential changes can we notice in the script?
① All arguments you type after `start` can end up as positional parameters to the respective method. We can use them in any way according to our task, skills, and fancy. In the current example, we just pass all of them to `echo(1)` as one string in the next line - note `$*` within the double quotes. Here is how the script can be invoked now:
```bash
# /etc/rc.d/dummy start
Nothing started.
# /etc/rc.d/dummy start Hello world!
Greeting message: Hello world!
```
② The same applies to any method our script provides, not only to a standard one. We have added a custom method named `kiss`, and it can take advantage of the extra arguments no less than `start` does. E.g.:
```bash
# /etc/rc.d/dummy kiss
A ghost gives you a kiss.
# /etc/rc.d/dummy kiss Once I was Etaoin Shrdlu...
A ghost gives you a kiss and whispers: Once I was Etaoin Shrdlu...
```
③ If we want just to pass all extra arguments to any method, we can merely substitute "$@" for "$1" in the last line of our script, where we invoke `run_rc_command`.
An `sh(1)` programmer ought to understand the subtle difference between `$*` and `$@` as the ways to designate all positional parameters. For its in-depth discussion, refer to a good handbook on `sh(1)` scripting. Do not use the expressions until you fully understand them because their misuse will result in buggy and insecure scripts.
Currently `run_rc_command` may have a bug that prevents it from keeping the original boundaries between arguments. That is, arguments with embedded whitespace may not be processed correctly. The bug stems from `$*` misuse.
9. Further reading
The original article by Luke Mewburn offers a general overview of rc.d and detailed rationale for its design decisions. It provides insight on the whole rc.d framework and its place in a modern BSD operating system.
The manual pages rc(8), rc.subr(8), and rcorder(8) document the rc.d components in great detail. You cannot fully use the rc.d power without studying the manual pages and referring to them while writing your own scripts.
The major source of working, real-life examples is /etc/rc.d in a live system. Its contents are easy and pleasant to read because most rough corners are hidden deep in rc.subr(8). Keep in mind though that the /etc/rc.d scripts were not written by angels, so they might suffer from bugs and suboptimal design decisions. Now you can improve them!
A survey on android security: development and deployment hindrance and best practices
Ratul Sikder1, Md Shohel Khan2, Md Shohrab Hossain3, Wazir Zada Khan4
1,2,3Department of CSE, BUET, Dhaka, Bangladesh
4Department of CS and IT, Jazan University, Jazan, Saudi Arabia
ABSTRACT
Android OS has been the most popular mobile OS for the past few years. Vulnerabilities arise with the increasing functionality of Android OS, imprudent app development practices of developers, and end-user incaution; interestingly, remediations for these vulnerabilities are introduced frequently as well. To mitigate security risks, Google has updated, deprecated, and restricted many system-level APIs for 3rd-party developers. Considering the consequences, this paper provides a wide overview of Android's system-level app development, privacy issues, and guidelines for developers about what measures they should consider while developing apps. We also discuss the historical development of Android OS and the end-user's role in maintaining privacy and minimizing security risks.
1. INTRODUCTION
The current age is considered the age of mobility. To communicate over a long distance, we do not have to wait for days or hours; now we can communicate almost in real time. The rapid and accelerated development of communication technology and mobile devices in recent years has made this possible. From the early twentieth century to date, the development of mobile devices has advanced massively [1]. Earlier, the only use of mobile devices was to talk with someone at a long distance, but today's mobile phones, specifically smartphones, are powerful hand-held computers. Like any traditional computer, every smartphone operates based on its operating system. Android, iOS, Tizen, and KaiOS are the major ones of this kind [2].
Today's smartphone Operating System (OS) allows other software to run on the phone to provide diverse functionalities to the users. This enhances the user experience, but security and privacy are the main concerns raised by allowing 3rd-party apps on users' private devices. Moreover, unfortunately, security and privacy are not among the main targets of many small to big 3rd-party app developers [3]. As a result, smartphone OS developers naturally do not want to allow 3rd-party apps to access root-level and sensitive information. Having been a flexible smartphone OS at the beginning, Google's Android is also following the restrictive access method. Access to system-level information, the system log, and other sensitive information is now being restricted continuously. In our studies, we have found that development of many device-optimization and security-related apps had stopped due to permission deprecation.
On the other hand, more problems arise from non-guided practices by developers. Developers often do not see the necessity of following the rules and recommendations for developing apps on the mobile platform, and it is very hard to monitor and mine source code and app behavior to detect unwise programming and harmful activities, even though developers, engineers, and some machine-learning-based technologies are always trying to find harmful apps on the Google Play Store [4, 5].
There are a few research works on Android's security issues in the app development and adaptation phases. Jha et al. [6] studied 13,483 real-world Android applications and found only 2,373 apps with no configuration errors; this is a development-phase scenario. These security issues become more severe given that studies found security and privacy are not the primary tasks of developers [3]. Security and privacy are shared responsibilities of both the app service providers and the end users. Usage patterns and misuse, whether intentional or unintentional, may raise the probability of security threats to end users. Google's Android help and support center provides some simple guidelines for Android device users on keeping their devices and information safe and secure [7].
A detailed survey on applications and the Android ecosystem found some improvements over traditional software systems [8]. But while improving some aspects of the ecosystem, it has also introduced a new range of problems. Moreover, a detailed analysis of Android and iOS showed that "privacy by design" is better for the mobile platform [9]; this ultimate power of the platform should be used by the authorities to define and strictly regulate privacy boundaries. There exists no survey on these factors where readers may find the current state of Android security and privacy violations, major changes in Android from the developers' point of view, and restrictions and best practices for developers as well as users.
We have studied mobile app security related papers and blogs and privacy policies, tested different 3rd party apps and open source projects, and analyzed some of Google's helper classes. There are very few resources on current system-level Android development. The objective of this paper is to summarize our findings from the history of mobile development to the present age, including Android development hindrances in different aspects, security and privacy issues, guidelines for safe development, and significant facts about the platform as well as the whole ecosystem.
The contributions of this paper are (i) a discussion of security vulnerabilities and possible solutions, (ii) development restrictions, and (iii) recent changes and improvements, together with best practices for both developers and users as well as some suggestions for manufacturers, organized so that both Android developers and users can find them useful in a simple way.
The rest of the paper is organized in the following sections. In section 2, we briefly describe the development history of mobile devices as well as smartphone operating systems. Section 3 characterizes the overall scenario of mobile app development, restrictions, and guidelines. Major issues of system-level Android app development are described in section 4. Best practices for users to avoid security and privacy threats, and recommendations for developers for safe and secure development, are explained in section 5; this section also includes demands on the manufacturers, from the developers' and users' points of view, about which security-related features should be added in future releases of Android. Applications of our findings, discussion, and concluding notes are given in section 6.
2. MOBILE OS DEVELOPMENT HISTORY
iOS is a powerful mobile operating system developed by Apple Inc. and originally unveiled in 2007; it is still the second most popular mobile OS. At that time, Google was still working on Android secretly, but in November of that year the company started to reveal its plans for Android and its functionalities. Finally, Android was released at the end of 2008. This was the beginning of today's Android revolution in the smartphone market. Now, Android is the most popular mobile OS worldwide [10]. We discuss the mobile operating system and mobile phone development history in the following two subsections.
(a) Mobile devices and pre-android development
(b) Android OS development
2.1. Mobile devices and pre-android development
In 1947, Bell Laboratories introduced the term mobile network. The first automated mobile phone system for private vehicles was launched in Sweden in 1956. The device in the car used vacuum tube technology and weighed approximately 40 kg. An engineer of the Soviet Union developed and presented a number of experimental pocket-sized communication radios in 1957-1961; the weight of one model was only 70 g and it was palm-sized. The first commercial mobile phone was introduced by Motorola: in 1973, they made the first public mobile phone call on a device that weighed 1.1 kg [11]. In the past 30 years, there have been some major changes in mobile phone architecture, and numerous revolutions occurred in that era. Later, in the late '90s, a closed-source mobile operating system called Symbian was developed; people in that decade experienced a new form of mobile OS which was specially designed for multitasking. On the other hand, Series 40 was a software platform, working as an OS, introduced by Nokia in 1999. It was one of the world's most widely used mobile phone platforms, but it is not considered a smartphone operating system because of its limitations. Meanwhile, Symbian was the most popular smartphone OS until the end of 2010, although the market was slowly captured by iOS and Android from 2007. Timeline 1 shows the historical development of mobile phones, mobile operating systems and, finally, the development of Android OS to date.
Google had at least two alpha builds of Android released internally before the release of version 1.0 at the end of 2008 [12]. Leading manufacturers, such as HTC, Motorola, Qualcomm and Texas Instruments, and carriers including T-Mobile, agreed on a framework for future mobile and related software production called the Open Handset Alliance (OHA). To promote the Android platform as a reliable smartphone operating system, OHA members are forbidden from producing devices based on incompatible forks of Android. Now there are in total 84 firms under this agreement, and they contribute to open standards for smartphone technology.
<table>
<thead>
<tr>
<th>Timeline 1: Evolution of Mobile Devices & OS</th>
</tr>
</thead>
<tbody>
<tr>
<td>1947 ➤ AT&T Bell Labs develop the idea of cellular phones</td>
</tr>
<tr>
<td>1956 ➤ The first automated mobile phone system for private vehicles launched in Sweden</td>
</tr>
<tr>
<td>1957-61 ➤ Soviet Union developed experimental pocket-sized communications radio</td>
</tr>
<tr>
<td>1973 ➤ First mobile handset invented by Motorola Inc.</td>
</tr>
<tr>
<td>1973-93 ➤ Mobile phones use embedded systems to control operation</td>
</tr>
<tr>
<td>1994 ➤ The first smartphone, the IBM Simon, has a touchscreen, email, and PDA features</td>
</tr>
<tr>
<td>1996 ➤ Palm Pilot 1000 PDA is introduced with the Palm OS</td>
</tr>
<tr>
<td>1998 ➤ Symbian OS was developed by Symbian Ltd. The OS was used by phone manufacturers</td>
</tr>
<tr>
<td>1999 ➤ Nokia's Series 40 platform was introduced officially</td>
</tr>
<tr>
<td>2000 ➤ Symbian was the first modern mobile OS with the launch of the Ericsson R380</td>
</tr>
<tr>
<td>2008 ➤ Android released its first version internally</td>
</tr>
<tr>
<td>2010 ➤ Android version Gingerbread released, one of the most popular Android versions</td>
</tr>
<tr>
<td>2011-14 ➤ Android released five popular versions in this era</td>
</tr>
<tr>
<td>2015-17 ➤ Marshmallow, Nougat and Oreo released. Currently installed on 64.4% of Android devices</td>
</tr>
<tr>
<td>2018 ➤ Android Pie, the most recently released version</td>
</tr>
</tbody>
</table>
Now, the Android Open Source Project is developed and maintained by Google and the Open Handset Alliance.
2.2. Android OS development
From the very initial release, Google has been releasing new versions of Android OS every year, containing major changes in both the base architecture and the user interface.
The pie chart in Figure 1 shows the percentage of devices running different versions of the Android platform. In Oct. 2009, a year after the launch of Android 1.0, Google released version 2.0 of the OS. This version added text-to-speech support and introduced live wallpapers, multiple account support, and Google Maps navigation, along with many other new features and improvements. Android 2.3 Gingerbread, launched in Sept. 2010, is currently the oldest of the most popular Android versions; basically, Android started dominating the smartphone market with the huge success of this version. Android 4.0, 4.2 and 4.4 ruled the market for several years, and Android smartphones became a strong preference for general users because of functionalities like easy file sharing and backup, an enriched app store, gaming and so on, alongside their comparatively lower price tags. Moving on, Android versions from 5.0 to 9.0 have gradually grabbed around 75 percent of the whole mobile market share [13].
3. BIG PICTURE
Android OS is significantly more popular than any other existing smartphone OS. Table 1 shows the smartphone OS market share in 2019. According to Global Stats, Android OS is the dominating smartphone OS with its 75 percent market share [14].
<table>
<thead>
<tr>
<th>S/N</th>
<th>Mobile OS</th>
<th>Market Share (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Android OS</td>
<td>75.33</td>
</tr>
<tr>
<td>2</td>
<td>iOS</td>
<td>22.4</td>
</tr>
<tr>
<td>3</td>
<td>KaiOS</td>
<td>0.84</td>
</tr>
<tr>
<td>4</td>
<td>Windows OS</td>
<td>0.61</td>
</tr>
<tr>
<td>5</td>
<td>Unknown</td>
<td>0.36</td>
</tr>
<tr>
<td>6</td>
<td>Windows</td>
<td>0.28</td>
</tr>
</tbody>
</table>
Table 1. Smartphone OS market share in 2019 [14]
The number of active mobile users around the globe is 4.93 billion [15]. The primary goal of developing the mobile phone was to communicate with others, but today's phone can do much more. Due to the increasing functionality of mobile phones, vulnerabilities arise, which has become a serious concern for both manufacturers and developers. Many security vulnerabilities are unintentional, e.g., poor programming practices or app developers failing to validate input from the web, allowing adversaries to access protected files. Vulnerabilities can also be intentional and malicious and can be hidden within a seemingly safe and legitimate app, e.g., a simple paint app asking for internet and GPS access [16]. Security vulnerabilities and unwise programming practices can lead to the following issues:
(a) Users’ privacy violation
(b) Performance degradation
(c) Heavy battery drain
(d) Poor end-user satisfaction
(e) Malware, virus, adware attack, etc.
Vulnerabilities result from security threats, which are often created through the collaboration of hackers and unethical employees. The top security threats are discussed in the following subsections.
3.1. Malicious app
Malicious apps are specially designed to attack smartphone systems. These malware apps rely significantly on exploiting the OS and software technology of the smartphone. We can classify malicious apps into the following four categories [17]:
(a) Spyware
(b) Trojans
(c) Phishing
(d) Hidden processes
3.1.1. Spyware
Spyware is unwanted software that infiltrates one's computing device, stealing internet usage data and other sensitive information, such as personal information, without the user's knowledge. Spyware is a kind of malware designed to gain access to one's device. The intentions behind using spyware are diverse, e.g., tracking login information, selling internet usage data, or capturing credit/debit card information. Some spyware is able to install additional applications and change the settings of the victim's smartphone. According to the Norton Cyber Security Insights Report, in 2017 nearly 978 million people from 20 countries were attacked by cybercrime, victims lost 172 billion USD globally, and spyware caused more damage than other types of malicious app [18].
3.1.2. Trojans
A trojan is a kind of malware that is often disguised as legitimate software. Trojans can be deployed by hackers and other cyber criminals to gain access to someone's computing device and can severely damage the system, e.g., by deleting, copying, disrupting, blocking and modifying data. Some common forms of trojans include Trojan-SMS, Trojan-Notifier, Trojan-Spy, Trojan-Mailfinder and so on [19]. Sometimes it is hard to ensure the absence of trojans in a system, as they may not harm the users directly but rather steal private and sensitive information silently.
3.1.3. Phishing
Phishing is a type of social engineering attack designed to gain access to someone's private information, e.g., credit card information and login credentials. Cyber thieves accomplish this by misleading people in very convincing ways. For example, one may receive an email claiming that his/her password for a specific website is about to expire within 24 hours, containing a fake but legitimate-looking link for renewing the password. Once the victim enters his/her login credentials, the attacker captures the original information and eventually gains access to his/her private data.
3.1.4. Hidden processes
These are applications in which some anonymous activities are embedded without the users' knowledge. For example, a gaming application may scan for nearby wireless devices, which is not necessary for any of its gaming functionalities. These types of hidden operations can harm users and degrade the user experience.
3.2. Malware downloader
A malware downloader (i.e., a trojan downloader) is a harmful application, typically installed by an exploit or some other fraudulent means, such as an email attachment or a downloaded image, that triggers the installation of a malicious program onto a victim's device [20].
3.3. Fake operation
The Android OS family is very diverse: there are numerous official as well as unofficial versions of this OS. This open nature of the platform has allowed attackers to introduce various fake operations. Faking an operator's identity, model, version or software update, as well as faking an app's goal, are some common examples of fake operations.
3.4. Hidden ads
"It won’t hurt if you don’t know it.” is a common proverb but unfortunately, this phase isn’t suitable for today’s smartphone security risks. Many of the free apps contain excessive ads that are available in the app store. That is legal because they acknowledge both parties that the app contains ads. But some malicious app contains hidden ads that may be harmful to users. Often these apps cause slowing down the device, sucking mobile data, draining the battery and so on. A recent study has shown that more than 5000 apps of both the major smartphone platforms contain hidden apps. It also causes a huge amount of loss to the advertising organization. They lose about $85 million per year because of the hidden ads [21].
3.5. Premium text
Sometimes we may receive messages from a four or five digit phone number, e.g., "get jokes for USD 1 per month or send STOP to cancel the service". The majority of users never activated the service, so they are not concerned about it, but after a month they get a bill of USD 1. This unintentional or fake registration to a service is done by scammers and fraudsters, who sign up the victim using the victim's phone number obtained from some websites [22].
3.6. Mobile spy
Mobile spying applications have been developed to monitor a child's or employee's mobile and tablet usage. Targeted ads are a major source of income for an ad network, which may enable this type of attacker to mine the personal information of a user [23]. Table 2 shows the top enlisted mobile threats in 2016-2017 along with their percentages.
<table>
<thead>
<tr>
<th>Mobile Threat</th>
<th>Percentage</th>
</tr>
</thead>
<tbody>
<tr>
<td>Malicious App</td>
<td>39.2</td>
</tr>
<tr>
<td>Malware Downloader</td>
<td>16.1</td>
</tr>
<tr>
<td>Fake Operation</td>
<td>5.2</td>
</tr>
<tr>
<td>Hidden Ads</td>
<td>4.8</td>
</tr>
<tr>
<td>Premium Text</td>
<td>4.1</td>
</tr>
<tr>
<td>Mobile Spy</td>
<td>3.2</td>
</tr>
<tr>
<td>SMS Blocker</td>
<td>2.3</td>
</tr>
<tr>
<td>Mal Dropper</td>
<td>2.1</td>
</tr>
<tr>
<td>Downloader</td>
<td>1.7</td>
</tr>
<tr>
<td>Dropper</td>
<td>1.7</td>
</tr>
<tr>
<td>Fake App</td>
<td>1.7</td>
</tr>
<tr>
<td>SMS Stealer</td>
<td>1.7</td>
</tr>
<tr>
<td>Rootnik</td>
<td>1.6</td>
</tr>
<tr>
<td>Lotoor</td>
<td>1.4</td>
</tr>
<tr>
<td>Reg SMS</td>
<td>1.2</td>
</tr>
<tr>
<td>Fake Inst</td>
<td>1.2</td>
</tr>
<tr>
<td>Hidden App</td>
<td>0.8</td>
</tr>
<tr>
<td>Lock Droid</td>
<td>0.8</td>
</tr>
</tbody>
</table>
Table 2. Enlisted mobile threats in 2016-2017 [26]
Being the most popular smartphone OS, Android is a much bigger target for malicious attacks. According to the industry research firm J. Gold Associates, companies that manufacture Android-powered devices should take necessary measures and make global policies to mitigate the security risks that the platform may pose due to its massive growth [24]. On the other hand, though Google's Android OS is open source, it has restricted a lot of features for 3rd party developers, and 3rd party developers have scaled back the development of security-related functionality due to Android's core-level restrictions. For example, an AutoStart restriction was introduced in Android 3.1 (Honeycomb), and the Android 6.0 (Marshmallow) update restricted apps' ability to find the currently running task via the getRunningTasks() API [25].
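To make the restriction concrete, here is a minimal sketch (not code from the paper) contrasting the deprecated task query with the sanctioned alternative. It assumes a Kotlin/AndroidX project, and the UsageStatsManager route additionally requires the user to grant the special PACKAGE_USAGE_STATS access in system settings.

```kotlin
import android.app.usage.UsageStatsManager
import android.content.Context

// The legacy call is deprecated and, on modern Android, reports only the
// caller's own tasks, so it is no longer useful to 3rd party task managers:
//   val tasks = activityManager.getRunningTasks(1)

// The sanctioned alternative is the usage statistics API, gated behind the
// special PACKAGE_USAGE_STATS access that the user must grant in Settings.
fun packagesUsedInLastDay(context: Context): List<String> {
    val usm = context.getSystemService(Context.USAGE_STATS_SERVICE) as UsageStatsManager
    val end = System.currentTimeMillis()
    val start = end - 24 * 60 * 60 * 1000L   // the last 24 hours
    return usm.queryUsageStats(UsageStatsManager.INTERVAL_DAILY, start, end)
        .map { it.packageName }
        .distinct()
}
```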
4. MAJOR ISSUES OF SYSTEM LEVEL ANDROID DEVELOPMENT
Android OS has been developed, changed and modified significantly in recent years. Security-related development, as well as process handling APIs, is becoming deprecated for 3rd party developers, and Google has clearly stated that this is the job of the device manufacturers [27].
Currently, developing apps for root-level platform optimization is not feasible without root access, which is only available to manufacturers and trusted developers. This is mainly due to the continuous permission restrictions in every major release of the Android OS. This domain is discussed in the following three subsections.
(a) Android security restrictions for 3rd parties
(b) Recent changes in android permission
(c) Feasibility of developing security related apps
4.1. Android security restrictions for 3rd parties
Android is continuously restricting 3rd party apps' access to different types of resources and raw data. Gradually, the majority of apps developed for further platform optimization, such as battery optimization, security checking and process optimization, have stopped being developed. Some of the restrictions are shown below.
(a) Limiting directory access: world-writable directories may lead to security weaknesses and enable an application to manipulate trusted files, yet a proper file scanning scheme is mandatory for a security-related 3rd party app to achieve better threat detection [27].
(b) Logging data: logging increases the risk of exposing core-level system data and reduces system performance upon excessive requests. On the other hand, log information is necessary to gather information about battery, CPU and network usage, which is mandatory for device optimization and security analysis [27].
(c) Device metadata: Android also restricts access to data that does not seem directly sensitive but that could reveal characteristics of the user, user preferences, and other traits [28].
4.2. Recent changes in Android permissions
Android has changed as well as deprecated some system-level permissions and APIs. This has left some old applications in a non-working state on newer versions of Android. Figure 2 summarizes the changes in recent Android versions, which include battery and memory optimization, performance enhancement and others.

4.2.1. Changes in Android 6.0 (Marshmallow)
Runtime permissions: Android 6.0 added a new mechanism, called runtime permissions, so that the user can identify which permissions are needed by a particular app at the moment they are used; this gives the user a chance to review each permission and judge whether it is worthwhile. This technique reduces the security risks of permission violations [29].
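A minimal sketch of this flow is shown below. The activity name and request code are illustrative, and ContextCompat/ActivityCompat are the standard AndroidX compatibility helpers; this is an example of the mechanism, not code prescribed by the platform documentation.

```kotlin
import android.Manifest
import android.content.pm.PackageManager
import androidx.appcompat.app.AppCompatActivity
import androidx.core.app.ActivityCompat
import androidx.core.content.ContextCompat

class CameraActivity : AppCompatActivity() {   // illustrative activity
    private val cameraRequestCode = 42         // arbitrary request identifier

    // Ask for the camera permission at the moment it is needed,
    // rather than at install time.
    fun ensureCameraPermission() {
        if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
            != PackageManager.PERMISSION_GRANTED
        ) {
            ActivityCompat.requestPermissions(
                this, arrayOf(Manifest.permission.CAMERA), cameraRequestCode
            )
        }
    }

    // The user's decision arrives asynchronously in this callback.
    override fun onRequestPermissionsResult(
        requestCode: Int, permissions: Array<out String>, grantResults: IntArray
    ) {
        super.onRequestPermissionsResult(requestCode, permissions, grantResults)
        if (requestCode == cameraRequestCode &&
            grantResults.firstOrNull() == PackageManager.PERMISSION_GRANTED
        ) {
            // Safe to open the camera here.
        }
    }
}
```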
Doze and App Standby: if a device is stationary and idle with the screen off, it goes into Doze mode, which is like a sleep state of the system; the App Standby technology enables the system to recognize that an app is idle.
4.2.2. Changes in Android 7 (Nougat)
Battery and memory: in Android 7.0, the system's behavior was changed to enhance the battery life and memory usage of devices.
Improvement of Doze: battery life is improved by Doze through limiting CPU and network activity. Doze now triggers whenever the user keeps the device unplugged, whether moving or stationary, with the screen turned off.
4.2.3. Changes in Android 8 (Oreo)
(a) Background execution limits
(b) Android background location limits
Android Go: Android Go is a lightweight version of the Android OS specially designed for low-end devices to run apps and other processes smoothly. It was introduced alongside, and is based on, Android 8.0 (Oreo). This lightweight smartphone OS is optimized to run on smartphones with 1 GB of memory or less, and it takes almost half the storage of the regular Android versions [30].
Android Go devices open apps about 15% faster than the regular versions. On the other hand, while apps may be faster and more lightweight on Android Go, they lack some features. The positive side is that developers can easily optimize their apps for the Android Go platform by following Google's 'Building for Billions' development guidelines.
4.2.4. Changes in Android 9 (Pie)
Android 9.0 is the newest version of the OS offered by Google. It introduces a number of changes to the system behavior of the OS. Important behavioral changes are briefly described below [31].
Power management: Android 9 (API level 28) introduces brand new features to improve the power management of Android-based devices. These power management features are of two types, as follows:
(a) App standby buckets: analyzing the user's usage pattern, the system limits apps' access to resources like battery and CPU (see the sketch below).
(b) Battery saver improvements: The existing battery saving feature is improved with a wide area of restrictions.
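As a hedged sketch of the bucket mechanism: on Android 9+ an app can query which standby bucket the system has placed it in and scale its background work accordingly. The descriptive strings below are our own summaries of the documented bucket semantics.

```kotlin
import android.app.usage.UsageStatsManager
import android.content.Context

// Requires API level 28 (Android 9); no special permission is needed to
// query the calling app's own bucket.
fun describeStandbyBucket(context: Context): String {
    val usm = context.getSystemService(Context.USAGE_STATS_SERVICE) as UsageStatsManager
    return when (usm.appStandbyBucket) {
        UsageStatsManager.STANDBY_BUCKET_ACTIVE -> "active: few restrictions"
        UsageStatsManager.STANDBY_BUCKET_WORKING_SET -> "working set: mild restrictions"
        UsageStatsManager.STANDBY_BUCKET_FREQUENT -> "frequent: jobs are deferred"
        UsageStatsManager.STANDBY_BUCKET_RARE -> "rare: heavily restricted"
        else -> "other/unknown bucket"
    }
}
```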
Privacy changes:
(a) Limited access to sensors in the background: in Android 9, apps running in the background cannot access the camera, the microphone, or sensors using the continuous, on-change or one-shot reporting modes.
(b) Android 9 restricts apps’ access to call logs and telephone numbers.
(c) The system restricts access to Wi-Fi location and connection information.
Restrictions on use of non-SDK interfaces: the platform restricts the use of some popular non-SDK methods and fields when developers attempt to access them directly, via reflection, or using JNI.
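The hypothetical probe below illustrates what this looks like to an app. "SOME_HIDDEN_FIELD" is a made-up member name (real restricted members appear on Google's non-SDK lists); on Android 9 and later, reflective access to a blocked member fails as if the member did not exist.

```kotlin
// Returns true only if the reflective access succeeds. On Android 9+,
// blocked non-SDK members cause NoSuchFieldException to be thrown here,
// exactly as if the member were absent.
fun canAccessHiddenMember(): Boolean =
    try {
        val field = android.os.Build::class.java.getDeclaredField("SOME_HIDDEN_FIELD")
        field.isAccessible = true
        true
    } catch (e: ReflectiveOperationException) {
        false
    }
```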
Security behavior changes:
(a) The system’s TLS implementation has experienced numerous modifications in Android 9.
(b) Android 9 additionally restricts the system calls available to apps that use privileged syscalls.
(c) Android secure encrypted files are no longer supported.
(d) Network address lookups that require name resolution can cause network violations, as name resolution might involve network I/O and is considered a blocking operation; on the main thread this can result in pauses.
4.3. Feasibility of developing security related apps
We have studied different types of utility apps developed for Android device optimization and battery life enhancement, and we found some interesting facts: most of these apps did not do anything useful, but rather drained the battery and slowed down performance. The Android OS structure has been changing rapidly, and the security involvements, process handling APIs, etc. are becoming deprecated for 3rd party developers; Google has clearly stated that this is the job of the device manufacturers.
5. BEST PRACTICES
The overall security and privacy of an Android mobile phone user depend on the end user's usage pattern and awareness. The security and privacy related principles and efforts put forth by developers in this field are equally important. The first party, i.e., the manufacturers, has a big opportunity to play an important role by delivering the latest security features and patches to the phone quickly. These three portions of security and privacy related practice all together result in greater protection and safety for the end users.
5.1. Best practices for the users
Almost every smartphone user has at least 100 apps on his/her mobile phone, according to a survey by Fortune [32]. But users do not use all of these apps. A user may need an application for a while, but after getting the job done, he/she may not delete it, even though it is no longer required; likewise, one may simply stop using many applications. Such unused applications can behave like malicious apps [33]: many of them may use resources such as the internet, geographic location, call logs and other user permissions, which is harmful to the user. So, every user should follow some rules to keep his/her device healthy; some of these rules are shown in Figure 3. In the following, we discuss some important security practices that should be considered for avoiding unwanted vulnerabilities.

5.1.1. Software update
Google Inc. and other manufacturers provide system updates, which include security patches, new features and functionalities, UI improvements and so on, to overcome vulnerabilities and ensure a smoother user experience. To get the finest experience, it is definitely a good idea to update the phone's software regularly: the newest versions help users run their phones more smoothly and quickly, with minimal lag and fewer security vulnerabilities.
5.1.2. Installation of applications from untrustworthy sources
Unlike Google, many other third-party app stores never seem concerned about malicious applications in their stores. Many developers also offer beta apps that do not follow some of Google's guidelines. Hence, it is good practice not to install apps from untrustworthy sources, nor beta versions of smartphone apps.
5.1.3. Understanding app permissions
Android OS is getting better in terms of security day by day. From Android 6.0, runtime permission requests were added, meaning the user needs to agree to critical permission(s) during app usage. Though this process is safer than the agreement model of previous versions, people often make mistakes when opening an app for the first time: they often grant permissions without reading and knowing the consequences. They also do not check the list of permissions during app installation; instead, they just accept the requests without thinking about the consequences. This may be harmful, because a developer could take advantage of it.
5.1.4. Data encryption and remote phone wipe
Encrypting the data on a smartphone can take users' security and privacy one step further: a user can only access the encrypted data with a valid password or key. Encrypting a smartphone makes a password or key necessary at every boot; apart from that, it does not change how a user uses his/her smartphone. From Android 6.0, data encryption is enabled by default. Encryption may slow down the performance of some older smartphones, but it does not affect today's Android devices. If an application does not meet certain security challenges, it should not be installed on the phone [34]. Anti-virus provider Kaspersky Lab published a report about an app called "skygofree", developed by an IT-based company, which shipped with 48 different commands able to put a user's safety at risk in several ways: for example, it relies on 5 different exploits to achieve root privileges, which allows it to bypass the security key, and it is also capable of location-based recording and of capturing images, videos, calendar data and other personal information [35].
Even after applying all available security measures, it will not feel good if the phone gets stolen. If someone faces circumstances in which he/she will not regain the device, it becomes necessary to wipe the phone. Google provides a "Find My Device" option for all Android smartphones linked to a Google account. Victims need to head over to Google's Device Manager website, log into their Google account and complete the desired operation. If the device has internet access, it is possible to call, set an alert, find the exact location, lock the device or wipe the phone's data remotely from the device manager. The user must make sure that the Google account's password is strong enough so that no one else can wipe his/her smartphone.
5.1.5. Lock-screen and biometric scanner
Almost one-third of all users are not concerned about lock-screen security and use the traditional swipe-to-unlock method [36]. Though this helps protect the phone from accidental touches in the pocket, it cannot provide any security barrier if the phone is stolen or otherwise compromised. All Android smartphones offer a PIN, password, and (mostly) pattern lock to secure the phone, which can easily be enabled from the security options in the settings. Additionally, modern smartphones have introduced biometric sensors such as fingerprint, iris, and face recognition to enhance smartphone security. Among these biometric-based methods, fingerprint-based biometrics is the most secure to date.
5.1.6. Online backups
Limited storage can create trouble on any smartphone, especially lower-storage ones. With increasing media consumption and day-to-day online interactions and activities, the built-in phone storage gets occupied quickly. This slows down the smartphone's performance, increases battery consumption and reduces the quality of the user experience. Unlimited, or at least visibly sufficient, storage would solve the problem. On the other hand, if the phone is lost, stolen or damaged, then all important data and media might be lost forever if not backed up. To preserve important data and access it from anywhere and from any device, cloud storage is a proper solution; many online cloud service providers serve this purpose in their own ways. For example, one can back up unlimited photos and videos to Google Photos free of cost, and Dropbox, Google Drive, OneDrive, etc. offer online cloud storage.
5.1.7. Online password selection and two-step verification
Thousands of smartphone users use easy passwords like 123456, their phone number, or their birth date in order to remember them, which are very simple for an attacker to guess. So, selecting passwords, especially for online accounts, should not be such a simple and straightforward job. A user should not use a single password for all of his/her accounts, because compromising one account's password can lead to compromising all the others. To minimize vulnerabilities, password selection should be based on some criteria; for example, a person can devise his/her own reasoning for each password so that it can be recalled later from that reasoning. Additionally, two-step verification adds extra strength to an account: even if an intruder obtains the password of a particular account, he/she cannot access it without compromising the two-factor authentication medium, such as the cell phone or email account configured earlier for verification. So, all passwords, including the lock-screen password/PIN and those for Google accounts, Facebook, Twitter and so on, should be selected wisely in order to remember and protect them easily.
5.2. Best practices for the developers
The scenario of Android development is quite different from that of earlier versions; things have changed a lot in recent Android architectures, and features that were introduced to provide functionality may nowadays cost the user's safety. Along with Google, many other companies and researchers work on these issues and have come to several different conclusions. That is why many developers who started developing Android applications before the deployment of Android Studio and Android 4.4 have faced many difficulties. Moreover, many developers, especially those from small companies, are not interested in reading privacy policies, let alone maintaining them. A study of app developers found the following alarming facts [3]:
(a) Developers find privacy policies hard to read.
(b) Writing privacy policies is not considered useful to small developers.
(c) Privacy is not a primary task.
The International Association of Privacy Professionals (IAPP) provides a web tool that allows the user to compare, read and access ten different privacy policies from the US, Australia, and Europe. After examining the policies, many similarities and overlaps were found, and the whole set of policies can be described with just four major privacy practices [3], as follows.
(a) Developers should decrease the data to be collected. This will reduce the obligation of developers and save the users from unnecessary data collection.
(b) Old data should not be retained and older unnecessary data should be regularly deleted.
(c) Privacy policies should be enforced to every sort of communication between the two parties.
(d) Developers should encrypt every sensitive data to be stored and all communications should be through an encrypted channel.
Figure 4. Developers’ perspective: Guidelines summary [3]
Figure 4 depicts a summary of developers' practices for maintaining users' security and privacy. On the other hand, developers working on security and core-device-related apps are discouraged by Google; e.g., the Android power usage APIs are not open to 3rd party developers [37].
Developers can play the most important role in securing users' activities and privacy. Here, we briefly discuss some important practices that should be followed by developers to ensure the proper and desired security level; these define the proper approach throughout the application's lifecycle [38].
5.2.1. Secure the server
Attacks on the server and its API are a common method for attackers. Developers must secure the corresponding server and API to establish controls and prevent any sort of unauthorized access. Introducing a web application firewall and conducting code reviews can help overcome this challenge.
5.2.2. Data encryption
All sensitive data stored on the mobile device should be encrypted. Additionally, the source code and the data transmitted between applications should also be encrypted. High-level data encryption always protects valuable data from attackers.
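One concrete way to follow this advice on-device is the Jetpack Security library (androidx.security:security-crypto). The sketch below is a minimal example under that assumption, not a prescribed implementation; the file and key names are illustrative.

```kotlin
import android.content.Context
import androidx.security.crypto.EncryptedSharedPreferences
import androidx.security.crypto.MasterKey

// Creates a SharedPreferences instance whose keys and values are
// transparently encrypted before they reach disk.
fun secretPrefs(context: Context) =
    EncryptedSharedPreferences.create(
        context,
        "secret_prefs",                       // illustrative file name
        MasterKey.Builder(context)
            .setKeyScheme(MasterKey.KeyScheme.AES256_GCM)
            .build(),
        EncryptedSharedPreferences.PrefKeyEncryptionScheme.AES256_SIV,
        EncryptedSharedPreferences.PrefValueEncryptionScheme.AES256_GCM
    )

// Usage: secretPrefs(context).edit().putString("token", apiToken).apply()
```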
5.2.3. Code obfuscation
It is important to protect the source code from human analyzers and decompilers while preserving its operation. This process of code obfuscation not only enhances the security of the app but also protects the confidentiality of intellectual property.
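On Android this is conventionally done with the R8/ProGuard toolchain. A minimal module-level build.gradle.kts fragment (assuming a recent Android Gradle plugin) might look as follows; keep rules for reflection-dependent classes would go in the referenced proguard-rules.pro file.

```kotlin
// Module-level build.gradle.kts: enable shrinking and obfuscation for release.
android {
    buildTypes {
        release {
            isMinifyEnabled = true     // obfuscate names and strip unused code
            isShrinkResources = true   // drop unreferenced resources as well
            proguardFiles(
                getDefaultProguardFile("proguard-android-optimize.txt"),
                "proguard-rules.pro"   // project-specific keep rules
            )
        }
    }
}
```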
5.2.4. Strong user authentication system
A two-factor authentication system, wise session management, and hashing and encrypting login information can help protect sensitive information. Using advanced authorization tools like OAuth, JSON web tokens, etc. is also essential; these ensure secure and integrated access gateways for the corresponding app.
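For the hashing step, a minimal sketch using the standard Java crypto API is shown below. The iteration count and key length are illustrative choices, and the PBKDF2WithHmacSHA256 algorithm assumes a reasonably recent runtime (older Android releases ship only the SHA-1 variant).

```kotlin
import java.security.SecureRandom
import javax.crypto.SecretKeyFactory
import javax.crypto.spec.PBEKeySpec

// Derive a slow, salted hash of the password; store the salt and hash,
// never the password itself.
fun hashPassword(password: CharArray, salt: ByteArray): ByteArray {
    val spec = PBEKeySpec(password, salt, 120_000, 256)  // iterations, key bits
    return SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
        .generateSecret(spec)
        .encoded
}

// A fresh random salt per user/password.
fun newSalt(): ByteArray = ByteArray(16).also { SecureRandom().nextBytes(it) }
```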
5.2.5. Regular updates and testing
Hackers always try to find vulnerabilities in apps and exploit them, so it is a regular job for developers to test their apps, repair breaches and enhance security. Google regularly updates its software to fix vulnerabilities of the Android platform, but it is the developers' duty to find their own faults, which may result from poor programming practices, technological changes, and carelessness.
5.2.6. Client side data storage
Data stored on a smartphone can be exposed if the device is lost or stolen. Moreover, a smartphone might not always be secure, because some users root their devices for additional features or more control. Therefore, sensitive data should be stored on the server side.
Alongside the above recommendations, the Android platform maintainer Google has recommended the following checklist to developers for securing their apps and the sensitive information they handle [39].
(a) Enforce secure communication
a) Apply signature-based permissions
b) Disallow access to your app’s content providers
c) Ask for credentials before showing sensitive information
d) Use implicit intents and non-exported content providers
e) Apply network security measures
f) Use SSL traffic
g) Add a network security configuration
h) Create your own trust manager
i) Use WebView objects carefully
j) Use HTML message channels
(b) Provide the right permissions
a) Use intents to defer permissions
b) Share data securely across apps
(c) Store data safely (a brief sketch of two of these items follows the checklist)
a) Store private data within the internal storage
b) Use external storage cautiously
c) Use scoped directory access
d) Check validity of data
e) Store only non-sensitive data in cache files
f) Use SharedPreferences in private mode
(d) Keep services and dependencies up-to-date
a) Check the Google Play services security provider
b) Update all app dependencies
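As noted above, here is a brief sketch of two of the storage items: private-mode SharedPreferences and app-private internal storage. The file and key names are illustrative.

```kotlin
import android.content.Context

// MODE_PRIVATE makes the preferences file readable by this app only.
fun storeNonSensitiveFlag(context: Context, enabled: Boolean) {
    context.getSharedPreferences("settings", Context.MODE_PRIVATE)
        .edit()
        .putBoolean("analytics_enabled", enabled)
        .apply()
}

// Internal storage files are likewise private to the app by default.
fun writePrivateFile(context: Context, data: ByteArray) {
    context.openFileOutput("notes.bin", Context.MODE_PRIVATE).use { it.write(data) }
}
```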
5.3. Gestures for the manufacturers
When Android phone manufacturers provide an update, they do not provide it for all the previous models they have produced. Apple does not have this problem: when Apple provides a new security patch, it is provided for almost all models of Apple products. Therefore, Android developers sometimes fail to cope with every version of Android, and they have to approach the same task in different ways for different devices. A possible solution to this problem is a sustainable policy for Android version upgrades, applicable to both the manufacturers and Google, for pushing updates through a defined process, so that developers do not have to worry about handling different API levels.
As Google changes Android's core-level security architecture very frequently and enforces more and more restrictions day by day, it is very difficult for other developers to monitor and offer security. Google has already introduced some Google Play services to monitor apps, harmful activities, malware, etc., but this is still not sufficient to provide full security for the users; Google should take over all portions of security-related features and development.
6. CONCLUSION
With the increasing popularity of Android smartphones, proper security of the devices is also becoming a serious concern. As the system architecture of smartphones differs in places from that of traditional devices, existing security systems are not enough for securing smartphones. Major market-leading companies do not provide enough flexibility to 3rd party developers, and Google is becoming one of them. Now, 3rd party developers should not concern themselves with the core security-related processes of the Android OS, but rather develop utility apps by following platform standards and policies. Platform security and end users' privacy are a shared responsibility among the platform maintainers, the app developers, and the users. All parties should follow the well-defined guidelines and recommendations and take proper responsibility for their actions to ensure a safe, secure and trusted smartphone platform. Android smartphone manufacturers and Google's Android development team should collaborate through a defined protocol to ensure timely security patches for all, or at least the majority, of active devices. Finally, proper development practices should be promoted and made available to developers by the platform maintainers.
REFERENCES
The Argon AR Web Browser and Standards-based AR Application Environment
Blair MacIntyre1 Alex Hill1 Hafez Rouzati1 Maribeth Gandy2 Brian Davidson3
1Augmented Environments Lab, 2Interactive Media Technology Center, 3Research Network Operations Center
Georgia Institute of Technology, Atlanta, GA 30332
e-mail: {blair, ahill, hafez, mg129, bdavidson}@gatech.edu
1 Introduction
Since augmented reality (AR) was first demonstrated by Ivan Sutherland in 1965 [20], the idea has captured researchers' imagination. Spurred on by science fiction authors, the term conjures dreams of people immersed in a hybrid physical/virtual world where synthetic content of all kinds is blended with the physical reality around them. AR research picked up in the late 1980s, with various researchers focused on the enabling technologies (e.g., tracking software and hardware, display technology), exploring different application domains (e.g., maintenance [5], medical [1], military [23]), understanding human factors (e.g., user perception of depth [12] or registration error [4]) and creating the authoring tools necessary to support this research and exploration (e.g., DART [13], Studierstube [16], GoblinXNA [14]).
Each of these components is necessary if the dream of immersive AR is to become a reality. However, success in each of these areas is not sufficient; the user experience implied by the visions of AR all share the idea that all AR content is presented in one unified AR application environment, regardless of the source of the content. Any AR experience, from the simple to the complex (e.g., games, training applications, social media, search results, advertising, and playful toys), should always be available within one environment and should be able to be authored and made available by independent developers with no coordination or approval process.
The idea of a single AR environment, in which all AR content is presented, has been proposed multiple times over the past two decades (e.g., [10,15,16,17,19]), and is the (implicit) motivation behind many of the so-called "AR Browsers" appearing in the smartphone marketplace1. Unfortunately, none of the proposed (research or commercial) systems comes close to achieving the necessary functionality. Previous research systems have focused on specific research questions (e.g., interaction techniques,
Figure 1: A screen shot of the Argon browser showing three simultaneous AR channels (presentation slides with embedded video, live twitter search and marker tracking).
collaboration, etc.) without worrying if the proposed architecture could be deployed in a practical way. The various "AR Browsers" focus on search and browsing of information snippets, but ignore AR applications that cannot be represented as a collection of "information nuggets" (consider the breadth of AR applications proposed and prototyped over the years; most could not be implemented in one of these "Browsers"). Furthermore, none of these systems addresses the practical issue that individual "AR application" authors may want a high degree of control over the look, feel and interaction of their content, even if it is displayed in parallel with other AR content. Finally, there are a range of practical issues, from "cross-application" security to e-commerce and offline data management to efficiency and scalability concerns, that a real system must address.
These concerns are not unique to AR, even though the style of content presentation is unique; re-examining the history of our existing 2D interactive computing systems helps to frame the problem. When 2D and 3D graphical applications began to appear, each application was written to control the entire display. Various SDKs and tools appeared to support application authoring, and researchers and practitioners experimented with a wide range of interaction techniques and metaphors. Akin to the data-centric AR system ideas, pluggable data-centric architectures for 2D content were created and championed (e.g., OpenDoc4), driven by the appeal of composable “active objects” rather than monolithic applications. In the end, the application/document model and the desktop metaphor for 2D user interfaces emerged as the dominant approach to sharing graphical displays between multiple applications, and is the foundation on which all modern graphical interfaces are based. The key concept behind the desktop metaphor is the “virtual device” abstraction, where each application is authored as if it has access to the full capabilities of an abstract collection of input and output devices. Users decide which programs are running, how they are arranged and how they interact with them. While this model has its limitations, the reality is that it successfully balances the needs of the application developer, the user, and creators of the underlying systems: the model is simple, and can result in robust, secure and practical systems.
When viewed in this historical context, what is needed for an AR application environment is analogous to the 2D desktop and windowing system. We are not suggesting literally moving 2D windows into the world around us (as done in [6]), but rather the related idea of an ecosystem in which independently created “AR applications” co-exist without needing to know what other AR content is also displayed. The granularity of the content elements (e.g., the windows, menus, palettes, and dialog boxes of the 2D desktop) will evolve over time, and may be different for different applications. Just as early windowing systems, such as the X11 window system, provided core mechanisms but allowed different policies and metaphors to be explored (i.e., through different “Window Managers”), we need a flexible system based on a robust set of policy-agnostic mechanisms. Similarly, we must ensure that the AR content authoring is at a reasonable level of abstraction, such that authors have sufficient control, but are not needlessly tied to a specific platform or hardware.
Over the years, as different ideas and designs for a single AR environment were put forth, mobile hardware technology was not mature enough to support such an environment, nor were there any sufficiently powerful and flexible mobile system architectures on which to base an implementation. As we will illustrate in this paper, the combination of powerful mobile devices and the full featured mobile web addresses these problems, and can serve as the foundation for an AR application environment that moves us one step closer to the dream of immersive AR. Over the past two years, we have designed and built such an environment, including a set of AR-specific web “application” abstractions, and an “AR web browser” supporting them. Argon, the AR web browser, has been freely available for iOS since February 14, 2011, and is starting to be used by researchers and developers around the world.
The overall architecture, called KARMA (KML/HTML Augmented Reality Mobile Architecture), is based on standard web technologies, whenever possible [8]. We have extended the semantics of KML (the markup language used by Google Earth (GE) and Google Maps) to support the requirements of AR. This extension of KML is called KARML, and lets an author specify where AR content lives in the world. AR applications (called channels) live on standard web servers, and one or more of those channels can be viewed simultaneously in Argon, as shown in Figure 1. Each channel is independent, and can have its own user interface and interactive content.
In this paper, we discuss the motivations behind the design of the system, the specific research contributions of this work, some of the more relevant details of Argon and KARML, and the implementation of Argon on iOS. We present a variety of example channels created by us, our collaborators and other developers, and highlight how they leverage the unique attributes of our platform.
1.1 Background: Deciding to Build on The Mobile Web
This project started in the fall of 2009, when we observed that the development trajectory of modern smart phone hardware and mobile web software would soon make the combination a suitable foundation for a comprehensive AR application environment.
First, it was clear that mobile computing technology was maturing rapidly, and would soon support the necessary system technologies (both hardware and software) for mobile AR. Powerful mobile phones with GPS and orientation sensors had already made a limited form of AR, handheld sensor-based video-see-through augmented reality, practical for commercial developers and accessible to millions of people. While early AR applications for mobile devices still rely almost entirely on the built-in sensors (i.e., GPS, compass, accelerometers and gyroscopes), newer computer vision toolkits, such as Qualcomm’s AR SDK2, are enabling developers to create a more powerful collection of applications that accurately register graphics with the physical world.
Second, we believed that the modern mobile WWW architecture would soon be mature enough to serve as the basis for an AR application environment. What was once exemplified by impoverished WAP browsers3 had been replaced by mobile browsers with features similar to their desktop counterparts. Mobile web renderers and the corresponding web standards included highly accelerated Javascript and HTML/CSS engines, and will soon include WebGL for arbitrary 3D content, the ability to safely run platform independent native code, and access to hardware such as the camera and the various sensors. Furthermore, a glance at a typical
---
2 developer.qualcomm.com/dev/augmented-reality
3 www.wapforum.org
4 en.wikipedia.org/wiki/OpenDoc
computer display shows that even then, many of our tools lived in the web ecosystem, from stores like Amazon to services like Facebook to entire operating environments like Google's ChromeOS. As more of what we do lives in the cloud, a cloud-based ecosystem makes increasing sense.

Apple's implementation of the WebKit 3D extensions in Mobile Safari provided a key starting point for a web-based approach, by allowing any interactive 2D web content to be rendered efficiently in 3D. While 2D-billboards-in-3D is not the ideal solution for all AR applications, the trajectory of web technologies is pointing in the right direction (e.g., a combination of WebGL and native 3D rendering will, in the near future, allow mobile web-based applications like Argon to support full 3D content as well).

Beyond the specifics of software, we do realize that the smartphone (by itself) is not the ideal vehicle for all AR applications, because of its small screen and the need to hold it up to see "through" it. However, when paired with a head-worn display (which a number of display companies are working on), this limitation will cease to be a problem. And the greatest advantage of the mobile phone will continue to hold, its ubiquity: the best device is the one everyone already has in their pocket.
1.2 Goals
We had three main goals driving our development of Argon. First and foremost, we wanted to create an AR application environment that supports the vision of an immersive AR system: a "window system" for AR. Our motivation to create such an environment is driven by our desire to push AR technology forward; we firmly believe that, unless AR technology is put in the hands of millions of designers, engineers, artists and entrepreneurs around the world, we will not fully understand where the "killer apps" might lie, and what the true requirements of the technology are.

Our main goal was tempered by a second goal: to build on existing mobile technology as much as possible. We did not want to just leverage web technologies (for example, integrating a JavaScript/HTML engine into an AR system); we wanted to integrate with the web ecosystem as tightly as we could. As AR researchers, we often forget that AR is just one technology among the many that are needed to solve real problems. Some non-trivial mobile AR applications will be complex, involving a spectrum of 2D and 3D content and interactions, and will need to be networked and distributed. The enormous benefits in terms of authoring, deployment, access to web services and existing content that are gained by integrating with the web outweigh the limitations, for many possible AR applications.

Our final goal was to create an ecosystem that supports easy and sophisticated authoring of applications; this again points to the web as an ideal platform. KARML is based upon KML, along with the full collection of contemporary Web 2.0 standards (HTML, CSS, JavaScript, etc.). While KARML extends the KML language to better support handheld AR, we were careful to support traditional KML (most KML files will display in a predictable way in Argon). Conversely, even complex combinations of HTML and JavaScript can be used in Argon with minimal changes.

Taken together, experienced web developers can use tools and techniques with which they are already very familiar (e.g., HTML5, CSS, PHP, JavaScript, Google Earth, DreamWeaver, Yahoo Pipes, etc.) to create their mobile AR applications, which allows for existing web content to be repurposed with ease. Furthermore, AR applications can be hosted on the same web servers (since Argon uses the standard HTTP protocol), and even share URLs with traditional web browsers (since Argon's browser ID string can be used by the server to identify requests from Argon and respond appropriately). Together, these dramatically simplify distribution and management of content.
2 Contributions
In this paper, we present the Argon AR web browser, the KARML markup language and their integration with the web. The main contributions of this paper and project are summarized here.

Demonstration that the web is a viable mobile AR platform. We do not claim Argon is, or will be, the ideal AR platform for all mobile AR applications. However, Argon clearly demonstrates that mobile web technologies are a viable basis for a wide range of mobile AR applications. Argon currently supports sensor-based AR and marker-based AR using 2D-billboards-in-3D content; it will soon support much more complex computer vision-based tracking and full 3D content.

The KARML specification. The variation of KML we have defined is a living example of a markup language for AR content. The specification is far more comprehensive than previous efforts.

The Argon multi-channel AR architecture. Argon supports multiple independently authored, but simultaneously displayed, channels of AR content. Each is fully scriptable, interactive and can define its own 2D/3D interface. By layering multiple transparent WebKit instances on top of each other, each channel is sandboxed in its own JavaScript context (for security and robustness). Argon provides channels with notification that their channel has gained or lost focus (so they can change appearance or behaviour when not in front), a shared location across channels (even when one channel "moves" the browser to a synthetic location), and access to GeoSpots (geo-located panoramic images that can be included in channels and used in place of live video and GPS location).

Demonstration that the web-centric approach is powerful. Beyond the web being viable for AR, by embracing the web we enable previously impractical or impossible AR applications to be created and deployed. Simple applications can be deployed rapidly (in hours, not weeks or months). Complex applications, involving cloud services, asynchronous agents, content filtering and so on, are tractable. Beyond this, by leveraging the web we don't have to reinvent the wheel with respect to content creation: content elements can be authored in tools such as Google Earth or Dreamweaver, and assembled as appropriate.
3 Related Work
Since Vannevar Bush first described his hypothetical “memex”
device, researchers have been seeking new ways to browse and
create connections between all types of information [3]. From the
beginning of AR research, systems were created that took data
with spatial meaning and attached it to the real-world objects and
locations. From merging ultrasound imagery with the patient [1]
to providing operating instructions for a printer visually registered
with the physical components [5], early AR systems demonstrated
the power of linking information to relevant spatial contexts.
Early outdoor AR systems expanded the range of scenarios to
include geospatial scale content; the Touring Machine [7] and
MARS [9] supported linking from 3D icons to the 2D web. Many of these early systems could be recreated on modern smartphones, and they informed the requirements for our work.
Many authoring tools, of different forms, have been created. Tools such as Studierstube combined software abstraction layers for AR infrastructure and technologies into a framework usable via code or GUI front-ends [16], with similar motivations to our work but before the technical ecosystem was sufficiently evolved. In contrast, DART added AR concepts to an existing high level media authoring tool, Adobe Director [13]. A variety of projects, like Goblin [14], focused on adding AR technology to game engines. Other researchers focused on creating simple authoring environments for a specific application domain (e.g., Amire, for assembly tasks [24]). We expect that systems like Amire could be implemented with Argon.
The ARToolkit [2], and the more recent FLARToolkit, provide marker tracking in C++ and Adobe Flash, respectively. The appeal of FLARToolkit is that, despite the limitations of being locked inside the Flash engine, it makes it trivial for developers to author and distribute their applications, something that previously has been a major hurdle. Argon takes the next step beyond systems such as FLARToolkit, by supporting a wider variety of sophisticated web applications. Others have attempted to create a language for AR content and applications (e.g., Augmented Presentation and Interaction Language (APRIL) [11]), but without integrating with the web, have had little success.
“Windows on the World” incorporated an existing 2D window system within a 3D virtual world [6]. This system took X11 windows from the desktop and placed them into the physical world, but did not address authoring or real use. More relevant to this project are the WorldBoard and RWWW projects. WorldBoard envisioned a planetary augmented reality system that would provide innovative ways of associating information with places, with ideas for scalability, global access and so forth [19]. The Real World Wide Web (RWWW) project was our first attempt at creating a system like Argon [10], but the web was not mature enough at the time to serve as a solid foundation for the work. More recently, Schmalstieg et al have discussed leveraging the web ecosystem for AR [17] and they have presented some similar arguments (in terms of availability, scalability, etc.) in support of this general approach. They do not go as far as we do in proposing a system that not only interoperates with the web, but uses web technologies to actually realize rendering and interactivity elements. Nor do they build a complete prototype to test the idea.
In the last three years, a crop of commercially available “AR browsers” has appeared, aimed at outdoor information browsing and retrieval. Each of these provides different degrees of openness to end-user content, but nothing on the scale or capability of even the early web. Junaio 2.0 introduces “indoor GPS” through the concept of LLA (latitude, longitude, altitude) markers. Like our GeoSpots, they provide precise location when GPS is inadequate. However, by encoding the location in physical form, rather than using indirect references, they have limited flexibility.
KARML is not the first attempt to extend KML for AR. ARML (www.openarml.org) extended KML with AR-specific structures. These extensions were more modest, and focused on adding markup extensions to support specific browser features, such as "wikitude:thumbnail" and "ar-provider". The KARML extension is more comprehensive, and focuses on extending existing KML features and semantics while avoiding application-specific additions wherever possible.
A tradition of abstraction and open tools defines many technology advances in the fields of 3D graphics, AR, and the web. It is clear that technologies must be made accessible to be adopted: components that are typically hard to work with or understand must be made easy. The Web3D (www.web3d.org) standard Virtual Reality Modeling Language (VRML 97) was an attempt to make 3D content ubiquitous on the web. Later, the Virtual-Reality Peripheral Network (VRPN) provided a device-independent and network-transparent interface to virtual-reality peripherals [21].
One feature of Argon is the ability to use panoramic images in the background instead of live video. Commercial systems such as Google StreetView as well as Microsoft's Photosynth and Bing Maps support the creation and navigation of 3D panoramic scenes augmented with geospatial data [18]. We have integrated this concept into KARML and Argon to support the authoring of mixed reality experiences that leverage the live channel data in various ways both at the physical site and for remote viewing. We have developed a web service that allows users to submit panoramas to the system that can be utilized by channel authors via an open API. Our plan is to eventually leverage the panorama service for both display and tracking. Wagner et al developed a method for the real-time creation and tracking of panoramic maps on mobile phones and authoring of experiences that use them. They note that this method can also be used in the creation of panoramic images for offline browsing, for visual enhancements through environment mapping, as well as standard tracking [22].
4 Architecture
In this section, we discuss three main architectural components: how Argon integrates with the web, the internal architecture of Argon, and the KARML markup language. The intent of this discussion is to provide the essential details of what we did, both the unique features and the key engineering decisions.
4.1 Argon and The Web Architecture
As we discussed in Section 1, a major design goal of this project is to take advantage of web technology by integrating as tightly as we can into the web. Figure 2 illustrates a spectrum of web programming models that we have leveraged with Argon through this integration. While these same models are commonly used for mobile web development, recognizing their value for the creation of AR applications represents a non-trivial shift in AR application design to a methodology that fully leverages existing distributed computing paradigms. Both the high-level architecture and Argon were designed with the goal of enabling the entire spectrum of web architectures. The example projects presented in later sections embrace one or more of the models depicted in Figure 2.
1) Static KARML/XML. This approach represents the simplest model for serving content to users. Static files are hosted on a web server and requested by a specific URI. All the content elements are contained within the returned document and referenced resources are resolved by the browser without requiring explicit management by the content author, just as with traditional HTML content.
2) KARML + AJAX + Client Side Processing. In this approach, the returned document will include a portion of the content or user interface elements used by the channel and a collection of scripts that use AJAX techniques to make requests to 3rd party data sources. Using the Argon JavaScript API, content elements are instantiated with the returned JSON or XML data. The client side scripts may also contain custom layout and user interaction code provided by a channel author.
3) Web Application With Dynamically Generated KARML. Web applications dynamically generate content similar to that discussed in models 1 and 2. The web application keeps track of user sessions and sends updates either through the standard KML NetworkLink mechanism or by responding to AJAX requests.
4) Server Side Aggregation & Processing. An “advanced channel server” communicates with 3rd party data sources and/or other channel servers on the client’s behalf, in an effort to provide a maximum level of server-side processing. This architecture can support aggregation based on the user’s preferences, even while the user may not be running the browser application or have the client channel loaded. This configuration acts as an intelligent agent, and we envision that this offers an environment where additional processing may take place such as image/content analysis or computation of complex layout/filtering/clustering algorithms. As mentioned in the introduction, the ability to perform these computationally intensive tasks in the cloud allows authors to create experiences that would otherwise require too much computation on the client device, too much data communication, or would require the user to run the browser more than they otherwise would want to.
4.2 Application Architecture
The current implementation of Argon has been built on the iOS platform and is deployed on the iPhone 4 and iPad 2 devices running iOS version 4.2 and above. The application features a hybrid architecture wherein portions of the application are implemented in native code (in particular, Objective-C and C on iOS) and exposed to content developers through custom bindings to the embedded JavaScript interpreter.
Figure 3 illustrates the data flow between the components of the application as well as illustrating the layering of the user interface, WebKit view-layers, video layer, and panorama layer.
4.2.1 WebKit
At the time the project was started, iOS was the only mobile platform with an efficient implementation of WebKit that featured hardware-accelerated graphics support and support for CSS3 3D transforms. At the time of writing, iOS remains the only mobile platform that supports CSS3 3D transforms, and this support is a requirement for creating the HTML scene graph into which content elements can be placed and pushed out into the world.
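To make the role of CSS3 3D transforms concrete, the fragment below is a minimal sketch of how a channel's HTML element might be pushed out into a 3D scene. It is illustrative only; the container structure, class-free inline styles and transform values are our assumptions, not markup generated by Argon.

```
<!-- A perspective container and one billboarded element positioned with a
     CSS3 3D transform (WebKit-prefixed, as on iOS 4.x WebKit). -->
<div style="-webkit-perspective: 800px;">
  <div style="-webkit-transform: translate3d(120px, -40px, -600px) rotateY(25deg);">
    <p>Geo-located HTML content</p>
  </div>
</div>
```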
Excluding panoramic content, all content for a given channel is rendered in a single WebKit instance which consists of a view, HTML renderer, and scripting context. Scripts associated with a channel are sandboxed in a manner that mirrors tabs in a desktop browser. Multi-channel functionality is realized by layering multiple overlapping transparent WebKit views/instances on top of the video and panorama views and behind the application user interface.
4.2.2 Private & Public JavaScript APIs
On the scripting side, functionality is divided between Private and Public APIs. As the name suggests, the Private API is not meant to be used by content developers.
The Private and Public JavaScript APIs act as connection points between native code and interpreted code. The Public API also provides a KML DOM whereby content authors may access and modify KML nodes that are implemented as JavaScript object prototypes. Authors use the KML DOM to dynamically instantiate KML elements and add them to the scene.
The main function of the Private API is constructing and maintaining the HTML DOM data structures for KML objects and responding to messages from native code. An example of this messaging interplay is the transmission and use of device orientation data. The application uses the iOS device APIs to obtain and fuse the raw sensor data. The combined data is sent over the bridge using pre-determined message structures. In the case of device orientation, the message consists of a transformation matrix. The Private API uses this information to transform the scene graph appropriately.
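As a rough sketch of this update path, a channel-side handler might look like the following. The event name, endpoint URL, element id and matrix encoding are our assumptions; only the use of an EventSource stream for high-frequency updates comes from the paper.

```
// Receive orientation updates pushed from native code over an EventSource
// stream and apply them to the scene-graph root element.
var source = new EventSource('/argon/updates');           // assumed endpoint
source.addEventListener('orientation', function (e) {
  // Assume the message body is a 4x4 transform serialized as 16 numbers.
  var m = e.data.split(',').map(Number);
  document.getElementById('scene-root').style.webkitTransform =
      'matrix3d(' + m.join(',') + ')';
});
```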
4.2.3 Native/Interpreted Bridge
Communication between native code and interpreted code is implemented through a Native/Interpreted Bridge. The WebKit Event Source API (part of the HTML5 specification) is used for sending high frequency updates (such as orientation and marker tracking) from native code to interpreted code. The Event Source is a string/message based API designed for high frequency unidirectional in-browser updates. Complementary methods and functions exist in the Argon Private JavaScript API that respond to the messages received from native code appropriately and/or notify content authors that various events have occurred.
Method calls from interpreted to native code are achieved by utilizing a URL interception scheme that leverages the fact that the iOS URL loading system lets an application inspect a given URL load request and decide whether to proceed with the load or react to specially encoded URLs in some other fashion. In this scenario, URLs of the form
```
kharma://Class.Method/arguments
```
describe a call to the Method of Class with the provided arguments. When the application encounters a URL of this type it sends a message to the appropriate class to call the desired method with the specified arguments. Content authors do not call the native classes directly. Instead, they call regular JavaScript methods exposed through the Argon Public JavaScript API.
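The JavaScript below is a plausible sketch of such a wrapper. Only the kharma:// URL form comes from the paper; the helper name, the hidden-iframe technique for triggering a load without navigating, and the example class and method are our assumptions.

```
// Encode a native call as a kharma:// URL and trigger a load that the native
// URL-loading delegate intercepts instead of navigating.
function callNative(className, methodName, args) {
  var url = 'kharma://' + className + '.' + methodName + '/' +
            encodeURIComponent(JSON.stringify(args));
  var frame = document.createElement('iframe');
  frame.style.display = 'none';
  frame.src = url;                       // intercepted on the native side
  document.documentElement.appendChild(frame);
  frame.parentNode.removeChild(frame);
}

// Hypothetical Public API method built on top of the bridge.
function setDebugOverlay(enabled) {
  callNative('Viewer', 'setDebugOverlay', { enabled: enabled });
}
```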
4.3 KARML: KML AR Markup Language
The primary purpose of the KARML markup is to act as a binding between the presentation content and locations in the physical world. Using KML was attractive to us for a number of reasons. First, it is broadly used not only by Google Maps and Google Earth but also as an import and export format by numerous Geographic Information Systems (GIS). Secondly, the Google Earth application is already in some ways a demonstration of a truly global Virtual Reality (VR) system. Finally, standard KML already supports the binding of HTML 2D and COLLADA 3D presentation content to physical locations. Motivated by these traits, we developed the KARML extension in an attempt to re-conceive the existing markup in the context of augmented reality use.
Standard KML already supports the inclusion of arbitrary HTML content into the description element of geolocated features called Placemarks. In what has become a widely used technique, Placemarks generate geolocated labels and icons which, when selected, reveal descriptive information balloons. The following KML example markup demonstrates placing a single image in a balloon, the result of which can be seen Figure 4a:
```
<Placemark id="myPlacemark">
  <name>Standard KML</name>
  <gx:balloonVisibility>1</gx:balloonVisibility>
  <description><![CDATA[
    <img src="http://argonmaps.gatech.edu/your_content_here.png"/>
  ]]></description>
  <Point>
    <coordinates>-84.3866,33.7637,578</coordinates>
  </Point>
</Placemark>
```
Figure 4b shows how this same markup is rendered in Argon, where we attempt to render standard KML faithfully. Neither the Google Earth application nor KML provides a means to remove the framed balloon decoration. The KARML extension adds a `displayMode` enumerator that indicates balloon HTML content should be rendered undecorated. Adding the following element to the Placemark in the markup above leverages this feature:
```
<karml:displayMode>undecorated</karml:displayMode>
```
By default, feature balloons are oriented towards the viewer and scaled relatively in depth. A limitation of standard KML is that placemarks can only be given a geospatial translation using the Point element. The KARML extension adds a new Balloon element modeled after the existing KML Model element to add control for the location, orientation and scaling of balloon content. The KARML `orientationMode` and `scaleMode` elements let the user toggle billboard and relative scaling modes respectively. Adding the following markup in place of or in addition to the Point element positions the same HTML content at a fixed location, orientation and scale:
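A rough sketch of such a Balloon element is shown below. It is modeled on the child structure of the KML Model element (Location, Orientation, Scale), as the text describes; the exact child names and the enumeration values for the mode elements are our assumptions.

```
<karml:Balloon>
  <karml:orientationMode>absolute</karml:orientationMode>
  <karml:scaleMode>absolute</karml:scaleMode>
  <Location>
    <longitude>-84.3866</longitude>
    <latitude>33.7637</latitude>
    <altitude>578</altitude>
  </Location>
  <Orientation>
    <heading>45</heading><tilt>0</tilt><roll>0</roll>
  </Orientation>
  <Scale>
    <x>1</x><y>1</y><z>1</z>
  </Scale>
</karml:Balloon>
```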
In the Argon browser, each pixel of HTML content equates to 1 centimeter in the real world. In Google Earth, we use a template COLLADA model (Figure 4c) to help position content in the real world. Figure 4d illustrates how the above markup appears in the Argon browser. Another limitation of standard KML is that all latitudes and longitudes are absolute references to degree coordinates. Any practical AR application is likely to benefit from having both hierarchical frames of reference and alternate units of measurement. The KARML extension adds a `locationMode` which enumerates “fixed” and “relative” modes. Replacing the Balloon element in the markup above with the following markup positions the HTML content relative to another KML feature:
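A plausible sketch of that relative-positioning markup, consistent with the description that follows (relative mode, a 6.0 meter offset to the north of “otherPlacemark”), is given below. How offsets are encoded inside the Location element in relative mode is our assumption.

```
<karml:Balloon>
  <karml:locationMode>relative</karml:locationMode>
  <karml:href>#otherPlacemark</karml:href>
  <Location>
    <!-- assumed: children interpreted as offsets in meters in relative mode -->
    <latitude>6.0</latitude>
    <longitude>0.0</longitude>
    <altitude>0.0</altitude>
  </Location>
</karml:Balloon>
```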
In the above markup, the Location element positions the balloon 6.0 meters north of a KML feature in the same document named “otherPlacemark”. This fragment reference could instead point to content in another KML file currently loaded by the Argon browser. Argon supports several built in references including “#user” (the default) and “#display”. Positioning content relative to the display is functionally equivalent to the following markup which uses the KML ScreenOverlay element to position arbitrary HTML content on the display screen (Figure 4e):
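Standard KML already defines the ScreenOverlay element, so a sketch of the equivalent screen-fixed markup could look like the fragment below; carrying the HTML in the description element reflects our reading of the paper rather than the KML standard itself.

```
<ScreenOverlay id="hud">
  <name>Screen-fixed content</name>
  <description><![CDATA[
    <div class="hud">Arbitrary HTML content</div>
  ]]></description>
  <overlayXY x="0" y="1" xunits="fraction" yunits="fraction"/>
  <screenXY x="0" y="1" xunits="fraction" yunits="fraction"/>
</ScreenOverlay>
```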
It is also our goal to include other sources of position information such as fiducial markers, Natural Feature Tracking (NFT) and peripheral devices through libraries such as VRPN. An upcoming release of Argon will allow using the following markup in place of the Balloon element to position the same HTML content on typical AR markers (Figure 4f):
Because Argon adds HTML content to each WebView dynamically, the normal document initialization often does not work as expected. The following markup demonstrates how to assign an initialization function by adding JavaScript into the description content:
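The sketch below illustrates the idea. The ARGON object and its method names are placeholders standing in for the Argon Public API; the real names are not given in the text and should be treated as assumptions.

```
<description><![CDATA[
  <div id="greeting">Hello from this channel</div>
  <script type="text/javascript">
    // Run once this fragment has been added to the channel's DOM, instead of
    // relying on the normal document load events.
    (function init() {
      ARGON.addEventListener('focusChanged', function (e) {   // assumed API
        var feature = ARGON.getFeatureById('myPlacemark');    // assumed API
        feature.setVisibility(e.hasFocus);                    // hide when out of focus
      });
    })();
  </script>
]]></description>
```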
The above markup binds to a `focusChanged` event in order to call into the Argon Public API, find its associated KML object and set its visibility to the channel focus state. This has the effect of making the feature invisible when the channel is out of focus. In contrast to how it is implemented in the Google Earth application, all KARML content contained within a single Argon channel shares the same HTML DOM and CSS/JavaScript context.
4.4 GeoSpots
The GPS sensors currently in use by mobile devices are heavily filtered and frequently only accurate to within 10 meters. This low accuracy means that objects depicted on the phone in front of the user can easily be actually behind them, effectively limiting the range within which those augmentations can be delivered. The Argon browser lets users manually override the reported tracking of the device by physically aligning themselves at pre-surveyed locations nearby called GeoSpots. The KML standard and Google Earth application use the Camera and LookAt elements to establish viewing locations in the virtual world. In KARML and the Argon browser, we re-appropriate the KML standard by denoting any features that have a Camera element as GeoSpots. In addition to improving tracking accuracy, GeoSpots allow Argon to report an improved accuracy range to the channel so that content authors can respond in kind. Beyond simply manipulating the range of objects within view, increased accuracy may also motivate changes in visual representation (i.e. from labels/icons to more detailed content).
When available, we also go one step further and let the user replace the video at GeoSpot locations with a panoramic image that changes with the orientation of the device (Figure 4g). Although the orientation sensor continues to determine the background viewed within, the relationship between that background and augmentations in the browser remains registered and stable. If the panoramas are an accurate representation of the GeoSpot location, this technique effectively eliminates any error in orientation accuracy. The use of panoramic backdrops not only increases the in situ options for viewing AR content but also greatly expands the potential audience for that content.
5 Illustrative Examples
A number of applications developed by us, by groups within Georgia Tech, and by outside groups have resulted in a rich set of examples that illustrate the viability of Argon as an AR development platform. In this section we describe several of these projects and highlight how each leverages the unique attributes that Argon’s web-centric models have to offer.
5.1 Server-less AR Mashups
This example demonstrates how the default Argon rendering of standard KML lets users create geospatial AR mashups from services like Yahoo Pipes. Yahoo Pipes lets users create composite web services in a drag-and-drop interface and retrieve those results as a map, JSON or KML. This allows anyone to create mashups of web content and deliver them to an Argon browser without hosting or writing any individual markup. Any Yahoo Pipe can be called by entering its URL into the Argon address bar along with parameters indicating it should return the results as KML (Figure 5a). The results returned by Yahoo create a new channel that renders placemarks as icons that can be expanded into balloons and brought to fill the HUD through a series of clicks.
5.2 Webservice-based Searches
Four example channels demonstrate how webservice-based AR searches can be implemented in Argon in as little as fifty lines of HTML and JavaScript code. Like the other three similar searches, the Twitter search channel places an input box in an overlay and uses AJAX techniques to call the Twitter webservice and return JSON data (Figure 5b). The resulting code is almost identical to similar code executed in desktop browsers except that the JavaScript uses the Argon Public API to dynamically create placemarks. These searches also illustrate how channels can register for application events and change their state when focus shifts from one channel to another. When running multiple active channels, a single channel remains in focus at any one time. Tapping on content in an out-of-focus channel changes focus (analogous to the desktop). The search examples register for focusChanged events in JavaScript in order to hide their respective search boxes and minimize placemarks to labels and icons when out of focus. Each search channel has an overlay image icon along the left of the screen to facilitate switching focus when no content is visible.
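A condensed sketch of such a search channel is shown below; the service URL, response shape and the ARGON.createPlacemark helper are illustrative assumptions rather than the actual channel code.

```
// Query a JSON web service with AJAX and turn each hit into a placemark.
function runSearch(query) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', 'http://example.com/search?q=' + encodeURIComponent(query), true);
  xhr.onload = function () {
    var results = JSON.parse(xhr.responseText);
    for (var i = 0; i < results.length; i++) {
      ARGON.createPlacemark({                      // assumed Public API call
        name:      results[i].user,
        longitude: results[i].lon,
        latitude:  results[i].lat,
        html:      '<div class="tweet">' + results[i].text + '</div>'
      });
    }
  };
  xhr.send();
}
```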
5.3 Rapid Server-based AR Development
The AR Greeting Card application (Figure 5c) was developed in about 16 man-hours over the weekend prior to the February 14th debut of Argon in the iTunes store. It consists of a single MySQL table and two PHP scripts. The webpage script presents a form, populates the table with a unique user ID plus desired greeting messages and sends an e-mail to the recipient with a link to a second script. This second PHP script, instead of returning HTML, sets the content type to KML and returns KARML specific to the passed in ID parameter. When Argon is installed on the iPhone or iPad, clicking on a link that uses the kharma scheme launches Argon and loads the KML generated by the URL. In this example, the recipient can click on images positioned relative to themselves to reveal a sequence of up to five messages.
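A minimal sketch of the second script is given below; the table layout, column names and KARML payload are assumptions used only to illustrate the pattern of returning KML instead of HTML.

```
<?php
// greeting.php?id=... : look up the card and answer with KML, not HTML.
$db  = new mysqli('localhost', 'user', 'password', 'greetings'); // assumed credentials
$id  = $db->real_escape_string($_GET['id']);
$row = $db->query("SELECT message FROM cards WHERE id = '$id'")->fetch_assoc();

header('Content-Type: application/vnd.google-earth.kml+xml');
echo '<?xml version="1.0" encoding="UTF-8"?>';
echo '<kml xmlns="http://www.opengis.net/kml/2.2"><Placemark>';
echo '<name>Greeting</name>';
echo '<description><![CDATA[<div class="card">'
   . htmlspecialchars($row['message'])
   . '</div>]]></description>';
echo '</Placemark></kml>';
```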
5.4 Region Monitoring and GeoSpot Tracking Override
This example illustrates the use of KML region monitoring and GeoSpots to manage the presentational aspects of AR content. The Clough Undergraduate Learning Center is a new building under construction on the Georgia Tech campus. Regions attached to KML placemarks generate regionChanged events when a boundary-crossing event occurs. When inside a region surrounding the construction site, a billboarded placemark over the site instructs the user that they can view a pre-visualization of the new building from one of two nearby GeoSpots (Figure 5d).
Bringing up the Argon map displays nearby GeoSpots along with a detail view that includes a textual description and an image of exactly where to stand. By “going to” a GeoSpot, the user overrides the GPS location for all active channels, automatically generating a new locationChanged event in each and modifying the associated location accuracy. When horizontal accuracy drops to within a threshold, the billboarded message is replaced with a rendering of the new building (created by members of the Georgia Tech Imagine Lab in the School of Architecture) from the GeoSpot location. Clicking on different parts of the building brings up detail renderings. The detail renderings and their associated interaction were developed in an HTML browser by re-appropriating existing online content and then pasted as a whole into separate KML placemark descriptions. Given the inaccuracy of magnetic compasses, there is often mis-registration between the rendering and the surrounding buildings; switching to the panoramic view of the construction site at the GeoSpot keeps the rendering registered against the background, as described in Section 4.4.
5.5 Rapid AR Development Leveraging Existing Tools
Several projects illustrate how HTML, CSS, JavaScript and PHP skill sets can be leveraged to create AR content in Argon. The 22nd Floor Observation Deck demo was created by Engauge Interactive Atlanta (Figure 6a) by re-using existing material in a reported man-hour investment of about 8 hours. The main application development, including reading of a JSON database, CSS styling and associated image galleries, was done primarily in a desktop browser environment. Of over 400 lines of markup, only about forty lines are KML (a KML ScreenOverlay for application code plus image galleries and a KML PhotoOverlay for the GeoSpot) and 10 lines are specific calls to the Argon Public API to create placemarks and automatically move to the GeoSpot. The panorama was created by using the Photosynth application on the iPhone 4 and uploading it to a conversion utility on our website.
A four-student senior CS design project created an Argon-based game, Dotman’s Revenge (Figure 6b), over the course of a semester; the game features multiple maps, leaderboards and fully realized game-play. The game characters consisted of 2D billboarded images of white pellets and a yellow protagonist. The application logic for the game and the PHP-driven scoreboard was developed primarily in a desktop browser environment. Of over 1200 lines of PHP and HTML markup, fewer than 100 are KML, and fewer than 200 lines of JavaScript were specific to the dynamic creation and deletion of placemark objects.
5.6 Blending of 2D Interfaces and AR Content
Several projects illustrate how applications based primarily on 2D content can incorporate AR aspects using Argon. The Oakland Experience (Figure 6c) is the continuation of an ongoing project based on the narratives of residents at the historic Oakland Cemetery in Atlanta. The application is primarily a linear tour of grave sites at which different audio voiceovers can be selected.
Another application, Poring AR, was developed during a 6-week class taught by one of our colleagues while at Aalto University in Finland (Figure 6e). The application, about maintaining the health of virtual creatures called Poring, consists of rich HUD-based interactions and primarily uses AR concepts to manage the location of the Poring in the game-space. Our colleague remarked that this was likely the most fully realized AR application he had witnessed from students in such a short timespan.
5.7 Client-Server Content and Layout Management
Argon facilitates applications that use a combination of client- and server-based interactivity and filtering of data. In collaboration with the Georgia Tech Research Network Operations Center (RNOC), the Virtual Tour Guide (VTG) aggregates social media content such as Tweets, Flickr images, and YouTube videos through a proxy server into a personalized experience based on a priori social media affiliations (e.g., Facebook “likes”) (Figure 6f). The VTG application uses a server-side geospatial database and current user position to fill 8 bins of orientation space around the user with prioritized content. The application uses regular server polling and the KML NetworkLink Updates scheme to dynamically create, delete and modify placemarks around the user. This polling scheme is combined with user-initiated keyword filtering through the client interface (e.g., “contacts”, “sports”).
In an effort to create a single application experience, the VTG application hides the standard Argon user interface and manages activities such as moving to GeoSpots using Argon Public API calls. The application also does its own display management of placemark balloons. To avoid the overlapping of placemark labels, client-side CSS and JavaScript dynamically manage the apparent position of labels with leader-lines to their actual position. A priority assigned to each placemark delivered from the server is used in a heuristic that dynamically moves the four highest ranked labels towards the four corners of the screen.
6 Conclusions and Future Work
In this paper we have demonstrated that web technologies present a viable and powerful solution for creating mobile augmented reality applications using existing web standards. We described the software architecture of the Argon AR web browser and how our implementation leverages the existing WWW ecosystem to provide an application environment for AR that allows for multiple channels to be viewed simultaneously, bringing us one step closer to the vision of immersive AR. We described our extensions to KML in the form of KARML and provided details and examples of how we have re-appropriated KML for AR applications. We described a number of past and current projects and highlighted the salient aspects of each project with respect to both Argon and the vision of an AR application environment.
In the coming months, we plan to further develop Argon to add new features including full 3D model rendering, support for other markup languages (e.g., GML), natural feature tracking, protocols for inter-channel communication, space management and layout behaviors and abstractions, greater support for use of tracking data across independent channels without prior coordination, the ability to capture and upload images and video, manual control of view orientation and pinch-to-zoom capabilities, expanded client API, online authoring tools, and support for desktop & other mobile platforms.
Acknowledgements
We would like to acknowledge all the people who have contributed to this project, especially the many students at Georgia Tech who have used the browser during the year leading up to its release. We would like to thank Jay Bolter, Matt Sanders, Russ Clark, Jeff Evans, Elizabeth Mynatt and Mark Billinghurst for supporting this project in a myriad of ways, most especially putting their own classes and projects on the line by relying on this vision of a web-based AR environment. This project has been primarily supported by the Alcatel-Lucent University Innovations Program, but also by Motorola, Turner Broadcasting, and the Urban Media Lab at Georgia Tech.
References
INSIDE DISCRETE-EVENT SIMULATION SOFTWARE:
HOW IT WORKS AND WHY IT MATTERS
Thomas J. Schriber
Computer and Information Systems
The University of Michigan
Ann Arbor, Michigan 48109-1234, U.S.A.
Daniel T. Brunner
Systemflow Simulations, Inc.
6366 Guilford Avenue, Suite 300
Indianapolis, Indiana 46220-1750, U.S.A.
ABSTRACT
This paper provides simulation practitioners and consumers with a grounding in how discrete-event simulation software works. Topics include discrete-event systems; entities, resources, control elements and operations; simulation runs; entity states; entity lists; and entity-list management. The implementation of these generic ideas in AutoMod and SLX is described. The paper concludes with several examples of “why it matters” for modelers to know how their simulation software works, including coverage of SIMAN, ProModel and GPSS/H as well as the other two tools.
1 INTRODUCTION
1.1 Background
A “black box” approach is often taken in teaching and learning discrete-event simulation software. The external characteristics of the software are studied, but the foundation on which the software is based is ignored or is touched on only briefly. Choices made in implementation of the foundation might not be studied at all and related to step-by-step model execution. The modeler therefore might not be able to think things through when faced with such needs as developing good approaches for modeling complex situations, using interactive tools to come to a rapid understanding of error conditions arising during model development, and using interactive tools to verify that complex system logic has been captured correctly in a model. The objective of this paper, then, is to describe the logical underpinnings of discrete-event simulation and illustrate this material in terms of various implementations of discrete-event simulation software.
This paper is a revised version of an identically named paper from the 1996 Winter Simulation Conference (Schriber and Brunner 1996). The 1996 paper covered the entity-list management rules and “why it matters” for SIMAN, ProModel, and GPSS/H. An expanded version of the 1996 material containing figures, flow charts, and additional text is in Schriber and Brunner (1998).
1.2 Structure of the Paper
In Sections 2, 3 and 4 we comment on the nature of discrete-event simulation; basic simulation constructs such as entities, resources, control elements, and operations; and model execution. Sections 5 and 6 deal in a general way with entity states and entity management data structures. Section 7 discusses three specific implementations of entity management rules. Section 8 explores “why it matters.”
1.3 Terminology and Conventions
Throughout this paper we use terms that we define as well as terms reserved by the developers of particular simulation tools. Terms we define are boldfaced on first use. Tool-specific terms are Capitalized or, where appropriate, are spelled out in ALL CAPS.
2 ABOUT DISCRETE-EVENT SIMULATION
2.1 The Transaction-Flow World View
The “transaction-flow world view” often provides the basis for discrete-event simulation. In the transaction-flow world view, a system is visualized as consisting of discrete units of traffic that move (“flow”) from point to point in the system while competing with each other for the use of scarce resources. The units of traffic are sometimes called “transactions,” giving rise to the phrase “transaction flow.”
Numerous systems fit the preceding description. Included are many manufacturing, material handling, transportation, health care, civil, natural resource, communication, defense, and information processing systems, and queuing systems in general.
2.2 The Nature of Discrete-Event Simulation
A discrete-event simulation is one in which the state of a model changes at only a discrete, but possibly random, set of simulated time points. Two or more traffic units often have to be manipulated at one and the same time point. Such “simultaneous” movement of traffic at a time point is achieved by manipulating units of traffic serially at that time point. This often leads to logical complexities in discrete-event simulation because it raises questions about the order in which two or more units of traffic are to be manipulated at one time point.
2.3 Discrete-Event Modeling Languages
The challenges faced by a modeler escalate for the designer of a modeling language. The designer must take the logical requirements of discrete-event simulation into account in a generalized way. Choices and tradeoffs exist. As a result, although discrete-event simulation languages are similar in broad terms, they can and typically do differ in subtle but important particulars.
3 ENTITIES, RESOURCES, CONTROL ELEMENTS, AND OPERATIONS
The term entity is used here to designate a unit of traffic (a “transaction”). Entities instigate and respond to events. An event is a happening that changes the state of a model (or system). In a model of an order-filling system, for example, the arrival of an order, which is an event, might be simulated by bringing an entity into the model.
There are two possible types of entities, here referred to as external entities and internal entities. External entities are those whose creation and movement is explicitly arranged for by the modeler. In contrast, internal entities are created and manipulated implicitly by the simulation software itself. For example, internal entities might be used in some languages to simulate machine failures, whereas external entities might be used to simulate the use of machines.
The term resource designates a system element that provides service (such as a drill, an automated guided vehicle, or space in an input buffer). The users of resources are usually entities. (A work-in-process entity claims space in an input buffer, then captures an automated guided vehicle to move it to the input buffer.) Resources are usually capacity-limited, so entities compete for their use and sometimes must wait to use them, experiencing delay as a result.
The term control element designates a construct that supports other delays or logical alternatives based on a system’s state. Control elements can take the form of switches, counters, user data values, and system data values built into the modeling tool. Complex control may rely on truth-valued expressions that use arithmetic and/or Boolean combinations of control elements.
An operation is a step carried out by or on an entity while it moves through a system. The operations applicable to a ship at a harbor might be these: arrive; capture a berth; capture a tugboat; get pulled into the berth; free the tugboat; load cargo; etc.
4 OVERVIEW OF MODEL EXECUTION
4.1 Experiments, Replications, and Runs
A simulation project is composed of experiments. Experiments are differentiated by the use of alternatives in a model’s logic and/or data. An alternate part sequencing rule might be tried, for example, or the quantity of various machines might be varied.
Each experiment consists of one or more replications (trials). A replication is a simulation that uses the experiment’s model logic and data but a different set of random numbers, and so produces different statistical results that can then be analyzed across a set of replications.
A replication involves initializing the model, running it until a run-ending condition is met, and reporting results. This “running it” phase is called a run.
4.2 Inside a Run
During a run the simulation clock (an internally managed, stored data value) tracks the passage of simulated time (as distinct from wall-clock time). The clock advances in discrete steps (typically of unequal size) during the run. After all possible actions have been taken at a given simulated time, the clock is advanced to the time of the next earliest event. Then the appropriate actions are carried out at this new simulated time, etc.
The execution of a run thus takes the form of a two-phase loop: “carry out all possible actions at the current simulated time,” followed by “advance the simulated clock,” repeated over and over again until a run-ending condition comes about. The two phases are here respectively called the Entity Movement Phase (EMP) and the Clock Update Phase (CUP).
5 ENTITY STATES
Entities migrate from state to state while they work their way through a model. An entity is always in one of five alternative states, as detailed below.
5.1 The Active State
The Active State is the state of the currently moving entity. Only one entity moves at any instant of real-time. This entity progresses through its operations nonstop until it encounters a delay. It then migrates to an alternative state. Some other entity then becomes the next active entity. And so on.
5.2 The Ready State
During an Entity Movement Phase there may be more than one entity ready to move, and yet entities can only move (be in the Active State) one-by-one. The Ready State is the state of entities waiting to enter the Active State during the current Entity Movement Phase.
5.3 The Time-Delayed State
The Time-Delayed State is the state of entities waiting for a known future simulated time to be reached so that they can then reenter the Ready State. A “part” entity is in a Time-Delayed State, for example, while waiting for the future simulated time at which an operation being performed on it by a machine will come to an end.
5.4 The Condition-Delayed State
The Condition-Delayed State is the state of entities delayed until some specified condition comes about, e.g., a “part” entity might wait in the Condition-Delayed State until its turn comes to use a machine. Condition-Delayed entities are removed automatically from the Condition-Delayed state when conditions permit.
5.5 The Dormant State
Sometimes it is desirable to put entities into a state from which no escape will be triggered automatically by changes in model conditions. We call this state the Dormant State. Dormant-State entities rely on modeler-supplied logic to transfer them from the Dormant State back to the Ready State. Job-ticket entities might be put into a Dormant State, for example, until an operator entity decides which job-ticket to pull next.
6 ENTITY MANAGEMENT STRUCTURES
Simulation software uses the following lists to organize and track entities in the five entity states.
6.1 The Active Entity
The active entity forms an unnamed “list” consisting only of the active entity. The Active-State entity moves nonstop until encountering an operation that puts it into another state (transfers it to another list) or removes it from the model. A Ready-State entity then becomes the next Active-State entity. Eventually there is no possibility of further action at the current time. The EMP then ends and a Clock Update Phase begins.
6.2 The Current Events List
Entities in the Ready State are kept in a single list here called the current events list (CEL). Entities migrate to the current events list from the future events list, from delay lists, and from user-managed lists. (Each of these latter lists is described below.) In addition, entities cloned from the Active-State entity usually start their existence on the current events list.
6.3 The Future Events List
Entities in the Time-Delayed State belong to a single list into which they are inserted at the beginning of their time-based delay. This list, called the future events list (FEL) here, is usually ranked by increasing entity move time. (Move time is the simulated time at which an entity is scheduled to try to move again.) At the time of entity insertion into the FEL, the entity’s move time is calculated by adding the value of the simulation clock to the known (sampled) duration of the time-based delay.
After an Entity Movement Phase is over, the Clock Update Phase sets the clock’s value to the move time of the FEL’s highest ranked (smallest move time) entity. This entity is then transferred from the FEL to the CEL, migrating from the Time-Delayed State to the Ready State and setting the stage for the next EMP to begin.
The preceding statement assumes there are no other entities on the FEL whose move times match the clock’s updated value. In the case of move-time ties, some tools will transfer all the time-tied entities from the FEL to the CEL during a single CUP, whereas other tools take a “one entity transfer per CUP” approach.
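To make these mechanics concrete, here is a minimal Python sketch (ours, not any vendor’s code) in which the FEL is a heap of (move time, insertion sequence, entity) triples; the insertion counter keeps move-time ties FIFO, and the CUP below implements the “transfer all time-tied entities in one CUP” policy.

```python
import heapq
from itertools import count

_seq = count()  # insertion order: breaks move-time ties FIFO

def schedule(fel, clock, entity, delay):
    """Insert an entity into the FEL; its move time is the current
    clock value plus the sampled duration of the time-based delay."""
    heapq.heappush(fel, (clock + delay, next(_seq), entity))

def clock_update_phase(fel, cel):
    """Set the clock to the smallest move time on the FEL, then transfer
    every entity tied at that time to the CEL. Returns the new clock."""
    clock = fel[0][0]
    while fel and fel[0][0] == clock:
        _, _, entity = heapq.heappop(fel)
        cel.append(entity)  # Time-Delayed State -> Ready State
    return clock
```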
Languages that work with internal entities usually use the FEL to support the timing requirements of these entities. In such languages the FEL is typically composed of both external and internal entities.
6.4 Delay Lists
Delay lists are lists of entities in the Condition-Delayed State. These entities are waiting for a condition to come about (e.g., waiting their turn to use a machine) so they can be transferred automatically into the Ready State on the current events list. Delay lists, which are generally created automatically by the simulation software, are managed by using related waiting or polled waiting.
If a delay can be related easily to events in the model that might resolve the condition, then related waiting can be used to manage the delay list. For example, suppose a machine’s status changes from busy to idle. In response, the software can automatically remove the next machine-using entity from the appropriate delay list and put it in the Ready State on the current events list. Related waiting is the prevalent approach used to manage conditional delays.
If the delay condition is too complex to be related easily to events that might resolve it, polled waiting can be used. With polled waiting the software checks routinely to see if entities can be transferred from one or more delay lists to the Ready State. Complex delay conditions for which polled waiting can be useful include Boolean combinations of state changes, e.g., a part supply runs low or an output bin needs to be emptied.
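The two approaches can be sketched as follows (illustrative Python; `delay_list`, `idle_units`, and the condition callables are our own names):

```python
def release_resource(resource, cel):
    """Related waiting: freeing a unit immediately promotes the next
    waiter from the resource's own delay list to the Ready State."""
    resource.idle_units += 1
    if resource.delay_list:
        cel.append(resource.delay_list.pop(0))

def poll_delay_conditions(polled_waiters, cel):
    """Polled waiting: routinely re-test each waiter's arbitrary
    condition, e.g. at the end of every Entity Movement Phase."""
    still_waiting = []
    for entity, condition in polled_waiters:
        if condition():  # e.g. lambda: supply_low or bin_full
            cel.append(entity)
        else:
            still_waiting.append((entity, condition))
    polled_waiters[:] = still_waiting
```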
6.5 User-Managed Lists
User-managed lists are lists of entities in the Dormant State. The modeler must take steps to establish such lists and provide the logic needed to transfer entities to and from the lists. (The underlying software has no way to know why entities are put into user-managed lists and so has no basis for removing entities from such lists.)
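In sketch form (again our Python, with hypothetical names), a user-managed list is just an ordinary list plus modeler-written transfer logic:

```python
job_tickets = []  # a user-managed list: the software never scans it

def pull_next_job(cel, choose):
    """Modeler-supplied logic: an operator entity decides which Dormant
    job-ticket to return to the Ready State on the current events list."""
    if job_tickets:
        ticket = choose(job_tickets)  # e.g. a shortest-processing-time rule
        job_tickets.remove(ticket)
        cel.append(ticket)
```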
7 IMPLEMENTATION IN TWO TOOLS
The tools chosen for commentary on implementation particulars are AutoMod, Version 8, from AutoSimulations, Inc., and SLX, Release 1, from Wolverine Software Corporation. (See the References.) A previous version of this paper (Schriber and Brunner 1996) covered Systems Modeling Corporation’s SIMAN, ProModel Corporation’s ProModel, and Wolverine Software Corporation’s GPSS/H in similar detail. These five are among more than fifty tools reported in 1995 for discrete-event simulation (Swain 1995). Some other tools might be better suited than any of these for particular modeling activities, but we think that these tools are representative. (Those interested in the possibility of implementing discrete-event simulation models in a non-simulation programming language such as C or C++ are referred to Balci 1988.)
7.1 AutoMod
AutoMod equivalents for the preceding generic terms are given in Table 1. For example, AutoMod uses Actions to specify operations for Loads.
Table 1: AutoMod Terminology
Generic Term          AutoMod Equivalent
External Entity       Load
Internal Entity       Logical Load
Resource              Resource; Queue; Block
Control Element       Counter; Process Traffic Limit
Operation             Action
Current Events List   Current Event List
Future Events List    Future Event List
Delay List            Delay List; Condition Delay List; Load Ready List
User-Managed List     Order List
7.1.1 The Current Event List
The current events list is named the Current Event List in AutoMod. Cloned Loads, Loads leaving the Future Event List due to a clock update, and Loads ordered off Order Lists are placed immediately on the CEL. The insertion rule is to rank by priority and then FIFO within the priority class.
When the CEL becomes empty, the Condition Delay List (see below) is checked, and Loads may be transferred from there to the CEL. This continues until the CEL is empty and no more Loads can be transferred, at which point the EMP is over and a CUP is initiated.
7.1.2 The Future Event List
The AutoMod Future Event List (FEL) is like future events lists in other tools. Loads arrive on the FEL in the Time-Delayed State by executing a WAIT FOR statement. AutoMod allows the specification of time units (day, hr, min, sec) in a WAIT FOR statement.
The AutoMod CUP will remove multiple Loads from the FEL if they are tied for the earliest move time, inserting them one by one into their appropriate places on the CEL.
There are also internal entities in AutoMod, called Logical Loads, that do things such as wait on the FEL to trigger scheduled shift breaks.
7.1.3 Delay Lists
Delay Lists (DL) are lists of loads waiting to claim capacity of a finite capacity element (a resource or control element such as an individual Resource, Queue, Block, Counter, or Process). Each finite capacity element within the model has one DL associated with it.
The waiting that results from this mechanism is related waiting. Whenever capacity is freed, the load waiting at the head of the element’s DL is tentatively placed on the CEL (but a placeholder is left on the DL). When the load is encountered during the EMP, it tries to claim the requested capacity. If it fails (for example, because it wants two units but only one is free), the load is returned to the DL in its original place.
Immediately after this evaluation (and before the active load executes any more Actions), if there is still available capacity, the next load on the DL is placed on the CEL. Processing of the active load then continues. Thus each time a tentatively placed load is evaluated during the EMP, any remaining available capacity causes another load to be removed from the DL.
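A much-simplified Python sketch of this tentative-placement mechanism follows (all names are ours; AutoMod’s actual internals surely differ in detail):

```python
def on_capacity_freed(element, cel):
    """Tentatively place the first not-yet-tried DL load on the CEL;
    the load itself stays on the DL as its own placeholder."""
    for load in element.delay_list:
        if not load.tentative:
            load.tentative = True
            cel.append(load)
            return

def release_units(element, units, cel):
    """A load gives capacity back: all DL loads become re-triable."""
    element.idle_units += units
    for load in element.delay_list:
        load.tentative = False
    on_capacity_freed(element, cel)

def evaluate_claim(load, element, cel):
    """Run when a tentatively placed load is reached during the EMP."""
    if element.idle_units >= load.units_wanted:
        element.idle_units -= load.units_wanted
        element.delay_list.remove(load)  # claim succeeds
    # on failure the placeholder keeps the load in its original DL place
    if element.idle_units > 0:           # capacity still free:
        on_capacity_freed(element, cel)  # let the next DL load try
```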
7.1.4 The Condition Delay List
For conditional waiting other than the five finite-capacity cases enumerated above, AutoMod has a WAIT UNTIL statement that results in polled waiting. WAIT UNTIL conditions can be compounded using Boolean operators. If a load executes a WAIT UNTIL and the condition is false, the load is placed on a single global AutoMod list called the Condition Delay List (CDL).
After the CEL has been emptied, but before the simulation clock progresses, all loads on the CDL are moved to the CEL if there has been a state change to at least one element of the same general type (e.g., Queue) that any load on the CDL is waiting for. (This mechanism is primarily polled although the global triggering mechanism is related.)
If the CEL is now non-empty then the EMP resumes. If the condition that a CEL load is waiting for is false, AutoMod moves that load from the Current Event List back to the CDL. The CDL may get emptied multiple times during one EMP until eventually the CEL gets emptied without having triggered a state change related to any load on the CDL. A CUP then occurs.
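In outline (our Python; `watched_types`, `condition`, and `proceed` are illustrative names), the CDL mechanism amounts to:

```python
def end_of_emp_cdl_scan(cdl, cel, changed_types):
    """After the CEL empties: move every CDL load waiting on an element
    type that changed state (e.g. Queue) to the CEL for re-testing."""
    for load in list(cdl):
        if load.watched_types & changed_types:  # any relevant state change
            cdl.remove(load)
            cel.append(load)

def retest_wait_until(load, cdl):
    """When such a load becomes active, re-evaluate its compound
    condition; if still false, the load migrates back to the CDL."""
    if load.condition():
        load.proceed()     # continue with the load's next Actions
    else:
        cdl.append(load)
```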
Because of the potential for repetitive list migration when using WAIT UNTIL, AutoMod’s vendor encourages users to use Order Lists or other explicit control mechanisms to manage complex waiting.
7.1.5 Order Lists
AutoMod implements the Dormant State with Order Lists, which are user-managed lists of Loads. After a Load puts itself onto an Order List (by executing a WAIT TO BE ORDERED Action), it can be removed only by another Load (which executes an ORDER Action). Loads successfully ordered are placed immediately on the CEL (one at a time, according to how they were chosen from the Order List, and ranked on the CEL by priority, FIFO within priority).
Order Lists can achieve performance improvements over CDL waiting because Order Lists are never scanned except on explicit request.
AutoMod Order Lists offer several interesting wrinkles, including the ability of an ordering Load to place a back order if the ORDER Action is not satisfied, the ability for a Load on an Order List to be ordered to continue (to the next Action) instead of to a Process (a feature useful for control handshaking), and the ability to have a function called for each Load on the Order List (by using the ORDER...SATISFYING Action).
7.1.6 Other Lists
AutoMod has a number of material handling constructs that are integrated with Load movement. For vehicle systems there are three other types of lists. Loads on Load Ready Lists (LRL) (one list per vehicle system) are waiting to be picked up by a vehicle. Loads claimed and picked up by a vehicle reside on the vehicle’s Vehicle Claim List (VCL) and Vehicle Onboard List (VOL), respectively, during which time the vehicle becomes the active “load” and moves among AutoMod’s lists (FEL, CEL, and possibly DLs) instead of the Load.
7.2 SLX
SLX is a hierarchical language in which the built-in primitives are at a lower level than most simulation languages, facilitating user (or developer) definition of the behavior of many system elements. This philosophy allows the SLX developer to create higher-level modeling tools whose constructs have precisely defined yet modifiable behavior.
Equivalents for the generic terms for users of low-level SLX are given in Table 2. For example, SLX uses Control Variables to act as Control Elements. The “control” modifier can be attached to a global or local Variable of any data type (integer, real, string, etc.). A local Variable is analogous to an attribute in other tools.
Note that SLX has two types of Objects: Active and Passive. An Active Object is distinguished from a Passive Object by the presence of actions (executable Statements) in the Active Object’s Class definition.
Table 2: SLX Terminology (low-level)
Generic Term          SLX Equivalent
External Entity       Active Object and its Puck(s)
Internal Entity       none
Resource              Control Variable
Control Element       Control Variable
Operation             Statement
Current Events List   Current Events Chain
Future Events List    Future Events List
Delay List            Delay List
User-Managed List     Set (see Section 7.2.4)
Table 3 shows how higher-level tools based on SLX might exploit the definitional capabilities of SLX.
Table 3: Tools Based On SLX
Generic Term          SLX Equivalent
Resource              Active or Passive Object
Control Element       Active or Passive Object
Operation             User-defined Statement
Delay List            User-defined based on Set
User-Managed List     User-defined based on Set
7.2.2 The Future Events List
The SLX Future Events List (FEL) is like future events lists in other tools. Pucks arrive on the FEL in the Time-Delayed State by executing an ADVANCE statement.
The SLX CUP will remove multiple Pucks from the FEL if they are tied for the earliest move time, inserting them one by one into their appropriate places on the CEC.
Because the low-level primitives in SLX do not include downtimes or even repetitive Puck generation, all activity on the SLX FEL unfolds as specified by the developer of the SLX model. However, if a user is using a model (or a model builder) that contains higher-level primitives defined by a developer, chances are that all kinds of things are going on behind the scenes, hidden from the higher-level user’s view.
7.2.3 Delay Lists
Delay Lists (DL) are lists of Pucks waiting (through WAIT UNTIL) for state changes in any combination of Control Variables and the simulation clock value. All higher-level constructs defined by developers can use this mechanism. Each Control Variable (which may be a local Variable, in which case there is one for each Object in the Class) has a separate DL associated with it.
A DL is ranked by order of insertion. The entire contents of a DL are removed whenever the associated Control Variable changes value and are inserted one at a time into the CEC. Pucks waiting on compound conditions are also removed from every other Delay List to which they belong. As these Pucks are encountered on the CEC during the EMP, those failing to pass their WAIT UNTIL are returned to the Delay List(s) of those Control Variables still contributing to the falseness of the condition.
For conditions that include a reference to the clock, the Puck is also inserted, if necessary, into the FEL, subject to early removal from the FEL if the condition becomes true due to other Control Variable changes.
This low-level related waiting mechanism based on Control Variables is the default SLX approach to modeling all types of simple or compound Condition-Delayed states.
7.2.4 Sets and User-Managed Waiting
SLX handles the Dormant State in a unique way. Instead of moving the Puck from the active state to a user-managed list and suspending it, all in the same operation, SLX breaks this operation into two pieces.
First, the Puck joins a Set. Joining a Set does not automatically suspend the Puck, however. A Puck can belong to any number of Sets. Set membership merely provides other Pucks with access to the member Puck.
To go into the Dormant State, a Puck executes a WAIT statement. It is then suspended indefinitely, outside of any particular list, until another Puck identifies the waiting Puck and executes a REACTIVATE statement on it. Often the REACTIVATEing Puck is scanning a Set to find the Puck to REACTIVATE, but a Set is not exactly the same as a user-managed list in our terminology. A Dormant-State Puck might be a member of no Sets (as long as a pointer to it has been stashed somewhere) or of one or more Sets.
An SLX developer can easily define a user-managed list construct, using Sets, WAIT, and REACTIVATE as building blocks, that mimics the user-managed lists of other languages or offers unique features of its own.
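A minimal Python sketch of the WAIT/REACTIVATE pair (ours, not SLX code; a real implementation would suspend and resume the Puck’s execution rather than manage a dictionary):

```python
suspended = {}  # Dormant Pucks, reachable only via Sets or saved pointers

def wait(puck):
    """WAIT: the Puck suspends itself indefinitely, outside any list."""
    suspended[id(puck)] = puck

def reactivate(puck, cel):
    """REACTIVATE: another Puck (often after scanning a Set to find this
    one) returns the waiter to the Ready State on the events chain."""
    if suspended.pop(id(puck), None) is not None:
        cel.append(puck)
```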
8 WHY IT MATTERS
8.1 Overview
We now describe five situations that reveal some of the practical differences in implementation particulars among SIMAN, ProModel, GPSS/H, AutoMod, and SLX. These differences reflect differing implementation choices made by the software designers.
None of the alternative approaches mentioned in each subsection is either intrinsically “right” or “wrong.” The modeler simply must be aware of the alternative in effect in the simulation software being used and work with it to produce the desired outcome. (If a modeler is unaware of the alternative in effect, it is possible to mis-model a situation and perhaps not become aware of this fact.)
We finish the “why it matters” discussion with some comments on how knowledge of software internals is needed to make effective use of software checkout tools.
8.2 Trying to Re-capture a Resource Immediately
Suppose a part releases a machine, then immediately attempts to re-capture the machine. The modeler might, or might not, want a more highly qualified waiting part, if any, to be the next to capture the machine.
Of interest here is the order of events following the giving up of a server. There are at least three alternatives: (1) Coupled with the giving up of the server is the immediate choosing of the next user of the server, without the releasing entity having yet become a contender for the server. (2) The choosing of the next user of the server is deferred until the releasing entity has become a contender. (3) “Neither of the above;” that is, without paying heed to other contenders, the releasing entity recaptures the server immediately.
SIMAN implements (1) by default. ProModel implements (2). GPSS/H and AutoMod implement (3) by default. In SLX, using a low-level Control Variable as the resource state, the result is also (3). (However, developers could implement higher-level resource constructs in SLX that behave in any of the three ways.)
8.3 The First in Line is Still Delayed
Suppose two Condition-Delayed entities are waiting in a list because no units of a particular resource are idle. Suppose the first entity needs two units of the resource, whereas the second entity only needs one unit. Now assume that one unit of the resource becomes idle. The needs of the first list entry cannot yet be satisfied, but the needs of the second entity can. What will happen?
There are at least three possible alternatives: (1) Neither entity claims the idle resource unit. (2) The first entity claims the one idle resource unit and waits for a second unit. (3) The second entity claims the idle resource unit and goes on its way.
As in Section 8.2, each of these alternatives comes into play in the tools considered here. SIMAN (SEIZE) and ProModel (GET or USE) implement (1) and (2) respectively by default. AutoMod (GET or USE), GPSS/H (ENTER or TEST), and SLX (WAIT UNTIL on a Control Variable) implement (3) by default.
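As an illustration (our Python, not any tool’s code), alternative (3) amounts to scanning the whole delay list for waiters whose full demand fits; under alternative (1) the scan would simply stop at the head whenever its demand does not fit.

```python
def on_unit_freed_policy3(element, cel):
    """Alternative (3): scan the delay list front to back and promote
    every waiter whose full demand fits the currently idle units."""
    for entity in list(element.delay_list):
        if entity.units_wanted <= element.idle_units:
            element.idle_units -= entity.units_wanted
            element.delay_list.remove(entity)
            cel.append(entity)  # claims its units and goes on its way
```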
8.4 Yielding Control
Suppose the active entity wants to give control to one or more Ready-State entities, but then needs to become the active entity again before the simulation clock has been advanced. This might be useful, for example, if the active entity has opened a switch permitting a set of other entities to move past a point in the model, and then needs to re-close the switch after the forward movement has been accomplished. (Perhaps a group of identically-flavored cartons of ice cream is to be transferred from an accumulation point to a conveyor leading to a one-flavor-per-box packing operation.)
In SIMAN and AutoMod, the effect can be accomplished approximately with a DELAY (SIMAN) or WAIT FOR (AutoMod) that puts the active entity into a Time-Delayed State for an arbitrarily short but non-zero simulated time.
In ProModel, “WAIT 0” can be used to put the active entity back on the FEL. It will be returned later (at the same simulated time) by the CUP to the Active State.
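In terms of a heap-based FEL (see the sketch in Section 6.3), a zero-time wait simply re-schedules the active entity at the current clock value (illustrative Python):

```python
import heapq
from itertools import count

_seq = count()  # FIFO tie-breaker among equal move times

def wait_zero(fel, clock, entity):
    """Put the active entity back on the FEL at the current clock value;
    a CUP at the same simulated time later returns it to the CEL."""
    heapq.heappush(fel, (clock, next(_seq), entity))
```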
In GPSS/H, the active Transaction (“Xact”) can execute a YIELD (BUFFER) Block to shift from the Active State to the Ready State and restart the CEC scan. Higher-priority (and higher-ranked same-priority) Xacts on the CEC can then try to become active, one by one, before the control-yielding Xact itself again becomes active at the same simulated time. (A “PRIORITY PR,YIELD” Block can alternatively be used to reposition the just-active Xact behind equal-priority Xacts on the CEC prior to restarting the scan.)
In SLX there is also a YIELD statement. A normal YIELD shifts the active Puck to the back of its priority class on the CEC and picks up the next Puck. It is also possible to YIELD to a specific other Puck that is on the CEC, in which case the active Puck is not shifted.
8.5 Conditions Involving the Clock
If an entity needs to wait until a particular clock value has been reached, every language provides a time-based delay that uses FEL waiting. But what if an entity needs to wait for a compound condition involving the clock, such as “wait until my input buffer is empty or it is exactly 5:00 PM”?
A typical approach to this is to clone a dummy (“shadow”) entity to do the time-based waiting. Management of dummy entities can be cumbersome, particularly for very complex rules. ProModel has no polled waiting, so a dummy entity would be required.
If a single entity tries to wait on a compound condition involving the clock, other problems can arise. SIMAN and AutoMod detect the truth of these conditions through their end-of-EMP polling mechanisms. GPSS/H also detects the truth through its version of polled waiting (refusal-mode TEST). But in the absence of a clone that waits on the FEL until exactly 5:00 PM, all three of these tools are subject to the possibility that the first EMP that finds the condition true has a clock value greater than 5:00 PM.
SLX recognizes the clock as a related wait-until target. A WAIT UNTIL using a future clock value in a way that contributes to the falseness of the condition will cause the Puck to be scheduled onto the FEL to force an EMP at the precise time referenced. This solves the greater-than-the-desired-time problem. Note that this Puck may also be waiting on one or more delay lists.
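The idea can be sketched as follows (our Python, not SLX internals): the Puck is registered for related waiting on the relevant Control Variables and, because a future clock value contributes to the falseness of the condition, is also scheduled on the FEL at exactly that time.

```python
import heapq
from itertools import count

_seq = count()  # FIFO tie-breaker among equal move times

def wait_until_with_clock(puck, delay_lists, fel, clock, wake_time):
    """Register the puck on the delay lists of the Control Variables now
    making the condition false, and schedule a wake-up on the FEL at the
    exact clock value referenced, forcing an EMP at precisely that time."""
    for dl in delay_lists:
        dl.append(puck)
    if wake_time > clock:  # the clock term contributes to falseness
        heapq.heappush(fel, (wake_time, next(_seq), puck))
```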
8.6 Mixed-Mode Waiting
Suppose many entities are waiting to capture a particular resource, while a user-defined controller entity is waiting for the condition “shift status is off-shift and number waiting is less than six and resource is not currently in use” to take some action (such as shutting the resource down, in languages that allow user-defined entities to shut down resources, or printing a status message). How can we guarantee that the controller will be able to cut in front of the waiting entities at the appropriate instant (before the resource is recaptured)?
One way to handle this would be through entity priorities, in languages that offer this mechanism. However, as described below, that may not work even if the controller has higher priority than any other entity.
The key issue is the method used to implement the waiting. If it is related for the entities and polled for the compound condition, things can get complicated. (This is what we mean by the term “mixed-mode waiting.”) Every time the resource comes free, a new entity will be selected from a delay list immediately in SIMAN and via the CEL in AutoMod, in both cases preceding the end-of-EMP checking for polled wait conditions (and thereby ignoring the entity priority of the controller). There are many ways to work around this if desired, such as using a different type of operation to force a polled wait for entities wishing to use the resource.
In GPSS/H, using a high-priority controller Transaction at a refusal-mode TEST Block, the controller waits at the front of the CEC. The RELEASE of the Facility will trigger a scan restart and the controller will do its job.
In ProModel there is no polled waiting but there can be related waiting on compound conditions involving Variables. Variables would have to be defined and manipulated for each element of the Boolean condition and, to assure equal competition, the entities might also have to use WAIT UNTIL instead of GET or USE. Another possibility with ProModel would be to have the entity that frees the resource do some state-checking right away (in effect becoming a surrogate for the controller). This is possible because of the deferred-selection method used by ProModel (see Section 8.2).
In the related waiting of SLX, a Puck awaiting a compound condition will be registered on the delay lists of those (and only those) Control Variables that are contributing to the falseness of the condition at the time it is evaluated. The SLX architecture (in which only global or local Control Variables and the clock can be referenced in any sort of conditional wait at the lowest level) assures that there will already be Variables underlying the state changes being monitored. The model developer needs only to be sure they are defined as Control Variables.
8.7 Interactive Model Verification
We now comment briefly on why a detailed understanding of “how simulation software works” supports interactive probing of simulation-model behavior.
In general, simulation models can be run interactively or in batch mode. Interactive runs are of use in checking out (verifying) model logic during model-building and
in troubleshooting a model when execution errors occur. Batch mode is then used to make production runs.
Interactive runs put a magnifying glass on a simulation model while it executes. The modeler can follow the active entity step by step and display the current and future events lists and the delay and user-managed lists as well as other aspects of the model. These activities yield valuable insights into model behavior for the modeler who knows the underlying concepts. Without such knowledge, the modeler might not take full advantage of the interactive tools provided by the software or, worse yet, might even avoid using the tools.
ACKNOWLEDGMENTS
Much of the information in this paper was derived from conversations with software-vendor personnel. The authors gratefully acknowledge the support provided by David T. Sturrock, Deborah A. Sadowski, C. Dennis Pegden and Vivek Bapat, all of Systems Modeling Corporation; Charles Harrell of ProModel Corporation; Kenneth Farnsworth and Tyler Phillips, of AutoSimulations, Inc.; and Robert C. Crain and James O. Henriksen, of Wolverine Software Corporation.
REFERENCES
Swain, J. J. 1995. Simulation survey: Tools for process understanding and improvement. OR/MS Today, August 1995, 64-79. Baltimore, Maryland: INFORMS.
AUTHOR BIOGRAPHIES
DANIEL T. BRUNNER is President of Systemflow Simulations, Inc., a services firm active in manufacturing, material handling, distribution, transportation, health care, computer systems, and mining. He received a B.S.E.E. from Purdue University and an MBA from The University of Michigan. He has served as Winter Simulation Conference Publicity Chair (1988), Business Chair (1992), and General Chair (1996). He is a member of IIE and SCS.
THOMAS J. SCHRIBER is a Professor of Computer and Information Systems at The University of Michigan. He is a Fellow of the Institute of Decision Sciences and is the 1996 recipient of the INFORMS College of Simulation Distinguished Service Award. He teaches modeling and decision analysis and discrete-event simulation in Michigan’s MBA program while doing research and consulting in simulation. He is a member of ASIM (the German-language simulation society), DSI, IIE, and INFORMS.
**DESCRIPTION**
*Information and Software Technology* is the international archival journal focusing on research and experience that contributes to the improvement of *software development* practices. The journal's scope includes methods and techniques to better engineer software and manage its development. Articles submitted for review should have a clear component of *software engineering* or address ways to improve the engineering and management of *software* development. Areas covered by the journal include:
- Software management, quality and metrics,
- Software processes,
- Software architecture, modelling, specification, design and programming
- Functional and non-functional software requirements
- Software testing and verification & validation
- Empirical studies of all aspects of engineering and managing software development
*Short Communications* is a new section dedicated to short papers addressing new ideas, controversial opinions, "Negative" results and much more. Read the Guide for authors for more information.
The journal encourages and welcomes submissions of *systematic literature studies* (reviews and maps) within the scope of the journal. *Information and Software Technology* is the premier outlet for systematic literature studies in software engineering. Guidelines for conducting systematic reviews are provided on the journal's website.
*Special Issues and Special Sections proposals*
To submit a proposal for a special issue (original contributions on a topic within the scope of the journal) or a special section with extended papers from a conference or workshop within the scope of the journal, please contact the Special Content Editor, Prof. C. Wohlin (claes.wohlin@bth.se).
**Benefits to authors**
We also provide many author benefits, such as free PDFs, a liberal copyright policy, special discounts on Elsevier publications and much more. Please click here for more information on our author services.
Please see our Guide for Authors for information on article submission. If you require any further information or help, please visit our Support Center.
AUDIENCE
Software project managers, management information systems managers, information centre managers, software engineers and developers in industry and commercial organizations, software and systems houses, total solution vendors, academics.
IMPACT FACTOR
2017: 2.627 © Clarivate Analytics Journal Citation Reports 2018
ABSTRACTING AND INDEXING
SciSearch
Ergonomics Abstracts
INSPEC Computer and Control Abstracts
IT-Digest
Science Citation Index Expanded
ACM Guide to Computing Literature
Applied Science and Technology Index
Computer Literature Index
CompuScience
Current Contents
Deadline Newsletter
Engineering Index
Research Alert
Scopus
EDITORIAL BOARD
Editor-in-Chief
Günther Ruhe, Dept. of Computer Science, University of Calgary, 2500 University Drive NW, Calgary, T2N 1N4, Alberta, Canada
Special Content Editor
Jeffrey Carver, University of Alabama, Tuscaloosa, Alabama, USA
Associate Editors
Laurie Williams, Dept. of Computer Science, North Carolina State University, Engineering Building II, Raleigh, NC 27695-8206, USA, Fax: 919-515-7896
Guilherme Horta Travassos, Centro de Tecnologia, Federal University of Rio de Janeiro, Cidade Universitária, CEP 21945-970, Rio de Janeiro, Brazil
Tracy Hall, Professor of Software Engineering, School of Computing & Communications, Lancaster University, B40, InfoLab21, Lancaster, England, UK
Emeritus Editor
Claes Wohlin, Department of Software Engineering, Blekinge Institute of Technology, 37179, Karlskrona, Sweden
Editorial Board
Bram Adams, Polytechnique Montreal, Montreal, Quebec, Canada
Christian Bird, Microsoft Research, Washington, USA
Sjaak Brinkkemper, Universiteit Utrecht, Utrecht, Netherlands
Yuanfang Cai, Drexel University, Philadelphia, Pennsylvania, USA
Ivica Crnkovic, Chalmers University of Technology, Goteborg, Sweden
Maya Daneva, University of Twente, Enschede, Netherlands
Tore Dybå, SINTEF, Trondheim, Norway
Sebastian Elbaum, University of Nebraska at Lincoln, Lincoln, Nebraska, USA
Xavier Franch, Universitat Politècnica de Catalunya (UPC), Barcelona, Spain
Sudipto Ghosh, Colorado State University, Fort Collins, Colorado, USA
Paul Grünbacher, Johannes Kepler University Linz, Linz, Austria
Mark Harman, Facebook, London, UK
Rachel Harrison, Oxford Brookes University, Oxford, UK
Miryung Kim, University of California at Los Angeles (UCLA), Los Angeles, California, USA
Mario Linares Vásquez, Universidad de Los Andes, Bogotá, Colombia
David Lo, Singapore Management University, Singapore
Stephen MacDonell, University of Otago, Dunedin, New Zealand
Daniel Méndez Fernandez, Technische Universität München, Garching, Germany
Tim Menzies, North Carolina State University, Raleigh, North Carolina, USA
James Miller, University of Alberta, Edmonton, Alberta, Canada
Ipek Ozkaya, Carnegie Mellon University, Pittsburgh, Pennsylvania, USA
Dietmar Pfahl, University of Tartu, Tartu, Estonia
Brian Robinson, ABB Corporate Research, Raleigh, North Carolina, USA
Klaus Schmid, University of Hildesheim, Hildesheim, Germany
Carolyn Seaman, University of Maryland, Baltimore County (UMBC), Baltimore, Maryland, USA
Fabio Queda Bueno da Silva, Universidade Federal de Pernambuco (UFPE), Recife, Brazil
Miroslaw Staron, Göteborgs Universitet, Göteborg, Sweden
Qing Wang, Chinese Academy of Sciences (CAS), Beijing, China
Publicity Co-Chairs
Daniel Méndez Fernandez, Technical University of Munich, Garching, Germany
Maleknaz Nayebi, Ecole Polytechnique Montreal, Montreal, Quebec, Canada
GUIDE FOR AUTHORS
Your Paper Your Way
We now differentiate between the requirements for new and revised submissions. You may choose to submit your manuscript as a single Word or PDF file to be used in the refereeing process. Only when your paper is at the revision stage will you be requested to put your paper into a 'correct format' for acceptance and provide the items required for the publication of your article.
To find out more, please visit the Preparation section below.
INTRODUCTION
Original high-quality research and review papers falling within the Aims and Scope of the journal will be considered for publication. Contributions are normally received with the understanding that they comprise original, unpublished material and are not being submitted for publication elsewhere. Translated material, which has not been published in English, will also be considered.
Types of Paper
Research Papers, Short Communications and Review Articles. We also actively encourage the submission of Systematic Review Articles.
Submission checklist
You can use this list to carry out a final check of your submission before you send it to the journal for review. Please check the relevant section in this Guide for Authors for more details.
Ensure that the following items are present:
One author has been designated as the corresponding author with contact details:
• E-mail address
• Full postal address
All necessary files have been uploaded:
Manuscript:
• Include keywords
• All figures (include relevant captions)
• All tables (including titles, description, footnotes)
• Ensure all figure and table citations in the text match the files provided
• Indicate clearly if color should be used for any figures in print
Graphical Abstracts / Highlights files (where applicable)
Supplemental files (where applicable)
Further considerations
• Manuscript has been 'spell checked' and 'grammar checked'
• All references mentioned in the Reference List are cited in the text, and vice versa
• Permission has been obtained for use of copyrighted material from other sources (including the Internet)
• A competing interests statement is provided, even if the authors have no competing interests to declare
• Journal policies detailed in this guide have been reviewed
• Referee suggestions and contact details provided, based on journal requirements
For further information, visit our Support Center.
BEFORE YOU BEGIN
Ethics in publishing
Please see our information pages on Ethics in publishing and Ethical guidelines for journal publication.
Declaration of interest
All authors must disclose any financial and personal relationships with other people or organizations that could inappropriately influence (bias) their work. Examples of potential conflicts of interest include employment, consultancies, stock ownership, honoraria, paid expert testimony, patent applications/registrations, and grants or other funding. Authors should complete the declaration of interest statement using this template and upload it to the submission system at the Attach/Upload Files step. If there are no interests to declare, please choose 'Declarations of interest: none' in the template. This statement will be published within the article if accepted.
Submission declaration and verification
Submission of an article implies that the work described has not been published previously (except in the form of an abstract, a published lecture or academic thesis; see 'Multiple, redundant or concurrent publication' for more information), that it is not under consideration for publication elsewhere, that its publication is approved by all authors and tacitly or explicitly by the responsible authorities where the work was carried out, and that, if accepted, it will not be published elsewhere in the same form, in English or in any other language, including electronically, without the written consent of the copyright holder. To verify originality, your article may be checked by the originality detection service Crossref Similarity Check.
Preprints
Please note that preprints can be shared anywhere at any time, in line with Elsevier's sharing policy. Sharing your preprints, e.g. on a preprint server, will not count as prior publication (see 'Multiple, redundant or concurrent publication' for more information).
Use of inclusive language
Inclusive language acknowledges diversity, conveys respect to all people, is sensitive to differences, and promotes equal opportunities. Articles should make no assumptions about the beliefs or commitments of any reader, should contain nothing which might imply that one individual is superior to another on the grounds of race, sex, culture or any other characteristic, and should use inclusive language throughout. Authors should ensure that writing is free from bias, for instance by using 'he or she' or 'his/her' instead of 'he' or 'his', and by making use of job titles that are free of stereotyping (e.g. 'chairperson' instead of 'chairman' and 'flight attendant' instead of 'stewardess').
Changes to authorship
Authors are expected to consider carefully the list and order of authors before submitting their manuscript and provide the definitive list of authors at the time of the original submission. Any addition, deletion or rearrangement of author names in the authorship list should be made only before the manuscript has been accepted and only if approved by the journal Editor. To request such a change, the Editor must receive the following from the corresponding author: (a) the reason for the change in author list and (b) written confirmation (e-mail, letter) from all authors that they agree with the addition, removal or rearrangement. In the case of addition or removal of authors, this includes confirmation from the author being added or removed.
Only in exceptional circumstances will the Editor consider the addition, deletion or rearrangement of authors after the manuscript has been accepted. While the Editor considers the request, publication of the manuscript will be suspended. If the manuscript has already been published in an online issue, any requests approved by the Editor will result in a corrigendum.
Copyright
Upon acceptance of an article, authors will be asked to complete a 'Journal Publishing Agreement' (see more information on this). An e-mail will be sent to the corresponding author confirming receipt of the manuscript together with a 'Journal Publishing Agreement' form or a link to the online version of this agreement.
Subscribers may reproduce tables of contents or prepare lists of articles including abstracts for internal circulation within their institutions. Permission of the Publisher is required for resale or distribution outside the institution and for all other derivative works, including compilations and translations. If excerpts from other copyrighted works are included, the author(s) must obtain written permission from the copyright owners and credit the source(s) in the article. Elsevier has preprinted forms for use by authors in these cases.
For gold open access articles: Upon acceptance of an article, authors will be asked to complete an 'Exclusive License Agreement' (more information). Permitted third party reuse of gold open access articles is determined by the author's choice of user license.
Author rights
As an author you (or your employer or institution) have certain rights to reuse your work.
Elsevier supports responsible sharing
Find out how you can share your research published in Elsevier journals.
**Role of the funding source**
You are requested to identify who provided financial support for the conduct of the research and/or preparation of the article and to briefly describe the role of the sponsor(s), if any, in study design; in the collection, analysis and interpretation of data; in the writing of the report; and in the decision to submit the article for publication. If the funding source(s) had no such involvement then this should be stated.
**Funding body agreements and policies**
Elsevier has established a number of agreements with funding bodies which allow authors to comply with their funder's open access policies. Some funding bodies will reimburse the author for the gold open access publication fee. Details of existing agreements are available online.
**Open access**
This journal offers authors a choice in publishing their research:
**Subscription**
- Articles are made available to subscribers as well as developing countries and patient groups through our universal access programs.
- No open access publication fee payable by authors.
- The Author is entitled to post the accepted manuscript in their institution's repository and make this public after an embargo period (known as green Open Access). The published journal article cannot be shared publicly, for example on ResearchGate or Academia.edu, to ensure the sustainability of peer-reviewed research in journal publications. The embargo period for this journal can be found below.
**Gold open access**
- Articles are freely available to both subscribers and the wider public with permitted reuse.
- A gold open access publication fee is payable by authors or on their behalf, e.g. by their research funder or institution.
Regardless of how you choose to publish your article, the journal will apply the same peer review criteria and acceptance standards.
For gold open access articles, permitted third party (re)use is defined by the following Creative Commons user licenses:
**Creative Commons Attribution (CC BY)**
Lets others distribute and copy the article, create extracts, abstracts, and other revised versions, adaptations or derivative works of or from an article (such as a translation), include in a collective work (such as an anthology), text or data mine the article, even for commercial purposes, as long as they credit the author(s), do not represent the author as endorsing their adaptation of the article, and do not modify the article in such a way as to damage the author's honor or reputation.
**Creative Commons Attribution-NonCommercial-NoDerivs (CC BY-NC-ND)**
For non-commercial purposes, lets others distribute and copy the article, and to include in a collective work (such as an anthology), as long as they credit the author(s) and provided they do not alter or modify the article.
The gold open access publication fee for this journal is **USD 2550**, excluding taxes. Learn more about Elsevier's pricing policy: https://www.elsevier.com/openaccesspricing.
**Green open access**
Authors can share their research in a variety of different ways and Elsevier has a number of green open access options available. We recommend authors see our open access page for further information. Authors can also self-archive their manuscripts immediately and enable public access from their institution's repository after an embargo period. This is the version that has been accepted for publication and which typically includes author-incorporated changes suggested during submission, peer review and in editor-author communications. Embargo period: For subscription articles, an appropriate amount of time is needed for journals to deliver value to subscribing customers before an article becomes freely available to the public. This is the embargo period and it begins from the date the article is formally published online in its final and fully citable form. Find out more.
This journal has an embargo period of 24 months.
Elsevier Researcher Academy
Researcher Academy is a free e-learning platform designed to support early and mid-career researchers throughout their research journey. The "Learn" environment at Researcher Academy offers several interactive modules, webinars, downloadable guides and resources to guide you through the process of writing for research and going through peer review. Feel free to use these free resources to improve your submission and navigate the publication process with ease.
Language (usage and editing services)
Please write your text in good English (American or British usage is accepted, but not a mixture of these). Authors who feel their English language manuscript may require editing to eliminate possible grammatical or spelling errors and to conform to correct scientific English may wish to use the English Language Editing service available from Elsevier's WebShop.
Submission
Our online submission system guides you stepwise through the process of entering your article details and uploading your files. The system converts your article files to a single PDF file used in the peer-review process. Editable files (e.g., Word, LaTeX) are required to typeset your article for final publication. All correspondence, including notification of the Editor's decision and requests for revision, is sent by e-mail.
Please also note that the maximum length for a research paper is 15,000 words, with the exception of systematic literature review or systematic mapping studies, where the maximum length is 20,000 words. Also note that figures and tables count 200 words each. Manuscripts longer than the respective limits will be sent back to the authors.
PREPARATION
NEW SUBMISSIONS
Submission to this journal proceeds totally online and you will be guided stepwise through the creation and uploading of your files. The system automatically converts your files to a single PDF file, which is used in the peer-review process.
As part of the Your Paper Your Way service, you may choose to submit your manuscript as a single file to be used in the refereeing process. This can be a PDF file or a Word document, in any format or layout that can be used by referees to evaluate your manuscript. It should contain high enough quality figures for refereeing. If you prefer to do so, you may still provide all or some of the source files at the initial submission. Please note that individual figure files larger than 10 MB must be uploaded separately.
References
There are no strict requirements on reference formatting at submission. References can be in any style or format as long as the style is consistent. Where applicable, author(s) name(s), journal title/book title, chapter title/article title, year of publication, volume number/book chapter and the article number or pagination must be present. Use of DOI is highly encouraged. The reference style used by the journal will be applied to the accepted article by Elsevier at the proof stage. Note that missing data will be highlighted at proof stage for the author to correct.
Formatting requirements
There are no strict formatting requirements but all manuscripts must contain the essential elements needed to convey your manuscript, for example Structured Abstract, Keywords, Introduction, Materials and Methods, Results, Conclusions, Artwork and Tables with Captions.
If your article includes any Videos and/or other Supplementary material, this should be included in your initial submission for peer review purposes.
Divide the article into clearly defined sections.
Figures and tables embedded in text
Please ensure the figures and the tables included in the single file are placed next to the relevant text in the manuscript, rather than at the bottom or the top of the file. The corresponding caption should be placed directly below the figure or table.
SHORT COMMUNICATIONS
Short communications at IST are a means to quickly disseminate novel and impactful results. Short Communications have a limit of 2,500 words in length (approx. 4 pages; figures and tables count 200 words each) and must have no more than 10 references.
To meet a vital need to rapidly disseminate current scientific findings, short communications will be reviewed using a streamlined process. Papers are peer reviewed and (1) accepted as written or (2) rejected within four (4) weeks of submission. Minor revisions are allowed when an accept decision is likely. The review and decision process will primarily focus on (i) novelty, (ii) technical soundness, (iii) expected impact on the state of the art, and (iv) overall presentation and readability.
**Peer review**
This journal operates a single blind review process. All contributions will be initially assessed by the editor for suitability for the journal. Papers deemed suitable are then typically sent to a minimum of two independent expert reviewers to assess the scientific quality of the paper. The Editor is responsible for the final decision regarding acceptance or rejection of articles. The Editor's decision is final. More information on types of peer review.
**REVISED SUBMISSIONS**
**Use of word processing software**
Regardless of the file format of the original submission, at revision you must provide us with an editable file of the entire article. Keep the layout of the text as simple as possible. Most formatting codes will be removed and replaced on processing the article. The electronic text should be prepared in a way very similar to that of conventional manuscripts (see also the Guide to Publishing with Elsevier). See also the section on Electronic artwork.
To avoid unnecessary errors you are strongly advised to use the 'spell-check' and 'grammar-check' functions of your word processor.
**LaTeX**
You are recommended to use the Elsevier article class elsarticle.cls to prepare your manuscript and BibTeX to generate your bibliography. Our LaTeX site has detailed submission instructions, templates and other information.
**Article structure**
**Subdivision - numbered sections**
Divide your article into clearly defined and numbered sections. Subsections should be numbered 1.1 (then 1.1.1, 1.1.2, ...), 1.2, etc. (the abstract is not included in section numbering). Use this numbering also for internal cross-referencing: do not just refer to 'the text'. Any subsection may be given a brief heading. Each heading should appear on its own separate line.
**Essential title page information**
- **Title.** Concise and informative. Titles are often used in information-retrieval systems. Avoid abbreviations and formulae where possible.
- **Author names and affiliations.** Please clearly indicate the given name(s) and family name(s) of each author and check that all names are accurately spelled. You can add your name between parentheses in your own script behind the English transliteration. Present the authors' affiliation addresses (where the actual work was done) below the names. Indicate all affiliations with a lower-case superscript letter immediately after the author's name and in front of the appropriate address. Provide the full postal address of each affiliation, including the country name and, if available, the e-mail address of each author.
- **Corresponding author.** Clearly indicate who will handle correspondence at all stages of refereeing and publication, also post-publication. This responsibility includes answering any future queries about Methodology and Materials. Ensure that the e-mail address is given and that contact details are kept up to date by the corresponding author.
- **Present/permanent address.** If an author has moved since the work described in the article was done, or was visiting at the time, a 'Present address' (or 'Permanent address') may be indicated as a footnote to that author's name. The address at which the author actually did the work must be retained as the main, affiliation address. Superscript Arabic numerals are used for such footnotes.
**Highlights**
Highlights are a short collection of bullet points that convey the core findings of the article. Highlights are optional and should be submitted in a separate editable file in the online submission system. Please use 'Highlights' in the file name and include 3 to 5 bullet points (maximum 85 characters, including spaces, per bullet point). You can view example Highlights on our information site.
Abstract
A concise and factual abstract is required of no more than 300 words, including headings. To support this, the journal began using "structured abstracts" on July 1, 2009. A structured abstract should contain the following headings (as in-line or run-in headings in bold): Context, Objective, Method, Results and Conclusions. An abstract is often presented separately from the article, so it must be able to stand alone. For this reason, references should be avoided, but if essential, then cite the author(s) and year(s). Also, non-standard or uncommon abbreviations should be avoided, but if essential they must be defined at their first mention in the abstract itself. Please see below for an example of a structured abstract:
Context: Throughout an organisation, people have different responsibilities and work tasks; hence, it is probable that different roles have different priorities when it comes to what should be improved within a company. This has been found in previous studies in marketing, but is this true for software improvement as well?
Objective: This paper evaluates how different roles in a software development organization view different issues in software process improvement and if such differences could be used in order to provide more tailor-made process improvements within an organization and uses this as a working hypothesis.
Method: A quantitative questionnaire containing five different weighted questions related to software process improvement was developed. 84 employees from all levels of a Swedish telecommunication company were then approached, of which 63 responded.
Results: The different roles disagreed in three of the questions while they agreed in two of the questions. The disagreement was related to issues about importance of improvement, urgency of problems, and threat against successful process management, while the questions where the roles agreed focused on communication of the processes (documentation and teaching).
Conclusion: It is concluded that it is important to be aware and take into account the different needs of different roles. This will make it possible to provide improvements tailored to specific roles which will probably help to overcome resistance to process improvements. It is also important to look into other areas and companies (for example, marketing) where it could be beneficial when conducting process improvements.
Graphical abstract
Although a graphical abstract is optional, its use is encouraged as it draws more attention to the online article. The graphical abstract should summarize the contents of the article in a concise, pictorial form designed to capture the attention of a wide readership. Graphical abstracts should be submitted as a separate file in the online submission system. Image size: Please provide an image with a minimum of 531 × 1328 pixels (h × w) or proportionally more. The image should be readable at a size of 5 × 13 cm using a regular screen resolution of 96 dpi. Preferred file types: TIFF, EPS, PDF or MS Office files. You can view Example Graphical Abstracts on our information site. Authors can make use of Elsevier's Illustration Services to ensure the best presentation of their images and in accordance with all technical requirements.
Keywords
Immediately after the abstract, provide a maximum of 6 keywords, using British spelling and avoiding general and plural terms and multiple concepts (avoid, for example, 'and', 'of'). Be sparing with abbreviations: only abbreviations firmly established in the field may be eligible. These keywords will be used for indexing purposes.
Abbreviations
Define abbreviations that are not standard in this field in a footnote to be placed on the first page of the article. Such abbreviations that are unavoidable in the abstract must be defined at their first mention there, as well as in the footnote. Ensure consistency of abbreviations throughout the article.
Acknowledgements
Collate acknowledgements in a separate section at the end of the article before the references and do not, therefore, include them on the title page, as a footnote to the title or otherwise. List here those individuals who provided help during the research (e.g., providing language help, writing assistance or proof reading the article, etc.).
Formatting of funding sources
List funding sources in this standard way to facilitate compliance with funders' requirements:
Funding: This work was supported by the National Institutes of Health [grant numbers xxxx, yyyy]; the Bill & Melinda Gates Foundation, Seattle, WA [grant number zzzz]; and the United States Institutes of Peace [grant number aaaa].
It is not necessary to include detailed descriptions on the program or type of grants and awards. When funding is from a block grant or other resources available to a university, college, or other research institution, submit the name of the institute or organization that provided the funding.
If no funding has been provided for the research, please include the following sentence:
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Nomenclature and Units
All measurements and data should be given in SI units or, if SI units do not exist, in an internationally accepted unit. If you use any symbol or unit that is not generally recognised, please include an explanation the first time it is used.
Math formulae
Please submit math equations as editable text and not as images. Present simple formulae in line with normal text where possible and use the solidus (/) instead of a horizontal line for small fractional terms, e.g., X/Y. In principle, variables are to be presented in italics. Powers of e are often more conveniently denoted by exp. Number consecutively any equations that have to be displayed separately from the text (if referred to explicitly in the text).
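For instance (our own illustration of the rules above): in running text write X/Y and exp(−x²/2) rather than a stacked fraction or a complicated power of e, and reserve displayed, numbered equations, such as

$$\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i \quad (1)$$

for formulae that are referred to explicitly in the text.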
Footnotes
Footnotes should be used sparingly. Number them consecutively throughout the article. Many word processors build footnotes into the text, and this feature may be used. Should this not be the case, indicate the position of footnotes in the text and present the footnotes themselves separately at the end of the article.
Artwork
Electronic artwork
General points
• Make sure you use uniform lettering and sizing of your original artwork.
• Preferred fonts: Arial (or Helvetica), Times New Roman (or Times), Symbol, Courier.
• Number the illustrations according to their sequence in the text.
• Use a logical naming convention for your artwork files.
• Indicate per figure if it is a single, 1.5 or 2-column fitting image.
• For Word submissions only, you may still provide figures and their captions, and tables within a single file at the revision stage.
• Please note that individual figure files larger than 10 MB must be provided in separate source files. A detailed guide on electronic artwork is available.
You are urged to visit this site; some excerpts from the detailed information are given here.
Formats
Regardless of the application used, when your electronic artwork is finalized, please 'save as' or convert the images to one of the following formats (note the resolution requirements for line drawings, halftones, and line/halftone combinations given below):
EPS (or PDF): Vector drawings. Embed the font or save the text as 'graphics'.
TIFF (or JPEG): Color or grayscale photographs (halftones): always use a minimum of 300 dpi.
TIFF (or JPEG): Bitmapped line drawings: use a minimum of 1000 dpi.
TIFF (or JPEG): Combinations bitmapped line/half-tone (color or grayscale): a minimum of 500 dpi is required.
Please do not:
• Supply files that are optimized for screen use (e.g., GIF, BMP, PICT, WPG); the resolution is too low.
• Supply files that are too low in resolution.
• Submit graphics that are disproportionately large for the content.
Color artwork
Please make sure that artwork files are in an acceptable format (TIFF (or JPEG), EPS (or PDF), or MS Office files) and with the correct resolution. If, together with your accepted article, you submit usable color figures then Elsevier will ensure, at no additional charge, that these figures will appear in color online (e.g., ScienceDirect and other sites) regardless of whether or not these illustrations are reproduced in color in the printed version. For color reproduction in print, you will receive information regarding the costs from Elsevier after receipt of your accepted article. Please indicate your preference for color: in print or online only. Further information on the preparation of electronic artwork is available.
Figure captions
Ensure that each illustration has a caption. A caption should comprise a brief title (not on the figure itself) and a description of the illustration. Keep text in the illustrations themselves to a minimum but explain all symbols and abbreviations used.
Tables
Please submit tables as editable text and not as images. Tables can be placed either next to the relevant text in the article, or on separate page(s) at the end. Number tables consecutively in accordance with their appearance in the text and place any table notes below the table body. Be sparing in the use of tables and ensure that the data presented in them do not duplicate results described elsewhere in the article. Please avoid using vertical rules and shading in table cells.
References
Citation in text
Please ensure that every reference cited in the text is also present in the reference list (and vice versa). Any references cited in the abstract must be given in full. Unpublished results and personal communications are not recommended in the reference list, but may be mentioned in the text. If these references are included in the reference list they should follow the standard reference style of the journal and should include a substitution of the publication date with either 'Unpublished results' or 'Personal communication'. Citation of a reference as 'in press' implies that the item has been accepted for publication.
Reference links
Increased discoverability of research and high quality peer review are ensured by online links to the sources cited. In order to allow us to create links to abstracting and indexing services, such as Scopus, CrossRef and PubMed, please ensure that data provided in the references are correct. Please note that incorrect surnames, journal/book titles, publication year and pagination may prevent link creation. When copying references, please be careful as they may already contain errors. Use of the DOI is highly encouraged.
A DOI is guaranteed never to change, so you can use it as a permanent link to any electronic article. An example of a citation using DOI for an article not yet in an issue is: VanDecar J.C., Russo R.M., James D.E., Ambeh W.B., Franke M. (2003). Aseismic continuation of the Lesser Antilles slab beneath northeastern Venezuela. Journal of Geophysical Research, https://doi.org/10.1029/2001JB000884. Please note the format of such citations should be in the same style as all other references in the paper.
Web references
As a minimum, the full URL should be given and the date when the reference was last accessed. Any further information, if known (DOI, author names, dates, reference to a source publication, etc.), should also be given. Web references can be listed separately (e.g., after the reference list) under a different heading if desired, or can be included in the reference list.
Data references
This journal encourages you to cite underlying or relevant datasets in your manuscript by citing them in your text and including a data reference in your Reference List. Data references should include the following elements: author name(s), dataset title, data repository, version (where available), year, and global persistent identifier. Add [dataset] immediately before the reference so we can properly identify it as a data reference. The [dataset] identifier will not appear in your published article.
References in a special issue
Please ensure that the words 'this issue' are added to any references in the list (and any citations in the text) to other articles in the same Special Issue.
Reference management software
Most Elsevier journals have their reference template available in many of the most popular reference management software products. These include all products that support Citation Style Language styles, such as Mendeley. Using citation plug-ins from these products, authors only need to select the appropriate journal template when preparing their article, after which citations and bibliographies will be automatically formatted in the journal's style. If no template is yet available for this journal, please follow the format of the sample references and citations as shown in this Guide. If you use reference management software, please ensure that you remove all field codes before submitting the electronic manuscript. More information is available on how to remove field codes from different reference management software products.
Users of Mendeley Desktop can easily install the reference style for this journal by clicking the following link:
http://open.mendeley.com/use-citation-style/information-and-software-technology
When preparing your manuscript, you will then be able to select this style using the Mendeley plug-ins for Microsoft Word or LibreOffice.
Reference formatting
There are no strict requirements on reference formatting at submission. References can be in any style or format as long as the style is consistent. Where applicable, author(s) name(s), journal title/book title, chapter title/article title, year of publication, volume number/book chapter and the article number or pagination must be present. Use of DOI is highly encouraged. The reference style used by the journal will be applied to the accepted article by Elsevier at the proof stage. Note that missing data will be highlighted at proof stage for the author to correct. If you do wish to format the references yourself they should be arranged according to the following examples:
Reference style
Text: Indicate references by number(s) in square brackets in line with the text. The actual authors can be referred to, but the reference number(s) must always be given.
Example: '..... as demonstrated [3,6]. Barnaby and Jones [8] obtained a different result ....'
List: Number the references (numbers in square brackets) in the list in the order in which they appear in the text.
Examples:
Reference to a journal publication:
Reference to a journal publication with an article number:
Reference to a book:
Reference to a chapter in an edited book:
Reference to a website:
Reference to a dataset:
Video
Elsevier accepts video material and animation sequences to support and enhance your scientific research. Authors who have video or animation files that they wish to submit with their article are strongly encouraged to include links to these within the body of the article. This can be done in the same way as a figure or table by referring to the video or animation content and noting in the body text where it should be placed. All submitted files should be properly labeled so that they directly relate to the video file's content. In order to ensure that your video or animation material is directly
usable, please provide the file in one of our recommended file formats with a preferred maximum size of 150 MB per file, 1 GB in total. Video and animation files supplied will be published online in the electronic version of your article in Elsevier Web products, including ScienceDirect. Please supply 'stills' with your files: you can choose any frame from the video or animation or make a separate image. These will be used instead of standard icons and will personalize the link to your video data. For more detailed instructions please visit our video instruction pages. Note: since video and animation cannot be embedded in the print version of the journal, please provide text for both the electronic and the print version for the portions of the article that refer to this content.
Data visualization
Include interactive data visualizations in your publication and let your readers interact and engage more closely with your research. Follow the instructions here to find out about available data visualization options and how to include them with your article.
Supplementary material
Supplementary material such as applications, images and sound clips, can be published with your article to enhance it. Submitted supplementary items are published exactly as they are received (Excel or PowerPoint files will appear as such online). Please submit your material together with the article and supply a concise, descriptive caption for each supplementary file. If you wish to make changes to supplementary material during any stage of the process, please make sure to provide an updated file. Do not annotate any corrections on a previous version. Please switch off the 'Track Changes' option in Microsoft Office files as these will appear in the published version.
Research data
This journal encourages and enables you to share data that supports your research publication where appropriate, and enables you to interlink the data with your published articles. Research data refers to the results of observations or experimentation that validate research findings. To facilitate reproducibility and data reuse, this journal also encourages you to share your software, code, models, algorithms, protocols, methods and other useful materials related to the project.
Below are a number of ways in which you can associate data with your article or make a statement about the availability of your data when submitting your manuscript. If you are sharing data in one of these ways, you are encouraged to cite the data in your manuscript and reference list. Please refer to the "References" section for more information about data citation. For more information on depositing, sharing and using research data and other relevant research materials, visit the research data page.
Data linking
If you have made your research data available in a data repository, you can link your article directly to the dataset. Elsevier collaborates with a number of repositories to link articles on ScienceDirect with relevant repositories, giving readers access to underlying data that gives them a better understanding of the research described.
There are different ways to link your datasets to your article. When available, you can directly link your dataset to your article by providing the relevant information in the submission system. For more information, visit the database linking page.
For supported data repositories a repository banner will automatically appear next to your published article on ScienceDirect.
In addition, you can link to relevant data or entities through identifiers within the text of your manuscript, using the following format: Database: xxxx (e.g., TAIR: AT1G01020; CCDC: 734053; PDB: 1XFN).
Mendeley Data
This journal supports Mendeley Data, enabling you to deposit any research data (including raw and processed data, video, code, software, algorithms, protocols, and methods) associated with your manuscript in a free-to-use, open access repository. During the submission process, after uploading your manuscript, you will have the opportunity to upload your relevant datasets directly to Mendeley Data. The datasets will be listed and directly accessible to readers next to your published article online.
For more information, visit the Mendeley Data for journals page.
**Data in Brief**
You have the option of converting any or all parts of your supplementary or additional raw data into one or multiple data articles, a new kind of article that houses and describes your data. Data articles ensure that your data is actively reviewed, curated, formatted, indexed, given a DOI and publicly available to all upon publication. You are encouraged to submit your article for *Data in Brief* as an additional item directly alongside the revised version of your manuscript. If your research article is accepted, your data article will automatically be transferred over to *Data in Brief* where it will be editorially reviewed and published in the open access data journal, *Data in Brief*. Please note an open access fee of 500 USD is payable for publication in *Data in Brief*. Full details can be found on the *Data in Brief* website. Please use this template to write your Data in Brief.
**MethodsX**
You have the option of converting relevant protocols and methods into one or multiple MethodsX articles, a new kind of article that describes the details of customized research methods. Many researchers spend a significant amount of time on developing methods to fit their specific needs or setting, but often without getting credit for this part of their work. MethodsX, an open access journal, now publishes this information in order to make it searchable, peer reviewed, citable and reproducible. Authors are encouraged to submit their MethodsX article as an additional item directly alongside the revised version of their manuscript. If your research article is accepted, your methods article will automatically be transferred over to MethodsX where it will be editorially reviewed. Please note an open access fee is payable for publication in MethodsX. Full details can be found on the *MethodsX* website. Please use this template to prepare your MethodsX article.
**Data statement**
To foster transparency, we encourage you to state the availability of your data in your submission. This may be a requirement of your funding body or institution. If your data is unavailable to access or unsuitable to post, you will have the opportunity to indicate why during the submission process, for example by stating that the research data is confidential. The statement will appear with your published article on ScienceDirect. For more information, visit the Data Statement page.
**AFTER ACCEPTANCE**
**Online proof correction**
Corresponding authors will receive an e-mail with a link to our online proofing system, allowing annotation and correction of proofs online. The environment is similar to MS Word: in addition to editing text, you can also comment on figures/tables and answer questions from the Copy Editor. Web-based proofing provides a faster and less error-prone process by allowing you to directly type your corrections, eliminating the potential introduction of errors. If preferred, you can still choose to annotate and upload your edits on the PDF version. All instructions for proofing will be given in the e-mail we send to authors, including alternative methods to the online version and PDF. We will do everything possible to get your article published quickly and accurately. Please use this proof only for checking the typesetting, editing, completeness and correctness of the text, tables and figures. Significant changes to the article as accepted for publication will only be considered at this stage with permission from the Editor. It is important to ensure that all corrections are sent back to us in one communication. Please check carefully before replying, as inclusion of any subsequent corrections cannot be guaranteed. Proofreading is solely your responsibility.
**Offprints**
The corresponding author will, at no cost, receive a customized Share Link providing 50 days free access to the final published version of the article on ScienceDirect. The Share Link can be used for sharing the article via any communication channel, including email and social media. For an extra charge, paper offprints can be ordered via the offprint order form which is sent once the article is accepted for publication. Both corresponding and co-authors may order offprints at any time via Elsevier’s Webshop. Corresponding authors who have published their article gold open access do not receive a Share Link as their final published version of the article is available open access on ScienceDirect and can be shared through the article DOI link.
**AUTHOR INQUIRIES**
Visit the Elsevier Support Center to find the answers you need. Here you will find everything from Frequently Asked Questions to ways to get in touch. You can also check the status of your submitted article or find out when your accepted article will be published.
On the Serialisation of Parallel Programs
P.H.Welch and G.R.R.Justo
Computing Laboratory, University of Kent at Canterbury, CT2 7NF.
Abstract. This paper argues that one of the key techniques for making the most efficient use of multi-processor architectures is the serialisation of parallel code. Parallel algorithms are presented as having strong engineering merits that will form the natural basis for systems design in the future. Parallelisation of serial code is regarded as having only short-term value (for “dusty-decks”, whose correctness cannot be verified) as well as being mathematically intractable. Serialisation, on the other hand, is much easier to automate and can be profitably employed today. Several serialising transforms for occam processes are presented and applied to various simulation and image compression tasks.
0. Introduction
This paper reviews and interprets some of the practice and experience of programming parallel computing systems we have obtained at the University of Kent over the past six years. We present in a semi-formal, but disciplined, manner some of the practical skills we believe should be regularly applied to the development of parallel programs. We are by no means alone in our beliefs. We are alarmed, however, that they do not seem to be recognised by the “mainstream” computer science community.
The chief lessons are these:
• parallelism is a major structuring method that enables us to manage complexity (in the design, verification and maintenance of systems);
• system design, therefore, should be (highly) parallel from the start;
• in general, there should be many more logical processes than physical processors (“parallel slackness”);
• to optimise performance, parallel sub-networks running on individual processing nodes may need serialising. Tools to automate (or, at least, help in) such serialisation are badly needed.
Expressed positively like this, these do not seem to be too contentious. It is the negative conclusions we can draw from them, however, that seem to raise eyebrows:
• design standards that exclude parallelism also exclude security for complex applications. This leads to growing losses — both financial and in human life;
• efficient and robust systems cannot be built by “first getting them to work serially on one processor” and then “parallelising” them;
• existing “dusty-deck” codes, that represent massive financial investments that “cannot afford to be wasted”, also represent massive serial codes that are becoming unmaintainable and are certainly unverifiable. These are technical dead-ends — as commercial pressures will gradually make clear to all those who persist with them;
• tools to assist the parallelisation of large-scale serial code are very difficult to make, will be very expensive to buy and will not be needed by the time they are half-made to work.
Regardless of the reaction of your eyebrows to the above assertions, please read on!
1. Some Merits of Parallel Design
Parallelism does not extend the range of functions that can be computed. The parallel operator in CSP [0] is completely defined in terms of its serial choice operator. The only motive for its introduction is that it simplifies the expression (i.e. the “programming”) of the behaviour of most processes (above a low level of complexity) and, hence, our ability to reason about them.
The parallel construct in occam [1] is directly based upon CSP theory and directly reflects the above properties. Parallelism (or, at least, occam parallelism) should be regarded as a high-level programming structure and used freely. It may be "compiled" down to low-level serial code (just as WHILE and FOR loops may be implemented by unstructured GOTOs), but that low-level code is almost always much more complicated and harder to understand. Nevertheless, this serialisation can always be done and sometimes there are good reasons for doing it — see below. The reverse operation, parallelisation, requires the "de-compilation" of low-level code back to high-level structures — an activity that never produces satisfactory results!
We are arguing the case for parallelism on the grounds that it simplifies and clarifies the development of complex systems — not that it makes them go faster! History supports this view. Software parallelism was first experimented with in the early 1970s in an effort to make operating systems work — or, at least, to make them work for longer periods between crashes! These systems were supporting uni-processor computers, so that the question of exploiting concurrency to improve performance did not arise. Indeed, a performance penalty (due to the overheads for managing the software concurrency) was cheerfully accepted if the overall reliability could be increased to tolerable levels.
We are very fortunate these days that parallel hardware lets us apply concurrency to increase the performance of computer systems. In our excitement over all the MIPS and MFLOPS that are now at our disposal, we must not forget the powerful benefits for clear thinking that were the original motivation for going parallel.
2. Some Designs Just Have to be Parallel
The following test.rig provides a user-interface for controlling and monitoring the state of a continuously running machine :-
[Figure: the test.rig process, connected to the user by keyboard and screen channels and to the machine under test by control and monitor channels.]
Its required behaviour is as follows :-
• the user supplies keystrokes to the keyboard channel and receives display information from the screen channel;
• responding to user keystrokes, the test.rig generates control messages to the machine under test and updates the user’s display to indicate what it has done. Erroneous keystrokes “bleep” the user’s display;
• at the same time, the test.rig receives continuous information from its monitor channels about the machine state. This information flow is too great to display in its raw form and has to be filtered and summarised before being dynamically presented to the user in some meaningful way;
• the user may freeze the display at any moment by pressing a “pause” key — the next keypress resumes normal operations.
The next figure describes a reusable design for the implementation of such a test.rig. It shows a natural parallel construction out of four processes — each one performing its own logically self-contained function.
The keyboard.handler :-
• validates and forwards characters from keyboard to generate;
• invalid characters (pressed by the user by mistake) are not passed on — instead an “error” signal is output to the screen.handler;
• if the “pause” character arrives, it outputs a “pause” signal to the screen.handler, waits for another keystroke and sends a “resume” signal.
The generate process :-
• receives validated characters from the keyboard;
• interprets these as instructions to modify an internal data-base recording the state of various control options in the machine under test;
• issues appropriate commands down the relevant control line;
• formats a display packet to reflect any changed control value and sends this to the screen.handler.
The filter process :-
• continuously receives data from its monitor channels about the state of the machine;
• filters this data by integrating it into a “history” database (internal to this process);
• reports meaningful summaries about changing machine state in display packets to the screen.handler.
The screen.handler :-
• multiplexes formatted display packets straight through to its screen channel;
• “error” signals from the keyboard are interpreted by “bleeping” the screen;
• a "pause" signal causes this process to lock on to its channel from the keyboard and await a "resume" signal — freezing further screen output;
• the keyboard.handler signals take priority over display packets.
We claim that this design is much simpler than any equivalent serial one. Each process has responsibility for one distinct area of operation. Its data-structures are its own affair and its algorithm is expressed from its own ("object-oriented") point of view — not that of an external controller. Strong engineering principles are followed in this design: processes have tightly controlled external interfaces (only channels) and high internal cohesion (with all design details private). Each process is now sufficiently simple that a serial (occam) implementation is probably clearer than any natural language specification.
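As a sketch of the top level (our own illustration; the protocols CONTROL, MONITOR, SIGNAL and DISPLAY, and the exact parameter lists, are assumptions, not from the original) :-

```
PROC test.rig (CHAN OF BYTE keyboard, screen,
               []CHAN OF CONTROL control, []CHAN OF MONITOR monitor)
  CHAN OF BYTE valid:                -- keyboard.handler --> generate
  CHAN OF SIGNAL signals:            -- keyboard.handler --> screen.handler
  CHAN OF DISPLAY updates, summaries:
  PAR
    keyboard.handler (keyboard, valid, signals)
    generate (valid, control, updates)
    filter (monitor, summaries)
    screen.handler (signals, updates, summaries, screen)
:
```

Each of the four serial bodies can then be written, tested and maintained in isolation.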
The same could not be said about any serial implementation for the whole test.rig! That would require an integration of the algorithms and data structures of the four processes into a single thread of control. Such an integration would invert their object-oriented character and greatly damage their clarity. Worse still, in order to maintain the same freedom to synchronise with its environment that the parallel implementation enjoyed, it would sometimes have to ALT across all its channels — both input and output! It must maintain this freedom — deadlock would threaten if it ever committed itself solely to output a control adjustment to any part of the machine that happened to be close to another part from which feedback monitoring was being obtained!! The output guards may be removed by further transformation [2], but it would now have got very obscure indeed.
It would therefore be very risky to attempt a serial implementation of the test.rig. The parallel implementation is the correct one — even though we never have any need or intention of distributing it over more than one processor!
3. Serialisation and General Purpose Parallel Computing
If you have one processing unit, then you have an excuse for trying to devise your algorithms with a single thread of control. If you have two processors, then two-process logic would seem appropriate. If you have eight processors, you can make a case that the most effective way to exploit them to solve a particular problem is to program it up as eight parallel processes. What is not credible, however, is to replace each "eight" in the preceding sentence by, say, "twenty three" or "one hundred and eighty seven"!
Even if we stick to one parallel computing architecture and one particular installation of that architecture, the number of working compute nodes allocated to us for any particular run will be somewhat variable. To cope with these conditions, we must design our algorithms with (apparently) excessive parallelism — at least ten times as many processes as we are ever likely to be allocated processors. Then, without re-designing the software, it becomes possible to configure it to the resources we are actually given. Ideally, this should happen automatically as the system is being loaded (when its resources become apparent). Even better, we can envisage the possibility of dynamic balancing of the software processes against the given hardware (e.g. during your run, some nodes may fail or be taken away from you by the operating system or you may even be granted extra ones!).
For the moment, with current occam/transputer systems, we need a few minutes' notice of the resources we are going to be given in order to change some configuration constants or, in the worst case, perform some mechanical code transformations and re-compile. However, the parallel slackness means that no re-design is necessary.
There is one other compelling reason for designing with an excess of parallelism. Only parallel "farming" algorithms run compute nodes that operate most of their time completely independently. All other parallel paradigms require significant interaction between processors. Consider the view from a particular processing node. Because the time to acquire information held on other nodes is so great compared with the time to load information from our own node, we must find some useful work for our node to get on with whilst awaiting external information or accept a low efficiency of use from our node. With a large number of processes being managed by our node, it is very unlikely they will all become blocked awaiting external events at the same time. Hence, there is always something profitable to be doing and we obtain high efficiency. See Valiant's papers [3] and [4] for a detailed analysis of the merits of this "parallel slackness".
We now have three grounds for designing systems with a high degree of parallelism:
- it is good (software) engineering — i.e. it makes system design, verification and maintenance easier;
- it gives us portability across different physical configurations of a particular multi-processor architecture (ultimately, occam will give us portability across different architectures as well);
- it enables a high efficiency of use for each individual processor — i.e. it makes the system go faster!
Serialisation of these excessively parallel designs now becomes a viable optimisation technique. It is not always applicable but, with so many processes allocated to each processor, it can:
- save time: by eliminating context switches and the copying of data packets between processes;
- save space: by having a common data area for shared data-structures, rather than separate buffers in each process.
With a sub-microsecond overhead in transputers, context switching is not really a problem. However, significant savings can sometimes be achieved on the other two items.
Beware that serialisation — whilst always possible — will not always prove to be an optimisation. The user-interface component described in the previous section is constrained to operate at the speed of a user-terminal. Serialising its processing logic will not address that bottleneck!
Beware also that serialisation can — and usually does — lead to an explosion in the length and complexity of the resulting code. This can be so excessive as to render the whole operation impractical. In the following sections, we describe some examples where the synchronisation characteristics of the processes we are combining are sufficiently well-behaved to allow the serialisations to work. Note that the resulting code should be considered as “compiled” code — software engineering principles are not upheld and this is not the level at which the components should be maintained.
In the above paragraphs, we have only been discussing the processes that directly contribute to the algorithm that solves the original problem. On any particular node, there will be a further collection of (high-priority) processes to manage external communication and events. This is because a physical node in a credible multi-processor machine will itself be a parallel device. It may have only one compute engine, but it will certainly have multiple communications engines that can operate at the same time. Therefore, there is a minimum level of parallelism to which each node must be programmed if it is to be used to its full advantage.
Thus, even an efficient “farm” worker on a transputer needs a harness of eight support processes (to drive all its links bi-directionally and in parallel with its main task). For occam and the current generation transputer (T2s, T4s and T8s), these high-priority buffers, auto-prompters, multiplexors and forwarders have become well known, very simple and standardised. So much so that in the new generation T9000 transputers, some of these processes are in hardware! No attempt, of course, should be made to serialise any (remaining) high-priority processes with the background application-specific tasks.
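As an illustration of how simple these standard processes are (a minimal sketch of our own; the INT protocol is an assumption), a one-place link buffer is just :-

```
PROC buffer (CHAN OF INT in, CHAN OF INT out)
  -- run at high priority: decouples a link from the application,
  -- letting link traffic and computation proceed in parallel
  WHILE TRUE
    INT x:
    SEQ
      in ? x
      out ! x
:
```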
4. Serial “In-Lining” of a Simple Server
In simulating the growth of “diffusion limited aggregates” [5], the computationally intensive innermost loop consisted of executing a random walk over a regular lattice. Anything that could be done to speed up these walks had a direct and equal effect on the speed of the entire simulation.
Each step of the walk consisted of obtaining three random bits to decide the direction of the step, making the step (i.e. updating some coordinates) and checking to see if you had reached a “sticky” tile (which indicated the end of the walk). We used the random number algorithm from [6] that produces acceptable sequences for our application, whilst being computationally light. Despite this lightness, most of the time for each step was spent computing these numbers — there being so little else to do!
The proper way to implement the random number generator is as a server, continuously pushing its results towards its client :-
```
random (n, initial) --> application
```
where :-
```
PROC random (VAL INT n, initial, CHAN OF INT out)
  -- outputs n random bits per communication
  INT seed:
  ... other state declarations
  SEQ
    seed := initial
    ... initialise rest of state
    WHILE TRUE
      INT word:
      SEQ
        ... compute n random bits in word & update seed etc.
        out ! word
:
```
and :-
```
PROC application (CHAN OF INT from.random, ...)
  ... local declarations
  ... body
:
```
and where, deep inside the body, the innermost loop goes :-
```
WHILE walking
  INT next:
  SEQ
    from.random ? next
    ... rest of step
```
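To make the elided "rest of step" concrete, here is a sketch of our own (the direction tables, the coordinates and the sticky test are illustrative assumptions, not from the original) :-

```
-- three random bits select one of the eight neighbouring lattice tiles
VAL []INT dx IS [-1,  0,  1, -1, 1, -1, 0, 1]:
VAL []INT dy IS [-1, -1, -1,  0, 0,  1, 1, 1]:
WHILE walking
  INT next:
  SEQ
    from.random ? next                -- next is in the range 0..7
    x, y := x + dx[next], y + dy[next]
    walking := NOT sticky[x][y]       -- stop on reaching a "sticky" tile
```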
This is good engineering. The application has no responsibility for maintaining the random number seed nor for the random number logic. The seed is a private data-structure, encapsulated and hidden by the random number server that alone needs to know about it.
However, we want to remove the overhead of running the server as a separate process from its client. The serialisation in this case is quite easy. We first have to decide which (if any) of two threads of control to retain as defining the structure of the unified thread of control. The logic of the client application is fairly complex outside its innermost loop and would not take kindly to the inversion of its logic if it were not chosen. On the other hand, the server control structure is rather trivial and can, therefore, take the necessary knocks.
So, the application stays in charge! It must inherit the parameters of its absorbed server — apart, of course, from the connecting service channel that now disappears :-
```
new.application (n, initial)
```
Internally, it picks up any persistent data-structures from the server (i.e. seed etc.) and installs any server initialisation code :-
```
PROC new.application (VAL INT n, initial, ...)
  ... (old) local declarations
  INT seed:                       -- from random
  ... other state declarations    -- from random
  SEQ
    seed := initial               -- from random
    ... initialise rest of state  -- from random
    ... (old) body
:
```
We must “in-line” the server loop code wherever the body used to demand service:
```
WHILE walking
  INT next:
  SEQ
    {{{ from.random ? next
    INT word:
    SEQ
      ... compute n random bits in word & update seed etc.
      next := word
    }}}
    ... rest of step
```
We no longer have a context switch to be performed and the server communication has been replaced by an assignment.
Finally, we observe that the transient data-structure word (inherited from the server) can be dispensed with, along with the data-copying assignment, and we compute the result directly where it is needed:
```
WHILE walking
  INT next:
  SEQ
    ... compute n random bits in next & update seed
    ... rest of step
```
From an engineering point of view, this code is not as manageable as the original. Client and server data-structures are mixed up and so is the logic that operates on them. However, our walking speed has increased from 93,000 steps per second to 127,000!
5. Serialisation of Pipe-Lined Logic
5.0. Basic Principles
Some pipelines are designed specifically for the buffering characteristics they introduce and their ability to service their supplier and consumer processes in parallel. For example, this technique enables transputers to communicate and compute at the same time. Serialisation is probably not the right way to try to optimise these pipelines — see [7], [8] and [9] for a discussion on this.
Other pipelines are introduced to separate the phases of a particularly complex function into manageable stages. We concentrate on these and show how to serialise them so as to preserve their overall functionality, but not worry too much about the buffering services they originally provided. The environment in which such a pipeline is applied is only interested in the mathematical transformation being performed — indeed, one of the optimisations being sought through this serialisation is the elimination of superfluous data-buffers and data-copying. Formally, the semantics of the (originally pipelined) component will be preserved with respect to an environment that is always willing to accept its output.
Consider a component process with a single input and a single output channel. We call such a component a p-q-transformer if it synchronises with its environment by cycling through the sequence: first do p inputs and then do q outputs.
If it is implemented with code of the form :-
```
PROC transform (CHAN OF A in, CHAN OF B out)
  ... state declarations
  SEQ
    ... initialise state
    WHILE running
      SEQ
        ... do p inputs
        ... compute
        ... do q outputs
:
```
where we also allow computation to be interleaved amongst the above inputs and outputs, we say the transformer is in normal form.
Serialising a pipeline of normal form 1-1-transformers is fairly easy. It becomes a new normal form 1-1-transformer that contains all the state variables of the original pipeline components (modulo some name changes to avoid any clashes). All initialisations on these states are first performed (in any sequence) and its main cycle then:
• inputs (as in the first component of the pipeline);
• performs the sequence of computations made from the individual computations from each component in the pipeline. The order of this sequence is the same as the order of the components in the pipeline. The communications between pipeline components become assignments between corresponding state variables;
• outputs (as in the last component of the pipeline).
In the computation phase above, there is plenty of opportunity for state-variable and assignment elimination.
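As a toy instance of this rule (our own sketch, not from the paper), consider a two-stage pipeline of normal form 1-1-transformers and its serialisation :-

```
PROC scale (CHAN OF INT in, CHAN OF INT out)    -- first stage
  WHILE TRUE
    INT x:
    SEQ
      in ? x
      out ! 2 * x
:

PROC offset (CHAN OF INT in, CHAN OF INT out)   -- second stage
  WHILE TRUE
    INT y:
    SEQ
      in ? y
      out ! y + 1
:

PROC scale.offset (CHAN OF INT in, CHAN OF INT out)
  WHILE TRUE
    INT x, y:
    SEQ
      in ? x
      y := 2 * x          -- the internal channel became an assignment
      out ! y + 1
:
```

Eliminating y (and its assignment) then gives the single output out ! (2 * x) + 1 — the state-variable and assignment elimination just mentioned.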
If a transformer is not in normal form, then part of its state is governed by where it is in its code. By introducing further state variables to represent these positions and testing these within its compute section, any non-normal form transformer can always be transformed into normal form.
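For example (again our own sketch), a 1-1-transformer that negates every second value carries part of its state in its code position; an explicit BOOL restores normal form :-

```
PROC alternate (CHAN OF INT in, CHAN OF INT out)  -- not normal form
  WHILE TRUE
    INT x:
    SEQ
      in ? x
      out ! x
      in ? x
      out ! (-x)
:

PROC alternate.normal (CHAN OF INT in, CHAN OF INT out)
  BOOL negate:                    -- the code position, made explicit
  SEQ
    negate := FALSE
    WHILE TRUE
      INT x:
      SEQ
        in ? x
        IF
          negate
            out ! (-x)
          TRUE
            out ! x
        negate := NOT negate
:
```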
5.1. Structure Clash within the Pipeline
The result of normalising and serialising a pipeline of 1-1-transformers will be more complex than the original code. Things get really exciting, however, when we do the same for a pipeline of p-q-transformers with differing p and q values!
Consider part of an image compression pipeline :-
```
a --> encode --> b --> pack --> c
```
where channels a, b and c respectively carry the protocols :-
```
PROTOCOL PICTURE IS [height][width]BYTE:
PROTOCOL BITS IS BOOL:
PROTOCOL PACKET IS [packet.size]INT:
```
A stream of (fragments of) pictures arrives on channel a and is "Huffman-encoded" into a compressed bit-stream on channel b. The encoding operates on differences between neighbouring pixels — small ones are Huffman-encoded, larger differences are transmitted plain (preceded by an "escape" code). The bit-stream from b is packed into a decently sized packet for onward transmission down c (and out of the transputer).
The encode process is a 1-"many"-transformer, where "many" is data-dependent. The pack process is a packet.size-1-transformer. There is a serious structure clash here! The parallel design protects us completely from its difficulties :-
```
PROC encode (CHAN OF PICTURE in, CHAN OF BITS out)
  WHILE TRUE
    [height][width]BYTE picture:
    SEQ
      in ? picture
      SEQ i = 0 FOR height
        VAL [width]BYTE line IS picture[i]:
        ... compress line
:
```
where :-
```
{{{ compress line
INT last.pixel:
SEQ
  last.pixel := 127
  SEQ j = 0 FOR width
    VAL INT pixel IS INT line[j]:
    SEQ
      VAL INT diff IS (pixel - last.pixel) + 255:
      VAL INT n IS n.bits[diff]:
      INT code:
      SEQ
        code := h.code[diff]
        ... emit bottom n bits of code
      last.pixel := pixel
}}}
```
and where :-
```
VAL [510]INT n.bits IS [ ... ]:
VAL [510]INT h.code IS [ ... ]:
```
are compile-time constant tables holding, respectively, the number of bits and the actual code values for each possible change in pixel intensity. Finally :-
```
{{{ emit bottom n bits of code
SEQ k = 0 FOR n
  SEQ
    out ! (code /\ 1) = 1
    code := code >> 1
}}}
```
The structure of the above code is derived naturally from the specification of encode. The same thing happens for :-
```
PROC pack (CHAN OF BITS in, CHAN OF PACKET out)
  WHILE TRUE
    [packet.size]INT packet:
    SEQ
      SEQ p = 0 FOR packet.size
        INT word IS packet[p]:
        SEQ
          word := 0
          ... input bits into word
      out ! packet
:
```
where :-
```
{{{ input bits into word
INT bit:
SEQ
  bit := 1
  SEQ q = 0 FOR WORD.SIZE
    BOOL b:
    SEQ
      in ? b
      IF
        b
          word := word \/ bit
        TRUE
          SKIP
      bit := bit << 1
}}}
```
This completes the programming. The structure clash between the synchronisation characteristics of the two elements is absorbed by the run-time scheduler. The use of parallelism to design such a clean solution to this problem was first described (to our knowledge) in the book by Jones and Goldsmith [10].
5.2. A Serialising Optimisation
The problem with leaving the code like this is that the bit-stream channel (whether mapped on to memory or an external link) imposes a bottleneck on the data-flow! It must be removed — i.e. we must serialise the encode and pack processes.
We have to choose which process structure to preserve — it does not really matter which. Let us choose to preserve encode (since it has three nested loops in its cycle and pack has only two).
The state of the pack process is represented by its variables packet, word, bit, p and q. Import these variables into what used to be the structure of the encode process and is now the serialised :-
```
PROC encode.pack (CHAN OF PICTURE in, CHAN OF PACKET out)
  INT word, bit, p, q:
  [packet.size]INT packet:
  SEQ
    word, bit, p, q := 0, 1, 0, 0
    ... structure of the encode process
:
```
The encode structure is unchanged except for its single output (deep inside its emit fold). This output triggered a cycle of the pack process — it is replaced by a fold that contains that logic with its housekeeping all inverted :-
```
{{{ out ! (code /\ 1) = 1
SEQ
  ... 'pack' response to the communication
  ... 'pack' housekeeping
}}}
```
where :-
```
{{{ 'pack' response to the communication
SEQ
  IF
    (code /\ 1) = 1
      word := word \/ bit
    TRUE
      SKIP
  bit := bit << 1
}}}
```
and :-
```
{{{ 'pack' housekeeping
SEQ
  q := q + 1
  IF
    q = WORD.SIZE
      SEQ
        packet[p], word, bit, q := word, 0, 1, 0
        p := p + 1
        IF
          p = packet.size
            SEQ
              out ! packet
              p := 0
          TRUE
            SKIP
    TRUE
      SKIP
}}}
```
That completes the transformation. Designing such complex serial code directly would not be a good idea!!
The alternative transformation — i.e. retaining the structure of pack and inverting encode into it — leads to a very different serial structure. This is left as an exercise for the reader! Note, however, that the transformations (via the original parallel code) will prove the equivalence of two very different serial versions.
5.3. Further Optimisations Now Become Possible
Of course, now that the code is serial and the innermost loops from the two original processes have been interleaved and can see each other's data-structures, further optimisations become possible. For instance, the resulting innermost loop (in the emit fold) transfers n bits from code over to word one bit at a time! Clearly, this loop can be removed and the transfer done in one go :-
```
{{{ emit bottom n bits of code
SEQ
  word := word \/ (code << q)
  q := q + n
  IF
    q >= WORD.SIZE
      SEQ
        q := q - WORD.SIZE
        packet[p], word := word, code >> (n - q)
        p := p + 1
        IF
          p = packet.size
            SEQ
              out ! packet
              p := 0
          TRUE
            SKIP
    TRUE
      SKIP
}}}
```
Note that code should now be declared as a VAL and that the bit pointer and, of course, the innermost loop control variable k are no longer needed.
All these codes plus the necessary buffers can be fitted into the on-chip memory of a T2 transputer. On a 20 MHz T800 (alas, we have no T2s), the original clean parallel code took 7.3 µsecs. to produce one compressed bit of output. The first serialisation reduced this to 4.6 µsecs. The last optimisation above (that was enabled by the serialisation) reduced this further to 2.1 µsecs. We could go on, but again we leave this to the interested reader. [Of course, all run-time checks — including those for array-bound violation — were left on for the above timings. Switching them off is always a false economy!]
6. Arbitrary Topologies with Well-Behaved Synchronisation
6.0. Basic Principles
Our final example is taken from the field of continuous system simulation (e.g. distribution networks for gas or electricity, urban traffic flow, digital circuit emulation ...). The simple way to design the simulation is to create a network of software processes that directly mirrors the physical network of processes in the real system. Any topology — including those with feedback — must be allowed.
In general, attempts to optimise an arbitrary process network by serialisation will lead to an impractical explosion in the size of the resulting code. However, the synchronisation characteristics of the processes studied here are simple and regular — each process communicates continuously and in parallel with all its topological neighbours. This is generalised “systolic” computing — irregular networks with feedback are allowed as well as regular meshes. For such systems, serialisation does not cause a bang!
In [11], the notion of an I/O-PAR process was introduced. Informally, an I/O-PAR process is one that, whenever it communicates, communicates on all its channels in parallel. The following two processes are I/O-PAR and in normal form :-
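(The original pair of examples has not survived in this copy; the following reconstruction is our own, with illustrative names and bodies.)

```
PROC succ (CHAN OF INT in, CHAN OF INT out)
  INT x, held:
  SEQ
    held := 0
    WHILE TRUE
      SEQ
        PAR                      -- all channels used, in parallel
          in ? x
          out ! held
        held := x + 1            -- compute
:

PROC delta (CHAN OF INT in, CHAN OF INT out.0, CHAN OF INT out.1)
  INT x, held:
  SEQ
    held := 0
    WHILE TRUE
      SEQ
        PAR                      -- all channels used, in parallel
          in ? x
          out.0 ! held
          out.1 ! held
        held := x                -- compute
:
```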
A key property of I/O-PAR processes is that any parallel network of them is deadlock-free and remains I/O-PAR — that is why it is so easy and safe to design with them!
Clearly, a network of I/O-PAR processes can synchronise with its environment more freely than one in normal form. At any particular moment, such a network may have communicated on one of its channels several more times than it has communicated on one of its other channels (where “several” is bounded by the maximum “diameter” of the network). However, a network in this condition will always be offering its environment communications on its more backward channels (that would enable the number of times they have been used to catch up with the leader). For an I/O-PAR process in normal form, the “several” is limited to one.
If we place a collection of I/O-PAR processes in an environment that is itself I/O-PAR (with respect to its connections to that collection), then that collection may be serialised into an I/O-PAR process in normal form without changing the semantics of the whole system.
These results are more formally presented in [12], together with the serialising transformations and some proofs! Here we are somewhat less formal. Suppose we want to run processes A and B in parallel:
```
PROC A.B ( ... )
  ... 'internal' channels for connecting A and B
  PAR
    A ( ... )
    B ( ... )
:
```
where the parameters for A.B are the union of those for A and those for B, less their interconnecting channels.
To serialise them, we extract a set of execution paths that can be expressed in I/O-PAR normal form. This certainly loses some of the paths that were available to the original parallel code — but since we are only going to run the derived code in an I/O-PAR environment that is not going to exploit those extra paths, this does not matter! The serialised code is:
```
PROC A.B ( ... )
  ... declarations A
  ... declarations B
  SEQ
    ... initialise A
    ... initialise B
    ... serialised A and B loop
:
```
Since they concern separate sets of state variables, the order of the initialise sections derived from A and B is irrelevant. Since no communications are involved (i.e. the external
environment cannot detect what is happening), it is safe to serialise them. The same is true for the respective compute sections inside the loop:
```
{{{ serialised A and B loop
WHILE TRUE
  SEQ
    PAR
      ... parallel i/o A (except 'internals')
      ... parallel i/o B (except 'internals')
      ... 'internal' assignments
    ... compute A
    ... compute B
}}}
```
The position of the respective parallel i/o sections clearly represents a synchronisation behaviour with its environment that the original parallel code could have chosen. That is all we promised to do!
In parallel with those communications are a set of assignments between the state variables of A and B. These are derived from the original “internal” communications between A and B. Again, because no external communications are involved, it is safe to serialise these assignments (in any order — because the anti-alias and usage rules of occam ensure there can be no data-dependencies!). Also, because there are no usage conflicts with the i/o (currently happening in parallel), it is safe to move these assignments to the start of the compute region of the cycle:
```
{{{ serialised A and B loop
WHILE TRUE
  SEQ
    PAR
      ... parallel i/o A (except 'internals')
      ... parallel i/o B (except 'internals')
    ... 'internal' assignments
    ... compute A
    ... compute B
}}}
```
This last change is, of course, undetectable by its environment and the code is now normal form I/O-PAR — as required.
Another key property of processes, discussed in [11, 12], is I/O-SEQ. This is similar to I/O-PAR except that input communications are serialised before output ones. However, input communications are still all parallel — i.e. when one input happens, all inputs must happen. The same is true for outputs. The following process is in normal I/O-SEQ form:
```
PROC C ( ... )
  ... declarations C
  SEQ
    ... initialise C
    WHILE TRUE
      SEQ
        ... parallel inputs C
        ... compute C (part 0)
        ... parallel outputs C
        ... compute C (part 1)
:
```
The second general result is this: if we run an *I/O-SEQ* process in parallel with an *I/O-PAR* process that supplies all its input, they may be serialised into an *I/O-PAR* process in *normal* form (again modulo an environment that is itself *I/O-PAR*).
Suppose that these conditions apply to processes `A` and `C` above. A valid (sub-)set of execution paths is given by the serialisation:
```
PROC A.C ( ... )
  ... declarations A and C
  SEQ
    ... initialise A and C
    WHILE TRUE
      SEQ
        PAR
          ... parallel i/o A (except 'internals')
          SEQ
            ... 'internal' assignments (from A to C)
            ... compute C (part 0)
            PAR
              ... parallel outputs C (except 'internals')
              ... 'internal' assignments (from C to A)
        ... compute A and C (part 1) - any order
:
```
Again, we may move the internal assignments and computations around a bit:
```
WHILE TRUE
  SEQ
    ... 'internal' assignments (from A to C)
    ... compute C (part 0)
    ... 'internal' assignments (from C to A)
    PAR
      ... parallel i/o A (except 'internals')
      ... parallel outputs C (except 'internals')
    ... compute A and C (part 1) - any order
```
The parallel usage rules ensured that there were no data-dependencies to prevent us!
6.1. Applying the Transforms
We will take a concrete example from [11]. Fundamental gates used in digital logic circuits are emulated by *I/O-PAR* processes. For instance, a two-input *nand* gate is given by:
```
PROC nand (CHAN OF INT in.0, in.1, out)
  INT a.0, a.1, b.0, b.1:
  SEQ
    b.0, b.1 := undefined, undefined
    WHILE TRUE
      SEQ
        PAR
          in.0 ? a.0
          in.1 ? a.1
          out ! ~(b.0 /\ b.1)
        PAR
          in.0 ? b.0
          in.1 ? b.1
          out ! ~(a.0 /\ a.1)
:
```
Because *occam* channels have to be "point-to-point", branches in wiring have to be represented by active processes:
```
PROC delta (CHAN OF INT in, out.0, out.1)
  WHILE TRUE
    INT x:
    SEQ
      in ? x
      PAR
        out.0 ! x
        out.1 ! x
:
```
The *I/O-SEQ* nature of the above process corresponds to a digital logic component with zero propagation delay. Such components, therefore, have no impact on the timing characteristics of the circuit being emulated and may be freely used.
The previous *nand* process corresponds to a component with a propagation delay equal to one (emulated) sample interval between incoming logic values. Variable length propagation delays can be easily modelled by adding *I/O-PAR* "delay-line" processes, parametrised to the required value.
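A one-place delay is just an *I/O-PAR* process that outputs its stored sample while inputting the next. A minimal sketch (not from the paper; the name `delay.1` and the `initial` parameter are our own):

```
-- One-sample I/O-PAR delay: on every cycle it communicates on all
-- its channels, outputting the previous sample while reading the next.
PROC delay.1 (VAL INT initial, CHAN OF INT in, out)
  INT this, next:
  SEQ
    this := initial
    WHILE TRUE
      SEQ
        PAR
          in ? next
          out ! this
        this := next
:
```

A delay of n samples can then be obtained by chaining n such processes or, more economically, by one process cycling through an n-place buffer.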
A four-valued logic is emulated in these processes: TRUE and FALSE (represented by 11 and 00 respectively) and two "undefined" levels (represented by 10 and 01). Notice that, for a word length of 32, up to 16 independent sets of wavefront trials can be conducted simultaneously.
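The paper does not give the concrete bit-patterns; one consistent choice (our assumption, for illustration) replicates the 2-bit levels across a 32-bit word:

```
-- Illustrative constants only (assumed, not taken from the paper):
-- each logic level is a 2-bit code replicated across all 16 lanes.
VAL INT all.true        IS #FFFFFFFF:   -- every lane 11 (TRUE)
VAL INT all.false       IS #00000000:   -- every lane 00 (FALSE)
VAL INT undefined       IS #AAAAAAAA:   -- every lane 10
VAL INT other.undefined IS #55555555:   -- every lane 01
-- With this encoding, ~(x /\ y) computes nand lane-wise:
--   ~(11 /\ 11) = 00, ~(00 /\ anything) = 11, and
--   ~(undefined /\ undefined) = other.undefined, which is why the
--   serialised latch below initialises its outputs to other.undefined.
```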
A simple circuit with feedback is the *latch*,
which can, of course, be instantly implemented:
```
PROC latch (CHAN OF INT in.0, in.1, out.0, out.1)
  CHAN OF INT p, q, r, s:
  PAR
    nand (in.0, r, p)
    nand (s, in.1, q)
    delta (p, out.0, s)
    delta (q, r, out.1)
:
```
To serialise this, let us first join the *I/O-PAR* logic gate with its adjacent *I/O-SEQ* "fan-out" process. This transformation is based upon the second one given in the previous subsection, extended in the obvious way to cope with the two *I/O-PAR* phases of the *nand* cycle:
```
PROC nand.delta (CHAN OF INT in.0, in.1, out.0, out.1)
  INT a.0, a.1, b.0, b.1, a:
  SEQ
    b.0, b.1 := undefined, undefined
    WHILE TRUE
      SEQ
        a := ~(b.0 /\ b.1)          -- first phase
        PAR
          in.0 ? a.0
          in.1 ? a.1
          out.0 ! a
          out.1 ! a
        a := ~(a.0 /\ a.1)          -- second phase
        PAR
          in.0 ? b.0
          in.1 ? b.1
          out.0 ! a
          out.1 ! a
:
```
This is now in (two-phase) I/O-PAR normal form. Notice that the variables a.0 and a.1 need only have very local scope — that of the first parallel communications in the loop and its following assignment. Next, by moving the first assignment in the loop to the end of the loop (and, of course, duplicating it in the initialisation part), we observe that the same is true for the variables b.0 and b.1. Localising both pairs of definitions and renaming them to c.0 and c.1, we end up with a loop whose body is a sequence of two identical phases. This collapses to a simple I/O-PAR normal form:
```
PROC nand.delta (CHAN OF INT in.0, in.1, out.0, out.1)
  INT a:
  SEQ
    a := ~(undefined /\ undefined)
    WHILE TRUE
      INT c.0, c.1:
      SEQ
        PAR
          in.0 ? c.0
          in.1 ? c.1
          out.0 ! a
          out.1 ! a
        a := ~(c.0 /\ c.1)
:
```
Now, the latch runs two instances of this nand.delta in parallel. This may now be serialised by applying the first transform from the previous section. We have to rename the internal state-variables to avoid clashes — we do this by adding the suffix .hi to those from the “higher” nand.delta and .lo to the “lower” one:
```
PROC latch (CHAN OF INT in.0, in.1, out.0, out.1)
  INT a.hi, c.0.hi, c.1.hi:
  INT a.lo, c.0.lo, c.1.lo:
  SEQ
    a.hi, a.lo := other.undefined, other.undefined
    WHILE TRUE
      SEQ
        PAR
          in.0 ? c.0.hi
          out.0 ! a.hi
          in.1 ? c.1.lo
          out.1 ! a.lo
        c.1.hi, c.0.lo := a.lo, a.hi
        a.hi := ~(c.0.hi /\ c.1.hi)
        a.lo := ~(c.0.lo /\ c.1.lo)
:
```
Clearly, the variables c.1.hi and c.0.lo may be dispensed with and their assignment costs saved — the a.lo and a.hi values being used directly in the final assignments. Renaming c.0.hi and c.1.lo as t.hi and t.lo respectively and localising their declaration, we are left with:
```
PROC latch (CHAN OF INT in.0, in.1, out.0, out.1)
  INT a.hi, a.lo:
  SEQ
    a.hi, a.lo := other.undefined, other.undefined
    WHILE TRUE
      INT t.hi, t.lo:
      SEQ
        PAR
          in.0 ? t.hi
          in.1 ? t.lo
          out.0 ! a.hi
          out.1 ! a.lo
        a.hi, a.lo := ~(t.hi /\ a.lo), ~(a.hi /\ t.lo)
:
```
Looking at the resulting code, it is possible that it could have been coded like that in the first place. However, the serialised code only collapsed to this simple form because of the symmetry in the original circuit. Less regular circuits would require serial code we would not like to compose directly! For example, a latch circuit whose gates imposed different propagation delays!!
The resource demands from the two versions of the above latch component are significantly different. The parallel version requires 308 bytes of workspace and processes incoming signal sample “wavefronts” at the rate of one every 36 µsecs. The final serial version only requires 84 bytes of workspace and cycles in 15 µsecs.
We would expect similar benefits to be obtained from serialising larger circuits — enabling them to be emulated in the same (real) time from the same (transputer) hardware resources. Without automatic tools, however, we would not like to try it!
7. Discussion
A recent article [13] on parallel computing in the popular computing magazine BYTE ends with the following paragraph:
"The hardware issue has already been solved, thanks to the INMOS transputer. Software remains the final hurdle to clear if parallel processing via multicomputers is to emerge as a popular alternative to sequential processing."
This point-of-view is a little worrying. Whilst the article mentions C and FORTRAN, it makes no reference to occam. Yet occam was devised specifically to address the software and hardware issues associated with parallel computing [14, 15, 16, 17] — the security weaknesses in "standard" programming languages disqualifying them from being robust platforms on which to build concurrent systems. Occam was developed simultaneously with the transputer and the latter would not exist (as we know it) without the former.
If you only pick up half the groceries, don’t complain if you get hungry!
Our experience from working with occam is that hardware issues and software issues are no different from one another. We adopt the same approach for each. Both are designed as parallel systems — it’s just that the hardware designs tend to stay parallel, while the software elements sometimes get serialised a little bit!
The second sentence of the above quotation endorses the common belief that it is something in the parallelism that causes the difficulty in the software. The theme of this paper is that this is false — it is the attempt to write complex serial code directly that causes (and always has caused) the problems.
We have argued that parallelism is a high-level programming concept. It enables us to capture complex system behaviour much more directly, concisely and simply than any equivalent (low-level) serial code. On the practical side, to develop application software that will be portable and efficient across different architectures and configurations of multiprocessor, we need much more parallelism in our algorithms and data-structures than we are ever likely to be offered in hardware.
We have considered three different applications (the simulated growth of diffusion limited aggregates, image compression and digital logic emulation) and demonstrated three different parallel paradigms (client/server, pipe-lines and arbitrary feedback networks) that yield, respectively, well-engineered solutions for them. For each of these cases, we have shown how to transform ("compile") them into equivalent serial code that is more efficient in terms of its space and time requirements, but is more complex and less well-engineered. In general, we have little confidence in our ability to produce such serial code directly and none in our ability to maintain it.
The serialising optimisations we have used are all constructively defined and can clearly be automated. We see an urgent need for tools to do these transformations for us. In the medium term, serialising will be an everyday activity for parallel programmers and programmers make too many mistakes on their own! We also need these serialising tools integrated into a secure development environment alongside their complementary ("folding") editor, compiler and maintenance tools — i.e. the INMOS TDS [18], or something that shares its philosophy, must be re-born. Eventually, serialisation may be hidden from us by being incorporated into the compiler, loader or dynamic load balancer.
"Computer algebra" tools (e.g. [19]) have been developed and are in significant use by mathematicians to help them manipulate their formulae. It seems extraordinary that computer scientists (who made those tools for the mathematicians) have not demanded similar help. We are far too confident in our abilities to manipulate our formulae (i.e. programs) —
The evidence of our inability is widespread.
The real reason for the lack of program manipulation tools (in significant use) is that the programming languages (in significant use) do not allow formulae (i.e. programs) with the same simple algebraic properties as are enjoyed in mathematics. Languages such as C and FORTRAN have ill-defined and highly complex semantics that rule out the prospect for any formal analysis or manipulation.
On the other hand, transformation tools exist for some functional programming languages and also, of course, for occam [20, 21]. Occam is the exception to the general statement in the preceding paragraph. It is the only programming language in significant (industrial) use that only allows formulae (i.e. programs) with simple algebraic properties. Nevertheless, the use of the Oxford tools [21] is not very widespread — it seems not to have gone much beyond INMOS (and its sub-contractors), where it has played a crucial role in the design of major features of the T9000 transputer [22]. The Oxford tools do not include the serialising transforms described in this paper. We are keen to use such tools at Kent and work is in progress here to produce them.
Finally, we summarise our approach to (parallel) computing applications:
- design a solution incorporating as much parallelism as naturally falls out from the application — this is usually massive;
- balance this across the number of processors at our disposal — this is easy so long as the parallelism in the algorithm greatly exceeds the parallelism in the hardware;
- for each individual node, serialise the worker processes so long as this yields significant optimisations — i.e. a complete serialisation is not always necessary and may be counter-productive. We need tools to assist us in this.
We have been fortunate in being able to avoid working with “dusty decks”. Extracting parallel code from them (in order to exploit parallel hardware) is as hard as extracting “high-level” source code from a raw assembler listing. It’s quicker and safer and cheaper to go back to the original problems and re-write them from scratch using the higher paradigm. We must, of course, use a proper multi-processing language that allows the use of formal methods and enables us, and automated tools, to work.
8. Acknowledgements
The work of one of the authors (GRRJ) has been funded by the Brazilian Research Council (CNPq), under grant No. 205034/88-8, and we are especially grateful for their support.
We are also indebted to the community of parallel system engineers within the Computing Laboratory at the University of Kent, who have created the culture from which the particular experiences reported in this paper have been drawn. That work has been variously supported by the Computer Board Initiative on Software Environments for Parallel Computers, the SERC/DTI Transputer Initiative, the Royal Armament Research and Development Establishment and the COMETT training programme of the EEC.
9. References
How Do Free/Open Source Developers Pick Their Tools? A Delphi Study of the Debian Project
Martin F. Krafft
Debian Developer
Munich, Germany
mail@martin-krafft.net
Klaas-Jan Stol
Lero—the Irish Software Research Centre, University of Limerick, Ireland
klaas-jan.stol@lero.ie
Brian Fitzgerald
Lero—the Irish Software Research Centre, University of Limerick, Ireland
bf@lero.ie
ABSTRACT
Free and Open Source Software (FOSS) has come to play a critical role in the global software industry. Organizations are widely adopting FOSS and interacting with open source communities, and hence organizations have a considerable interest in seeing these communities flourishing. Little research has focused on the tools used to develop that software. Given the absence of formal mandate that would appear in traditional organizations, an open question is what influences a FOSS contributor’s decision to adopt a tool and how workflows get established in FOSS teams. In this paper we report on a Delphi study conducted in the Debian Project, one of the largest FOSS projects. Drawing from data collected in three phases from a panel of 21 carefully selected and well-informed participants, we identified 15 factors that affect decisions to adopt tools and relate those to existing models of innovation and diffusion.
CCS Concepts
• Software and its engineering → Software maintenance tools; Collaboration in software development
Keywords
Free/open source software, tools, Delphi study, qualitative study
1. INTRODUCTION
Tools play an essential part in software development [15, 32], and research in this area has been extensive [18]. New tools and technologies are continuously emerging, which in turn affect the way software is developed. Much research on software tools and environments has focused on industrial software engineering development contexts [4, 20, 30]. However, a significant development has been the rise of Free and Open Source Software (FOSS), which has gained significant attention from both researchers and practitioners in the past two decades [13]. Since then, FOSS has been widely adopted in industry [19], and represents an important part of many software products. Therefore, a good understanding of how such projects work is essential. While there has been much research on FOSS, the use and selection of software development tools in FOSS has received very little attention [5], despite the fact that tools play a critical role in FOSS development.
One highly successful FOSS project is the Debian Project, founded in 1993, and run entirely as a development community comprising over 2,500 volunteers. As such, it is one of the largest FOSS projects [1]. The Debian project produces several operating systems, of which Debian GNU/Linux is the most popular, providing over 43,000 packages for ten different hardware architectures. Furthermore, the Debian System is the basis for around 150 derivative distributions, including the popular company-controlled Ubuntu distribution produced by Canonical. After more than 20 years in existence, some of the project’s processes are still difficult to scale, which is needed to meet the tremendous growth the project has seen. Activities such as library transitions currently require dozens of contributors to work hand-in-hand, and often stall because of bottlenecks. Day-to-day tasks are often tedious and error-prone, relying on developers to maintain consistency: keeping track of patches, triaging bugs, following policy changes, and working with both ‘upstream’ projects (the original source projects included in a Debian distribution), and ‘downstream’ derivatives, to name just a few challenges. Looking at the way these processes are currently handled, it is surprising that contributors of a system as technically sound and universally applicable as Debian are still doing manually what a computer should be doing for them. Tasks such as those mentioned above could be streamlined and optimized to avoid redundancy and points of failure due to their brittle integration.
Improved tools and techniques are necessary to increase the efficiency of the Debian Project’s contributors. Debian is a volunteer-controlled project—most of its contributors are not paid to work on the project, and therefore have limited time available as most of them have daytime jobs. Some sophisticated tools and techniques already exist and new technologies emerge frequently, but these are not readily adopted. Many contributors try to identify and communicate better approaches, but an in-depth understanding of individual adopters’ behavior is lacking. As a result, new ideas only slowly rise to become competitors with existing approaches. Software technology transfer has been identified as a considerable concern [36, 38], but a deep understanding of the factors that influence tool adoption among voluntary FOSS developers is still largely missing. Existing frameworks and theories such as the Technology Acceptance Model (TAM) tend to focus exclusively on either individuals or non-volunteer (commercial or not-for-profit) organizations, and therefore are unsuitable to explain adoption in volunteer-based communities.
This research focuses on a challenge commonly found in volunteer-driven communities: a lack of authoritarian structures makes it impossible to mandate change. While the Debian Project, its members, the collaboration between them, and the approaches used are in continuous flux, there is no obvious means to drive change in a given direction, because ultimately, a decision to change lies with each individual, and project-wide change thus depends on the entire community. Given the large size of the Debian Project, we chose to focus on one specific area that is of
particular importance to Debian’s success as a distribution containing tens of thousands of software packages: software packaging. Given the critical importance of tools in large FOSS projects and the lack of insight on how these tools are selected in volunteer-driven projects, we investigated the following question:
Research Question: What factors influence the Debian package maintainers’ decision to adopt new tools or techniques?
This study focused specifically on the Debian Project as the primary author is a long-standing contributor to this project [23]. The paper proceeds with a background discussion on the Debian project and innovation in FOSS projects (Sec. 2). We then present the details of the Delphi study that was conducted (Sec. 3). This is followed by a presentation of the results of our study, namely a set of factors that affect the adoption of tools and techniques (Sec. 4). The paper continues with a discussion of the findings, the implications for research and practice, as well as the threats to validity of this study, followed by an outlook on future work (Sec. 5).
2. BACKGROUND AND RELATED WORK
2.1 Package Management in Debian
Debian as a FOSS project takes an extraordinary and somewhat radical approach, in promising that “the Debian system and all its components will be free,” and that the project “will never make the system require the use of a non-free component” [9]. Traditionally, the Debian Project distinguishes only between developers and non-developers, all of whom are considered users. Over the years, as the project grew and more people contributed in an ever-increasing variety of ways, additional roles emerged. In the Debian Project, not only developers adopt new tools and techniques, but every contributor. Moreover, the clear trend towards more intensive collaboration within the project, across teams, and even across distributions results in higher degrees of interdependencies between individuals. One contributor’s decision in favor of or against a tool may have a significant effect on another contributor’s decision to adopt it, and on this level, it matters little who is an official project member and who is not. The Debian Project is an organization driven entirely by volunteers. The project does not pay any of its developers, nor does it let its sponsors or its legal entity have any influence in the project’s technical interests.
The packaging workflows used by Debian contributors are sub-optimal; common methods are minimally integrated at best, and package maintainers lose time and energy on repetitive, error-prone tasks. This causes individual frustration and slows project progress. Ironically, improved tools featuring better integration, collaboration facilities, and greater degrees of automation do exist. In the two decades since Debian’s foundation, a number of packaging automation tools have been widely adopted, so large-scale workflow improvements do happen. However, countless others never reached significant levels of use, and this begs the question as to what factors might be at play. Recently, the project has seen strong trends towards techniques supporting distributed development, which promise solutions to many of the (centralized) deadlocks and bottlenecks in the project. Yet, project-wide acceptance has been slow, for reasons that are not always obvious.
2.2 Explaining Adoption of New Tools
Tools, and tool integration specifically has gained sustained attention from researchers studying traditional organizational contexts. However, in the FOSS context, Crowston et al. [5] observed that “surprisingly little research has examined the use of different software development tools.” Previously, Oezbek and Prechelt sought a suitable research method for studying process innovation in FOSS projects [37]. However, their focus was specifically on innovation by people (e.g., researchers) that are not members of a FOSS community. Shaikh and Cornford studied the adoption of a commercial version control system (BitKeeper) in the Linux kernel project in 2002 [45]. The key reason why CVS (the most popular version control system (VCS) at the time) was not adopted was technical, as CVS did not have all the features that Linus Torvalds required as benevolent dictator of the Linux kernel project. Torvalds subsequently started development on Git [46], a popular distributed VCS.
Redwine and Riddle discuss a number of factors that either inhibit or facilitate software engineering technology maturation [40]. Some critical factors they discuss are a clear recognition of need, tuneability, and management commitment, and inhibitors include high cost and contracting disincentives. However, Redwine and Riddle focus on the maturity of technologies (i.e., the product), rather than the process that influences adoption of new tools. Also, factors such as management support and contracting disincentives do not play a role in volunteer-driven FOSS projects.
Numerous frameworks have been proposed to understand and explain technology diffusion and adoption, which can roughly be divided into two groups: (1) frameworks and theories that consider adoption at the individual level (e.g. Rogers [41]), and (2) those that focus on the organizational level (e.g. Kwon and Zmud [25]).
Perhaps the best-known model in the first category (focusing on the individual) is the Technology Acceptance Model (TAM) [8], which has been referred to as “the most influential and commonly employed theory for describing an individual’s acceptance of information systems” [27]. The model was highly revolutionary at the time of its conception, and has since been extended in several ways. However, it has also been criticized for its limitations [2], one of which is that it is based only on attitude and behavior.
Another model is the Task-Technology Fit (TTF) model [16], which states that the ‘fit’ between a task and technology is “the matching of the functional capability of available software with the activity demands of the task” [10], but this ignores human and social factors such as personal preferences of developers. Others focus on specific contexts; for example, both Rossi et al. [42], and Fitzgerald et al. [14] present frameworks to explain FOSS adoption in the public sector. Eckhardt et al. studied the impact of socio-cultural influences, referring to the influences of colleagues and other departments in an organization [12].
To summarize, tool adoption in FOSS communities, consisting of independent volunteers, cannot be explained by existing theories and frameworks for a number of reasons:
- **Exclusive perspective on individuals or organizations.** Tool adoption in FOSS communities does not happen exclusively on an individual or organization level.
- **Unsuitable for volunteer-driven communities.** Existing frameworks focusing on organizational adoption are not suitable as volunteer-driven FOSS communities have no formal authoritarian leadership (‘management’) or business-focus [48]; there are no change agents [38].
- **Assuming independence of adopters.** Existing frameworks tend to assume independence among adopters, i.e. absence of network effects. FOSS communities, however, rely on close collaboration among its contributors and therefore imply a level of dependency among project members.
- **General focus on innovations.** Many frameworks consider the diffusion and adoption of ‘innovations’ in a general sense, but not software tools specifically.
It is also worth noting that many of the frameworks proposed are based on observations of “historical accounts” and attempt to provide a generalized process of technology adoption (e.g. [40]); others are based on a set of factors that have been identified a priori before any feedback is solicited from experts (e.g. [42]). While these approaches are not invalid, they ignore perhaps the most important stakeholders, namely the volunteers adopting the technology.
3. RESEARCH DESIGN
This paper reports the results of a Delphi study modeled on the Policy Delphi approach. The Delphi method has seen very limited adoption in the software engineering discipline. We first present a brief discussion of the method. We then discuss how the Delphi panel was selected, and provide details of the Delphi process including data collection and analysis. A more detailed description is offered in the first author’s dissertation [24].
3.1 The Delphi Method
The Delphi method was developed at the RAND Corporation in the 1940s as a way of finding “the most reliable consensus of opinion of a group of experts” [6]. The original Delphi study sought to investigate the impact of technology on warfare and was exploratory in nature [7]. A Delphi study “may be characterized as a method for structuring a group communication process so that the process is effective in allowing a group of individuals, as a whole, to deal with a complex problem” [28]. It is an instance of moderated communication: a facilitator serves a series of questions to the participants, who return their answers to the facilitator. The answers are anonymized, collated, and returned to all participants, who can then modify their response in the light of the feedback from the previous round. Alternatively, the facilitator may pass out a new set of questions, which have been designed to incorporate the returns from the previous round.
Since the Delphi approach was originally put forth by Dalkey and Helmer [6], several researchers have modified the method resulting in a number of variants. This study’s design was based on a Policy Delphi approach [47]. Whereas the Delphi method traditionally aimed at gaining consensus, the policy Delphi aims “to support decisions by structuring and discussing the diverse views of the ‘preferred future’” [22] and “seeks to generate the strongest possible opposing views” regarding an issue [47]—or what one might call ‘dissensus’ to differentiate from consensus-seeking Delphi studies [29]. Rather than a tool for decision making, the policy Delphi can be used to “generate options and suggest alternative courses of action for consideration” [34]. The aim of this study was to understand the process of tool adoption in a FOSS community; a Policy Delphi was deemed appropriate, as it would help to suggest ‘courses of action’ for FOSS developers.
3.2 Selection of Delphi Panelists
Selecting the right participants for a Delphi panel is key to a successful study. Okoli and Pawlowski called panelist selection “perhaps the most important yet most neglected aspect of the Delphi method” [33] and Judd claimed that deciding ‘who is an expert’ is “the single most confounding factor in panel selection” [21]. An important consideration in the study design was that the primary author is a long-time member of the Debian community, and it was important to include people that he was not closely acquainted with, and with whom the research issues had not been discussed prior to this study.
To select members for the Delphi panel, project members were identified who took part in team efforts, or otherwise cooperated with others in the project. These were identified through several channels, such as scanning the various mailing lists, IRC (Internet Relay Chat) logs, the package maintainer database, and notes from various meetings and discussions at Debian conferences. A total of 162 people were asked to nominate colleagues whom they deemed to have deep insights into the adoption behavior of Debian contributors, along with a short reason for nomination. Self-nominations were explicitly mentioned as an option. From this, 98 responses were received, with a total of 429 nominations. From the list of nominations, 48 people were identified who received three nominations; of those, 10 were excluded due to unavailability. From 50 other nominees who had two nominations each, five were manually selected whose nominations made them particularly interesting candidates, resulting in a group of 43 people. In order to determine the nature of the candidate’s project work and collaboration within the project, candidates were asked to provide some information about their involvement, resulting in 36 responses. Based on these responses, candidates were organized on four dimensions, namely whether or not a candidate (1) was involved as a team player; (2) had a uniform set of tasks; (3) used uniform tools; and (4) was interested in workflow improvement. Stratified purposive sampling [35] was employed to select candidates that represented maximum diversity on the dimensions in Fig. 1 [11], as is desirable for a Delphi. In total, 21 panelists were selected that had ‘extreme’ profiles on the four dimensions. This panel size lies well within the recommended range of 15-30 carefully selected participants [29].
3.3 Data Collection and Analysis
The Delphi study took several months to complete, and consisted of four phases (see Fig. 2), which are described next. Three rounds were carried out as part of the Delphi study, which was followed by a ‘reduction’ phase so as to identify a parsimonious set of factors.
Phase 1: Brainstorming. In the first phase, a brainstorming round was conducted to obtain a broad sense of the factors that shape package maintainers’ decisions regarding the adoption or rejection of tools or techniques. The aim of this phase was to exhaustively seek factors. Participants were asked to identify at least six factors that influence the decision to adopt or reject new tools or techniques. In order to encourage participants to be as open and frank as possible, anonymity was assured.
The first phase resulted in responses totaling 3,500 lines. To make the discussion of these responses manageable, responses were organized in a number of categories—Schmidt suggests using up to 20 categories [44]. A first round of analysis resulted in a set of 104 keywords. These were reduced to a set of 40 categories using a concept-mapping approach described by Novak and Cañas [31]. Through further analysis 14 categories were merged, resulting in 26 remaining categories. As the primary researcher deemed 26 categories too many for the next phase, three colleagues were invited to a card-sorting exercise [43]. This resulted in a further reduction to 15 categories; the 3,500 lines of responses were reduced to 1,300 lines, and organized into the 15 categories.
Phase 2: Enrichment. The goal of the second phase of the study was to enrich the data by seeking further qualifications from the panel, identifying the statements that panelists commonly agreed on, and identifying any discrepancies between the panelists’ judgments. To that end, the 15 categories of related statements resulting from the first phase were sent to the panel. Specifically, panelists were invited to refute factors they did not agree with, identify links between comments from other panelists, and provide additional information as they saw fit. The instructions to the panelists encouraged them to read critically as their agreement would be assumed by default. By only asking for qualifications and refutations, the intention was to enrich the data gathered thus far without excessively burdening the panelists.
The responses in this second round comprised approximately 6,500 lines of text (116 pages of text). Where several statements were related or contradicting, these statements were presented to the panelists in follow-up emails in order to seek further clarification. In total, almost 400 emails were exchanged (one email per issue). Relevant information retrieved from this process was inserted into the list of statements, resulting in a total of approximately 8,700 lines of text (156 pages of text).
This set of data was analyzed by identifying non-obvious and insightful statements by the panel. Long statements were shortened while paying specific attention to capturing the context and essence without losing any critical detail. This resulted in 281 statements. These were subsequently organized into groups of related statements, first by identifying redundant statements (resulting in 152 remaining statements), and subsequently by organizing them in categories of related statements. This resulted in 24 categories. For each category, a short descriptive paragraph was written, which was subsequently proofread by three colleagues.
Phase 3: Instantiation. The goal of the third phase was to identify the salient factors to adoption or rejection decisions among Debian contributors. Rather than seeking a ranking of factors or agreement among the panelists as would be typical for a traditional Delphi study, panelists were requested to provide “stories from the trenches.” A ranking of factors would be ‘weak’ given the diversity among the panelists, as many categories would rank closely to each other. If, on the other hand, the panel had not been as diverse, the ranking would not have been representative of the project.
Panelists were asked to select the three most important factors they had experienced in the context of their packaging work in the Debian project, and share details about how these factors had previously manifested and were expected to do so again in their immediate environments. Further clarification was sought through 40 follow-up emails with the panelists.
Phase 4: Reduction. In the fourth and final phase, we sought to achieve parsimony by combining factors that were similar in essence. For example, two factors that emerged from an earlier phase of the study were ‘modularity’ and ‘transparency.’ The former refers to the level of granularity (fine vs. coarse-grained), which affects the ability for a maintainer to follow the various steps. A tool that defines an interface at a high level of abstraction (i.e. a coarse-grained interface) exhibits a lower level of transparency because it is harder to follow the internal mechanisms of the tool. Because these two factors were so closely related as two sides of the same coin, these were combined into a joint factor.
4. RESULTS OF THE DELPHI STUDY
The Delphi study resulted in a set of 15 factors presented below. Each factor is summarized followed by an elaborated discussion.
**Factor 1: Sedimentation**
New ideas can take time to gain widespread acceptance. People reject ideas until they understand the underlying problems, are able to formulate them succinctly, and identify the benefits of a solution.
For new technologies to be accepted, awareness of such technologies must grow and the benefits they offer must be clear, but this process can take a significant amount of time. One panelist mentioned the example of distributed version control systems (DVCS): “DVCSs have been around for years, and it’s only now (last 2 years) that we see a real growth in users.” Technologies may seem to be too revolutionary at first for the wider community to perceive them as ‘ready’ for adoption. New technological solutions may address problems that people may not have clearly formulated ‘in their heads,’ or ‘seem irrelevant.’ Through a process of ‘sedimentation,’ a new technology slowly gains recognition, and at some point people may become sufficiently comfortable to start using it. However, this process could take years.
**Factor 2: Marketing**
Using appropriate channels and content, active promotion or marketing of a new tool or technique can feed excitement and exposure of the innovation, and can stimulate others to evaluate them.
One panelist explained: “Having some buzz and excitement around a new tool or technique seems to help. If several people are blogging about using something, lots of other people will become aware of it, and start thinking about using it.” It is important, however, to use appropriate channels. Success stories and a positive attitude are stimulating, especially when needs are met instead of created; on the other hand, inappropriate corporate links (given the ‘free’ nature of Debian, and of FOSS projects in general), premature promotion, and TV-style marketing can have negative effects and ought to be avoided.
The Debian Project is not lacking any communication media—an abundant and diverse collection of communication channels is available which can facilitate the spread of information, including mailing lists, blogs, and IRC channels. This forces volunteers (with limited time) to select a subset to concentrate on, which potentially creates smaller, well connected ‘cliques’ or ‘tribes’ who may not interact with one another.
While mailing lists seem to be the first choice for spreading information, blogs can have a huge impact. One panelist recalled how the project’s extensive adoption of Git was partly due to the ‘buzz’ on Planet Debian (an aggregate of blogs of Debian developers) on this topic, despite the fact that other systems such as Mercurial and Bazaar had a reputation for being easier to use. An appropriate marketing approach should consider a number of aspects, including frequency (repeated exposure), timing, the choice of channels to use, and the content of the marketing message. Finally, care should be taken not to ‘overhype’ so as to prevent disappointing potential adopters.
**Factor 3: ‘Peercolation’**
Information spreads through networks of peers, and information that flows between peers is often accorded a higher weight. Those with significant experience in an area and who can clearly explain a tool’s benefits, get more respect. People tend to favor peers they trust.
People tend to favor peers they trust, or with whom they have overlaps in interest or heritage. The term ‘peercolation’ was coined by one of the panelists to describe the percolation of information (and particularly knowledge of innovations) through networks of trusted peers. While related to ‘marketing’ (discussed above), one panelist clearly distinguished the two concepts: “I think of ‘peercolation’ as the spread of tools and techniques through normal use of them for one’s work and normal discussion, whereas marketing is instead the conscious attempt to spread a particular tool or technique.”
One panelist argued that most Debian community members prefer to use ‘standard’ tools. There is a general perception that there are many good ways ‘of doing things,’ and that anything that is broadly used is likely good enough. In other words, tools that have a significant momentum and are widely adopted are likely to get more support from others. Another case where people depend on their peers is when there is little time for an individual evaluation of a tool, and they must then rely on trust to shortcut the evaluation. As one panelist illustrated: “knowing what people you trust are using or interested in is a major factor.” Others still prefer to evaluate a new tool themselves. One factor at play here is the credibility, or status, of peers. Most panelists agreed that messages from respected peers weigh heavier than messages from others.
Finally, an innovation’s pedigree may also affect its adoption, related to the question of why and by whom a tool is developed. For example, one panelist claimed that Bazaar (a DVCS) had “a bad start in Debian,” because it was developed by Canonical, the corporate entity sponsoring Ubuntu. This is of particular importance in a FOSS project such as Debian, given its core principle of independence. While anti-corporate feelings were not generally shared among the panelists, there are some within the Debian community who have an anti-corporate bias. Pedigree is not exclusively a matter of corporate involvement; FOSS tools, too, may be viewed critically. One panelist referred to Git as an example: “It is the very aura of the kernel that puts people off Git. The perception is that a tool designed for kernel development would be overkill for simple user space tasks.”
**Factor 4: First Impressions**
First impressions usually establish inertia for or against an innovation. A clearly defined mission statement that explains the rationale and principles of the innovation that does not require specialized domain knowledge is likely to positively affect first impressions of outsiders.
A user’s first impression may prove to be an important factor in the decision to keep using a tool or technique. One panelist recalled a project in which a DVCS-style package management approach was attempted. However, the combination of the size of the packages, the specific DVCS selected, and the infrastructure that was used for hosting the repository, resulted in a system that was too slow and ‘extremely painful’ to be used effectively, as he described: “this negative initial experience has made me very reluctant to use that system again, even though many people describe it as ‘much faster now.’” Another panelist commented that this reaction was curious, considering the “release early, release often” spirit commonly found in free software [39]. While a negative first impression may make a significant ripple through the community, positive first impressions have far less impact since fresh enthusiasm about a new tool is usually taken with a grain of salt.
A clearly defined purpose or mission statement will positively impact the forming of a good first impression. One panelist explained that, “often, the designers of tools tend to assume that all future users will have their knowledge, skills and wisdom, which is a fallacy.”
**Factor 5: Elegance**
Elegance is a subjective reward, but community members expose common preferences, including technical excellence, perfectionism and aesthetics.
Working on something that pleases can help increase one’s efficiency. The design, quality of implementation, and technical correctness of a tool or technique can be important factors to some users, but users also have personal preferences that cannot be easily qualified and which may lead to irrational behavior. One panelist illustrated this: “One of the most significant factors for tool adoption for me is a perception of ‘cleanliness.’” Another panelist added: “aesthetics is part of the efficiency. I’m more prone to be efficient and willing to modify something that pleases me, than something horrible and broken.”
The striving for technical excellence in the project is common, and one panelist claimed that “the quality of implementation might sometimes be an important factor to decide about adopting a tool or not.” Others agreed that this striving for perfection among contributors is a core cultural trait of the Debian Project, where contributors are not told what to do, do not work to deadlines, and simply want to properly maintain their packages. Perceptions of ‘elegance’ are inherently subjective, and impressions can turn into ‘religious beliefs’ and the defense of tools against all forms of criticism without any facts to back up claims. This is common in cases where tools are more or less equivalent in features (e.g., the Vi vs. Emacs text editors).
**Factor 6: Resistance**
Initial resistance to new ideas can help to separate good ideas from bad ones. Resistance can be met with conversion instructions, support, and patience.
Resistance to change is a common negative factor to adoption behavior, but not without positive aspects. One panelist explained an inherent resistance to changing the status quo: “I think it is inertia: you have settled on a workflow that ‘does the job’ and even if it has some glitches, it is generally okay, and the corner cases happen not too often.”
Reasons for resistance include a general time scarcity, a preference to get ‘actual work’ done, a lack of understanding of the new proposed concepts, and a categorical unwillingness to depart from existing approaches. On the other hand, resistance can help to filter out the good from the bad ideas. The latter are unlikely to withstand resistance for longer periods of time, and consequently the project does not lose time with tools that will not survive, and avoids having to recover from problems caused by mistaken adoption. Inadequately preparing for or supporting a change can cause a loss of interest, which makes overcoming others’ resistance difficult. One panelist described how the disorganized state of the wiki page tracking the discussion on machine-readable copyright files made the process so inaccessible that potential supporters
turned away. On the other hand, advocates who took care to maintain available information and actively managed the discussion had more success in having their proposals adopted in the project.
Finally, some changes might affect maintainers of large numbers of packages more than the majority of the project members. They might raise resistance in order to defend themselves against an increased workload due to a proposed change. Related to this is the case where an improvement over previous approaches may not result in greater efficiency for the individual, but only at the project level. One panelist explained: “Everyone tries to work as much as possible in the limited free time s/he has. This means that a new tool/technique increasing the time needed to fulfill a task will not be adopted, no matter how better coded, elegant or scalable it is.”
Potential adopters will weigh adoption cost against a tool/technique’s benefit. Debian contributors will without a doubt consider tools or techniques that automate manual labor and reduce maintenance costs. However, they are also aware of the costs of adopting a new tool, as one panelist explained: “People may acknowledge the benefits of a tool, they will be reluctant to spend too much time on it before reaping the benefit, as they will want to be ‘getting things done.’” The cost-benefit trade-off is influenced by a number of factors, such as the time investment needed, pragmatism (‘good enough’), and ‘doing the right thing,’ that is, finding the right tool for a given problem, even if this takes more time than a manual approach.
**Factor 7: Sustainability**
Confidence in the development direction and future of a tool or technique makes it a sustainable choice. Maintainers should incorporate feedback and enable users to influence the direction of development.
User faith in the development direction and future sustainability of a tool or technique was also found to be a factor. Developers want a certain level of confidence that a tool develops in the ‘right’ direction, and that its maintainers incorporate feedback, allowing users to influence that direction. A lack of such confidence increases the risk of a waste of invested time. Debian contributors tend to seek tools and techniques that will not disappear or become neglected. While predicting which tools will ‘survive’ is impossible, two key considerations are how well a tool is maintained, and the community that has formed around it to maintain the tool. As one panelist explained: “Since Debian packages change and software changes and requirements change, the tool needs to evolve. This requires an active development community. Tools that aren’t being actively developed end up being more work to use.”
Maintainers of tools can play an active role in addressing potential adopters’ needs and perceptions of sustainability. One panelist recalled his analysis of the entire source archive to identify the number of packages using debhelper (the tool he was developing) versus other tools. He explained that, “Improving market share was mostly a matter of figuring out why people were not using it and adding the features they needed, and responding to bugs and feature requests quickly.”
**Factor 8: Quality Documentation and Examples**
Well-maintained documentation and clear examples are needed for widespread adoption. Early adopters seek background information including rationale for the innovation; later adopters seek tutorials. Examples provide practical starting points.
The availability of quality documentation and examples was also found to play a significant role in an adoption/rejection decision.
Documentation is a necessity, especially when a tool/technique diverges far from the current processes, as one participant explained: “The importance of documentation is directly proportional to the amount of divergence from similar tools or existing workflow patterns.” Mailing list archives and source code are not sufficient for widespread adoption—documentation needs to be maintained and must cater for different types of users; early adopters want background information and care about motivation, while late adopters tend to seek tutorial-style documentation. High quality documentation is also a sign of the maturity and stability of a tool, as one participant explained: “Documenting ideas can be seen as a sign that they are serious and get stable.”
**Factor 9: Trialability and Scalability**
New tools and techniques are evaluated in the context of individuals’ own use-cases to determine their worth. Trying out a new tool should be as easy as possible, since that is one of the best ways to form an impression.
The importance of the ease with which a new tool or technique can be tried was succinctly illustrated by one panelist: “The first time I try out a new system, am I able to do anything (even something silly) with it in the first 10 minutes of using it?” Tools that require complex configuration or infrastructure to be set up in order to run have lower trialability; one panelist gave a comparative example: “lintian [a package checker] has great trialability because you don’t actually have to do anything to test-drive the tool, you just run it. SVN-buildpackage requires a bit more involvement.” Directly related to trialability is a tool’s scalability. The idea of scaled use is that one can put a tool or technique to use with ease for basic tasks, and still continue using the same approach as the complexity in usage scenarios increases. Some tools may be easy to use due to simplifications that would inhibit more complex use-cases.
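To make the lintian comparison concrete: a first test-drive is a single command against an already-built package (the package file name here is illustrative):

```
lintian hello_2.10-1_amd64.deb
```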
Over time, people will have established workflows, and adapted tools to fit those. They may want to improve and evolve those workflows, rather than replace or revolutionize them. Volunteer free time is limited and fragmented (e.g., a few hours per day), and adopting new tools or techniques often requires significant chunks of time, and thus presents a potential inhibitor to adopting such new tools or techniques. One participant illustrated this as follows: “Large monolithic changes to processes tend to take a lot of time, require a lot of debugging and can be disruptive to a general goal of getting things done. [...] It’s much easier to adopt a tool or technique that can be applied in small chunks or in a self-contained area, or slowly over time.” In other words, the level to which a tool facilitates a gradual or evolutionary adoption may be more appealing than one that would cause a disruption (revolution) to the existing workflow.
**Factor 10: Compatibility and Genericity**
Compatibility means that less time will be required for a new tool. The ability to reuse tools in other contexts will also positively affect adoption. There is a delicate balance between flexibility and usability.
Learning a new tool requires a time investment from developers, and with limited time available to work on a project, they will be very selective regarding how they spend their time. Tools that can be easily learned, or which automate tasks that developers are already doing, can be readily adopted; tools that build on known concepts or are ‘finger-compatible’ are easier to learn—this was thought to have played a role in Subversion’s adoption rate (replacing CVS). One panelist emphasized that adopting a new tool will have a negative, temporary impact on productivity: “Developers will build up their own arsenal of tools and their accompanying workflows, and changing [tools] will cost productivity, so it is important that the impact of the switch be limited.” Another panelist agreed, arguing that if a tool is so distinctive that it distorts normal workflow patterns or requires adjustments to long-established patterns, the perceived ‘quality’ of the tool will be diminished.
Compatibility is important on the conceptual level as well. One panelist criticized Git for its terminology that is incompatible with the terminology used in existing version control systems (VCS), rather than using compatible language shared with existing VCSs.
Related to compatibility is genericity, which refers to the preference for tools that are usable in different contexts. Being able to streamline work by reusing the same (or similar) tool/technique in different scenarios can play a decisive role in an adopt/reject decision. Once adopted, generic solutions tend to be reused, as one participant explained: “People usually have their favorite packaging helpers, patch systems, etc., and when creating a new package will often reach for the last similar one.” Another panelist also argued that, “reusability of tools is a major factor in their adoption in Debian, often very much at the expense of elegance.”
**Factor 11: Modularity and Transparency**
A very fine-grained solution may require code duplication due to the need to repeat similar sequences of instructions, but has the advantage of offering more transparency and understanding of the various steps. Monolithic solutions, on the other hand, are less flexible and transparent: their higher level of abstraction leads to a loss of control and makes a tool more difficult to understand.
Different tasks and communities need different levels of abstraction. We illustrate this point using an example of two widespread build utilities: debhelper and CDBS. Debhelper is a collection of scripts in the Unix spirit, each with a well-defined task and a consistent interface [39]. CDBS, which uses debhelper internally, presents a more abstract interface to the user, and exposes a large number of options to configure the build process. There was little agreement among developers about which was better as both tools have benefits and drawbacks. Participants emphasized a number of benefits in an abstraction layer such as CDBS—less code duplication, for example. Another benefit was that CDBS encourages maintainers only to specify the ways in which their packages deviate from the default behavior. Panelists responsible for large numbers of packages seemed in favor of higher levels of abstraction. However, others argued that, “sometimes it’s better and clearer to explicitly have ‘repeated’ code.” Also, one participant thought defects within CDBS were more difficult to fix, and claimed that, “using CDBS means you are ceding control of your package to the maintainer of that central tool, because routing around damage becomes substantially more difficult.” Striving for the ‘right’ level of abstraction involves a compromise between individual flexibility and regularity across the project.
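To make the contrast concrete, compare minimal debian/rules files in the two styles (sketches only; real packages add package-specific targets and overrides):

```makefile
#!/usr/bin/make -f
# debhelper style: the dh sequencer drives an explicit, overridable
# sequence of small, single-purpose dh_* commands
%:
	dh $@
```

```makefile
#!/usr/bin/make -f
# CDBS style: behavior comes from included rule fragments;
# the maintainer spells out only deviations from the defaults
include /usr/share/cdbs/1/rules/debhelper.mk
include /usr/share/cdbs/1/class/makefile.mk
```

The first file keeps every step visible and individually replaceable; the second trades that transparency for brevity and centralized defaults.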
**Factor 12: Maturity**
Tools must exhibit a sufficient level of maturity, i.e., they must provide a reliable base before people will trust and depend on them. This implies that they should not change in ways that would require users to re-learn them or change their scripts.
Tools or techniques must provide a reliable base before people will depend on them. A tool/technique must be usable to attract and sustain followers; it should not change continuously and require users to re-learn or change their scripts, as one panelist explained: “Implementation maturity is important, i.e., stable interfaces and relative freedom from serious bugs over time.” While the maturity of a tool is important, this does not suggest that tools should only be introduced when the software is ‘finished,’ as one participant explained: “Release early as long as the software gets a simple, yet meaningful, job done: that all gives people a reason to start using it, and hopefully contribute to its growth.”
The importance of maturity depends on the ‘reach’ of impact that a tool has within the project. For changes such as those to the Debian source package format (which would have a very far-reaching impact), “you need to consider every single aspect.”
Tools that seek to replace existing approaches will be scrutinized and compared to the current ones, and any problems will become barriers to adoption, as people can just stick with the status quo. On the other hand, new approaches that fill niches, or solve problems that were hitherto not addressed or unknown, instill lower expectations and requirements. The tolerance to new problems decreases as tools age, but project members also seem to build up a group tolerance to existing problems over time, which become ‘well known’ and get documented. Expectations of tools proposed to address such problems seem to grow, and tools that are not complete solutions are easily rejected.
**Factor 13: Network Effects**
Network effects have become increasingly noticeable within the Debian project, as the project has shifted more towards team collaboration to alleviate the bottlenecks due to the voluntary nature of package maintainers and the increasing package count. It is crucial to make changes harmoniously with collaborators; as such, network effects may slow down adoption of new tools, but they also help to ensure that the quality of tools in use remains high.

The need to collaborate can restrict individual developers in their choice of tools. A community member is free to choose any text editor, for example, but when it comes to build dependencies (such as a patch management system), the tool used has to be compatible with those used by others. This effect slows down adoption of new tools within teams, even if a new tool constitutes an improvement. Making the use of a certain tool mandatory could be possible under certain circumstances (e.g. a small team), but doing so may alienate other collaborators who may be unwilling to follow suit. One panelist elaborated: “I think the reason for team conservatism is more that changing a tool or technique in a team would force the change on all members: the larger the team, the more the inertia.”
The choice of a tool may also be affected by accessibility factors; potentially, tools may be adopted which are not ‘the best’ but which are more accessible to contributors. A conservative adoption strategy as a result of group inertia towards adopting new tools may result in a selection of ‘better’ tools, as ‘bad’ ones may not be accepted by a team of collaborators.
**Factor 14: Consensus**
Achieving consensus is a necessary process in a volunteer-driven project. Decisions that are made without consulting the community can be considered cabalistic and their authority questioned.
Pioneering work is necessary, but concrete solutions need to follow from that. It is important to build consensus among experienced people. Too much discussion can, however, cause loss of focus and hinder change. Debian contributors cannot be forced to use a particular tool, although tools may be mandated through a process of standardization (discussed below). Contributors who use incompatible and non-standard tools will effectively be forced to bear the cost of a migration in that case. The process of building consensus is important to prevent increased resistance (discussed above). There seems to be an expectation for a minimum level of discussion; if a decision is made without giving everyone a chance to participate in the lead-up discussion, it may be considered ‘cabalist’ and a decision’s authority may be questioned. Debian prides itself on its openness, and non-public discussions are frowned upon. However, private discussions (among a small group of people) may be useful to anticipate disagreement when preparing controversial proposals (i.e. a new tool or technique that will have a significant impact on the status quo). The ease with which consensus can be achieved depends on the level of controversy that a new tool may introduce; non-controversial changes may be adopted readily, but the amount of discussion increases for topics that will have broad implications for the project.
**Factor 15: Standards and Uniformity**
A number of sources of standards exist in the Debian project, in both explicit and implicit form. The most important explicit sources are the Debian Policy and the Developer’s Reference [3]. The Debian Policy is a binding document describing rules to which packages must adhere to be included in the Debian System. The Developer’s Reference is a collection of responsibilities and best practices for developers. Furthermore, there are several unwritten rules or best practices; for example, the use of certain outdated tools (yada, dbs) should be avoided. Many best practices have not been formalized, either because consensus (discussed above) has not been reached, or because nobody has taken the initiative towards that end. One panelist clarified that, “Debian has a strong culture around the idea [that] ‘we’re all volunteers and so no-one can tell another volunteer how they should do something.’ As a result, policy can only describe current practice, not lead it.” Another panelist added that, “dictating through the [Debian] policy is a very good way to make people mad at it.”
Standards and uniformity should be sought at the right level, i.e., at the level of interfaces, rather than specific tools. This allows people to use different tools, while adhering to a standard interface. One panelist used the following example: “Having all packages of a team in the same Subversion repository doesn’t mean I must use the SVN tool myself. Yay for git-svn.” (Git-svn is a tool that allows bridging between SVN and Git repositories.) Another panelist argued: “In the end it does not matter whether you prefer Git or Subversion, CDBS or debhelper, because what we want is a Debian package which fits nicely in with the rest.”
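As a concrete sketch of that interface-level compromise (the repository URL is hypothetical), a contributor can work in Git against a team’s Subversion repository:

```
git svn clone svn://svn.example.org/pkg-team/trunk pkg-foo  # mirror the team SVN repository as a local Git repository
cd pkg-foo
# ...edit and commit locally with ordinary git commands...
git svn dcommit  # replay the local Git commits back into Subversion
```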
Consistency across the project as a whole can motivate change. A good reason for adoption of a tool can be the desire to re-align outlying factions, i.e., teams who are doing things differently. One panelist commented that, “While diversity is good to let many techniques compete, there’s a time when they have matured where that diversity hurts more than anything else.”
Uniformity may also be triggered by critical events, such as the ‘OpenSSL debacle,’ which refers to a Debian-only patch to the OpenSSL package causing its cryptographic key to be predictable [26]. While this incident cannot be reduced to a lack of uniformity, some argued that the problem could have been prevented if there had been a uniform, canonical resource to track divergence between Debian and upstream software. The discussion that followed resulted in improved guidelines for patch management as well as a project-wide patch tracker.
**Figure 3. Multi-stage model for innovations in open source**
### 5. DISCUSSION AND CONCLUSION
#### 5.1 A Model for Innovation in FOSS Projects
The 15 factors that resulted from the Delphi study each play a role in the “cycle of innovation” in open source projects. Based on well known and widely used models of innovation adoption (e.g. [25, 41]), we suggest a multi-stage model for innovations in FOSS projects. The model is shown in Fig. 3 and consists of seven stages numbered A to G, which are discussed in more detail below. It is important to note that the factors’ influence is not restricted to any particular stage—rather, each stage has a primary focus in which certain factors are more prevalent than others.
**5.1.1 Stage A. Knowledge**
Stage A, “Knowledge,” refers to the idea that knowledge about new tools needs to spread before these tools can be adopted. Diffusion of knowledge happens through a number of channels, and several factors affect the spreading of knowledge. The first three factors, sedimentation (representing the time factor), marketing (pro-active advertising), and ‘peercolation’ (opinions of respected peers), are key in this first stage.
**5.1.2 Stage B. Individual Persuasion**
In the second stage, potential individual adopters form an opinion about an innovation before they decide to adopt—this separation between persuasion and decision follows Rogers’ innovation-decision process [41]. Several factors play a role in this stage. Making a good first impression is important, as well as an individual’s opinion regarding a new tool’s elegance. Resistance must be overcome, and finally an individual must be convinced of the new tool’s sustainability. This stage is concerned with forming a favorable (or unfavorable) attitude.
**5.1.3 Stage C. Individual Decision**
If a potential individual adopter has formed a favorable opinion about a new tool to the extent that it has become a candidate for adoption, some practical factors come into play. Based on quality documentation and examples, an individual may start to gauge how the new tool can be used. The ease with which the tool can be tried out (trialability) and scaled up (scalability) will affect this decision. Furthermore, an important consideration in the decision stage is also whether or not the tool is compatible with the current workflow, and whether or not the tool is sufficiently generic that it can be used in other contexts—i.e. the time investment made for converting may be well worth it.
**5.1.4 Stage D. Individual Implementation**
Once a decision is made, an individual adopter may start implementation. It is important to note that this stage may still be aborted—implementation here does not imply successful adoption, but merely that efforts are made to start using the tool in practice. Of particular importance in this stage are the **modularity and transparency** of the tool, as these directly affect an adopter’s understanding of the level of precision that can be achieved to automate the task that the tool aims to enable.
**5.1.5 Stage E. Organizational Adaptation**
Organizational adaptation is the next stage. In the case of a FOSS project the organization should be interpreted as the community. This stage starts after a considerable number of individuals have adopted an innovation. Knowledge of the innovation will have spread through the community and individual adoptions will converge, and the community will extend, re-invent or combine (adapt) innovations in a shape that works best for the community as a whole. In this stage, the **maturity** of an innovation becomes important—individuals may have different thresholds for accepting flaws, but for a community to accept an innovation it must exhibit a sufficient level of maturity. Also, in this stage **network effects** come into play, as a successful adoption of an innovation depends on community-wide acceptance.
**5.1.6 Stage F. Organizational Acceptance**
After organizational adaptation, the next stage is organizational acceptance. Once **consensus** has been achieved regarding the use of an innovation, this stage is completed. However, in the context of innovation adoption theory, acceptance is not the conclusive stage; it merely confirms that organizational members (or in our case, community members) are induced to commit to the innovation's usage. A further stage is necessary.
**5.1.7 Stage G. Organizational Incorporation**
Kwon and Zmud [25] stated that “the innovation becomes embedded within an organization’s routine and when the innovation is being applied to its full potential within an organization.” Thus, incorporation is achieved when routinization occurs, that is, when usage of the technology is encouraged as a normal activity, and also when infusion has been reached—increased organizational effectiveness is obtained by using the innovation. Explicit routinization happens through defining a policy or best practice in a standards document, but de facto standards are often sufficient to be considered routine without such definition. The organizational incorporation stage is considered achieved when an innovation has been promoted as a **standard**. Uniformity is a final factor at play in the decision to adopt a certain tool or technique. Uniformity reduces complexity and increases consistency across the project.
#### 5.2 Threats to Validity
Several researchers have argued that trustworthiness is a more appropriate way to judge the validity of qualitative research such as this study. We adopt Guba’s criteria [17] for evaluating naturalistic inquiries, which differentiate them from quantitative studies that typically consider validity types such as internal and external validity. These criteria are credibility, transferability, dependability, and confirmability.
Credibility. We believe the identified factors are all plausible, and our confidence is strengthened by the fact that all factors were identified through a longitudinal process of several months involving 21 experts. This means that the factors have been discussed at great length; none of the panelists indicated that any of the factors should not be included. Furthermore, the Delphi study included a specific phase in which the expert panel was asked for specific instances, thus bringing the factors to life. Thus, we believe the Delphi process itself, having taken several iterations, has contributed to the credibility of the findings.
Transferability. This study focused specifically on the Debian Project, one of the largest FOSS projects comprising tens of thousands of packages. Some of the factors might be of less importance in smaller projects. Most FOSS projects are significantly smaller, even when excluding those projects with only a single contributor. Nevertheless, even smaller projects should consider network effects and consensus, and technical considerations such as elegance are always desirable characteristics. We observe that none of the factors are tied specifically to the Debian Project, and as such we believe these factors can apply to all volunteer-driven projects. We argue that those projects with significant company involvement, and thus with stakeholders that have significant influence to ‘push’ changes, can also benefit from being cognizant of these factors.
Dependability. In our study, the Delphi panel consisted of 21 carefully selected participants through a stratified purposeful sampling strategy. We identified panelists across a number of ‘dimensions’ so as to include people with a wide variety of insights and opinions, as is desirable for Delphi studies. Furthermore, the research process itself is completely recorded, thus establishing an **audit trail** of intermediate research artifacts. This facilitates full traceability of findings back to the original input from panelists.
Confirmability. In selecting the panelists, we took great care in selecting members with whom we had no prior interaction, which was of particular importance given the lead author’s role within the Debian community. Another tactic is that of member checking, which is inherently built into the multi-phased Delphi process. As insights and opinions were recorded, they were analyzed, rephrased and summarized and presented back to the panelists.
#### 5.3 Conclusion
We observed a tension between, on the one hand, the availability of efficient tools and techniques that could help large projects such as Debian scale better, and the slow adoption of these tools and techniques on the other hand. The underpinning challenge lies in the voluntary nature of FOSS projects and the lack of authoritarian decision-making structures to enforce those changes.
This study investigated which factors influence the Debian package maintainers’ decision to adopt new tools or techniques. Using a policy Delphi study conducted over the course of several months involving a panel of 21 carefully selected participants, we distilled 15 factors that affect the decision to use tools and techniques in an FOSS context. These were subsequently organized in a seven-stage model for innovation in open source projects.
The contribution of this paper is twofold. The first contribution is insight into the various factors that affect decisions to adopt novel tools and techniques by FOSS developers in the Debian project. While there have been several studies of the Debian project, to the best of our knowledge this is the first study investigating the adoption of tools and techniques used in the Debian project specifically, and in FOSS projects more generally. As pointed out in Sec. 1, very few studies have addressed this issue.
The second contribution is methodological, through its demonstration of the viability and use of the policy Delphi method to study a contemporary phenomenon in software engineering research in general, and FOSS in particular. The Delphi method has seen very little use in the software engineering discipline, but it offers a very rigorous approach to conducting field research which has built-in mechanisms such as member checking which help to assess the validity of the findings. Very little research has focused on adoption and diffusion within FOSS communities (as opposed to research on adoption of FOSS products by end-users and organizations). Therefore, we believe this qualitative study focusing on a FOSS project contributes an alternative approach to this area.
### 6. ACKNOWLEDGMENTS
This work was supported, in part, by Science Foundation Ireland grant 13/RC/2094 to Lero; the Irish Research Council New Foundations 2014; and Enterprise Ireland IR/2013/0021 to SCALARE.
### 7. REFERENCES
XML tools and architecture for Named Entity recognition
Andrei Mikheev, Claire Grover and Marc Moens
HCRC Language Technology Group,
University of Edinburgh,
2 Buccleuch Place, Edinburgh EH8 9LW, UK.
mikheev@harlequin.co.uk C.Grover@ed.ac.uk M.Moens@ed.ac.uk
November 26, 1998
Overview
Named Entity recognition involves identifying expressions which refer to (for example) people, organisations, locations, or artefacts in texts. This paper reports on the development of a Named Entity recognition system developed fully within the XML paradigm.
In section 1 we describe the nature of the Named Entity recognition task and the complexities involved. The system we developed was entered as part of a DARPA-sponsored competition, and we will briefly describe the nature of that competition.
We then give an overview of the design philosophy behind our Named Entity recognition system and describe the various XML tools that were used both in the development of the system and that make up the runtime system (section 2), and give a detailed description of how these tools were used to recognise temporal and numerical expressions (section 3) and names of people, organisations and locations (section 4). We conclude with a description of the results we achieved in the competition, and how these compare to other systems (section 5), and give details on the availability of the system (section 6).
1 Named Entity recognition
1.1 Named Entities
Named Entity recognition involves processing a text and identifying certain occurrences of words or expressions as belonging to particular categories of Named Entities (NE). When this is done within the XML paradigm, the result is annotated text where each NE is annotated with information about the type of NE the system found.
Consider the following sentence:
On Jan 13th, John Briggs Jnr contacted Wonderful Stockbrokers Inc in New York and instructed them to sell all his shares in Acme.
A Named Entity recognition system might annotate this sentence as follows:
On <NE TYPE="DATE">Jan 13th</NE>, <NE TYPE="PERSON">John Briggs Jnr</NE> contacted <NE TYPE="COMPANY">Wonderful Stockbrokers Inc</NE> in <NE TYPE="PLACE">New York</NE> and instructed them to sell all his shares in <NE TYPE="COMPANY">Acme</NE>.
What counts as a Named Entity depends on the application that makes use of the annotations. One such application is document retrieval or automated document forwarding: documents annotated with NE information can be searched or forwarded more accurately than raw text. For example, NE annotation allows you to search for all texts that mention the company Philip Morris, ignoring documents about an unrelated person called Philip Morris. Or you can have all documents forwarded to you about a person called Gates, without receiving documents about things called gates. In a document collection annotated with Named Entity information you can easily find documents about the space shuttle Columbia without getting documents about the District of Columbia. Or you can retrieve all documents that talk about Hope (in Alabama), without also getting documents about people called Hope or about expectations and desires.
Another use of Named Entity recognition is in the construction of back-of-the-book indexes (e.g. an index for an encyclopedia). In such an index you probably want to distinguish discussions of Alfred Nobel from mentions of people who won the Nobel prize, rather than just giving page numbers for every single occurrence of “Nobel”. This can be done if the NE recognition system has annotated mentions of Nobel as a person differently from mentions of Nobel as an artefact. Similarly, such an index will probably want to distinguish between Alzheimer the disease and Alzheimer the doctor, or between Java the programming language and Java the country.
Current work on metadata standardization (XML-Data, RDF) is concerned with the development of a syntax for annotating this kind of information. The system described here is intended to provide such annotation automatically.
1.2 Named Entities in the LTG system
We recently designed and built a Named Entity recognition system and entered it in the Message Understanding Conference (MUC), a competition on information extraction from text sponsored by the U.S. Defense Advanced Research Projects Agency [8]. The Named Entities our system recognises and the type of annotation it uses for the markup are therefore the ones stipulated by the MUC competition rules. Here are some examples:
**Temporal expressions.** For the competition, absolute and relative temporal expressions needed to be marked up as <TIMEX> entities of type DATE or TIME. For example:
<TIMEX TYPE="DATE">all of 1987</TIMEX>
<TIMEX TYPE="DATE">from 1990 through 1992</TIMEX>
<TIMEX TYPE="DATE">first-half</TIMEX> profit
the <TIMEX TYPE="DATE">1986-87 academic year</TIMEX>
<TIMEX TYPE="TIME">8:24 a.m. Chicago time</TIMEX>
<TIMEX TYPE="TIME">early Friday evening</TIMEX>
<TIMEX TYPE="TIME">9 p.m. </TIMEX><TIMEX TYPE="DATE">Monday</TIMEX>
the <TIMEX TYPE="TIME">morning after the <TIMEX TYPE="DATE">July 17</TIMEX> disaster</TIMEX>
on <TIMEX TYPE="DATE">All Saints’ Day</TIMEX>
**Mentions of currencies and percentages.** Numeric expressions, monetary expressions and percentages, whether in numeric or alphabetic form, had to be marked up as <NUMEX> entities of type MONEY or of type PERCENT. For example:
<NUMEX TYPE="MONEY">175 to 180 million Canadian dollars</NUMEX>

<NUMEX TYPE="MONEY">10- and 20-dollar</NUMEX> bills

<NUMEX TYPE="MONEY">several million New Pesos</NUMEX>

the equivalent of less than <NUMEX TYPE="MONEY">a U.S. penny</NUMEX>

more than <NUMEX TYPE="PERCENT">95%</NUMEX>
**Names of organisations, persons and locations.** These are marked up as <ENAMEX> entities of type ORGANIZATION, PERSON or LOCATION. For example:
in <ENAMEX TYPE="LOCATION">North and South America</ENAMEX>

<ENAMEX TYPE="LOCATION">U.S.</ENAMEX> exporters

the <ENAMEX TYPE="ORGANIZATION">U.S. Fish and Wildlife Service</ENAMEX>

some <ENAMEX TYPE="ORGANIZATION">Treasury</ENAMEX> bonds and securities

the <ENAMEX TYPE="PERSON">Clinton</ENAMEX> government

<ENAMEX TYPE="ORGANIZATION">Microsoft</ENAMEX> chairman

<ENAMEX TYPE="PERSON">Bill Gates</ENAMEX> said yesterday...
Also, nicknames of organisations (e.g. “Big Blue”), locations (e.g. “the Big Apple”) and people (e.g. “Mr. Fix-It”) needed to be marked up as ENAMEX entities of the appropriate type.
1.3 The complexity of Named Entity recognition
Named Entity recognition is a difficult task for a number of reasons. First, the definition of what is and is not a Named Entity can be very complex. For example, according to the MUC competition rules, the following should not be marked up:
Artefacts. Artefacts like “the space shuttle Columbia” don’t get marked up. The “Wall Street Journal” and “MTV” are organisations, and should be marked up as such. But when someone is reading the Wall Street Journal or watching MTV, they are artefacts, and should not be marked up. “Boeing” is an organisation, whose stocks may rise when Acme Corp orders another “Boeing”. That second occurrence of “Boeing” is an artefact and should not be marked up; but the first occurrence of “Boeing” is an organisation and should be marked up.
Things named after people. “Nobel” and “Alzheimer” are names of people, and occurrences of their names should be tagged as such. But in “Nobel Prize” or “Alzheimer’s” their names should not be tagged.
Numbers which are not currencies or percentages. For example, one should not add markup to expressions like “unchanged at 95.05”, “went up 12 points” or “1.5 times”.
These rules may look ad hoc, but that is an accurate reflection of the nature of the Named Entity recognition task: what is and is not a Named Entity depends on the application that will make use of the Named Entities. The application may require you to distinguish Alfred Nobel from the Nobel prize, but need not. Also, in the system we developed we don’t distinguish different types of artefacts—we only distinguish artefacts from organisations, people and locations, and leave the artefactual use of words like Boeing (the aircraft), Nobel (the prize) or Columbia (the space shuttle) unmarked. But one can easily imagine applications where transport vehicles (like a Boeing or a space shuttle) need to be marked separately from all other artefacts.
A second difficulty is that it is important to tag exactly the right words. The entire string “Arthur Andersen Consulting” should be marked as an ORGANIZATION; one should not mark the substring “Arthur Andersen” as a PERSON. In “Canada’s Parliament”, “Canada” (without the ‘s) should be marked up as LOCATION; “Parliament” should be marked up as ORGANIZATION. Again, this may appear ad hoc and the definition of how much should be marked up will be defined by the application. But for any application, consistency of NE markup, however ad hoc it may seem, is crucial.
The third and biggest problem is that Named Entities are expressed with words which can refer to many other things. One might think that Named Entity recognition could be done by using lists of (e.g.) names of people, places and organisations, but that is not the case. To begin with, the lists would be huge: it is estimated that there are 1.5 million unique surnames just in the U.S. [11]. It is not feasible to list all possible surnames in the world in a Named Entity recognition system.
There is a similar problem with company names. A list of all current companies worldwide would be huge, if at all available, and would be out of date tomorrow since new companies are formed all the time. In addition, company names can occur in variations: a list of company names might contain “The Royal Bank of Scotland plc”, but that company might also be referred to as “The Royal Bank of Scotland”, “The Royal” or “The Royal plc”. These variations would all have to be listed as well.
But even if it was possible to list all possible organisations and locations and people, there would still be the problem of overlaps between the list. Names such as Emerson or Washington could be names of people as well as places; Philip Morris could be a person or an organisation. In addition, such lists would also contain words like “Hope” (a location) and “Thinking Machines” (a company), whereas these words could also occur in contexts where they don’t refer to named entities. One could add some intelligence to the system and only tag these words when they have a capital letter. But that would still lead to erroneous markup when “Hope” occurs at the start of a sentence, or when “Thinking Machines” occurs in an all-capitalised headline.
Identifying temporal expressions seems easier—after all, there are only 12 months, and we can list these and reliably identify them. But a system that does this might get confused when it finds a mention of “the Chinese-built Long March rocket” or a reference to someone called “April May”, expressions which obviously should not be marked up as dates.
1.4 The MUC Competition
The MUC competition for which we built our system took place in March 1998. Prior to the competition, participants received a detailed coding manual which specified what should and should not be marked up, and how the markup should proceed. They also received a few hundred articles from the New York Times Service, marked up by the organisers according to the rules of the coding manual.
For the competition itself, participants received 100 articles. They then had 5 days to perform the chosen information extraction tasks (in our case: Named Entity recognition) without human intervention, and to mark up the text with the Named Entities found. The resulting marked up file then had to be returned to the organisers for scoring.
Scoring of the results is done automatically by the organisers. The scoring software compares a participant’s answer file against a carefully prepared key file; the key file is considered to be the “correctly” annotated file. Amongst many other things, the scoring software calculates a system’s recall and precision scores:
Recall: Number of correct tags in the answer file over total number of tags in the key file.
Precision: Number of correct tags in the answer file over total number of tags in the answer file.
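In symbols:

$$\textit{recall} = \frac{\#\,\text{correct tags in answer file}}{\#\,\text{tags in key file}}, \qquad \textit{precision} = \frac{\#\,\text{correct tags in answer file}}{\#\,\text{tags in answer file}}$$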
Recall and precision are generally accepted ways of measuring system performance in this field. For example, suppose you have a text which is 1000 words long, and 20 of these words express a location. Now imagine a dumb system which assigns the LOCATION tag to every single word in the text. This system will have tagged correctly all 20 locations, since it tagged everything as LOCATION; its recall score is 20/20, or 100%. But of the 1000 LOCATION tags it assigned, only those 20 were correct; its precision is therefore only 20/1000, or 2%.
Here is an invented example of the kind of text the participants in the MUC competition had to process. The reason for inventing an example is that it allows us to demonstrate a wider range of phenomena in a more compact way:
<DOC>
<PREAMBLE>
GENERAL TRENDS ANALYST PREDICTS LITTLE SPRING EXPLOSION
By Liza McDonald
</PREAMBLE>
<TEXT>
<P>Flavel Donne Jr, an analyst with General Trends Inc, announced 2 days ago that Little Spring would come to a loud end on May 29, 1999. General Trends, which is based in Little Spring, has been producing predictions like this since early 1963.</P>
<P>Donne is C.E.O. of General Trends and also of Adam Kluver Ltd. But John May, 29,
spokesman for Adam Kluver, said yesterday they distanced themselves from Donne’s prediction. He added that their stock had gone down 12% since May and is now valued at 130 million Canadian dollars. Flavel Donne was 42 last Thursday.</P>
</TEXT>
</DOC>
The example was constructed to illustrate a wide range of phenomena:
- Company names are fictitious, and not part of any lists of existing company names.
- Company names are multi-word expressions, which contain common words (general, trends) or which look like person names (Adam Kluver).
- Company names are sometimes referred to only in part: “Adam Kluver Ltd” is also referred to as Adam Kluver, which could be mistaken for a person; “General Trends Inc” is also referred to as “General Trends”, which—especially in the capitalized headline—could be mistaken for a common noun phrase (an analyst of general trends).
- Person names have unusual Christian names (Flavel, which we invented and which is unlikely to be in any list of Christian names) or possibly confusing surnames (May, which could be mistaken for a temporal expression).
- There are multi-word person names (“Flavel Donne Jr”), but the same person is also referred to as just “Donne”.
- The text contains dates, percentages and monetary values, which should be tagged. It also contains other numbers, which should not be tagged: in “Donne is 42”, the number should not be tagged; in “2 days ago”, the “2” should not be tagged, but the whole expression should be tagged as a temporal expression.
- In one instance, “May” followed by a number indicates a date, in another it indicates the name of a person followed by an age. This should result in different markup.
Our MUC system produces the following output:
```
<DOC>
<PREAMBLE>
<ENAMEX TYPE='ORGANIZATION'>GENERAL TRENDS</ENAMEX> ANALYST PREDICTS
<ENAMEX TYPE='LOCATION'>LITTLE SPRING</ENAMEX> EXPLOSION
By <ENAMEX TYPE='PERSON'>Liza McDonald</ENAMEX>
</PREAMBLE>
<TEXT>
<P>
<ENAMEX TYPE='PERSON'>Flavel Donne Jr</ENAMEX>, an analyst with
<ENAMEX TYPE='ORGANIZATION'>General Trends Inc</ENAMEX>, announced
<TIMEX TYPE='DATE'>2 days ago</TIMEX> that
<ENAMEX TYPE='LOCATION'>Little Spring</ENAMEX> would come to a loud end
on <TIMEX TYPE='DATE'>May 29, 1999</TIMEX>.
<ENAMEX TYPE='ORGANIZATION'>General Trends</ENAMEX>, which is based in
<ENAMEX TYPE='LOCATION'>Little Spring</ENAMEX>, has been producing predictions like this since <TIMEX TYPE='DATE'>early 1963</TIMEX>.
</P>
```
2 LTG text handling tools
2.1 SGML awareness
At the Language Technology Group we have developed a suite of reusable text processing tools. These are modular tools with stream input/output; each tool does a very specific job, but can be combined with other tools in a pipeline. Different combinations of the same tools can thus be used in a pipeline for completing different text processing tasks.
Our architecture imposes an additional constraint on the input/output streams: they should have a common syntactic format. For this common format we use eXtensible Markup Language (XML).
A tool in our architecture is thus a piece of software which uses an API for all its access to XML data and performs a particular task: exploiting markup which has previously been added by other tools, removing markup, or adding new markup to the stream(s) with or without removing the previously added markup. This approach allows us to remain entirely within the XML paradigm during text processing. At the same time, we can be very general in the design of our tools, each of which can be used for many different purposes. Furthermore, because we can pipe data through processes, the UNIX operating system itself provides the natural "glue" for integrating data-level applications.
The XML-handling APIs in our workbench are our LT NSL and LT XML libraries ([12], [13]). They allow a tool to read, change or add attribute values and character data to XML elements, and to address a particular element in an XML stream using a query language called ltquery. ltquery offers a way of specifying particular nodes in the XML document structure. For example, the newspaper articles we were dealing with in the MUC competition can be represented as the SGML tree illustrated in Figure 1.
Queries in ltquery are coded as strings which give a (partial) description of a path from the root of the XML document (the top-level element) to the desired XML element(s). For example, the query
```xml
=./TEXT/.*/S[STATUS="PARSED"]
```
refers to any <S> element whose attribute STATUS has the value PARSED and which occurs at any level of nesting inside a <TEXT> element which, in turn, can occur anywhere inside the
document’s top-level element. It does not apply, e.g., to <S> elements inside the document <PREAMBLE>.
The example shows that an ltquery query is a sequence of terms, separated by slashes. Each term in the query describes either an XML element or a nested sequence of XML elements. Element names can be followed by a list of attribute specifications in square brackets. An item that ends in a * matches a nested sequence of zero or more XML elements, each of which matches the item without the *. For example, P* will match a <P> element, arbitrarily deeply nested inside other <P> elements. A full stop will match any XML element name; thus, a simple way of finding a <P> element anywhere inside a document is to use the query .*/P.
A condition with an index n matches only the nth sub-element of the enclosing element. Index counting starts from 0. Thus, DOC/TEXT/P[0] will give all first paragraphs under <TEXT> elements which are under <DOC>.
The simplest way of configuring our XML tools is to specify in a query where the tool should apply its processing. Using the syntax of ltquery we can directly specify which parts of the stream we want to process and which parts we want to skip. This also allows us to provide a tool with processing resources (e.g. grammars) specifically tailored to those document parts the tool is attending to. For example, we have a tool called fsgmatch which can be used to identify certain SGML elements in the input text and wrap them into larger SGML elements, according to rules specified in resource grammars. It can be called with different resource grammars for different document parts. Here is an example pipeline using fsgmatch:
```bash
cat text | fsgmatch -q ".*/DATE|NWORDS" date.gr \
         | fsgmatch -q ".*/PREAMBLE" preamb.gr \
         | fsgmatch -q ".*/TEXT/P[0]" first.gr
```
In this pipeline, fsgmatch takes the input text, and processes the data that has been marked up as <DATE> or <NWORDS> using a resource grammar called date.gr; then it processes the data in <PREAMBLE> using the resource grammar preamb.gr; and then it processes the first paragraph in the <TEXT> section using the grammar first.gr.
This technique allows one to tailor resource grammars very precisely to particular parts of the text. For example, the reason for applying first.gr to the first paragraph of a newspaper article is that that paragraph often contains unusual information which occurs nowhere else in the article in that form. Here is the start of a typical article:
CAPE CANAVERAL, Fla. &MD; Working in chilly temperatures Wednesday...
In our analysis of the MUC newspaper articles, we noticed that if an article starts with capitalized words followed by &MD; the capitalized words indicate a location. It is easy to capture this in a grammar. But the phenomenon only occurs in text initial <P> elements. And it is very efficient to be able to tell fsgmatch only to apply that specialised grammar to the first <P> element of any text it is processing.
We have developed a range of SGML and XML-aware processing tools. Some of them are low-level tools, such as sgdelparse which strips unwanted markup from a document, or sgdel and sgtr, which are SGML-aware versions of the UNIX tools sed and tr; some are higher-level tools, such as the SGML transducer fsgmatch mentioned above. Combinations of these tools provide us with the means to explore large text collections and to do fast prototyping of text processing applications. We have used these tools in the development of systems for many different applications, such as statistical text categorization [2], information extraction in a medical domain [3], collocation extraction for lexicography [1], etc. A detailed description of the tools, their interactions and applications can be found in [4] and [10]; information can also be found at our website, http://www.ltg.ed.ac.uk/software/. In the rest of this section, we will concentrate on some of the higher-level SGML-aware tools used in the Named Entity recognition system.
2.2 lttok
lttok is an SGML-aware tokeniser. Tokenisers take an input stream and divide it up into words or tokens, according to some agreed definition of what a token is. This is not just a matter of finding white spaces between characters. For example, one needs to decide whether “I’ve” and “can’t” are one or two tokens. Also, for some applications one may want to treat as one token multi-word expressions like “Tony Blair Jnr”, “President Bill Clinton”, “Mr de Toqueville” or “January 17th, 1998”. And hyphenated words like “first-quarter-charge” can be treated as a single token or three tokens, depending on the application.
The LTG tokeniser lttok works at the character level: it looks at the characters in the input stream and, using finite-state machinery, bundles them into tokens according to rules specified in its resource grammars. The input to lttok can be SGML-marked up text, and lttok can be directed to only process parsed character data within certain SGML or XML elements.
Here is an example of the use of lttok:
```
cat text | lttok -q ".*/P|TITLE|PREAMBLE|TRAILER" -mark W -attr C standard.gr
```
lttok tokenises the character data in all the <P> elements as well as in the TITLE, the PREAMBLE and the TRAILER, using the rules in the resource grammar standard.gr. The tokens it finds will be marked up using the SGML element <W>, and attribute information will be added using the attribute name C. The resource file stipulates what the possible values are for this attribute. Here is some example output from this pipeline:
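In outline, the tokenised text looks as follows (a sketch, using the attribute values explained below; the exact output depends on standard.gr):

```
<W C='W'>But</W> <W C='W'>John</W> <W C='W'>May</W> <W C='CM'>,</W> <W C='CD'>29</W> <W C='CM'>,</W>
<W C='W'>spokesman</W> <W C='W'>for</W> <W C='W'>Adam</W> <W C='W'>Kluver</W> ...
... <W C='W'>valued</W> <W C='W'>at</W> <W C='CD'>130</W> <W C='W'>million</W>
<W C='W'>Canadian</W> <W C='W'>dollars.</W>
```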
Because of instructions in the resource file standard.gr, lttok also added the attribute C to each <W> element, whose value is W in the case of a word, CM in the case of a comma, CD in the case of a numeral, etc. This is information which other processing tools can make use of.
2.3 ltstop
As the above example shows, although the tokeniser adds annotation for commas, it does not add annotation for full stops. The reason for this is that not every period is a full stop; some are part of an abbreviation. Depending on the choice of resource file for lttok, a period will either always be attached to the preceding word (as in the above example, where the full stop stays with the sentence-final word “dollars” and with the abbreviation “C.E.O.”) or it will always be split off.
This creates an ambiguity where a sentence-final period is also part of an abbreviation, as in our example “...and also of General Trends Ltd. But...” For many reasons it is useful to know where a sentence ends, and looking for a full stop followed by a space and a capital letter is not always sufficient, as illustrated in “It is the B.B.C. Secretary-General who...”
To resolve this ambiguity we use a special program, ltstop, which applies a maximum entropy model pre-trained on a corpus [7]. The statistical model knows which features are relevant in deciding whether a word is an abbreviation (e.g., usual length of abbreviations, capitalization, preceding words, ...) or when a word is sentence-final, or both. It has acquired these features automatically, on the basis of a corpus in which abbreviations and full-stops have been hand-annotated.
In the above example, ltstop will split the period from ordinary sentence-final words and create an end-of-sentence token <W C=".">.</W>; or it will leave the period with the word if it is an abbreviation; or, in the case of sentence-final abbreviations, it will leave the period with the abbreviation and in addition create a virtual full stop <W C="."></W>.
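The model's exact feature set is learned from the annotated corpus; the following Python fragment merely illustrates the kinds of cues involved (the feature names are invented for this sketch):

```python
# Illustrative cues for deciding whether a period-bearing token is an
# abbreviation, a sentence end, or both; not ltstop's actual feature set.
def period_features(token, next_token):
    word = token.rstrip(".")
    return {
        "short_word": len(word) <= 3,            # abbreviations tend to be short
        "internal_periods": "." in word,         # e.g. "C.E.O."
        "capitalised": token[:1].isupper(),
        "next_capitalised": next_token[:1].isupper(),
    }

print(period_features("Ltd.", "But"))
print(period_features("dollars.", "The"))
```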
Like the other LTG tools, ltstop can be targeted at particular SGML elements. In our example, we want to target it at <W> elements within <P> elements—the output of lttok. It can be used with different maximum entropy models, trained on different types of corpora.
For our example, the full pipeline looks as follows:
```sh
cat text | lttok -q ".*/P|TITLE|PREAMBLE|TRAILER" -mark W -attr C standard.gr | ltstop -q ".*/P/W" fs_model.me > text.stop
```
This will generate the following output in text.stop:
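The excerpt was again lost in conversion; reconstructed from the description that follows, the relevant fragments would look roughly like this:
```
... <W C='CD'>30,000</W> <W C='W'>dollars</W> <W C='.'>.</W>
... <W C='W'>C.E.O.</W> ...
... <W C='W'>General</W> <W C='W'>Trends</W> <W C='W'>Ltd.</W> <W C='.'></W> <W C='W'>But</W> ...
```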
Note how `ltstop` left periods with abbreviations like “C.E.O.”, separated off the full stop after “dollars”, and left the period with “Ltd.” but added a final stop to this sentence, making explicit that the period after “Ltd.” has two distinct functions.
2.4 ltpos
Another standard LTG tool we use in our MUC system is our part-of-speech tagger `ltpos` [6]. Part-of-speech tagging (POS tagging) involves annotating words (as identified by the tokeniser) with information as to whether they are a verb, a noun, etc. To do this, POS taggers look up words in a lexicon which will tell them that, e.g., “left” is most likely to be a past tense verb (as in “he left”) or an adjective (“my left foot”), but could also be a past participle (“they have left”), a noun (“on the left”), or an adverb (“go left”). Taggers also have statistical co-occurrence information, e.g. that an adjective is more likely to be followed by a noun than by a verb.
Part-of-speech tagged text is useful input for a Named Entity recognition system. For example, in “GIVE ME THE BILL”, “BILL” will be tagged as a noun; in “GIVE ME BILL”, “BILL” will be tagged as a proper name. The theoretical difference between a noun and a proper name is not important for present purposes, except that names of people tend to be proper names rather than nouns. On the basis of this information, a Named Entity recognition system can decide that “BILL” in the first sentence is less likely to be a `<PERSON>` than in the second sentence. This is obviously not sufficient to make a decision either way as to what sort of named entity “BILL” is, but it provides some extra evidence which can be used in combination with (for example) contextual clues.
Our part-of-speech tagger `ltpos` is SGML-aware: it reads a stream of SGML elements specified by the query and applies a statistical technique to assign the most likely POS tags. An important feature of the tagger is an advanced module for handling words which are not in the lexicon [5]. This proved to be crucial for name spotting: given that part-of-speech information can be a great help in detecting names, the POS tagger needs to be able to POS-tag unknown words—like the word “Donne” in “Donne is 42”.
`ltpos` also carries out a few other tasks which are useful for Named Entity recognition. For capitalised words, `ltpos` adds information as to whether the word exists in lowercase in the lexicon (marked as L="1"), or whether it exists in lowercase elsewhere in the same document (marked as L="d"), or none of the above (marked as L="#"). This information is particularly useful for multi-word Named Entities which contain common words: suppose a text contains the sentence “Suspended Ceiling Contractors Ltd denied the charge”. Since the sentence-initial word has a capital letter, it could be an adjective modifying the company “Ceiling Contractors Ltd”, or it could be part of the company name, “Suspended Ceiling Contractors Ltd”. By marking early on that “suspended” also occurs in the lexicon in lowercase, the system will later know to be cautious about how many words to include in the `<ORGANIZATION>` element.
This is what the pipeline looks like:
```
cat text.stop | ltpos -q ".*/P|PREAMBLE|TRAILER|BC|TITLE" \
  -pos_attr C -lookup_attr L posgram > text.pos
```
The call to ltpos specifies that the part of speech tags should be entered as values to the attribute C; in other words, it changes the current W values of the C attribute to POS values. POS values are reasonably mnemonic abbreviations, fairly standard in the computational linguistics literature—such as JJ for adjective, CC for conjunction, NN for singular noun, NNP for singular proper name, and DT for determiner. The pipeline gives the following output:
```
<W L='##' C='NNP'>Flavel</W> <W L='##' C='NNP'>Donne</W> <W L='##' C='NNP'>Jr</W>
```
2.5 fsgmatch
The core tool in our MUC system is fsgmatch. fsgmatch is an SGML transducer: it takes certain types of SGML elements and wraps them into larger SGML elements. It is also possible to use fsgmatch for character-level tokenisation, but in this paper we will only describe its functionality at the SGML level.
fsgmatch can be called with different resource grammars, e.g. one can develop a grammar for recognising names of organisations or temporal expressions. Like the other LTG tools, fsgmatch can be used in a very targeted way, telling it only to process SGML elements within certain other SGML elements, and to use a specific resource grammar for that purpose.
The combined functionality of lttok and fsgmatch gives system designers many degrees of freedom. Suppose you want to map character strings like “25th” or “3rd” into SGML elements. You can do this at the character level, using lttok, specifying that strings that match [0-9]+[-]?((st)|(nd)|(rd)|(th)) should be wrapped into the SGML structure <W C="ORD">. Or you can do it at the SGML level: if your tokeniser had marked up numbers like “25” as <W C="CD">, then you can write a rule for fsgmatch saying that a <W C="CD"> followed by a <W> element whose character data consist of th, nd, rd or st can be wrapped into an <W C="ORD"> element.
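As a quick check of the character-level pattern, here it is exercised with Python's re module (standing in for lttok's finite-state machinery):

```python
import re

# The ordinal pattern quoted above, applied with Python's re module.
ORD = re.compile(r"[0-9]+[-]?((st)|(nd)|(rd)|(th))")

for text in ["25th", "3rd", "2-nd", "25", "third"]:
    print(f"<W C='ORD'>{text}</W>" if ORD.fullmatch(text) else text)
```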
A transduction rule in fsgmatch can access and utilize any information stated in the element attributes, check sub-elements of an element, do lexicon lookup for character data of an element, etc. For instance, a transduction rule can say: “if there are one or more W elements (i.e. words) with attribute C (i.e. part of speech tag) set to NNP (proper noun) followed by a W element with character data “Ltd.”, then wrap this sequence into an ENAMEX element with attribute TYPE set to ORGANIZATION”.
Transduction rules can check left and right contexts, and they can access sub-elements of complex elements; for example, a rule can check whether the last W element under an NG element (i.e. the head noun of a noun group) is of a particular type, and then include the whole noun group into a higher level construction. Element contents can be looked up in a lexicon. The lexicon lookup supports multi-word entries and multiple rule matches are always resolved to the longest one.
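A Python sketch of the “NNP+ followed by Ltd.” rule just described, over (part-of-speech, text) token pairs; it mimics the effect of such an fsgmatch rule, not fsgmatch's actual rule syntax:

```python
# Wrap a maximal run of proper nouns followed by "Ltd." into an ENAMEX
# element; the longest NNP run wins, echoing fsgmatch's longest-match policy.
def wrap_organisations(tokens):
    out, i = [], 0
    while i < len(tokens):
        j = i
        while j < len(tokens) and tokens[j][0] == "NNP":
            j += 1
        if j > i and j < len(tokens) and tokens[j][1] == "Ltd.":
            words = " ".join(text for _, text in tokens[i:j + 1])
            out.append(f"<ENAMEX TYPE='ORGANIZATION'>{words}</ENAMEX>")
            i = j + 1
        else:
            out.append(tokens[i][1])
            i += 1
    return " ".join(out)

print(wrap_organisations([("NNP", "Adam"), ("NNP", "Kluver"), ("NN", "Ltd.")]))
```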
An example of a small but useful thing we use fsgmatch for is to assign certain “semantic” tags which are particularly useful for MUC processing. For example, words ending in -yst and -ist (analyst, geologist), as well as words occurring in a special list of words (spokesman, director), are recognised as professions and marked as such (S="PROF"). Adjectives ending in -an or -ese whose root form occurs in a list of locations (American, Japanese) are marked as locative adjectives (S="LOC,JJ").
To achieve this, it makes most sense to invoke fsgmatch immediately after ltpos:
```
cat text.pos | fsgmatch -q ".*/P|PREAMBLE|TRAILER" sem.gr
```
Because fsgmatch plays such a crucial role in our MUC system, we describe it and the rules in the resource grammars in more detail in the following section.
3 TIMEX, NUMEX
Temporal and numerical expressions in English newspapers have a fairly structured appearance which can be captured by means of grammar rules. We developed a grammar for the temporal expressions we needed to capture. We also compiled lists of temporal entities, like days of the week and names of months (including abbreviations), and holidays and festivals (like “Hannukah” and “Hogmanay”). We also compiled a grammar of numerical expressions, as well as a list of currencies. The SGML transducer fsgmatch uses these resources to wrap the appropriate strings with TIMEX and NUMEX tags.
Figure 2 is an excerpt of the kind of resource file used by fsgmatch to identify certain TIMEX expressions in the texts.
One of the rules in Figure 2 is called day-name. Its type is DISJF, which means that, for the rule to be successful, one of its subrules (day-name-full or day-name-abbrev) should succeed.
The rule day-name-full checks whether the input matches CCAPWRD—that is, it checks whether the input is an SGML element labelled <W> (i.e. a word) whose PCDATA match the regular expression given in the entity definition for CCAPWRD (i.e. whether it is a capitalized word). When it finds a matching SGML item, it checks whether this word also occurs in the file TIM_lex—a file containing many temporal expressions, such as Monday, January, Tue, and Hogmanay, with tags indicating whether they are days of the week, holidays, etc. If the capitalized word is found in that file, its tag is checked. If the tag is found to be DY, the <W> element is wrapped in a <TIMEX> element of type DATE.
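In Python terms, the day-name-full logic amounts to something like the following (the inlined dictionary is a tiny stand-in for TIM_lex):

```python
# A stand-in lexicon: word -> tag (DY = day of week, MO = month, HOL = holiday).
TIM_LEX = {"Monday": "DY", "Tue": "DY", "January": "MO", "Hogmanay": "HOL"}

def day_name_full(word):
    # capitalized word whose TIM_lex tag is DY -> wrap as a TIMEX of type DATE
    if word[:1].isupper() and TIM_LEX.get(word) == "DY":
        return f"<TIMEX TYPE='DATE'>{word}</TIMEX>"
    return word

print(day_name_full("Monday"))   # wrapped
print(day_name_full("January"))  # left alone: its tag is MO, not DY
```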
It is worth pointing out that the resource files of which Figure 2 is a small excerpt are themselves structured as XML documents. We firmly believe that a good strategy for building text processing applications like the NE system is to build them using XML annotated stream input/output, but it does not follow from this that all the resource files that are called in the course of this process should also be in XML. However, because the production of the resource
files was done by a number of different people, working within XML with commonly agreed
DTDs was found to be helpful.
The TIMEX and NUMEX components of our MUC system do not make use of part-of-speech
tagged information, and can be run before or after lttok and ltstop.
4 ENAMEX
For recognising enamex elements, we similarly compiled grammars and collected resources,
such as names of locations and organisations, first names (for use in name recognition), etc.
But as demonstrated in section 1, a MUC system cannot rely too much on such lists, and
different strategies need to be used for high-precision enamex recognition. In fact, we have
also run our NE system without any lexical resources and report on these experiments in [9].
The basic philosophy underlying our approach is as follows. When looking at a string of words like “Adam Kluver” it is not possible to say whether this is the name of a person or an organisation. However, somewhere in the text, there is likely to be some contextual material which makes it clear which of those it is. Our strategy is to only make a decision once we have identified this bit of contextual information.
We further assume that, once we have identified contextual material which makes it clear that “Adam Kluver” is (e.g.) the name of a company, then any other mention of “Adam Kluver” in that document will be referring to that company. If the author at some point had wanted to also refer to a person called “Adam Kluver”, s/he would have provided some extra context to make this clear, and this context would have been picked up in the first step.
If no suitable context is found anywhere in the text to decide what sort of Named Entity “Adam Kluver” is, the system can check other resources, e.g. a list of known company names. But this method only applies after substantial context checking has been carried out.
In our MUC system, we implemented this approach as a combination of symbolic transduction of SGML elements with probabilistic partial matching, in 5 stages:
1. sure-fire rules
2. partial match 1
3. relaxed rules
4. partial match 2
5. title assignment
We describe each in turn.
ENAMEX: 1. Sure-fire Rules
In the first step, our SGML transducer fsgmatch is used with sure-fire rules. These rules are very context-oriented and they fire only when a possible candidate expression is surrounded by a suggestive context. Sure-fire rules rely on known corporate designators (Ltd., Inc., etc.), person titles (Mr., Dr., Sen.), and definite contexts such as those in Figure 3. The sure-fire rules apply after POS tagging, so at this stage words like “analyst” have already been identified as PROF (professions), and words like “brother” as REL (relatives).
An example of a transduction rule is presented in Figure 4.
At this stage our MUC system treats information from the lists as likely rather than definite and always checks if the context is either suggestive or non-contradictive. For example, a likely company name with a conjunction is left untagged at this stage if the company is not listed in a list of known companies: in a sentence like “this was good news for China International Trust and Investment Corp”, it is not clear whether the text deals with one or two companies, and no markup is applied.
| Context Rule | Assign | Example |
|---|---|---|
| Xxxx+ is? a? JJ* PROF | PERS | Yuri Gromov, a former director |
| Xxxx+ is? a? JJ* REL | PERS | John White is beloved brother |
| Xxxx+ himself | PERS | White himself |
| Xxxx+ , DD+ , | PERS | |
| shares in Xxxx+ | ORG | shares in Trinity Motors |
| PROF of/at/with Xxxx+ | ORG | director of Trinity Motors |
| in/at Xxxx+ area | LOC | in the Washington area; Beribidjan area |
Figure 3: Examples of sure-fire transduction material for enamex. Xxxx+ is a sequence of capitalized words; DD is a digit; PROF is a profession; REL is a relative; JJ* is a sequence of zero or more adjectives; LOC is a known location.
Similarly, the system postpones the markup of unknown organizations whose name starts with a sentence initial common word, as in “Suspended Ceiling Contractors Ltd denied the charge”. Since the sentence-initial word has a capital letter, it could be an adjective modifying the company “Ceiling Contractors Ltd”, or it could be part of the company name, “Suspended Ceiling Contractors Ltd”.
Names of possible locations found in our gazetteer of place names are marked as LOCATION only if they appear with a context that is suggestive of location. “Washington”, for example, can just as easily be a surname or the name of an organization. Only in a suggestive context, like “in the Washington area”, will it be marked up as location.
ENAMEX: 2. Partial Match 1
After the sure-fire symbolic transduction the system performs a probabilistic partial match of the identified entities. This is implemented as an interaction between two tools. The first tool collects all named entities already identified in the document. It then generates all possible partial orders of the composing words preserving their order, and marks them if found elsewhere in the text. In our example, “Adam Kluver Ltd” had already been recognised as an organisation by the sure-fire rules. In this second step, any occurrences of “Adam Kluver”, “Kluver Ltd”, “Adam Ltd” and “Kluver” are also tagged as possible organizations. This markup, however, is not definite, since some of these words (such as “Adam”) could refer to a different entity.
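One way to enumerate such order-preserving partial matches, sketched in Python:

```python
from itertools import combinations

# All non-empty subsequences of the name's words, keeping their order.
def partial_matches(name):
    words = name.split()
    return [" ".join(c)
            for n in range(1, len(words) + 1)
            for c in combinations(words, n)]

print(partial_matches("Adam Kluver Ltd"))
# ['Adam', 'Kluver', 'Ltd', 'Adam Kluver', 'Adam Ltd', 'Kluver Ltd',
#  'Adam Kluver Ltd']
```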
This annotated stream goes to a second tool, a pre-trained maximum entropy model. It takes into account contextual information for named entities, such as their position in the sentence, whether they exist in lowercase in general, whether they were used in lowercase elsewhere in the same document, etc. These features are passed to the model as attributes of the partially matched words. If the model provides a positive answer for a partial match, the match is wrapped into a corresponding ENAMEX element.
ENAMEX: 3. Rule Relaxation
Once this has been done, the system again applies the SGML transduction rules. But this time the rules have much more relaxed contextual constraints and extensively use the information from already existing markup and from the lexicon compiled during processing, e.g. containing partial orders of already identified named entities.
At this stage the system will mark word sequences which look like person names. For this it uses a grammar of names: if the first capitalized word occurs in a list of first names and the following word(s) are unknown capitalized words, then this string can be tagged as a PERSON. Here we are no longer concerned that a person name can refer to a company. If the name grammar had applied earlier in the process, it might erroneously have tagged “Adam Kluver” as a PERSON instead of an ORGANIZATION. At this point in the chain of enamex processing, that is not a problem anymore: “Adam Kluver” will by now already have been identified as an ORGANIZATION by the sure-fire rules or during partial matching. If it hasn’t, then it is likely to be the name of a person.
At this stage the system will also attempt to resolve conjunction problems in names of organisations. For example, in “this was good news for China International Trust and Investment Corp”, it is not clear whether the text is referring to one organisation or two. The system checks if possible parts of the conjunctions were used in the text on their own and thus are names of different organizations; if not, the system has no reason to assume that more than one company is being talked about.
In a similar vein, the system resolves the attachment of sentence initial capitalized modifiers, the problem alluded to above with the “Suspended Ceiling Contractors Ltd” example: if the modifier was seen with the organization name elsewhere in the text, then the system has good evidence that the modifier is part of the company name; if the modifier does not occur
anywhere else in the text with the company name, it is assumed not to be part of it.
At this stage known organizations and locations from the lists available to the system are marked in the text, again without checking the context in which they occur.
ENAMEX: 4. Partial Match 2
At this point, the system has exhausted its resources (name grammar, list of locations, etc). The system then performs another partial match to annotate names like “White” when “James White” had already been recognised as a person, and to annotate company names like “Hughes” when “Hughes Communications Ltd.” had already been identified as an organisation. As in Partial Match 1, this process of partial matching is again followed by a probabilistic assignment supported by the maximum entropy model.
ENAMEX: 5. Title Assignment
Because titles of news wires are in capital letters, they provide little guidance for the recognition of names. In the final stage of enamex processing, entities in the title are marked up, by matching or partially matching the entities found in the text, and checking against a maximum-entropy model trained on document titles. For example, in “GENERAL TRENDS ANALYST PREDICTS LITTLE SPRING EXPLOSION”, “GENERAL TRENDS” will be tagged as an organization because it partially matches “General Trends Inc” elsewhere in the text, and “LITTLE SPRING” will be tagged as a location because elsewhere in the text there is supporting evidence for this hypothesis.
5 Conclusion
5.1 Performance
In the MUC competition, our system’s combined precision and recall score was 93.39%. This was the highest score, better in a statistically significant way than the score of the next best system. Scores varied from 93.39% to 69.67%. Further details on this can be found in [8].
The table in Figure 5 shows the progress of the performance of the system we fielded for the MUC competition through the five stages.
As one would expect, the sure-fire rules give very high precision (around 96–98%), but very low recall—in other words, they don’t find many enamex entities, but the ones they find are correct. Subsequent phases of processing gradually add more and more enamex entities (recall increases from around 40% to around 90%), but on occasion introduce errors (resulting in a slight drop in precision). Our final scores for ORGANISATION, PERSON and LOCATION are given in the bottom line of Figure 5.
| Stage | ORGANIZATION | PERSON | LOCATION |
|---|---|---|---|
| Partial Match 1 | R: 75, P: 98 | R: 80, P: 99 | R: 69, P: 93 |
| Partial Match 2 | R: 85, P: 96 | R: 93, P: 97 | R: 88, P: 93 |
| Title Assignment | R: 91, P: 95 | R: 95, P: 97 | R: 93, P: 93 |

Figure 5: Scores obtained by the system through different stages of the analysis. R = recall, P = precision.
5.2 The system
One of the design features of the system which sets it apart from other Named Entity recognition systems is that it is designed fully within the SGML paradigm: the system is composed of several tools which are connected via a pipeline with data encoded in SGML or XML. This allows the same tool to apply different strategies to different parts of the texts using different resources. The tools do not convert from SGML into an internal format and back, but operate at the SGML or XML level.
Our system does not rely heavily on lists or gazetteers but instead treats information from such lists as “likely” and concentrates on finding contexts in which such likely expressions are definite. In fact, the first phase of the enamex analysis uses virtually no lists but still achieves substantial recall.
The system is document centred. This means that at each stage the system makes decisions according to a confidence level that is specific to that processing stage, and draws on information from other parts of the document. The system is hybrid, applying symbolic rules and statistical partial matching techniques in an interleaved fashion.
5.3 Limitations
Unsurprisingly, the major problem for the system is single capitalized words, mentioned just once or twice in the text and without suggestive contexts. In such a case the system cannot apply contextual assignment, assignment by analogy or lexical lookup, and fails to mark up the entity. As the results of the MUC competition show, this is a relatively rare occurrence.
6 Availability
A runtime version of the system described here is available for free at http://www.ltg.ed.ac.uk/software/ne/.
We also have a set of tools which can be used to develop a Named Entity recognition system. The tool suite is called LT TTT, and is available from http://www.ltg.ed.ac.uk/software/ltt/.
LT TTT consists of lttok, ltstop and fsgmatch, a number of resource files for tokenisation, for end-of-sentence disambiguation, and for the recognition of temporal expressions, and tools for extending these resource grammars or for creating new ones.
It also has a visual interface which uses XSL style sheets to render the XML Named Entity annotation in a form that is easier to inspect.
The part of speech tagger is available as a separate tool. See http://www.ltg.ed.ac.uk/software/pos/.
Acknowledgements
The work reported in this paper was supported in part by grant GR/L21952 (Text Tokenisation Tool) from the Engineering and Physical Sciences Research Council, UK. For help with the development of the MUC system the authors wish to thank Colin Matheson, Steven Finch and Irina Nazarova. Henry Thompson, David McKelvie, Richard Tobin and many other members of the Language Technology Group contributed to the development of the many LTG tools that were used in the development of the MUC system.
References
Conference held in Fairfax, VA, April 29–May 1, 1998. Los Altos: Morgan Kaufmann, forthcoming.
Debsources: Live and Historical Views on Macro-Level Software Evolution*
Matthieu Caneill
Polytech Grenoble
Université Joseph Fourier, France
matthieu.caneill@e.ujf-grenoble.fr
Stefano Zacchiroli
Univ Paris Diderot, Sorbonne Paris Cité
PPS, UMR 7126, CNRS, F-75205 Paris, France
zack@pps.univ-paris-diderot.fr
ABSTRACT
Context. Software evolution has been an active field of research in recent years, but studies on macro-level software evolution—in particular, on the evolution of large software collections over many years—are scarce, despite the increasing popularity of intermediate vendors as a way to deliver software to final users.
Goal. We want to ease the study of both day-by-day and long-term Free and Open Source Software (FOSS) evolution trends at the macro-level, focusing on the Debian distribution as a proxy of relevant FOSS projects.
Method. We have built Debsources, a software platform to gather, search, and publish on the Web all the source code of Debian and measures about it. We have set up a public Debsources instance at http://sources.debian.net, integrated it into the Debian infrastructure to receive live updates of new package releases, and written plugins to compute popular source code metrics. We have injected all current and historical Debian releases into it.
Results. The obtained dataset and Web portal provide both long-term views over the past 20 years of FOSS evolution and live insights on what is happening at sub-day granularity. By writing simple plugins (∼100 lines of Python each) and adding them to our Debsources instance, we have been able to easily replicate and extend past empirical analyses on metrics as diverse as lines of code, number of packages, and rate of change—and make them perennial. We have obtained slightly different results than our reference study, but confirmed the general trends and updated them in light of 7 extra years of evolution history.
Conclusions. Debsources is a flexible platform to monitor large FOSS collections over long periods of time. Its main instance and dataset are valuable resources for scholars interested in macro-level software evolution.
*This work has been partially performed at, and supported by IRILL http://www.irill.org. Unless noted otherwise, all URLs and data in the text have been retrieved on March 9, 2014.
Categories and Subject Descriptors
D.2.8 [Software Engineering]: Metrics—product metrics;
H.4 [Information Systems Applications]: Miscellaneous;
K.2 [History of Computing]: [Software]
General Terms
measurement
Keywords
software evolution, source code, free software, open source, Debian
1. INTRODUCTION
For several decades now [21, 18] software evolution has been an active field of research. Given its natural availability and openness, numerous empirical studies on software evolution have targeted Free and Open Source Software (FOSS), with more than 100 noteworthy papers cited in recent systematic literature reviews [27, 3]. Despite the abundant research efforts, few studies have investigated macro-level software evolution (or “evolution in the large”), i.e., have considered large software collections as coherent wholes and observed their evolution, as collections, rather than the evolution of individual software products contained therein.
This lack of studies is not due to a lack of interest in studying software collections. To begin with, their relevance w.r.t. current practices is hard to dispute: with the massive popularization of “app stores” and the steady market share of package-based software distributions, software is increasingly delivered to users as part of curated collections maintained by intermediate software vendors. Additionally, software collections are also useful to study evolution at the granularity of individual software products: they contribute to eliminate (researcher) selection bias, which is often cited as the main threat to validity in evolution studies [27]. Finally, well-established software collections are enjoying remarkably long lives—now spanning several decades—outliving many of the software products they ship; software collections therefore offer remarkable opportunities for gathering long-term historical insights on the practice of software.
The study of software collections, however, poses specific challenges for scholars, due to an apparent tendency to grow ad hoc software ecosystems, made of homegrown tools, technical conventions, and social norms that might be hard to take into account when conducting empirical studies. We believe that the relative scarcity of macro-level evolution studies is due, at least in part, to these challenges.
Contributions. We focus on Debian (http://www.debian.org), one of the most reputed and oldest (founded in 1993) FOSS distributions, often credited as the largest organized collection of FOSS, and a popular data source for empirical software engineering studies (e.g., [28, 11, 19, 9]). Our aim is to ease the study of macro-level FOSS evolution patterns, using the assumption that Debian is a representative sample of relevant FOSS projects. More specifically, we want to support both long-term evolution studies—looking back as far as possible—as well as studies of present, day-by-day evolution patterns of software currently shipped by Debian.
To that end we have built Debsources, a software platform to gather, search, and publish on the Web the source code of Debian and measures about it. We have set up a Debsources instance at http://sources.debian.net, integrated it into the Debian infrastructure to receive live updates of new packages, and injected all current and historical Debian releases into it. To assess the usefulness of the platform we have used the obtained dataset to replicate the major studies on macro-level software evolution [24, 11] which, as it happens, targeted Debian too.
Debsources has made the data gathering process very easy. Thanks to its extensible design we just had to write a few short Python plugins to compute classical software metrics, trigger an update, and wait a few days to obtain the dataset. As a consequence of us doing so, the dataset needed to replicate the original studies is now live and perennial. Each Debian package release gets immediately processed by our plugins and the obtained results augment the dataset publicly available at our Debsources instance, which has quickly gained popularity in the Debian community.
Debsources is Free Software (http://anonscm.debian.org/gitweb/?p=qa/debsources.git) released under the AGPL3 license. It can be deployed elsewhere to serve similar needs.
To conduct the replication study we have queried the obtained dataset and charted the most interesting facts. Over all, we have been able to: (1) confirm the general trends observed in [24, 11], (2) extend them to take into account the subsequent 7 years of Debian evolution history, and (3) shed some light into some of the hypotheses made at the time, thanks to the more fine-grained knowledge of source files (and in particular of their checksums) that Debsources allows. We have also found some discrepancies; for the most part they seem due to the original study considering a smaller subset of the Debian archive than we did.
Paper structure. Section 2 gives an overview of the life cycle of Debian packages and releases. Section 3 details the architecture of Debsources, while Section 4 presents our data gathering process and the resulting dataset. Section 5 discusses the results of the replication study. Before concluding, Section 6 compares Debsources with related work.
Data availability. The software, dataset, and results discussed in this paper are available, in greater detail, at http://data.mancoosi.org/papers/esem2014/.
2. DEBIAN MINING FUNDAMENTALS
Debian [14] is a large and complex project. In this section we present the main notions needed for mining Debian as a collection of FOSS projects, in source code format.
The life-cycles of Debian packages and releases are depicted in Figure 1. As a distribution, Debian is essentially an intermediary between upstream authors—who release software as source code tarballs or equivalent—and final users that install the corresponding binary packages using package management tools like apt-get [5].
Debian package maintainers are in charge of the integration work that transforms upstream tarballs into packages. They usually work on source packages, which are bundles made of upstream tarballs (e.g., proj.x.y.z.orig.tar.gz), Debian-specific patches (*.diff.gz), and machine readable metadata (*.dsc). The metadata of all source packages corresponding to a Debian release are aggregated into metadata index files called Sources. A sample source package entry
```
Package: emacs19
Priority: standard
Section: editors
Version: 19.34-19.1
Binary: emacs19, emacs19-el
Maintainer: Mark W. Eichin <eichin@[...]>
Architecture: any
Directory: dists/hamm/main/source/editors
Files:
 75c[...]db5 649 emacs19_19.34-19.1.dsc
 f7[...]d40 10875510 emacs19_19.34.orig.tar.gz
 f[...]d[...]d8 15233 emacs19_19.34-19.1.diff.gz
```
Figure 2: sample Debian source package metadata
Table 1: Debian release information; * denotes, here and in the remainder, unreleased suites.
| ver. | name | cur. alias | release date | cycle (days) | archived |
|---|---|---|---|---|---|
| 1.1 | buzz | | 17/06/1996 | n/a | yes |
| 1.2 | rex | | 12/12/1996 | 178 | yes |
| 1.3 | bo | | 05/06/1997 | 175 | yes |
| 2.0 | hamm | | 24/07/1998 | 414 | yes |
| 2.1 | slink | | 09/03/1999 | 228 | yes |
| 2.2 | potato | | 15/08/2000 | 525 | yes |
| 3.0 | woody | | 19/07/2002 | 703 | yes |
| 3.1 | sarge | | 06/06/2005 | 1053 | yes |
| 4.0 | etch | | 08/04/2007 | 671 | yes |
| 5.0 | lenny | | 15/02/2009 | 679 | yes |
| 6.0 | squeeze | oldstable | 06/02/2011 | 721 | no |
| 7 | wheezy | stable | 04/05/2013 | 818 | no |
| 8 | jessie* | testing | tbd | tbd | no |
| n/a | sid* | unstable | n/a | n/a | no |
from an ancient Sources file is shown in Figure 2. Similar indexes, called Packages, exist for binary packages.
Several metadata fields are worth noting. Source packages are versioned by concatenating the upstream version, a “-” sign, and a Debian-specific version. Source packages are also organized in two-level sections: packages only containing software considered free by Debian belong to the top-level (and implicit) section main; other packages are either in the contrib or non-free top-level sections, resulting in complete sections like Section: non-free/games. Each source package gets compiled to one or several binary packages, defining the granularity at which users can install software. In Figure 2, Emacs 19 corresponds to two distinct binary packages, one for the editor itself and another one for its Elisp modules.
When ready, the maintainer uploads both source and binary packages to the development release (or “suite”) called unstable (a.k.a. sid). Since Debian supports many hardware architectures, a network of build daemons (buildd) fetches incoming source packages from unstable, builds them for all supported architectures, and uploads the resulting binary packages back to unstable.
After a semi-automatic software qualification process called migration [28], which might take several days or weeks, packages flow to the testing suite. At the end of each development cycle migrations are stopped, testing is polished, and eventually released as the new Debian stable release.
Packages are distributed to users via an ad-hoc content delivery network made of hundreds of mirrors around the world. Each mirror contains all “live” suites, i.e., the suites discussed thus far plus the former stable release (oldstable). When a new stable is released, oldstable gets stashed away to a different archive—http://archive.debian.org—which is separately mirrored and contains all historical releases.
For reference, Table 1 summarizes information about Debian suites to date, their codenames, and which suites are currently archived. We note in passing that the average development cycle of Debian stable releases is 560 days (resp. 774 over the past 12 years, since woody) with a standard deviation of 270 days (resp. 133 days).
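These figures can be reproduced from the cycle lengths in Table 1, treating them as the whole population:

```python
import statistics

# Development cycles (days) from Table 1, rex (178) through wheezy (818).
cycles = [178, 175, 414, 228, 525, 703, 1053, 671, 679, 721, 818]

print(round(statistics.mean(cycles)), round(statistics.pstdev(cycles)))
# -> 560 271  (i.e., ~560 days, standard deviation ~270)

since_woody = cycles[-6:]   # woody (703) onwards
print(round(statistics.mean(since_woody)), round(statistics.pstdev(since_woody)))
# -> 774 133
```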
3. ARCHITECTURE
In this paper we focus on two distinct aspects of Debsources. On the one hand Debsources is a software platform that can be deployed to gather data about the evolution of Debian and all Debian-like distributions—we present this aspect in this section. On the other hand we have set up a specific Debsources instance and used it to gather a large dataset about Debian evolution history—we discuss this aspect in the next section.
The architecture of Debsources and its data flow are depicted in Figure 3. On the back end, Debsources inputs are the mirror network (for live suites) and archive.debian.org (for archived ones). Live suites can be mirrored by running periodically (e.g., via cron) the dedicated debmirror tool (http://packages.debian.org/sid/debmirror), which understands the Debian archive structure. Note that the archive format supported by debmirror is shared across all Debian-based distributions (or derivatives), e.g. Ubuntu, which makes it possible to use Debsources on them as well. Archived suites require a more low-level mirroring approach (e.g., using rsync) due to the fact that the Debian archive structure has changed in incompatible ways over time.
For Debian live suites it is possible to receive “push” notifications of mirror updates—which usually happen 4 times a day—and use them to trigger debmirror runs, minimizing the update lag. To that end one needs to get in touch with a Debian mirror operator and ask for specific arrangements. Archived suites can only be mirrored in “pull” style, but they only change at each stable release, on average every 2 years. If needed, Debsources can be told to mirror only specific suites, for both live and archived suites.
After each mirror update, the Debsources updater is run. Its update logic is a simple sequence of 3 phases (sketched in code below):
1. extraction and indexing of new packages;
2. garbage collection of disappeared packages, provided that a customizable grace period has also elapsed;
3. update of overall statistics about known packages.
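A minimal sketch of this loop over sets of (name, version) package identifiers; the data structures are deliberate simplifications of Debsources' actual storage:

```python
from datetime import date, timedelta

GRACE_PERIOD = timedelta(days=14)  # hypothetical value; it is configurable

def update(mirror_pkgs, indexed, last_seen, today):
    # phase 1: extraction and indexing of new packages
    for pkg in mirror_pkgs - indexed:
        indexed.add(pkg)
        last_seen[pkg] = today
    # phase 2: garbage collection once the grace period has elapsed
    for pkg in list(indexed - mirror_pkgs):
        if today - last_seen[pkg] > GRACE_PERIOD:
            indexed.discard(pkg)
    # phase 3: update of overall statistics (here: a bare package count)
    return {"packages": len(indexed)}

indexed = {("emacs19", "19.34-19.1")}
last_seen = {("emacs19", "19.34-19.1"): date(2014, 1, 1)}
print(update({("hello", "2.9-1")}, indexed, last_seen, date(2014, 3, 9)))
print(indexed)
```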
Debsources storage is composed of 3 parts: the local mirror, the source packages—extracted to individual directories using the standard Debian tool dpkg-source—and a Postgres DB, whose schema is given in Figure 4. Note that throughout the paper, unless otherwise specified, we use “package” to mean “source package”. The DB contains information about package metadata, suites, and individual source files.
A plugin system is available and accounts for Debsources flexibility. Each time the updater touches a package in the data storage (e.g., by adding or removing it), it sends a notification to all enabled plugins. Plugins can further process packages, including their metadata and all of their source code, and update the DB accordingly. Plugins can declare and use their own tables (see the starred tables in Figure 4) or use general purpose plugin tables such as metrics. In essence Debsources does the heavy lifting of maintaining a general purpose storage for Debian source code, enabling plugin authors to focus on data extraction.
To assess the usefulness of this design we have developed plugins to compute popular source code metrics: disk usage (mostly as a plugin example for developers), physical source lines of code (SLOC) using sloccount [29], user-defined “symbols” (functions, classes, types, etc.) using Exuberant Ctags, and SHA256 checksums of all source files—arguably not a metric per se, but useful to detect duplicates and refine other metrics on that basis. Note that simpler metrics like the number of source files do not need specific plugins, because Debsources already tracks individual files.
We are quite pleased with the little effort needed to implement the plugins: if we exclude boilerplate code, the most complex plugin (ctags) is ∼100 lines of Python code, most of which needed to parse ctags files. All plugins described above are part of the standard Debsources distribution.
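For illustration, here is a stripped-down plugin in the spirit of the checksums plugin; the callback name and signature are hypothetical stand-ins, not Debsources' real plugin API:

```python
import hashlib
from pathlib import Path

def add_package(pkg, version, source_dir, rows):
    """Hypothetical 'package added' hook: record one
    (pkg, version, path, sha256) row per file in the extracted sources."""
    base = Path(source_dir)
    for path in sorted(p for p in base.rglob("*") if p.is_file()):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        rows.append((pkg, version, str(path.relative_to(base)), digest))

rows = []
add_package("hello", "2.9-1", ".", rows)  # checksum the current directory
print(len(rows), "files checksummed")
```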
On the front end, Debsources offers several interfaces. For final users, the Debsources web app implements an HTML + JavaScript interface with features like browsing, syntax highlighting, code annotations (via URL parameters), DB searches on metadata, and regular expression searches on the code via Debian Code Search [26]. The same features are exposed to developers via a JSON API. Additionally, scholars interested in aggregate queries can directly access the low-level Debsources DB using (Postgres) SQL.
4. DATASET
Debsources is not meant to be a centralized single-instance platform: multiple instances of it can be deployed and tuned to serve different distributions or data gathering needs. On the other hand there is also value in having notable Debsources instances and using them to maintain large datasets about the evolution of Debian. In this section we present one such instance—http://sources.debian.net or, for short, sources.d.n—and its dataset.
sources.d.n is publicly accessible and meant to track all Debian suites, both live and archived. It can be queried via the web UI and JSON API. For security reasons no public access to the underlying DB is possible, but DB dumps are available on demand. Anyone can recreate an equivalent Debsources instance by following the very same process we have used to build sources.d.n, namely:
1. deploy Debsources
2. configure it to mirror a nearby Debian mirror; optional: get in touch with mirror admins to receive push update notifications—we have obtained this for sources.d.n
3. trigger an initial update run using update-debsources
4. mirror archive.debian.org with rsync
5. inject all archived suites using suite-archive add
The process is I/O-bound and the time needed to complete it depends mostly on I/O write speed. For reference, it took us ∼5 days to inject the archived suites plus ∼8 days for the live ones—∼2 weeks overall—using 7.2 kRPM disks in RAID5, which is arguably a quite slow setup by today's standards and certainly not one optimized for write speed. The resulting disk usage is as follows: 150 GB for the local mirror (100 GB used by live suites) + 610 GB for extracted packages + 75 GB for the DB (45 GB used by indexes on large tables) = ∼840 GB, which is quite tolerable for server-grade deployments.
sources.d.n is configured with all the plugins discussed in Section 3: disk usage, sloccount, ctags, and checksums. We haven’t thoroughly benchmarked the injection process, but a significant part of the processing time (∼40–50%) is used to compute and insert ctags in the DB.
Some figures about the major tables in sources.d.n DB are reported in Table 2. The 16 injected suites include all live suites (including small suites not discussed here like backports and -updates) and all archived suites, with the exception of Debian 1.1 buzz and 1.3 rex. The exception is because those releases did not have Sources indexes, nor .dsc files for all packages. Supporting their absence is not difficult, but requires an additional abstraction layer that is not implemented in Debsources yet. Previous studies [10, 24] have ignored the same releases, presumably for the same reasons.
Table 2: table sizes in the sources.d.n dataset
| table | rows |
|---|---|
| suites_info | 16 |
| packages | 28,454 |
| suites | 119,078 |
| metrics* (i.e., disk usage) | 81,582 |
| sloccounts* | 290,961 |
| checksums* | 3,495,057 |
| ctags* | 317,853,685 |
Table 3: Debian release sizes
| suite | pkgs | files (k) | du (GB) | sloc (M) | sloc/pkg (k) |
|---|---|---|---|---|---|
| hamm | 1,373 | 348.4 | 4.1 | 35.1 | 25.6 |
| slink | 1,880 | 484.6 | 6.0 | 52.2 | 27.8 |
| potato | 2,962 | 686.0 | 8.6 | 69.1 | 23.3 |
| woody | 5,583 | 1394.5 | 18.2 | 143.3 | 25.7 |
| sarge | 9,050 | 2394.0 | 34.1 | 216.3 | 23.9 |
| etch | 10,550 | 2879.7 | 45.0 | 281.9 | 26.7 |
| lenny | 12,517 | 3713.9 | 61.8 | 351.0 | 28.0 |
| squeeze | 14,965 | 4913.2 | 89.2 | 462.5 | 30.9 |
| wheezy | 17,570 | 6588.1 | 125.8 | 609.2 | 34.7 |
| jessie* | 19,983 | 8017.1 | 157.8 | 786.7 | 39.4 |
| sid* | 21,232 | 9872.2 | 188.5 | 972.6 | 45.8 |
There are on average 2.86 versions per package. The number of mappings between (versioned) packages and suites, ~120,000, is significantly higher than the number of packages due to packages occurring in multiple releases.
We index and checksum ~30 M source files, a whopping ~320 M ctags, and ~300,000 (language/package) pairs for an average of 3.56 different programming languages occurring in each (versioned) package. These are just preliminary observations that can be made on the basis of simple row counts; we will refine them in the next section.
5. MACRO-LEVEL EVOLUTION
Using the sources.d.n dataset we can replicate the findings of the former major study on macro-level software evolution [10] (reference study, or ref. study in the following). We present in this section our experiences in doing so. In addition to the general usefulness of conducting replication studies (independent claim verification, method comparison, etc.), replicating today (2014) that study (2009) is particularly useful, because we now have data about 7 extra years (+77%, up to a total of 16 years) of evolution history.
5.1 Total size
The total sizes of all considered suites are given in Table 3 and plotted over time in Figure 5. Using the sources.d.n dataset it has been easy to compute extra metrics (n. of source code files, disk usage, and ctags) in addition to those already computed in ref. study (n. of packages and SLOC).
When comparing with the ref. study it is clear that we have considered more packages in each release: 300 more for hamm, up to 400 more for etch. A first potential reason is that they might have restricted their analysis to the main section of the Debian archive, whereas we have included all sections. Strictly speaking contrib and non-free are not part of Debian, but they are maintained by Debian people using Debian resources; given that several claims in software evolution pertain to maintenance sustainability, we think it’s more appropriate to include all sections. To verify this hypothesis we have recomputed sizes using main only, obtaining package counts closer to, but still higher than, those of the reference study.
5.2 Package size
We have studied the frequency distribution of package sizes in SLOC for all suites in the dataset. In Figure 6 we show the distributions for the two releases considered in our reference study (hamm and woody) plus the last two stable releases. Recent history confirms the observations of the ref. study: larger packages are getting larger and larger, with now 2 packages (the Linux kernel and the Chromium browser) past the 10 million SLOC mark in the last stable release. At the same time more and more small packages enter the distribution over time, with about 50% of wheezy packages below 3,900 SLOC.
What has changed since the ref. study is the relative stability, back then, of the average package size—see Table 3. Post-etch the average package size has gone up gradually but considerably, from 26 kSLOC (etch) up to 34.7 kSLOC (+33%) in wheezy. It appears that the increase in the number of small packages added to the distribution is no longer enough to compensate the growth in size of large packages. A possible explanation is the emergence of more strict criteria in accepting new packages in Debian, with the effect of filtering out "non mature", and usually small, software. A more far-fetched explanation, if we take Debian as a rep...
Figure 5: Debian release sizes over time
5.3 Package maintenance
Using the sources.d.n dataset we can study package changes across releases (“package maintenance”, in the wording of the ref. study) by considering in turn pairs of suites, using one of them as reference, and classifying packages in the other as: common (appearing in both suites no matter the version), removed (present in the reference but not in the other), or new (vice versa). We can furthermore identify unchanged packages (⊆ common) as those appearing with the same version in the two suites. We have done this classification for all pairs of subsequent suites. A significant excerpt of the results is given in the upper part of Table 4.
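The classification itself is plain set arithmetic; a sketch, with suites modelled as name-to-upstream-version maps:

```python
# Classify packages of `other` against `reference`, as in Table 4.
def classify(reference, other):
    common = reference.keys() & other.keys()
    return {
        "common": common,
        "removed": reference.keys() - other.keys(),
        "new": other.keys() - reference.keys(),
        "unchanged": {p for p in common if reference[p] == other[p]},
    }

hamm = {"emacs19": "19.34", "netcat": "1.10", "foo": "0.1"}
slink = {"emacs19": "20.2", "netcat": "1.10", "bar": "2.0"}
for kind, pkgs in classify(hamm, slink).items():
    print(kind, sorted(pkgs))
```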
Once again we obtain similar, but not identical results w.r.t. the reference study, which only gives common and unchanged measurements for hamm and etch. Restricting to main closes the gap almost entirely. The small number of packages that persisted unchanged from hamm to etch (148) shrank even further in jessie but is still non-zero—16 years later!—and seems to be stabilizing at around 80. Looking into those packages we find legacy, but still perfectly functional tools like netcat.
It is important to note that—even though this point is not immediately clear in ref. study—unchanged packages are not packages that have not been touched at all across releases, but only packages whose upstream version (e.g., 1.2.3) has not changed. Their Debian version might have changed, and in fact redoing the analysis using the complete package versions (e.g., 1.2.3-4) we find that unchanged packages w.r.t. hamm drop to 0 already at woody, “only” 3 releases later. This suggests that long lasting unchanged packages might have been abandoned upstream, but are still maintained in Debian via package patches, without going through the burden of replacing upstream maintainers.
To put things in perspective we have also computed the average package life, defined as the period of time between the release of the first suite in which a package appears as new (w.r.t. the previous release) and that of the first suite in which it is removed (ditto). The result is 944 days, only 20% higher than the average release duration since woody. In spite of a few long lasting unchanged entries, software in Debian seem to have a fairly high turnover.
We have also briefly looked into the percentage of common and unchanged packages w.r.t. the previous release: both values increase slightly post-etch, but now show a remarkable stability around 87% (common) and 43% (unchanged)—the ratio of change appears to be stable across releases.
An acknowledged limitation of our reference study is that, using only version information, one cannot assess the size of upstream changes: they can find out that a package in different suites went through (at least) one new upstream release, but not if that means that a single file has been changed, or rather if a large number of files have been. With file and checksum information from the sources.d.n data set we can be more precise.
In the lower part of Table 4 we compare each stable release with the preceding one (all pairs comparisons have been omitted due to space constraints). For each comparison we give the total amount of modified packages (\(\subseteq\) common \(\setminus\) unchanged), and the average percentage of files affected by the change w.r.t. the previous release. The latter ratio has been computed by comparing the sets of file checksums in the two versions: if a checksum from the previous release disappears in the new one we count that as one “file” change; the same goes for newly appearing checksums. One can certainly be more precise than this, for instance by computing the size of actual package diff-s, but that requires a dataset that includes the actual content of source files. Checksum comparison, like other fingerprinting techniques, is an interesting trade-off which arguably remains in the realm of pure metadata analyses.
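A minimal sketch of the checksum comparison in Python; the normalization of the ratio below (changed checksums over all checksums in both versions) is our assumption, since the denominator admits several reasonable choices:

```python
def file_change_ratio(old, new):
    """Estimate the fraction of files affected by a package upgrade.

    `old` and `new` are the sets of file checksums shipped by the two
    versions.  A checksum present on only one side counts as one file
    change (added, removed, or modified).
    """
    changed = len(old - new) + len(new - old)
    return changed / (len(old) + len(new))

old = {"a1", "b2", "c3"}
new = {"a1", "d4"}
print(file_change_ratio(old, new))  # 0.6: only "a1" survived intact
```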
The absolute number of modified packages appears to grow with the release size over time. Sarge is an exception to that rule, showing an anomalously high number of modified packages, but sarge is peculiar also in its very long development cycle, almost twice the average release duration. This suggests that the number of modified packages is also correlated with release duration. On the other hand, the average amount of modified files shows a remarkable stability post-etch, at around 60%, with larger fluctuations around that value in early releases. The percentage might seem high,
Table 4: changes between Debian releases: ‘c’ for common, ‘u’ for unchanged, and ‘m’ for modified packages
<table>
<thead>
<tr><th>from</th><th>to</th><th>c / u</th><th>to</th><th>c / u</th></tr>
</thead>
<tbody>
<tr><td>hamm</td><td>slink</td><td>1324c, 842u</td><td>potato</td><td>1198c, 463u</td></tr>
<tr><td>slink</td><td>potato</td><td>1655c, 742u</td><td>woody</td><td>1455c, 384u</td></tr>
<tr><td>potato</td><td>woody</td><td>2456c, 935u</td><td>sarge</td><td>2118c, 551u</td></tr>
<tr><td>woody</td><td>sarge</td><td>4588c, 1688u</td><td>etch</td><td>3935c, 1156u</td></tr>
<tr><td>sarge</td><td>etch</td><td>7671c, 3832u</td><td>lenny</td><td>6828c, 2597u</td></tr>
<tr><td>etch</td><td>lenny</td><td>9230c, 4578u</td><td>squeeze</td><td>8041c, 2906u</td></tr>
<tr><td>lenny</td><td>squeeze</td><td>10530c, 5272u</td><td>wheezy</td><td>9631c, 3676u</td></tr>
<tr><td>squeeze</td><td>wheezy</td><td>1317c, 6812u</td><td>jessie*</td><td>13117c, 5425u</td></tr>
<tr><td>wheezy</td><td>jessie*</td><td>16543c, 10132u</td><td></td><td>16543c, 10519u</td></tr>
<tr><td>jessie*</td><td></td><td>19795c</td><td></td><td>19795c</td></tr>
</tbody>
</table>
but note that unchanged packages (i.e., 0% changes) are excluded from the count and that Debian release cycles are quite long for active upstream projects. Further by-hand investigation of selected projects has confirmed that active projects do indeed change that much over similar periods. These results seem to hint at a polarization in the evolution of individual FOSS projects, between active projects that evolve steadily and dormant, possibly feature-complete ones that cease evolving while still remaining useful.
5.4 Programming languages
The evolution of programming languages over time is also easy to study using sources.d.n. We show the most popular (in terms of SLOC) languages per release in Table 5 and their evolution over time, in both absolute and relative terms, in Figure 7. (Complete data for all suites and languages is available at http://sources.debian.net/stats/.)
This time we got significantly different numbers w.r.t. the reference study, while still confirming most of their conclusions. We wonder if an additional reason for discrepancies here might be the exclusion of Makefile, SQL, and XML from their analysis, given that sloccount excludes them by default, unless --addlangall is used. For reference, there are 5.4 MSLOC of makefile and 2.7 MSLOC of SQL in wheezy, cumulatively ∼1% of the total, unlikely to affect general trends. XML is a more significant omission though, as it is the 4th most popular language in wheezy. It is debatable whether XML should be considered a programming language, but its popularity hints at its usage for expressing program logic in declarative ways. For this reason we do not think it should be disregarded.
C is invariably the most popular language and its growth, in absolute terms, is steady; in relative terms its growth is not as fast as that of other languages, most notably C++. Post-squeeze, however, the rate at which C was losing ground to C++ slows down and almost entirely stops. (The increase in C's popularity in jessie should probably be disregarded, due to the multiple version issue already discussed.)
Another interesting post-etch phenomenon is the decrease of shell script popularity, together with the consolidation of Perl decline. During the same period Python increases its popularity and is now the 5th most popular language. This suggests that Python is replacing Perl and shell script as a more maintainable glue code language.
Two other post-etch trends are worth noting: Lisp has almost halved its popularity and the under-representation of Java, hypothesized in ref. study, is now gone. Even though far behind C++, Java is the 3rd most popular language in recent releases, with a significant margin over the 4th, and has more than tripled its popularity since etch.
5.5 File size
Finally, we have computed the average file size (in SLOC) per language, and analyzed its evolution across releases. In this case the sources.d.n dataset is at a disadvantage w.r.t. our reference study, because the SLOC plugin currently does not compute the number of files per language (which needs passing --filecount to sloccount), but only SLOC counts. To compute average file sizes we have therefore divided per-language totals by the number of per-language files, computing the latter by only looking at file extensions. To do so we have adopted the same conventions used by sloccount for preliminary language classification, but we haven't been able to further re-classify files as sloccount does, for instance on the basis of shebang lines like #!/bin/sh. This can be seen as a drawback of a metadata-only dataset, but is in fact a simple limitation of the current SLOC plugin implementation: instead of using a single table to collect per-language totals, the plugin should declare two, and use the extra one to map individual file entries to their languages as detected by sloccount. Fixing this is on our roadmap.
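A sketch of the work-around, with a deliberately tiny extension table (sloccount's real mapping is much larger, and its shebang-based re-classification is precisely what this approximation misses):

```python
from collections import Counter

EXT_LANG = {".c": "ansic", ".h": "ansic", ".cc": "cpp", ".cpp": "cpp",
            ".java": "java", ".py": "python", ".pl": "perl"}

def avg_file_size(sloc_totals, paths):
    """Per-language average file size: SLOC totals divided by file
    counts, attributing files to languages by extension only."""
    counts = Counter()
    for path in paths:
        for ext, lang in EXT_LANG.items():
            if path.endswith(ext):
                counts[lang] += 1
                break
    return {lang: total / counts[lang]
            for lang, total in sloc_totals.items() if counts[lang]}

print(avg_file_size({"ansic": 720}, ["a.c", "b.c", "util.h"]))
# {'ansic': 240.0}
```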
On the bright side, this difference opens the opportunity to methodological comparisons. Our results are shown in Table 6. Ref. study only lists average file sizes for 5 languages. Limited to those languages we note that the absolute numbers for C and Lisp are remarkably similar, suggesting that file extension detection is very accurate for those languages. Significant differences are visible for C++, where we found higher averages, probably due to the fact that the amount of C++ files is being underestimated by only looking at file extensions, likely due to extensions shared with C. Finally, we found much higher averages for shell (up to 4x), but that is more easily explained. Most shell scripts tend not to have file extensions, and have therefore been excluded from our count. Scripts that do have an extension are required by the Debian Policy to reside outside the execution $PATH. As a consequence, shipped .sh files tend to be shell libraries, used by relatively uncommon large applications written in shell script.
Despite the differences in absolute numbers we can confirm the continued stability of C, Lisp, Perl, and Java average sizes, basically unchanged over almost 20 years. The stability of C, considering its continued growth in absolute terms, is remarkable. The growth of shell script averages, already observed in ref. study, has inverted its trend and is now decreasing since etch, likely due to the already observed increase of Python popularity—whose average file size is increasing as well. A plausible general pattern for average file size growth is to increase while the corresponding language is still growing in popularity, to eventually stabilize and remain so for a long while.
5.6 Threats to validity
We haven’t replicated the (binary) package dependency analysis part of ref. study. We cannot replicate it exactly because currently Debsources does not retrieve Packages indexes and we consider out of scope for it to do so. On the other hand we can easily add a plugin to parse debian/control files, and extract dependencies from there. That will have the advantage of separating maintainer-defined dependencies from automatically generated ones, which arguably have a smaller impact on package maintainability.
The sources.d.n data set, due to the reasons discussed in Section 4, does not include the first 2 years of Debian release history. This has no impact on the replication study, given that our reference study didn't consider them either. But it would still be interesting to add those years to our dataset, in order to peek into the early years of organized FOSS collections. Additionally, due to a regression in dpkg-source, we have not extracted all packages from archived suites. We have patched dpkg-source to overcome the limitation, but we are still missing a total of 12 (small) packages from archive.debian.org. We do not expect such a tiny amount to significantly impact our results.
Both sloccount and Exuberant Ctags are starting to show their age and suffer from a lack of active maintenance. During the development of Debsources we have reported various bugs against them, all related to the lack of support for “recent” languages; for instance, Scala and JavaScript are currently completely ignored by sloccount. This does not threaten the validity of the replication study, because ref. study relies on sloccount too, but it is starting to become problematic for dataset accuracy. The specific case of
6 http://bugs.debian.org/740883
JavaScript is particularly worrisome, due to its increasing popularity for server-side Node.js applications.
6. RELATED WORK
The scarcity of macro-level software evolution studies is one of the main motivations for this work. To the best of our knowledge, Barahona et al. [10] and its preliminary version [24] are the main studies in the field. We have replicated their findings and compared them with ours in Section 5.
Other works have studied the size and composition of specific releases of large FOSS distributions such as Red Hat 7.1 [29], Debian Potato [9], and Debian Sarge [2]. Our work improves over those by adding the time axis, which is fundamental in software evolution. An inconvenience of our approach is the reliance on a Debian-like archive structure. This is undoubtedly a limiting factor, but we believe it should be put in perspective considering that Debsources supports all Debian-based distributions.
The Ultimate Debian Database (UDD) [20] has assembled a large dataset about Debian and some of its derivatives, and is a popular target for mining studies [30]. UDD too lacks the time axis, with the sole exception of a history table.

7. CONCLUSIONS

We have used the sources.d.n dataset to replicate our reference study of the evolution of large FOSS distributions, focusing on the source code of Debian. In spite of differences in absolute results, we have been able to confirm the general evolution trends observed back then, extend them to take into account the subsequent 7 years of history, and shed light into hypotheses made back then thanks to the fine-grained, file-level knowledge that Debsources allows.
Even though the bottom lines are the same, it is disturbing that we have not been able to either obtain identical results, or definitively ascertain the origin of the discrepancies. Empirical software engineering should be reproducible [22] and to that end we need more publicly accessible datasets that researchers can start from. When consistently used in conjunction with FOSS platforms, such datasets should be enough to improve over the status quo.
More generally, the reproducibility issue and some of the difficulties we have encountered (e.g., the non-backward-compatible changes in the Debian archive format and the dpkg-source regression) are instances of the more general "bit rot" problem described by Cerf [4]—who is worried about the long-term preservation of digital information, and rightfully so. We think that datasets like sources.d.n can help on both the reproducibility and the information preservation fronts.
Several Debsources extensions are in the works. On the one hand we want to refine our ability to compute differences across releases and investigate how far we can go with fingerprinting techniques before having to compute all-pairs diff-s.
---
Table 6: average file size (in SLOC) per language (top-12, from left to right), based on file extension
<table>
<thead>
<tr><th>suite</th><th>ansic</th><th>cpp</th><th>java</th><th>xml</th><th>sh</th><th>python</th><th>perl</th><th>lisp</th><th>asm</th><th>fortran</th><th>cs</th><th>php</th></tr>
</thead>
<tbody>
<tr><td>hamm</td><td>239</td><td>239</td><td>100</td><td>499</td><td>102</td><td>232</td><td>435</td><td>92</td><td>133</td><td>56</td><td>57</td><td></td></tr>
<tr><td>slink</td><td>251</td><td>199</td><td>99</td><td>747</td><td>119</td><td>254</td><td>403</td><td>124</td><td>121</td><td>125</td><td>44</td><td></td></tr>
<tr><td>potato</td><td>252</td><td>226</td><td>81</td><td>363</td><td>136</td><td>261</td><td>414</td><td>131</td><td>144</td><td>83</td><td>136</td><td></td></tr>
<tr><td>woody</td><td>255</td><td>303</td><td>89</td><td>230</td><td>141</td><td>255</td><td>434</td><td>245</td><td>154</td><td>163</td><td>121</td><td></td></tr>
<tr><td>sarge</td><td>237</td><td>305</td><td>103</td><td>171</td><td>127</td><td>278</td><td>423</td><td>195</td><td>166</td><td>93</td><td>138</td><td></td></tr>
<tr><td>etch</td><td>237</td><td>315</td><td>112</td><td>194</td><td>1875</td><td>269</td><td>383</td><td>229</td><td>167</td><td>119</td><td>179</td><td></td></tr>
<tr><td>lenny</td><td>232</td><td>297</td><td>109</td><td>201</td><td>1539</td><td>262</td><td>415</td><td>199</td><td>171</td><td>127</td><td>168</td><td></td></tr>
<tr><td>squeeze</td><td>219</td><td>302</td><td>112</td><td>225</td><td>1236</td><td>238</td><td>432</td><td>194</td><td>182</td><td>123</td><td>164</td><td></td></tr>
<tr><td>wheezy</td><td>222</td><td>321</td><td>155</td><td>220</td><td>1074</td><td>228</td><td>419</td><td>217</td><td>224</td><td>132</td><td>161</td><td></td></tr>
<tr><td>jessie</td><td>230</td><td>302</td><td>117</td><td>233</td><td>1064</td><td>258</td><td>439</td><td>182</td><td>218</td><td>136</td><td>146</td><td></td></tr>
</tbody>
</table>
On the other hand we want to attack the ambitious goal of injecting into sources.d.n the releases of as many Debian derivatives as possible, scaling up considerably the size of the ecosystem we are able to study at present. We think it is feasible to do so without switching to a version control system as data storage (which would bring its own non-trivial decisions about the adopted branching structure), but implementing instead file-level deduplication using checksums. Deduplication will also dramatically reduce the amount of resources needed to study the history of Debian development, for instance by injecting Debian sid snapshots at the desired granularity from http://snapshot.debian.org.
The largest Debsources instance to date (http://sources.debian.net) has already filled a niche in the Debian infrastructure and quickly gathered popularity due to its code browsing and search functionalities. What is more interesting from a scientific point of view is Debsources' ability to turn one-shot evolution studies into live, perennial monitors of evolution traits that scholars have identified as worthy of attention. We look forward to others joining us in developing Debsources plugins that allow more and more evolution studies to be made perennial.
---
How Elegant Code Evolves with Hardware: The Case of Gaussian Elimination
Jack Dongarra and Piotr Luszczek
The increasing availability of advanced-architecture computers, at affordable costs, has had a significant effect on all spheres of scientific computation. In this chapter, we'll show the need for designers of computing algorithms to make expeditious and substantial adaptations to algorithms, in reaction to architecture changes, by closely examining one simple but important algorithm in mathematical software: Gaussian elimination for the solution of linear systems of equations.
At the application level, science has to be captured in mathematical models, which in turn are expressed algorithmically and ultimately encoded as software. At the software level, there is a continuous tension between performance and portability on the one hand, and understandability of the underlying code on the other. We'll examine these issues and look at trade-offs that have been made over time. Linear algebra—in particular, the solution of linear systems of equations—lies at the heart of most calculations in scientific computing. This chapter focuses on some of the recent developments in linear algebra software designed to exploit advanced-architecture computers over the decades.
There are two broad classes of algorithms: those for dense matrices and those for sparse matrices. A matrix is called sparse if it contains a substantial number of zero elements. For sparse matrices, radical savings in space and execution time can be achieved through specialized storage and algorithms. To narrow our discussion and keep it simple, we will look only at the dense matrix problem (a dense matrix is defined as one with few zero elements).
Much of the work in developing linear algebra software for advanced-architecture computers is motivated by the need to solve large problems on the fastest computers available. In this chapter, we'll discuss the development of standards for linear algebra software, the building blocks for software libraries, and aspects of algorithm design as
influenced by the opportunities for parallel implementation. We'll explain motivations for this work, and say a bit about future directions.
As a representative example of dense matrix routines, we will consider Gaussian elimination, or LU factorization. This examination, spanning hardware and software advances over the past 30 years, will highlight the most important factors that must be considered in designing linear algebra software for advanced-architecture computers. We use these factorization routines for illustrative purposes not only because they are relatively simple, but also because of their importance in several scientific and engineering applications that make use of boundary element methods. These applications include electromagnetic scattering and computational fluid dynamics problems.
The past 30 years have seen a great deal of activity in the area of algorithms and software for solving linear algebra problems. The goal of achieving high performance in code that is portable across platforms has largely been realized by the identification of linear algebra kernels, the Basic Linear Algebra Subprograms (BLAS). We will discuss the LINPACK, LAPACK, and ScaLAPACK libraries, which are expressed in successive levels of the BLAS. See “Further Reading” at the end of this chapter for discussions of these libraries.
**The Effects of Computer Architectures on Matrix Algorithms**
The key motivation in the design of efficient linear algebra algorithms for advanced-architecture computers involves the storage and retrieval of data. Designers wish to minimize the frequency with which data moves between different levels of the memory hierarchy. Once data is in registers or the fastest cache, all processing required for this data should be performed before it gets evicted back to the main memory. Thus, the main algorithmic approach for exploiting both vectorization and parallelism in our implementations uses block-partitioned algorithms, particularly in conjunction with highly tuned kernels for performing matrix-vector and matrix-matrix operations (the Level-2 and Level-3 BLAS). Block partitioning means that the data is divided into blocks, each of which should fit within a cache memory or a vector register file.
The computer architectures considered in this chapter are:
- Vector machines
- RISC computers with cache hierarchies
- Parallel systems with distributed memory
- Multi-core computers
Vector machines were introduced in the late 1970s and early 1980s. They were able in one step to perform a single operation on a relatively large number of operands stored in vector registers. Expressing matrix algorithms as vector-vector operations was a natural fit for this type of machine. However, some of the vector designs had a limited ability to load and store the vector registers in main memory. A technique called chaining allowed this limitation to be circumvented by moving data between the registers before accessing main memory. Chaining required recasting linear algebra in terms of matrix-vector operations.
RISC computers were introduced in the late 1980s and early 1990s. While their clock rates might have been comparable to those of the vector machines, their computing speed lagged behind due to the lack of vector registers. Another deficiency was a deep memory hierarchy, with multiple levels of cache memory introduced to alleviate the scarcity of bandwidth that was, in turn, caused mostly by a limited number of memory banks. The eventual success of this architecture is commonly attributed to the right price point and astonishing improvements in performance over time as predicted by Moore's Law. With RISC computers, the linear algebra algorithms had to be redone yet again. This time, the formulations had to expose as many matrix-matrix operations as possible, which guaranteed good cache reuse.
A natural way of achieving even greater performance levels with both vector and RISC processors is by connecting them together with a network and letting them cooperate to solve a problem bigger than would be feasible on just one processor. Many hardware configurations followed this path, so the matrix algorithms had to follow yet again as well. It was quickly discovered that good local performance has to be combined with good global partitioning of the matrices and vectors.
Any trivial divisions of matrix data quickly uncovered scalability problems dictated by so-called Amdahl's Law: the observation that the time taken by the sequential portion of a computation provides the minimum bound for the entire execution time, and therefore limits the gains achievable from parallel processing. In other words, unless most of the computations can be done independently, the point of diminishing returns is reached, and adding more processors to the hardware mix will not result in faster processing.
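Stated as a formula: if a fraction \(s\) of a computation is inherently sequential, then the speedup on \(n\) processors is bounded by

$$\text{speedup}(n) = \frac{1}{s + \frac{1 - s}{n}} \le \frac{1}{s}$$

so a computation with even a modest sequential fraction \(s = 0.05\) can never run more than 20 times faster, no matter how many processors are added.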
For the sake of simplicity, the class of multi-core architectures includes both symmetric multiprocessing (SMP) and single-chip multi-core machines. This is probably an unfair simplification, as the SMP machines usually have better memory systems. But when applied to matrix algorithms, both yield good performance results with very similar algorithmic approaches: these combine local cache reuse and independent computation with explicit control of data dependences.
**A Decompositional Approach**
At the basis of solutions to dense linear systems lies a decompositional approach. The general idea is the following: given a problem involving a matrix $A$, one factors or decomposes $A$ into a product of simpler matrices from which the problem can easily be solved. This divides the computational problem into two parts: first determine an appropriate decomposition, and then use it in solving the problem at hand.
Consider the problem of solving the linear system:
$$Ax = b$$
where $A$ is a nonsingular matrix of order $n$. The decompositional approach begins with the observation that it is possible to factor $A$ in the form:
$$A = LU$$
where $L$ is a lower triangular matrix (a matrix that has only zeros above the diagonal) with ones on the diagonal, and $U$ is upper triangular (with only zeros below the diagonal). During the decomposition process, diagonal elements of $A$ (called pivots) are used to divide the elements below the diagonal. If matrix $A$ has a zero pivot, the process will break with a division-by-zero error. Also, small values of the pivots excessively amplify
the numerical errors of the process. So for numerical stability, the method needs to interchange rows of the matrix or make sure pivots are as large (in absolute value) as possible. This observation leads to a row permutation matrix $P$ and modifies the factored form to:
$$PA = LU$$
The solution can then be written in the form:
$$x = U^{-1} L^{-1} P b$$
which then suggests the following algorithm for solving the system of equations:
1. Factor $A$
2. Solve the system $Ly = Pb$
3. Solve the system $Ux = y$
This approach to matrix computations through decomposition has proven very useful for several reasons. First, the approach separates the computation into two stages: the computation of a decomposition, followed by the use of the decomposition to solve the problem at hand. This can be important, for example, if different right hand sides are present and need to be solved at different points in the process. The matrix needs to be factored only once and reused for the different right hand sides. This is particularly important because the factorization of $A$, step 1, requires $O(n^3)$ operations, whereas the solutions, steps 2 and 3, require only $O(n^2)$ operations. Another aspect of the algorithm's strength is in storage: the $L$ and $U$ factors do not require extra storage, but can take over the space occupied initially by $A$.
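Steps 2 and 3 are two triangular solves. As a minimal sketch (our illustration), assuming the factors have already been computed—e.g., by a routine like the lutx function shown later, so that \(LU = A(p,:)\) with a zero-based permutation vector p—the solve phase in Python/SciPy reads:

```python
from scipy.linalg import solve_triangular

def solve_with_factors(L, U, p, b):
    """Solve A x = b, given the factorization L U = A[p, :] with
    unit lower triangular L and upper triangular U (p is 0-based)."""
    y = solve_triangular(L, b[p], lower=True, unit_diagonal=True)  # step 2: Ly = Pb
    x = solve_triangular(U, y)                                     # step 3: Ux = y
    return x

# The O(n^3) factorization is paid once; each additional right-hand
# side b costs only these two O(n^2) triangular solves.
```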
For the discussion of coding this algorithm, we present only the computationally intensive part of the process, which is step 1, the factorization of the matrix.
**A Simple Version**
For the first version, we present a straightforward implementation of LU factorization. It consists of $n-1$ steps, where each step introduces more zeros below the diagonal, as shown in Figure 14-1.
---
**Figure 14-1. LU factorization**
A tool often used to teach Gaussian elimination is MATLAB. It features a scripting language (also called MATLAB) that makes developing matrix algorithms very simple. The language might seem very unusual to people familiar with other scripting languages because it is oriented to process multidimensional arrays. The unique features of the language that we use in the example code are:
- Transposition operator for vectors and matrices: ’ (single quote)
- Matrix indexing specified as:
  - Simple integer values: $A(m, k)$
  - Ranges: $A(k : n, k)$
  - Other matrices: $A([k \ m], : )$
- Built-in matrix functions such as `size` (returns matrix dimensions), `tril` (returns the lower triangular portion of the matrix), `triu` (returns the upper triangular portion of the matrix), and `eye` (returns an identity matrix, which contains only zero entries, except for the diagonal, which is all ones)
Example 14-1 shows the simple implementation.
Example 14-1. Simple variant (MATLAB coding)
```matlab
function [L,U,p] = lutx(A)
% LUTX Triangular factorization, textbook version
% [L,U,p] = lutx(A) produces a unit lower triangular matrix L,
% an upper triangular matrix U, and a permutation vector p,
% so that L*U = A(p,:)
[n,n] = size(A);
p = (1:n)';
for k = 1:n-1
% Find index 'm' of largest element 'r' below diagonal in k-th column
[r,m] = max(abs(A(k:n,k)));
m = m+k-1; % adjust 'm' so it becomes a global index
% Skip elimination if column is zero
if (A(m,k) ~= 0)
% Swap pivot row
if (m ~= k)
A([k,m],:) = A([m k],:); % swap rows 'k' and 'm' of 'A'
p([k m]) = p([m k]); % swap entries 'k' and 'm' of 'p'
end
% Compute multipliers
i = k+1:n;
A(i,k) = A(i,k)/A(k,k);
% Update the remainder of the matrix
j = k+1:n;
A(i,j) = A(i,j) - A(i,k)*A(k,j);
end
end
% Separate result
L = tril(A,-1) + eye(n,n);
U = triu(A);
```
The algorithm presented in Example 14-1 is row-oriented, in the sense that we are taking a scalar multiple of the “pivot” row and adding it to the rows below to introduce zeros below the diagonal. The beauty of the algorithm lies in its similarity to the mathematical notation. Hence, this is the preferred way of teaching the algorithm for the first time so that students can quickly turn formulas into running code.
This beauty, however, has its price. In the 1970s, Fortran was the language for scientific computations. Fortran stores two-dimensional arrays by column. Accessing the array in a row-wise fashion within the matrix could involve successive memory references to
locations separated from each other by a large increment, depending on the size of the declared array. The situation was further complicated by the operating system’s use of memory pages to effectively control memory usage. With a large matrix and a row-oriented algorithm in a Fortran environment, an excessive number of page swaps might be generated in the process of running the software. Cleve Moler pointed this out in the 1970s (see "Further Reading").
To avoid this situation, one needed simply to interchange the order of the innermost nested loops on \( i \) and \( j \). This simple change resulted in more than 30 percent savings in wall-clock time to solve problems of size 200 on an IBM 360/67. Beauty was thus traded for efficiency by using a less obvious ordering of loops and a much more obscure (by today's standards) language.
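The effect is easy to reproduce today. The NumPy sketch below (our illustration, with an arbitrary 2000×2000 size) times the same column-major array traversed down columns (unit stride) and then across rows (large stride):

```python
import time
import numpy as np

A = np.asfortranarray(np.random.rand(2000, 2000))  # column-major layout

t0 = time.perf_counter()
for j in range(A.shape[1]):
    A[:, j].sum()          # down a column: contiguous memory
t_col = time.perf_counter() - t0

t0 = time.perf_counter()
for i in range(A.shape[0]):
    A[i, :].sum()          # across a row: stride of 2000 doubles
t_row = time.perf_counter() - t0

print(f"column-wise: {t_col:.3f}s  row-wise: {t_row:.3f}s")
```

On typical hardware the row-wise loop is noticeably slower, for exactly the locality reasons discussed above.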
**LINPACK’s DGEFA Subroutine**
The performance issues with the MATLAB version of the code continued as, in the mid-1970s, vector architectures became available for scientific computations. Vector architectures exploit pipeline processing by running mathematical operations on arrays of data in a simultaneous or pipelined fashion. Most algorithms in linear algebra can be easily vectorized. Therefore, in the late 70s there was an effort to standardize vector operations for use in scientific computations. The idea was to define some simple, frequently used operations and implement them on various systems to achieve portability and efficiency. This package came to be known as the Basic Linear Algebra Subprograms (BLAS) or Level-1 BLAS.
The term *Level-1* denotes vector-vector operations. As we will see, Level-2 (matrix-vector operations), and Level-3 (matrix-matrix operations) play important roles as well.
In the 1970s, the algorithms of dense linear algebra were implemented in a systematic way by the LINPACK project. LINPACK is a collection of Fortran subroutines that analyze and solve linear equations and linear least-squares problems. The package solves linear systems whose matrices are general, banded, symmetric indefinite, symmetric positive definite, triangular, and tridiagonal square. In addition, the package computes the QR and singular value decompositions of rectangular matrices and applies them to least-squares problems.
LINPACK uses column-oriented algorithms, which increase efficiency by preserving locality of reference. By column orientation, we mean that the LINPACK code always references arrays down columns, not across rows. This is important since Fortran stores arrays in column-major order. This means that as one proceeds down a column of an array, the memory references proceed sequentially through memory. Thus, if a program references an item in a particular block, the next reference is likely to be in the same block.
The software in LINPACK was kept machine-independent partly through the introduction of the Level-1 BLAS routines. Almost all of the computation was done by calling Level-1 BLAS. For each machine, the set of Level-1 BLAS would be implemented in a machine-specific manner to obtain high performance.
Example 14-2 shows the LINPACK implementation of factorization.
Example 14-2. LINPACK variant (Fortran 66 coding)

```fortran
      subroutine dgefa(a,lda,n,ipvt,info)
      integer lda,n,ipvt(1),info
      double precision a(lda,1)
      double precision t
      integer idamax,j,k,kp1,l,nm1
c
c     gaussian elimination with partial pivoting
c
      info = 0
      nm1 = n - 1
      if (nm1 .lt. 1) go to 70
      do 60 k = 1, nm1
         kp1 = k + 1
c        find l = pivot index
         l = idamax(n-k+1,a(k,k),1) + k - 1
         ipvt(k) = l
c        zero pivot implies this column already triangularized
         if (a(l,k) .eq. 0.0d0) go to 40
c        interchange if necessary
         if (l .eq. k) go to 10
            t = a(l,k)
            a(l,k) = a(k,k)
            a(k,k) = t
   10    continue
c        compute multipliers
         t = -1.0d0/a(k,k)
         call dscal(n-k,t,a(k+1,k),1)
c        row elimination with column indexing
         do 30 j = kp1, n
            t = a(l,j)
            if (l .eq. k) go to 20
               a(l,j) = a(k,j)
               a(k,j) = t
   20       continue
            call daxpy(n-k,t,a(k+1,k),1,a(k+1,j),1)
   30    continue
         go to 50
   40    continue
         info = k
   50    continue
   60 continue
   70 continue
      ipvt(n) = n
      if (a(n,n) .eq. 0.0d0) info = n
      return
      end
```
The Level-1 BLAS subroutines DAXPY, DSCAL, and IDAMAX are used in the routine DGEFA. The main difference between Example 14-1 and Example 14-2 (other than the programming language and the interchange of loop indexes) is the use of routine DAXPY to encode the inner loop of the method.
It was presumed that the BLAS operations would be implemented in an efficient, machine-specific way suitable for the computer on which the subroutines were executed. On a vector computer, this could translate into a simple, single vector operation. This avoided leaving the optimization up to the compiler and explicitly exposing a performance-critical operation.
In a sense, then, the beauty of the original code was regained with the use of a new vocabulary to describe the algorithms: the BLAS. Over time, the BLAS became a widely adopted standard and were most likely the first to enforce two key aspects of software: modularity and portability. Again, these are taken for granted today, but at the time they were not. One could have the cake of compact algorithm representation and eat it too, because the resulting Fortran code was portable.
Most algorithms in linear algebra can be easily vectorized. However, to gain the most out of such architectures, simple vectorization is usually not enough. Some vector computers are limited by having only one path between memory and the vector registers. This creates a bottleneck if a program loads a vector from memory, performs some arithmetic operations, and then stores the results. In order to achieve top performance, the scope of the vectorization must be expanded to facilitate chaining operations together and to minimize data movement, in addition to using vector operations. Recasting the algorithms in terms of matrix-vector operations makes it easy for a vectorizing compiler to achieve these goals.
Thus, as computer architectures became more complex in the design of their memory hierarchies, it became necessary to increase the scope of the BLAS routines from Level-1 to Level-2 and Level-3.
**LAPACK DGETRF**
As mentioned before, the introduction in the late 1970s and early 1980s of vector machines brought about the development of another variant of algorithms for dense linear algebra. This variant was centered on the multiplication of a matrix by a vector. These subroutines were meant to give improved performance over the dense linear algebra subroutines in LINPACK, which were based on Level-1 BLAS. Later on, in the late 1980s and early 1990s, with the introduction of RISC-type microprocessors (the “killer micros”) and other machines with cache-type memories, we saw the development of LAPACK Level-3 algorithms for dense linear algebra. A Level-3 code is typified by the main Level-3 BLAS, which, in this case, is matrix multiplication.
The original goal of the LAPACK project was to make the widely used LINPACK library run efficiently on vector and shared-memory parallel processors. On these machines, LINPACK is inefficient because its memory access patterns disregard the multilayered memory hierarchies of the machines, thereby spending too much time moving data instead of doing useful floating-point operations. LAPACK addresses this problem by reorganizing the algorithms to use block matrix operations, such as matrix multiplication, in the innermost loops (see the paper by E. Anderson and J. Dongarra under "Further Reading"). These block operations can be optimized for each architecture to account for its memory hierarchy, and so provide a transportable way to achieve high efficiency on diverse modern machines.

Here we use the term "transportable" instead of "portable" because, for fastest possible performance, LAPACK requires that highly optimized block matrix operations be implemented already on each machine. In other words, the correctness of the code is portable, but high performance is not—if we limit ourselves to a single Fortran source code.

LAPACK can be regarded as a successor to LINPACK in terms of functionality, although it doesn't always use the same function-calling sequences. As such a successor, LAPACK was a win for the scientific community because it could keep LINPACK's functionality while getting improved use out of new hardware.
Example 14-3 shows the LAPACK solution to LU factorization.
Example 14-3. LAPACK solution to LU factorization
```fortran
SUBROUTINE DGETRF( M, N, A, LDA, IPIV, INFO )
INTEGER INFO, LDA, M, N
INTEGER IPIV( * )
DOUBLE PRECISION A( LDA, * )
DOUBLE PRECISION ONE
PARAMETER ( ONE = 1.0D+0 )
INTEGER I, IINFO, J, JB, NB
EXTERNAL DGEMM, DGETF2, DLASWP, DTRSM, XERBLA
EXTERNAL ILAENV
INTRINSIC MAX, MIN
INFO = 0
IF( M.LT.0 ) THEN
INFO = -1
ELSE IF( N.LT.0 ) THEN
INFO = -2
ELSE IF( LDA.LT.MAX( 1, M ) ) THEN
INFO = -4
END IF
IF( INFO.NE.0 ) THEN
CALL XERBLA( 'DGETRF', -INFO )
RETURN
END IF
IF( M.EQ.0 .OR. N.EQ.0 ) RETURN
NB = ILAENV( 1, 'DGETRF', ' ', M, N, -1, -1 )
IF( NB.LE.1 .OR. NB.GE.MIN( M, N ) ) THEN
CALL DGETF2( M, N, A, LDA, IPIV, INFO )
ELSE
      DO 20 J = 1, MIN( M, N ), NB
         JB = MIN( MIN( M, N )-J+1, NB )
*        Factor diagonal and subdiagonal blocks and test for exact
*        singularity.
         CALL DGETF2( M-J+1, JB, A( J, J ), LDA, IPIV( J ), IINFO )
*        Adjust INFO and the pivot indices.
         IF( INFO.EQ.0 .AND. IINFO.GT.0 ) INFO = IINFO + J - 1
         DO 10 I = J, MIN( M, J+JB-1 )
            IPIV( I ) = J - 1 + IPIV( I )
   10    CONTINUE
*        Apply interchanges to columns 1:J-1.
         CALL DLASWP( J-1, A, LDA, J, J+JB-1, IPIV, 1 )
         IF( J+JB.LE.N ) THEN
*           Apply interchanges to columns J+JB:N.
            CALL DLASWP( N-J-JB+1, A( 1, J+JB ), LDA, J, J+JB-1, IPIV, 1 )
*           Compute block row of U.
            CALL DTRSM( 'Left', 'Lower', 'No transpose', 'Unit', JB,
     $                  N-J-JB+1, ONE, A( J, J ), LDA, A( J, J+JB ), LDA )
            IF( J+JB.LE.M ) THEN
*              Update trailing submatrix.
               CALL DGEMM( 'No transpose', 'No transpose', M-J-JB+1,
     $                     N-J-JB+1, JB, -ONE, A( J+JB, J ), LDA,
     $                     A( J, J+JB ), LDA, ONE, A( J+JB, J+JB ), LDA )
            END IF
         END IF
   20 CONTINUE
      END IF
      RETURN
      END
```
Most of the computational work in the algorithm from Example 14-3 is contained in three routines:
**DGEMM**
Matrix-matrix multiplication
**DTRSM**
Triangular solve with multiple right hand sides
**DGETF2**
Unblocked LU factorization for operations within a block column
One of the key parameters in the algorithm is the block size, called NB here. If NB is too small or too large, poor performance can result—hence the importance of the ILAENV function, whose standard implementation was meant to be replaced by a vendor implementation encapsulating machine-specific parameters upon installation of the LAPACK library. At any given point of the algorithm, NB columns or rows are exposed to a well-optimized Level-3 BLAS. If NB is 1, the algorithm is equivalent in performance and memory access patterns to the LINPACK version.
Matrix-matrix operations offer the proper level of modularity for performance and transportability across a wide range of computer architectures, including parallel systems with memory hierarchy. This enhanced performance is primarily due to a greater opportunity for reusing data. There are numerous ways to accomplish this reuse of data to reduce memory traffic and to increase the ratio of floating-point operations to data movement through the memory hierarchy. This improvement can bring a three- to ten-fold improvement in performance on modern computer architectures.
The jury is still out concerning the productivity of writing and reading the LAPACK code: how hard is it to generate the code from its mathematical description? The use of vector notation in LINPACK is arguably more natural than LAPACK’s matrix formulation. The mathematical formulas that describe algorithms are usually more complex if only matrices are used, as opposed to mixed vector-matrix notation.
**Recursive LU**
Setting the block size parameter for the LAPACK’s LU might seem like a trivial matter at first. But in practice, it requires a lot of tuning for various precisions and matrix sizes. Many users end up leaving the setting unchanged, even if the tuning has to be done only once at installation. This problem is exacerbated by the fact that not just one but many LAPACK routines use a blocking parameter.
Another issue with LAPACK’s formulation of LU is the factorization of tall and narrow panels of columns performed by the DGETF2 routine. It uses Level-1 BLAS and was found to become a bottleneck as the processors became faster throughout the 90s without corresponding increases in memory bandwidth.
A solution came from a rather unlikely direction: divide-and-conquer recursion. In place of LAPACK’s looping constructs, the newer recursive LU algorithm splits the work in half, factorizes the left part of the matrix, updates the rest of the matrix, and factorizes the right part. The use of Level-1 BLAS is reduced to an acceptable minimum, and most of the calls to Level-3 BLAS operate on larger portions of the matrix than LAPACK’s algorithm. And, of course, the block size does not have to be tuned anymore.
Recursive LU required the use of Fortran 90, which was the first Fortran standard to allow recursive subroutines. A side effect of using Fortran 90 was the increased importance of the LDA parameter, the leading dimension of A. It allows more flexible use of the subroutine, as well as performance tuning for cases when matrix dimension m would cause memory bank conflicts that could significantly reduce available memory bandwidth.
The Fortran 90 compilers use the LDA parameter to avoid copying the data into a contiguous buffer when calling external routines, such as one of the BLAS. Without LDA, the compiler has to assume the worst-case scenario, when the input matrix a is not contiguous and needs to be copied to a temporary contiguous buffer so the call to BLAS does not end up with an out-of-bounds memory access. With LDA, the compiler passes array pointers to BLAS without any copies.
Example 14-4 shows recursive LU factorization.
Example 14-4. Recursive variant (Fortran 90 coding)
```fortran
recursive subroutine rdgetrf(m, n, a, lda, ipiv, info)
implicit none
integer, intent(in) :: m, n, lda
double precision, intent(inout) :: a(lda,*)
integer, intent(out) :: ipiv(*)
integer, intent(out) :: info
integer :: mn, nleft, nright, i
double precision :: tmp
double precision :: pone, negone, zero
parameter (pone=1.0d0)
parameter (negone=-1.0d0)
parameter (zero=0.0d0)
intrinsic min
integer idamax
external dgemm, dtrsm, dlaswp, idamax, dscal
mn = min(m, n)
if (mn .gt. 1) then
nleft = mn / 2
nright = n - nleft
call rdgetrf(m, nleft, a, lda, ipiv, info)
if (info .ne. 0) return
call dlaswp(nright, a(1, nleft+1), lda, 1, nleft, ipiv, 1)
call dtrsm('L', 'L', 'N', 'U', nleft, nright, pone, a, lda,
$          a(1, nleft+1), lda)
call dgemm('N', 'N', m-nleft, nright, nleft, negone,
$          a(nleft+1, 1), lda, a(1, nleft+1), lda, pone,
$          a(nleft+1, nleft+1), lda)
call rdgetrf(m - nleft, nright, a(nleft+1, nleft+1), lda,
$          ipiv(nleft+1), info)
if (info .ne. 0) then
info = info + nleft
return
end if
do i = nleft+1, m
ipiv(i) = ipiv(i) + nleft
end do
call dlaswp(nleft, a, lda, nleft+1, mn, ipiv, 1)
else if (mn .eq. 1) then
i = idamax(m, a, 1)
ipiv(1) = i
tmp = a(i, 1)
if (tmp .ne. zero .and. tmp .ne. -zero) then
call dscal(m, pone/tmp, a, 1)
a(i,1) = a(1,1)
a(1,1) = tmp
else
info = 1
end if
end if
return
end
```
There is a certain degree of elegance in the recursive variant. No loops are exposed in the routine. Instead, the algorithm is driven by the recursive nature of the method (see the paper by F. G. Gustavson under "Further Reading").
The Recursive LU Algorithm consists of four basic steps, illustrated in Figure 14-2:
1. Split the matrix into two rectangles \((m \times n/2)\); if the left part ends up being only a single column, scale it by the reciprocal of the pivot and return.
2. Apply the LU algorithm to the left part.
3. Apply transformations to the right part (perform the triangular solve \(A_{12} = L^{-1}A_{12}\) and matrix multiplication \(A_{22} = A_{22} - A_{21}A_{12}\)).
4. Apply the LU algorithm to the right part.
Most of the work is performed in the matrix multiplications, which operate on successive matrices of size \(n/2, n/4, n/8\), etc. The implementation in Example 14-4 can show about a 10 percent improvement in performance over the LAPACK implementation given in Example 14-3.
In a sense, any of the previous renditions of the LU algorithm could be considered a step backwards in terms of code elegance. But divide-and-conquer recursion was a tremendous leap forward (even dismissing the modest performance gains). The recursive algorithm for matrix factorization can now be taught to students alongside other recursive algorithms, such as various kinds of sorting methods.
By changing just the size of matrix parts, it is possible to achieve the same memory access pattern as in LINPACK or LAPACK. Setting \(n_{\text{left}}\) to 1 makes the code operate on vectors, just as in LINPACK, whereas setting \(n_{\text{left}}\) to \(NB>1\) makes it behave like LAPACK’s blocked code. In both cases, the original recursion deteriorates from divide-and-conquer to the tail kind. The behavior of such variations of the recursive algorithm can be studied alongside a quicksort algorithm with various partitioning schemes of the sorted array.
Finally, we leave as an exercise to the reader to try to mimic the recursive code without using recursion and without explicitly handling the recursive call stack—an important problem to solve if the Fortran compiler cannot handle recursive functions or subroutines.
**ScaLAPACK PDGETRF**
LAPACK is designed to be highly efficient on vector processors, high-performance “superscalar” workstations, and shared-memory multiprocessors. LAPACK can also be used satisfactorily on all types of scalar machines (PCs, workstations, and mainframes). However, LAPACK in its present form is less likely to give good performance on other types of parallel architectures—for example, massively parallel Single Instruction Multiple Data (SIMD) machines, or Multiple Instruction Multiple Data (MIMD) distributed-memory machines. The ScaLAPACK effort was intended to adapt LAPACK to these new architectures.
By creating the ScaLAPACK software library, we extended the LAPACK library to scalable MIMD, distributed-memory, concurrent computers. For such machines, the memory hierarchy includes the off-processor memory of other processors, in addition to the hierarchy of registers, cache, and local memory on each processor.
Like LAPACK, the ScaLAPACK routines are based on block-partitioned algorithms in order to minimize the frequency of data movement between different levels of the memory hierarchy. The fundamental building blocks of the ScaLAPACK library are distributed-memory versions of the Level-2 and Level-3 BLAS, and a set of Basic Linear Algebra Communication Subprograms (BLACS) for communication tasks that arise frequently in parallel linear algebra computations. In the ScaLAPACK routines, all interprocessor communication occurs within the distributed BLAS and the BLACS, so the source code of the top software layer of ScaLAPACK looks very similar to that of LAPACK.
The ScaLAPACK solution to LU factorization is shown in Example 14-5.
*Example 14-5. ScaLAPACK variant (Fortran 90 coding)*
```fortran
SUBROUTINE PDGETRF( M, N, A, IA, JA, DESCA, IPIV, INFO )
INTEGER BLOCK_CYCLIC_2D, CSRC_, CTXT_, DLEN_, DTYPE_, LLD_, MB_, M_, NB_, N_, RSRC_
PARAMETER ( BLOCK_CYCLIC_2D = 1, DLEN_ = 9, DTYPE_ = 1,
$ CTXT_ = 2, M_ = 3, N_ = 4, MB_ = 5, NB_ = 6,
$ RSRC_ = 7, CSRC_ = 8, LLD_ = 9 )
DOUBLE PRECISION ONE
PARAMETER ( ONE = 1.0D+0 )
CHARACTER COLBTOP, COLCTOP, ROWBTOP
INTEGER I, ICOFF, ICTXT, IINFO, IN, IROFF, J, JB, JN,
$ MN, MYCOL, MYROW, NPCOL, NPROW
INTEGER IDUM1( 1 ), IDUM2( 1 )
EXTERNAL BLACS_GRIDINFO, CHK1MAT, IGAMN2D, PCHK1MAT, PB_TOPGET,
$ PB_TOPSET, PDGEMM, PDGETF2, PDLASWP, PDTRSM, PXERBLA
INTEGER ICEIL
EXTERNAL ICEIL
INTRINSIC MIN, MOD
* Get grid parameters
ICTXT = DESCA( CTXT_ )
CALL BLACS_GRIDINFO( ICTXT, NPROW, NPCOL, MYROW, MYCOL )
* Test the input parameters
INFO = 0
IF( NPROW.EQ.-1 ) THEN
INFO = -( 600 + CTXT_ )
ELSE
CALL CHK1MAT( M, 1, N, 2, IA, JA, DESCA, 6, INFO )
IF( INFO.EQ.0 ) THEN
IROFF = MOD( IA-1, DESCA( MB_ ) )
ICOFF = MOD( JA-1, DESCA( NB_ ) )
IF( IROFF.NE.0 ) THEN
INFO = -4
ELSE IF( ICOFF.NE.0 ) THEN
INFO = -5
ELSE IF( DESCA( MB_ ).NE.DESCA( NB_ ) ) THEN
INFO = -(600+NB_)
END IF
END IF
CALL PCHK1MAT( M, 1, N, 2, IA, JA, DESCA, 6, 0, IDUM1, IDUM2, INFO )
END IF
IF( INFO.NE.0 ) THEN
CALL PXERBLA( ICTXT, 'PDGETRF', -INFO )
RETURN
END IF
```
*Example 14-5 continued (Fortran 90 coding)*

```fortran
IF( DESCA( M_ ).EQ.1 ) THEN
IPIV( 1 ) = 1
RETURN
ELSE IF( M.EQ.0 .OR. N.EQ.0 ) THEN
RETURN
END IF
* Split-ring topology for the communication along process rows
CALL PB_TOPGET( ICTXT, 'Broadcast', 'Rowwise', ROWBTOP )
CALL PB_TOPGET( ICTXT, 'Broadcast', 'Columnwise', COLBTOP )
CALL PB_TOPGET( ICTXT, 'Combine', 'Columnwise', COLCTOP )
CALL PB_TOPSET( ICTXT, 'Broadcast', 'Rowwise', 'S-ring' )
CALL PB_TOPSET( ICTXT, 'Broadcast', 'Columnwise', ' ' )
CALL PB_TOPSET( ICTXT, 'Combine', 'Columnwise', ' ' )
* Handle the first block of columns separately
MN = MIN( M, N )
IN = MIN( ICEIL( IA, DESCA( MB_ ) )*DESCA( MB_ ), IA+M-1 )
JN = MIN( ICEIL( JA, DESCA( NB_ ) )*DESCA( NB_ ), JA+MN-1 )
JB = JN - JA + 1
* Factor diagonal and subdiagonal blocks and test for exact
* singularity.
CALL PDGETF2( M, JB, A, IA, JA, DESCA, IPIV, INFO )
IF( JB+1.LE.N ) THEN
* Apply interchanges to columns JN+1:JA+N-1.
CALL PDLASWP( 'Forward', 'Rows', N-JB, A, IA, JN+1, DESCA, IA, IN, IPIV )
* Compute block row of U.
CALL PDTRSM( 'Left', 'Lower', 'No transpose', 'Unit', JB,
$            N-JB, ONE, A, IA, JA, DESCA, A, IA, JN+1, DESCA )
IF( JB+1.LE.M ) THEN
* Update trailing submatrix.
CALL PDGEMM( 'No transpose', 'No transpose', M-JB, N-JB, JB,
$            -ONE, A, IN+1, JA, DESCA, A, IA, JN+1, DESCA,
$            ONE, A, IN+1, JN+1, DESCA )
END IF
END IF
* Loop over the remaining blocks of columns.
DO 10 J = JN+1, JA+MN-1, DESCA( NB_ )
JB = MIN( MN-J+JA, DESCA( NB_ ) )
I = IA + J - JA
* Factor diagonal and subdiagonal blocks and test for exact
* singularity.
CALL PDGETF2( M-J+JA, JB, A, I, J, DESCA, IPIV, IINFO )
IF( INFO.EQ.0 .AND. IINFO.GT.0 ) INFO = IINFO + J - JA
* Apply interchanges to columns JA:J-JA.
CALL PDLASWP( 'Forward', 'Rowwise', J-JA, A, IA, JA, DESCA, I,I+JB-1, IPIV )
IF( J-JA+JB+1.LE.N ) THEN
* Apply interchanges to columns J+JB:JA+N-1.
CALL PDLASWP( 'Forward', 'Rowwise', N-J-JB+JA, A, IA, J+JB,
$             DESCA, I, I+JB-1, IPIV )
* Compute block row of U.
CALL PDTRSM( 'Left', 'Lower', 'No transpose', 'Unit', JB,
$            N-J-JB+JA, ONE, A, I, J, DESCA, A, I, J+JB, DESCA )
IF( J-JA+JB+1.LE.M ) THEN
* Update trailing submatrix.
CALL PDGEMM( 'No transpose', 'No transpose', M-J-JB+JA,
$            N-J-JB+JA, JB, -ONE, A, I+JB, J, DESCA,
$            A, I, J+JB, DESCA, ONE, A, I+JB, J+JB, DESCA )
END IF
END IF
10 CONTINUE
IF( INFO.EQ.0 ) INFO = MN + 1
CALL IGAMN2D( ICTXT, 'Rowwise', ' ', 1, 1, INFO, 1, IDUM1, IDUM2,
$             -1, -1, MYCOL )
IF( INFO.EQ.MN+1 ) INFO = 0
CALL PB_TOPSET( ICTXT, 'Broadcast', 'Rowwise', ROWBTOP )
CALL PB_TOPSET( ICTXT, 'Broadcast', 'Columnwise', COLBTOP )
CALL PB_TOPSET( ICTXT, 'Combine', 'Columnwise', COLCTOP )
RETURN
END
```
In order to simplify the design of ScaLAPACK, and because the BLAS have proven to be very useful tools outside LAPACK, we chose to build a Parallel BLAS, or PBLAS (described in the paper by Choi et al.; see "Further Reading"), whose interface is as similar to the BLAS as possible. This decision has permitted the ScaLAPACK code to be quite similar, and sometimes nearly identical, to the analogous LAPACK code.
It was our aim that the PBLAS would provide a distributed memory standard, just as the BLAS provided a shared memory standard. This would simplify and encourage the development of high-performance and portable parallel numerical software, as well as providing manufacturers with just a small set of routines to be optimized. The acceptance of the PBLAS requires reasonable compromises between competing goals of functionality and simplicity.
The PBLAS operate on matrices distributed in a two-dimensional block cyclic layout. Because such a data layout requires many parameters to fully describe the distributed matrix, we have chosen a more object-oriented approach and encapsulated these parameters in an integer array called an array descriptor (a toy rendering of it follows the list below). An array descriptor includes:
- The descriptor type
- The BLACS context (a virtual space for messages that is created to avoid collisions between logically distinct messages)
- The number of rows in the distributed matrix
- The number of columns in the distributed matrix
- The row block size
- The column block size
- The process row over which the first row of the matrix is distributed
- The process column over which the first column of the matrix is distributed
- The leading dimension of the local array storing the local blocks
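For illustration, such a descriptor is usually filled in with the ScaLAPACK utility routine DESCINIT. The following is only a sketch with assumed, illustrative values (a 1000-by-1000 matrix in 64-by-64 blocks on a 2-by-2 process grid); it is not taken from the chapter's examples:

```fortran
*     Illustrative sketch (assumed values, not part of Example 14-5):
*     create a 2 x 2 process grid and build the descriptor of a
*     1000 x 1000 matrix stored in 64 x 64 blocks, with the first
*     block owned by process (0,0).
      PROGRAM DESCEX
      INTEGER            DESCA( 9 ), IAM, ICTXT, INFO, LLDA, MYCOL,
     $                   MYROW, NPCOL, NPROCS, NPROW
      INTEGER            NUMROC
      EXTERNAL           NUMROC
      NPROW = 2
      NPCOL = 2
      CALL BLACS_PINFO( IAM, NPROCS )
      CALL BLACS_GET( -1, 0, ICTXT )
      CALL BLACS_GRIDINIT( ICTXT, 'Row-major', NPROW, NPCOL )
      CALL BLACS_GRIDINFO( ICTXT, NPROW, NPCOL, MYROW, MYCOL )
*     Local leading dimension: the number of local rows on this process
      LLDA = MAX( 1, NUMROC( 1000, 64, MYROW, 0, NPROW ) )
      CALL DESCINIT( DESCA, 1000, 1000, 64, 64, 0, 0, ICTXT, LLDA,
     $               INFO )
      CALL BLACS_GRIDEXIT( ICTXT )
      CALL BLACS_EXIT( 0 )
      END
```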
By using this descriptor, a call to a PBLAS routine is very similar to a call to the corresponding BLAS routine:
```fortran
CALL DGEMM ( TRANSA, TRANSB, M, N, K, ALPHA,
A( IA, JA ), LDA,
B( IB, JB ), LDB, BETA,
C( IC, JC ), LDC )
CALL PDGEMM( TRANSA, TRANSB, M, N, K, ALPHA,
A, IA, JA, DESC_A,
B, IB, JB, DESC_B, BETA,
C, IC, JC, DESC_C )
```
DGEMM computes \( C = \beta C + \alpha\,\mathrm{op}(A)\,\mathrm{op}(B) \), where \( \mathrm{op}(A) \) is either \( A \) or its transpose depending on \( \text{TRANSA} \), \( \mathrm{op}(B) \) is similar, \( \mathrm{op}(A) \) is \( M \times K \), and \( \mathrm{op}(B) \) is \( K \times N \). PDGEMM is the same, with the exception of the way submatrices are specified. To pass the submatrix starting at \( A(IA,JA) \) to DGEMM, for example, the actual argument corresponding to the formal argument \( A \) is simply \( A(IA,JA) \). PDGEMM, on the other hand, needs to understand the global storage scheme of \( A \) to extract the correct submatrix, so \( IA \) and \( JA \) must be passed in separately.
\( \text{DESC}_A \) is the array descriptor for \( A \). The parameters describing the matrix operands \( B \) and \( C \) are analogous to those describing \( A \). In a truly object-oriented environment, the matrix \( A \) and its descriptor \( \text{DESC}_A \) would be synonymous. However, this would require language support and detract from portability.
Using message passing and scalable algorithms from the ScaLAPACK library makes it possible to factor matrices of arbitrarily increasing size, given machines with more processors. By design, the library computes more than it communicates, so for the most part, data stays local for processing and travels only occasionally across the interconnect network.
But the number and types of messages exchanged between processors can sometimes be hard to manage. The context associated with every distributed matrix lets implementations use separate “universes” for message passing. The use of separate communication contexts by distinct libraries (or distinct library invocations) such as the PBLAS insulates communication internal to the library from external communication. When more than one descriptor array is present in the argument list of a routine in the PBLAS, the individual BLACS context entries must be equal. In other words, the PBLAS do not perform “inter-context” operations.
In the performance sense, ScaLAPACK did to LAPACK what LAPACK did to LINPACK: it broadened the range of hardware where LU factorization (and other codes) could run efficiently. In terms of code elegance, ScaLAPACK's changes were much more drastic: the same mathematical operation now required large amounts of tedious work. Both the users and the library writers were now forced to explicitly control data storage intricacies, because data locality became paramount for performance. The victim was the readability of the code, despite efforts to modularize it according to the best software engineering practices of the day.
**Multithreading for Multi-core Systems**
The advent of multi-core chips brought about a fundamental shift in the way software is produced. Dense linear algebra is no exception. The good news is that LAPACK’s LU factorization runs on a multi-core system and can even deliver a modest increase of performance if multithreaded BLAS are used. In technical terms, this is the fork-join model of computation: each call to BLAS (from a single main thread) forks a suitable number of threads, which perform the work on each core and then join the main thread of computation. The fork-join model implies a synchronization point at each join operation.
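A minimal sketch of this fork-join pattern, using OpenMP purely for illustration (this is not LAPACK source code): each parallel region forks worker threads and ends with an implicit barrier, which is exactly the synchronization point discussed below.

```c
#include <omp.h>

/* One "BLAS-like" step in the fork-join style: the main thread forks
   workers for the loop, and the implicit barrier at the end of the
   parallel region joins them before the caller may continue. */
void fork_join_step(int n, double *x, double alpha)
{
    #pragma omp parallel for
    for (int i = 0; i < n; i++)
        x[i] *= alpha;
    /* implicit join here: no thread proceeds until all have finished */
}
```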
The bad news is that the LAPACK’s fork-join algorithm gravely impairs scalability even on small multi-core computers that do not have the memory systems available in SMP systems. The inherent scalability flaw is the heavy synchronization in the fork-join model (only a single thread is allowed to perform the significant computation that occupies the critical section of the code, leaving other threads idle) that results in lock-step execution and prevents hiding of inherently sequential portions of the code behind parallel ones. In other words, the threads are forced to perform the same operation on different data. If there is not enough data for some threads, they will have to stay idle and wait for the rest of the threads that perform useful work on their data. Clearly, another version of the LU algorithm is needed such that would allow threads to stay busy all the time by possibly making them perform different operations during some portion of the execution.
The multithreaded version of the algorithm recognizes the existence of a so-called critical path in the algorithm: a portion of the code whose execution depends on previous calculations and can block the progress of the algorithm. The LAPACK’s LU does not treat this critical portion of the code in any special way: the DGETF2 subroutine is called by a single thread and doesn’t allow much parallelization even at the BLAS level. While one thread calls this routine, the other ones wait idly. And since the performance of DGETF2 is bound by memory bandwidth (rather than processor speed), this bottleneck will exacerbate scalability problems as systems with more cores are introduced.
The multithreaded version of the algorithm attacks this problem head-on by introducing the notion of look-ahead: calculating things ahead of time to avoid potential stagnation in the progress of the computations. This of course requires additional synchronization and bookkeeping not present in the previous versions—a trade-off between code complexity and performance. Another aspect of the multithreaded code is the use of recursion in the panel factorization. It turns out that the use of recursion can give even greater performance benefits for tall panel matrices than it does for the square ones.
Example 14-6 shows a factorization suitable for multithreaded execution.
*Example 14-6. Factorization for multithreaded execution (C code)*

```c
void SMP_dgetrf(int n, double *a, int lda, int *ipiv, int pw,
                int tid, int tsize, int *pready, ptm *mtx, ptc *cnd) {
    int pcnt, pfctr, ufrom, uto, ifrom, p;
    double *pa = a, *pl, *pf, *lp;

    pcnt = n / pw; /* number of panels */

    pfctr = tid + (tid ? 0 : tsize); /* first panel that should be factored
        by this thread after the very first panel (number 0) gets factored */

    /* this is a pointer to the last panel */
    lp = a + (size_t)(n - pw) * (size_t)lda;

    /* for each panel (that is used as source of updates) */
    for (ufrom = 0; ufrom < pcnt;
         ufrom++, pa += (size_t)pw * (size_t)(lda + 1)) {
        p = ufrom * pw; /* column number */

        /* if the panel to be used for updates has not been factored yet;
           the test of 'ipiv' is not strictly necessary, but it is there to
           possibly avoid accesses to 'pready' */
        if (! ipiv[p + pw - 1] || ! pready[ufrom]) {
            if (ufrom % tsize == tid) { /* if this is this thread's panel */
                pfactor( n - p, pw, pa, lda, ipiv + p, pready, ufrom, mtx, cnd );
            } else if (ufrom < pcnt - 1) { /* if this is not the last panel */
                LOCK( mtx );
                while (! pready[ufrom]) { WAIT( cnd, mtx ); }
                UNLOCK( mtx );
            }
        }

        /* for each panel to be updated */
        for (uto = first_panel_to_update( ufrom, tid, tsize ); uto < pcnt;
             uto += tsize) {
            /* if there are still panels to factor by this thread and the
               preceding panel has been factored; the test of 'ipiv' could be
               skipped but is in there to decrease the number of accesses to
               'pready' */
            if (pfctr < pcnt && ipiv[pfctr * pw - 1] && pready[pfctr - 1]) {
                /* for each panel that still has to update panel 'pfctr' */
                for (ifrom = ufrom + (uto > pfctr ? 1 : 0); ifrom < pfctr;
                     ifrom++) {
                    p = ifrom * pw;
                    pl = a + (size_t)p * (size_t)(lda + 1);
                    pf = pl + (size_t)(pfctr - ifrom) * (size_t)pw * (size_t)lda;
                    pupdate( n - p, pw, pl, pf, lda, p, ipiv, lp );
                }
                p = pfctr * pw;
                pl = a + (size_t)p * (size_t)(lda + 1);
                pfactor( n - p, pw, pl, lda, ipiv + p, pready, pfctr, mtx, cnd );
                pfctr += tsize; /* move to this thread's next panel */
            }

            /* if panel 'uto' hasn't been factored (if it was, it certainly
               has been updated, so no update is necessary) */
            if (uto > pfctr || ! ipiv[uto * pw]) {
                p = ufrom * pw;
                pf = pa + (size_t)(uto - ufrom) * (size_t)pw * (size_t)lda;
                pupdate( n - p, pw, pa, pf, lda, p, ipiv, lp );
            }
        }
    }
}
```
The algorithm is the same for each thread (the SIMD paradigm), and the matrix data is
partitioned among threads in a cyclic manner using panels with \(pw\) columns in each panel
(except maybe the last). The \(pw\) parameter corresponds to the blocking parameter NB of
LAPACK. The difference is the logical assignment of panels (blocks of columns) to
threads. (Physically, all panels are equally accessible, because the code operates in a
shared memory regimen.) The benefits of blocking in a thread are the same as they were
in LAPACK: better cache reuse and less stress on the memory bus. Assigning a portion
of the matrix to a thread seems an artificial requirement at first, but it simplifies the code
and the bookkeeping data structures; most importantly, it provides better memory
affinity. It turns out that multi-core chips are not symmetric in terms of memory access
bandwidth, so minimizing the number of reassignments of memory pages to cores
directly benefits performance.
The standard components of LU factorization are represented by the \texttt{pfactor()} and
\texttt{pupdate()} functions. As one might expect, the former factors a panel, whereas the
latter updates a panel using one of the previously factored panels.
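The excerpt does not show the body of `pfactor()`. The following is only a sketch of the recursive idea it alludes to, with partial pivoting omitted for brevity; the function name and the omission of pivoting are simplifications, not the chapter's actual code:

```c
#include <stddef.h>

/* Recursive LU factorization (pivoting omitted) of an m-by-n panel stored
   column-major with leading dimension lda: factor the left half of the
   columns, apply the triangular solve and Schur update to the right half,
   then recurse on the trailing block. */
static void rec_lu(int m, int n, double *a, int lda)
{
    if (n == 1) {                          /* base case: one column */
        for (int i = 1; i < m; i++)
            a[i] /= a[0];                  /* scale below the diagonal */
        return;
    }
    int nl = n / 2;                        /* width of the left half */
    double *a12 = a + (size_t)nl * lda;    /* right-half columns */

    rec_lu(m, nl, a, lda);                 /* factor A11 and A21 */

    /* A12 := inv(L11) * A12, L11 unit lower triangular (forward subst.) */
    for (int j = 0; j < n - nl; j++)
        for (int i = 1; i < nl; i++)
            for (int k = 0; k < i; k++)
                a12[(size_t)j * lda + i] -=
                    a[(size_t)k * lda + i] * a12[(size_t)j * lda + k];

    /* A22 := A22 - A21 * A12 (Schur complement update) */
    for (int j = 0; j < n - nl; j++)
        for (int i = nl; i < m; i++)
            for (int k = 0; k < nl; k++)
                a12[(size_t)j * lda + i] -=
                    a[(size_t)k * lda + i] * a12[(size_t)j * lda + k];

    rec_lu(m - nl, n - nl, a12 + nl, lda); /* recurse on A22 */
}
```

In a real implementation the two update loops would be calls to DTRSM and DGEMM, which is where the performance benefit for tall panels comes from.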
The main loop makes each thread iterate over each panel in turn. If necessary, the panel is factored by the owner thread while other threads wait (if they happen to need this panel for their updates).
The look-ahead logic is inside the nested loop (prefaced by the comment for each panel to be updated) that replaces DGEMM or PDGEMM from previous algorithms. Before each thread updates one of its panels, it checks whether it’s already feasible to factor its first unfactored panel. This minimizes the number of times the threads have to wait because each thread constantly attempts to eliminate the potential bottleneck.
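The helper `first_panel_to_update()` is not shown in the excerpt. One plausible definition, consistent with the cyclic assignment of panels to threads (an assumption on our part, not the chapter's code), is:

```c
/* First panel owned by thread 'tid' (panels are assigned cyclically
   modulo 'tsize') that lies strictly after the update source 'ufrom'. */
static int first_panel_to_update(int ufrom, int tid, int tsize)
{
    int uto = (ufrom / tsize) * tsize + tid; /* tid's panel in ufrom's cycle */
    while (uto <= ufrom)                     /* step past the source panel */
        uto += tsize;
    return uto;
}
```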
As was the case for ScaLAPACK, the multithreaded version detracts from the inherent elegance of the LAPACK version. Also in the same spirit, performance is the main culprit: LAPACK's code will not run efficiently on machines with ever-increasing numbers of cores. Explicit control of execution threads at the LAPACK level rather than the BLAS level is critical: parallelism cannot be encapsulated in a library call. The only good news is that the code is not as complicated as ScaLAPACK's, and efficient BLAS can still be put to good use.
**A Word About the Error Analysis and Operation Count**
The key aspect of all of the implementations presented in this chapter is their numerical properties.
It is acceptable to forgo elegance in order to gain performance. But numerical stability is of vital importance and cannot be sacrificed, because it is an inherent part of the algorithm's correctness. While these are serious considerations, there is some consolation to follow. It may be surprising to some readers that all of the algorithms presented are numerically the same: they satisfy the same error bounds, even though it's virtually impossible to make each excerpt of code produce exactly the same output for exactly the same inputs.
When it comes to repeatability of results, the vagaries of floating-point representation may be captured in a rigorous way by error bounds. One way of expressing the numerical robustness of the previous algorithms is with the following formula:
\[
\frac{||r||}{||A||} \leq ||e|| \leq ||A^{-1}|| ||r||
\]
where the error \( e = x - y \) is the difference between the computed solution \( y \) and the correct solution \( x \), and \( r = Ay - b \) is the so-called "residual." The formula basically says that the size of the error (the parallel bars surrounding a value indicate a norm, a measure of absolute size) is as small as warranted by the quality of the matrix \( A \). Therefore, if the matrix is close to being singular in a numerical sense (some entries are so small that they might as well be considered zero), the algorithms will not give an accurate answer. But otherwise, a relatively good quality of result can be expected.
Another feature that is common to all the versions presented is the operation count: they all perform \( \frac{2}{3}n^3 \) floating-point multiplications and/or additions. The order of these operations is what differentiates them. There exist algorithms that increase the amount of floating-point work to save on memory traffic or network transfers (especially for distributed-memory parallel algorithms). But because the algorithms shown in this chapter have the same operation count, it is valid to compare them for performance. The computational rate (number of floating-point operations per second) may be used instead
of the time taken to solve the problem, provided that the matrix size is the same. But comparing computational rates is sometimes better because it allows a comparison of algorithms when the matrix sizes differ. For example, a sequential algorithm on a single processor can be directly compared with a parallel one working on a large cluster on a much bigger matrix.
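For instance, under the (purely illustrative) assumption of a sustained rate of \(10^{11}\) floating-point operations per second, factoring an \(n = 10^4\) matrix takes roughly

\[
\frac{\tfrac{2}{3}n^3}{10^{11}} = \frac{\tfrac{2}{3}\times 10^{12}}{10^{11}} \approx 6.7\ \text{seconds},
\]

and the same arithmetic lets one compare runs on different matrix sizes on an equal footing.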
**Future Directions for Research**
In this chapter we have looked at the evolution of the design of a simple but important algorithm in computational science. The changes over the past 30 years have been necessary to follow the lead of the advances in computer architectures. In some cases these changes have been simple, such as interchanging loops. In other cases, they have been as complex as the introduction of recursion and look-ahead computations. In each case, however, the code's ability to efficiently utilize the memory hierarchy is the key to high performance on a single processor as well as on shared and distributed memory systems.
The essence of the problem is the dramatic increase in complexity that software developers have had to confront, and still do. Dual-core machines are already common, and the number of cores is expected to roughly double with each processor generation. But contrary to the assumptions of the old model, programmers will not be able to consider these cores independently (i.e., multi-core is not “the new SMP”) because they share on-chip resources in ways that separate processors do not. This situation is made even more complicated by the other nonstandard components that future architectures are expected to deploy, including the mixing of different types of cores, hardware accelerators, and memory systems.
Finally, the proliferation of widely divergent design ideas shows that the question of how to best combine all these new resources and components is largely unsettled. When combined, these changes produce a picture of a future in which programmers will have to overcome software design problems vastly more complex and challenging than those in the past in order to take advantage of the much higher degrees of concurrency and greater computing power that new architectures will offer.
So the bad news is that none of the presented code will run efficiently forever. The good news is that we have learned various ways to mold the original simple rendition of the algorithm to meet the ever-increasing challenges of hardware designs.
**Further Reading**
Debugging Data Flows in Reactive Programs
Herman Banken
Delft University of Technology
Delft, The Netherlands
hermanb@ch.tudelft.nl
Erik Meijer
Delft University of Technology
Delft, The Netherlands
h.j.m.meijer@tudelft.nl
Georgios Gousios
Delft University of Technology
Delft, The Netherlands
g.gousios@tudelft.nl
ABSTRACT
Reactive Programming is a style of programming that provides developers with a set of abstractions that facilitate event handling and stream processing. Traditional debug tools lack support for Reactive Programming, leading developers to fall back to the most rudimentary debug tool available: logging to the console.
In this paper, we present the design and implementation of RxFiddle, a visualization and debugging tool targeted to Rx, the most popular form of Reactive Programming. RxFiddle visualizes the dependencies and structure of the data flow, as well as the data inside the flow. We evaluate RxFiddle with an experiment involving 111 developers. The results show that RxFiddle can help developers finish debugging tasks faster than with traditional debugging tools.
CCS CONCEPTS
• Software and its engineering → Software testing and debugging; Data flow languages; Software maintenance tools;
KEYWORDS
reactive programming, debugging, visualization, program comprehension
1 INTRODUCTION
Software often needs to respond to external events and express computations as data flows. Traditionally, handling asynchronous events was done using the Observer design pattern [23] (in object-oriented environments) or callback functions [22] (when the host language supports higher-order functions). Using these patterns, the system consuming the data does not have to block waiting for new data to arrive, but instead it yields control until new data is available. While these patterns decouple the data producer from the consumers, they typically lead to dynamic registration, side effects on the consumer side, and inversion of control [17, 46].
RQ1 How do developers debug RP?
Before designing tools it is important to understand the practices they must support along with the problems in the current state of the art [50]. For this, we performed an extensive analysis of the literature (both scientific and practitioner-oriented) and conducted interviews with RP practitioners.
RQ2 How can we design a tool that helps developers debug RP?
By examining the results of RQ1, the limitations of traditional debuggers and the opportunities that RP programs offer in terms of structure and explicit dependencies between data flows, we design a novel RP debugger. We validate the design’s feasibility by providing an implementation for the popular JavaScript RP library RxJS.
RQ3 Can our specialized RP debugger speed up comprehension & debugging?
To validate our design and examine whether specialized tooling can improve the debugging experience, we measure the speed and correctness of comprehension with an open experiment.
2 BACKGROUND: REACTIVE PROGRAMMING AND RX
RP is a declarative programming paradigm for working with streams of input data. According to a definition of reactivity\(^3\), a reactive program must interact with the environment "at a speed which is determined by the environment, not the program itself". Conceptually, when a reactive program is run, it sets up a data processing pipeline and waits until input arrives, i.e., when the environment changes. Reactive Programming languages and libraries provide developers with a set of abstractions and methods to create such programs.
Many RP implementations share a notion of a collection that abstracts over time, in contrast to space like standard collections. This collection comes in different flavors, such as Observable (Rx [38]), Signal (Elm [14]), Signal/Event (REScala [47]) or Behavior/Event (FRP [18]). The implementations differ in the precise semantics of these abstractions.
Assembly. It is important to note that Observables are lazy; initially they only specify a blueprint of the desired data flow. Creating this specification is called the assembly phase. In Figure 1a, the assembly phase consists of the calls to of(), map() and filter(), creating respectively Observables o1, o2 and o3 (Figure 1b).
Subscription. When the subscribe method of an Observable is called, the data flow is prepared by recursively subscribing “up” the stream: every subscribe call creates an Observer, that is passed to the input Observable, which again subscribes an Observer to its input Observable, until finally the root Observables are subscribed to. We call this the subscription phase. In Figure 1a, inside the single subscribe() call, the Observer object s1 is created, and passed to o3, which in turn will recursively subscribe to o2 with a new Observer s2 with destination s1, until the full chain is subscribed (Figure 1b).
Runtime. After the root Observables are subscribed to, they can start emitting data. This is the runtime phase. Depending on the nature of the Observable, this might attach event listeners to UI elements, open network connections or start iterating over in-memory data. Events are pushed to s3, to s2 and finally to s1 which calls console.log() in Figure 1a.
Rx identifies three types of events that can occur during the runtime phase: next, error and complete events. next events contain the next value in the flow, an error event signifies an unsuccessful termination of a stream, while a complete event denotes the successful termination of the stream. There are restrictions on their order: an Observable may first emit an unlimited number of next events, and then either an error or a complete event. Observables do not need to emit any next events, and do not need to terminate.
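As a concrete sketch of the three phases and the event grammar, consider a flow like the paper's Figure 1a, written here in an RxJS 6+ style API (the variable names o1, o2, o3 mirror the description above):

```javascript
// Sketch (RxJS 6+ style API) of the three phases described above.
const { of } = require('rxjs');
const { map, filter } = require('rxjs/operators');

// Assembly phase: only a lazy blueprint of the data flow is created.
const o1 = of(1, 2, 3);
const o2 = o1.pipe(map(x => x * 2));
const o3 = o2.pipe(filter(x => x < 3));

// Subscription phase: subscribing recursively subscribes "up" the chain.
// Runtime phase: zero or more next events, then either error or complete.
o3.subscribe({
  next:     v  => console.log('next', v),      // here: next 2
  error:    e  => console.error('error', e),   // never both error...
  complete: () => console.log('complete'),     // ...and complete
});
```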
More complex programs feature operators that merge Observables4, split Observables5 or handle higher-order Observables6, resulting in more complex graphs. An example of a higher-order Observable operation (flatMap()) is shown in Figure 1d. While merging and splitting happens on an Observable level (the source property still points to one or more dependencies), higher-order Observable flattening only manifests within Observer structures (there is no reference between the Observables). Figure 1e shows this with an inner Observable that is subscribed twice (for both values 2 and 3, value 1 is skipped), resulting in two identical data flows over o1. The data flow through $s_{4,n}$ and $s_{4,m}$ is pushed into s1, flattening the data flow.
Marble Diagram. The term Marble Diagram comes from the shape of the glyphs in the images used to explain Rx in the official documentation. An example is shown in Figure 1c. The diagrams contain one or more timelines containing the events that enter and leave Observables. Next events are typically represented with a circle, error events with a cross and complete event with a vertical line. From the diagram developers can understand how operators work by inspecting the difference between the timelines, where events might be skipped, added, transformed or delayed. Mapping time on the x-axis provides insight that is missing when inspecting only a single time slice.
3 RESEARCH DESIGN
To answer our research questions, we employ a three-phase Sequential Exploratory Strategy, one of the mixed methods research approaches [13, 28]. First, we interview professional developers and review available documentation (RQ1) to form an understanding of current debugging practices. Second, we apply this understanding to design a debugger and implement it to test its feasibility (RQ2). Finally, we validate the debugger using an experiment (RQ3).
4 RQ1: RP DEBUGGING PRACTICES
To validate the need for better tools we must first understand how existing tools are used (RQ1). For this, we interview developers, as we want to explore and understand how they use existing tools and techniques to debug Rx code. The questions are semi-structured. We first establish a general understanding of the experience of the subjects. We then ask several open questions regarding their use of RP, how subjects debug RP and test RP. Table 1 lists the questions used as a guideline for the interviews.
Five developers with professional programming experience ranging from 4 to 12 years were interviewed. The first four developers (D1-D4) work in Company A, which builds reactive systems [8] using various RP solutions. Developer experience with Rx ranges from a month to over a year. The fifth developer (D5) works in Company B, and is concerned with building and maintaining a large scale distributed server application, that uses Rx to handle asynchronous events.
4.1 Interviews
In the following paragraphs we discuss the results of Q6-Q10 in detail. Not every subject answered each question in the same detail, so we discuss the answers that provide meaningful insights in the current practice.
Testing. Of the 4 subjects of Company A, none performed tests specifically for Rx logic. “Just running the application”, is enough according to D3, saying that they only test the business logic in their application and consider the Rx code as “glue” which either works or not. In contrast, D5 and his team at Company B extensively test their application using the Rx library’s built-in test facilities like “marble tests” and the TestScheduler [44]. Using tests, the subject confirms his beliefs about the behavior of the chain of operators, while tests are also helpful when refactoring code.
Debugging. All subjects independently mention using temporary printf() debugging statements (printing messages to the system output, e.g. with console.log() in JavaScript). Subjects use printf() debugging to “add more context” (D1) to their debug sessions. Printing which values flow through the flow allows them to “quickly reason what happens” (D3). Breakpoints are only used when the cost of recompilation is high, for example when TypeScript is used instead of Javascript: developers prefer to attach their debugger to a running program session rather than inserting printf() statements and restarting the session.
Often, it is difficult to use existing debuggers to inspect the life cycle of Observables (subscribe() and dispose()), as the corresponding code lives within the Rx library. Debugging inside the Rx library was described as “painful” by D2, when using the Node.js debugger to step through the inners of Rx. Alternative solutions used by our subjects are (1) creating a custom debug() operator which prints these life cycle events (D5), and (2) creating custom Observables (with Observable.create()) that override the default lifecycle methods with facilities to print life cycle events (D2, D5). While printf() debugging and breakpoints are useful in various degrees when executing a single Observable chain, these methods both become considerably more difficult and “overview is easily lost” when executing multiple chains concurrently (D3, D5).
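A sketch of the kind of custom debug operator the subjects describe, built here on RxJS's tap operator (an assumed implementation, not the subjects' actual code); the label helps tell concurrent chains apart in the console:

```javascript
const { tap } = require('rxjs/operators');

// Log both values and life-cycle events of a flow, tagged with a label,
// so that several concurrent chains can be distinguished in the output.
const debug = label => tap({
  next:     v  => console.log(label, 'next', v),
  error:    e  => console.log(label, 'error', e),
  complete: () => console.log(label, 'complete'),
});

// usage: source.pipe(debug('source'), map(f), debug('mapped')).subscribe(...);
```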
Documentation. Subjects give different reasons to consult the documentation, but the most common reason is to “find an operator for what I need” (D1).
**Understanding the subjects**
Q1 Explain your (professional) experience.
Q2 Assess your experience on a scale from beginner to expert.
Q3 Explain your (professional) reactive programming experience.
Q4 Assess your RP experience on a scale from beginner to expert.
Q5 Have you ever refactored or reworked RP code?
**Content questions**
Q6 How do you test or verify the workings of Rx code?
Q7 How do you debug Rx code?
Q8 How do you use documentation on Rx?
Q9 What difficulties do you experience with RP?
Q10 What is your general approach to understand a piece of Rx?
*Table 1: Interview questions*
They feel that there might be an operator that precisely matches their needs; however, knowing all operators by heart is not common (the JavaScript Rx Observable API has 28 static methods and 114 instance methods), so subjects sometimes end up doing an extensive search for some specific operator. Another reason to visit the documentation is to comprehend how operators in existing code work. For this, subjects use the Marble Diagrams at RxMarbles.com [36] (D2, D5), the RxJS 4 documentation on GitHub (D2, D5), the RxJS 5 documentation at ReactiveX.io [44] (D1, D4, D5) and the online book IntroToRx.com [10] (D4). D1 specifically mentions the need for more examples in the documentation.
Difficulties experienced. The IDE does not help with developing Rx (D2, D4); according to D4 "Rx is more about timing than about types", and "...you miss some sort of indication that the output is what you expect". It is not always clear what happens when you execute a piece of code, "mostly due to Observables sometimes being lazy" (D2). Flows are clear and comprehensible in the scope of a single class or function, but for application-wide flows it becomes unclear (D3, D4 and D5). D3 mostly used RxScala and mentions that creating micro services helps in this regard. D1 mentions that "you need to know a lot as a starting [RxJS] developer", giving the example of the many ways to cleanup. D1 used both logging while analyzing existing code and learning to overcome inexperience.
Understanding. Subjects first look at which operators are used; then they reason about what types and values might flow through the stream (D2, D3, D4 and D5), using various methods. By analyzing the variable names, D2 forms an expectation of the resulting value types, then reasons backwards to see how this data is derived. Running the code is used, when possible, by D5 to observe the outcome of the stream, as this "shows the intentions of the original developer". If it remains unclear how the data is transformed, the subject injects a debug() operator or looks up operators in the documentation.
4.2 Analysis of Literature
Developers can learn Rx through several sources, such as the official documentation at ReactiveX.io, books, online courses, and blog posts. We gathered resources to be analyzed by selecting 4 popular books about Rx, and complemented this with the official documentation and an article by a core contributor of RxJS. All reviewed resources either mention debugging briefly and suggest using the do() operator for printf() debugging, or teach the developer printf() debugging via code samples.
The RxJS 4 documentation [3] and two books [20, 42] propose the use of the do() operator for debugging. Esposito and Ciceri [20] further explain how to best format the log statements and introduce ways to limit the logging by modifying the Observable through means of throttling and sampling. The RxJava book [42] also contains tips to use the various do-operators to integrate with existing metric tools. To our knowledge the only article [37] addressing issues of debugging Rx is by Staltz, one of the contributors of RxJS, noting that conventional debuggers are not suitable for the higher level of abstraction of Observables. Staltz proposes three ways to debug Rx: (1) tracing to the console, (2) manually drawing the dependency graph, and (3) manually drawing Marble Diagrams.
We analyzed a set of 13 books about RxJS, which was created by selecting 69 books matching "RxJS" from the O'Reilly Safari catalogue [2], and further reducing the set by filtering on the terms "debug" and "debugger". While none of the remaining books had a chapter about debugging, many of these books use printf() debugging in their code samples. Notably, Blackheath suggests [7], in a "Future Directions" chapter, that special debuggers could provide a graphical representation of FRP state over time and would allow debugging without stepping into the FRP engine.
4.3 Overview of practices
The available literature matches the results of the interviews: printf() debugging is both commonly advised and used. While the conventional debugger works in some cases, this is mostly the case for the procedural logic that interleaves Rx logic. Rx-specific debuggers are suggested, but not implemented. We found that developers use printf() debugging to learn the behavior of Observables, behavior meaning both their values flowing through and their (one or many) subscriptions.
Overall, we identified four overarching practices when debugging Rx code:
(1) Gaining high-level overview of the reactive structure.
(2) Understanding dependencies between Observables.
(3) Finding bugs and issues in reactive behavior.
(4) Comprehending behavior of operators in existing code.
5 RQ2: DEBUGGER DESIGN
In this section, we describe the design of a visualizer for the ReactiveX (Rx) family of RP libraries to answer RQ2. Given the findings of RQ1, the requirements for our visualizer are:
REQ1 Provide an overview of Observable flows. This overview should support practices 1 and 2, by graphically representing the relations between all Observables and their interactions.
REQ2 Provide detailed view inside the data flow. This view should support practices 3 and 4 by giving access to both data flow and life-cycle events and should be able to show the behavior of an operator visually.
To meet those requirements, we propose a visualizer consisting of two parts: (1) a Data Flow Graph and (2) a Dynamic Marble Diagram.
5.1 Data Flow Graph

**Simple graphs.** When running an RP program, Observables are created that depend on other Observables (their source), together with the Observers subscribed to them. To provide more overview, we process the graph to merge the Observable and Observer sequences together, simplifying it into a Data Flow Graph (DFG) as in Figure 2a. To do so, we retain only the Observer subgraph nodes, complementing them with the metadata of the corresponding Observable nodes. Higher-order relations are retained, as shown in Figure 2. Figure 3B shows the DFG in practice.

**Layout.** Layout is used to add an extra layer of information to the graph. If multiple subscriptions on the same Observable are created, multiple flows are kept in the graph and they are bundled together in the resulting layout; the bundling can be used to identify the same Observable in multiple places in the graph. Using it, developers can find related flows. They can also identify possible performance optimizations; for example, when they see Observables to be reused often, they can introduce the `share()` operator to optimize subscriptions. Our layout engine is based on StoryFlow [33], of which we use the algorithms for ordering and straightening, since StoryFlow minimizes storyline crossings; the selected flow is thus highlighted, straightened and positioned prominently.
5.3 Architecture
To support the visualization, we design a debugger architecture consisting of two components: a host instrumentation and a visual-
- Host instrumentation:**
**Instrumentation.** With JavaScript being a dynamic language, we use a combination of prototype patching and Proxies [1] to instru-
- the RxJS library: the Observable and Observer prototypes are patched to return Proxies wrapping the API method calls. The
- instrumentation passes every method entry and method exit to the
- instrumentation passes every method entry and method exit to the
- Linking step.
5.2 Dynamic Marble Diagrams
We extend the original notion of the Marble Diagram by introducing animation; our dynamic marble diagrams update live when new
- operation which is impossible using a classic debugger. Handcrafted
- marble diagrams can use custom shapes and colors to represent
- are a green dot, errors are a black cross and complete are a
- visualizer, the website also contains a code editor for
- visualizer, the website also contains a code editor for
- execution purposes.
- components can run in their own environment. The instru-
- instrumentation must run inside the host language, while the Visualizer
- instrumentation must run inside the host language, while the Visualizer
- instrumentation must run inside the host language, while the Visualizer
5.4 Implementation

To validate our design and to test its feasibility, we implement RxFiddle, initially focused on RxJS (JavaScript). Besides the visualizer, the website also contains a code editor for execution purposes.

**Instrumentation.** With JavaScript being a dynamic language, we use a combination of prototype patching and Proxies [1] to instrument the RxJS library: the Observable and Observer prototypes are patched to return Proxies wrapping the API method calls. The instrumentation passes every method entry and method exit to the Linking step.

*Figure 2: Simplified DFGs corresponding to examples in Figure 1 (panel labels: `map(x => x * 2)`, `filter(x => x < 3)`, `flatMap(x => inner)`).*
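A minimal sketch of the Proxy-based wrapping described above (illustrative only; RxFiddle's actual instrumentation is more involved):

```javascript
// Wrap a method on a prototype so that every entry and exit is reported
// to a callback, without changing the method's observable behavior.
function instrumentMethod(proto, name, report) {
  proto[name] = new Proxy(proto[name], {
    apply(target, thisArg, args) {
      report({ kind: 'entry', name, args });
      const result = Reflect.apply(target, thisArg, args);
      report({ kind: 'exit', name, result });
      return result;
    },
  });
}
```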
Linking. We distinguish between method calls from the different phases (Section 2). From the assembly phase, we detect when Observables are used as target or arguments of a call or as return value, and create a graph node for each detected Observable. We add an edge between the call target and call arguments and returned Observables, denoting the source relation. Also, we tag the returned Observable with the call frame information (time, method name, arguments). In the subscription phase, we detect calls to subscribe(): the destination Observers are passed as arguments, so we create the graph nodes and save the relation as an edge. In the runtime phase, we detect next, error and complete calls on Observers and add these as meta data to the Observer nodes.
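For illustration, the linking step can be thought of as folding those reported calls into a graph structure; the shapes below are assumptions for the sketch, not RxFiddle's real types:

```javascript
// Assembly-phase calls become Observable nodes plus 'source' edges,
// tagged with the call frame information for later display.
const graph = { nodes: new Map(), edges: [] };

function onAssemblyCall({ target, args, result, frame }) {
  for (const o of [target, ...args, result]) {
    if (o && !graph.nodes.has(o)) graph.nodes.set(o, { frame });
  }
  if (target && result) {
    graph.edges.push({ from: target, to: result, kind: 'source' });
  }
}
```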
Graph Loggers. From the Linking step the graph mutations are streamed to the environment of the visualizer, where the graph is rebuilt. Depending on the host language, a different protocol is used: RxFiddle’s code editor executes the code in a Worker [1] and transmits events over the postMessage [1] protocol, while RxFiddle for Node.js transmits over WebSockets. Being able to support multiple protocols, extends the possible use cases; our prototype implements a code editor for trivial programs, a Node.js plugin for server applications, and Chrome DevTool extensions for web applications.
Visualizer. The visualizer receives the current state in the form of a graph from the Logger. It then uses the Observers in the graph to create the DFG. To layout the DFG using StoryFlow, we first rank the graph using depth first search, remove slack [24] and reverse edges, in order to create a directed acyclic graph. We then add dummy nodes to replace long edges with edges spanning a single rank. Finally, we order and align the nodes in the ranks, assigning coordinates for the visualization. It is important that the layout step is fast, as it runs every time the DFG is changed. To render the Marble Diagrams, the flow to and from the selected Observer is gathered by recursively traversing the graph in the direction of the edges.
6 RQ3: EVALUATION
In this section, we evaluate our debugger to assess the efficacy of our approach. To do so, we use an experiment, in which we control for the debugger facilities that subjects use. The “control” group is provided a classic web development environment, while the “treatment” group uses RxFiddle.
Ko et al. [31] describe two commonly used measures for experiments regarding tools in Software Engineering: success on task, and time on task. The goal of our experiment is to measure the time required to solve programming problems correctly. If our reasoning for RQ2 is right and our debugger design lends itself to RP, we expect to see that the group using RxFiddle can more quickly reason about the reactive code at hand and can trace bugs faster. We do not use success or correctness as a measure for the experiment, as we expect both groups to be able to complete the tasks correctly: while the current debugging situation is non-optimal, it is still used in practice, indicating that it works at least to some extent. The construct of time also matches debugging better; developers need to continue debugging until they find an explanation or a solution to their problem, while assumptions can be tested and corrected.
We measure the time from the moment the participant received the question until the correct answer is given. Participants use either the built-in Chrome Browser debugger (group Console) or the RxFiddle debugger (group RxFiddle). This single alternative suffices, as the experiment UI (which acts as a small IDE) offers all the debugging capabilities subjects of our preliminary interviews (RQ1) reported to use.
The experiment consists of a questionnaire, a warm-up task and four programming tasks, all available in a single in-browser application, of which the source code is available at [4]. The questionnaire contains questions regarding age, experience in several programming languages and several reactive programming frameworks.
We use this self-estimation as a measurement of skill instead of a pretest, since it is a faster and better estimator [21, 30, 49]. The warm-up program is placed in the same environment as the programming problems and contains several tasks designed to let the participants use every control of this test environment. The first two programming problems require the participants to obtain an understanding of the behavior of the program and report the findings. The last two programming problems contain a program with a bug. The participants are asked to find the event that leads to the bug in the third problem and to identify and textually propose a solution in the fourth problem. The first two problems are synthetic examples of two simple data flows, taken and adapted from the Rx documentation, while the latter two are carefully constructed to match the documented use of Rx operators and contain some mocked (otherwise remote) service which behaves like a real world example. In T3, an error in an external service propagates through the Rx stream. In T4, concurrent requests lead to out-of-order processing of responses.
We use a between-subjects design for our setup. While this complicates the results (subjects have different experience and skills), we cannot use a within-subjects design, as it would be impossible to control for the learning effect incurred when asking subjects to perform tasks both with and without the tool. This also allows us to restrict the number of tasks in the experiment, requiring less time from our busy subjects. In the experiment environment, subjects can answer the question and then hit "Submit"; alternatively they can "Pass" if they do not know the answer.
6.1 Context
The experiment was run both in an offline and in an online setting. The offline experiment was conducted at a Dutch software engineering company. Subjects are developers with several years of experience with RP. As we did not try to measure the effect of learning a new tool, we explained RxFiddle in the introductory talk and added the warm-up question to get every participant to a minimum level of understanding.
The online experiment was announced by the authors on Twitter, consequently retweeted by several core contributors to RP libraries, and shared via various other communication channels, such as Rx-related Slack and Gitter topics. Subjects of the online experiment took the test at their own preferred location and have possibly very different backgrounds. We created several short video tutorials and included these in the online experiment to introduce the participants to the debug tool available to them and the tasks they needed to fulfill. The introductory talk given to the offline subjects was used as the script for the videos, in an attempt to get all participants to the same minimum level of understanding.
6.2 Results
The online experiment was performed outside of our control, and some participants quit the experiment prematurely. In total we had 111 subjects (13 offline, 98 online) starting the survey, of those 98 completed the preliminary questionnaire, and 89, 74, 67, and 58 subjects started respectively T1, T2, T3 and T4. All of the subjects in the offline setting started all tasks. Figure 4b shows the outcome of the tasks; in the remainder of this section we consider only the outcomes marked as “Correct”.
Overall. Figure 4b shows the time until the correct answer was given per task. Here, we consider the combined results from the offline experiment and the online experiment. We make no assumptions about the underlying distribution, so we perform a non-parametric Wilcoxon Mann-Whitney U test (H0: times for the Console group and RxFiddle group are drawn from the same population) to see if the differences are significant, and a Cliff’s delta test for ordinal data to determine the effect size. The results are shown in Figure 4a.
For task T3, we can reject H0 with high significance (p < 0.05): the RxFiddle group is faster. For tasks T1, T2 and T4 we cannot reject H0 (p > 0.05), meaning we cannot conclude a difference between the RxFiddle group and the Console group.
Control for experience. To investigate this further, we split the results for different groups of subjects. When we control for the self-assessed Rx experience, we see bigger differences for all tasks for groups with more experience, as shown in Figure 4c and Figure 4d (we split at the median; exp_rx > "Beginner"-level). Still, for tasks T1, T2, and T4 we cannot reject H0, but the results are more significant when comparing only experienced subjects.
7 DISCUSSION
We now discuss our main findings, how RxFiddle resolves the debugging problem of Rx, and contrast our design to other design choices and possibilities of future work.
7.1 Main results
Quick and dirty debugging. Through interviews and literature we establish that current debugging practices for RP consist mostly of printf() debugging. The shortcomings of this method were evident from the interviews: it works reliably only for synchronous execution or when small amounts of events are being logged; otherwise the overview is lost. Furthermore, the time-context of events and the dependency-context of flows are not available using this method. We attribute the prevalence of printf() debugging to this "quick and dirty" method being available in every language and on every platform, without a viable alternative.
Improved context: being complete, disposing doubts. With our design and complementary implementation, we show that our abstract model of RP is suitable for visualization on two levels: overview and detail. At the overview level, we complement the dependencies visible in source code with a graph of the resulting structure, showing the run-time effect of certain operators on the reactive structure. At the detail level, we add the time context, by showing previous values on a horizontal time line, and the dependency context, by showing input and output flows above and below the flow of interest. While the results of our evaluation could be observed as a negative, RxFiddle is a new tool, where subjects have only just been exposed to the tool and received only a short training. We expect that by designing a debugger model so close to the actual abstractions, our debugger works especially well for users with
some knowledge of these abstractions; while only T3 shows better performance with high significance, we observe slightly better results when controlling for experience. Future research might investigate the effect of experience in more detail, including the use of more complicated tasks, with larger samples.
In the presented research, we did not perform tests with subjects using their own code. However, during piloting and after the release of RxFiddle, we received positive feedback regarding the completeness of the visualization. As one user put it, "by using RxFiddle when learning and understanding what RxJS does in our project, I have a feeling of improved control over our Observables, Subscriptions and the reactive parts of our app". Specifically the life-cycle events, which are generally hard to debug using printf debugging, are more clear: "Initially we were reluctant to manually subscribe, but after seeing that 'complete' often triggers a 'dispose', the team became more confident to sometimes use subscribe() directly". Future research might address this evaluation aspect by designing experiments specifically using code owned by the users.
7.2 Implications
The developers using Rx in practice now have an alternative to printf debugging. Developers can try RxFiddle on their codebase to better understand the reactive behavior of their application, and potentially detect and verify (performance) bugs they are not aware of. At least one example of this has already occurred in practice: one of our interview subjects reported a bug\(^8\) in the groupBy() implementation of RxJS, which resulted in retention of subscriptions, increased memory usage and finally led to an out-of-memory exception. The subject detected the bug in practice and required extensive amount of debugging involving the Node.js debugger to trace down; the same bug is immediately obvious in RxFiddle when examining the life-cycle events using the visualization.
Contributors of RP libraries could use tools like the RxFiddle visualizer in documentation to provide executable samples, which would allow for a better learning experience, and at the same time would introduce novice developers to other ways of debugging than printf debugging.
7.3 Limitations and Future Work
Multiple inputs and outputs. If we compare our debugger visualization to the visualization of learning tools, like RxMarbles [36] or RxViz [41], the main difference is that those tools show all input and output Observables of a single operator concurrently, while RxFiddle shows one input and output Observable per Marble Diagram, part of a single full flow (a path through the graph). The choice to show a full flow allows developers to trace events from the start until the end of the flow, but restricts us in showing only a single ancestor flow per node at each vertical position, as adding a third dimension would clutter the (currently 2D) visualization. For future research, it would be interesting to compare (1) the different ways Observable streams can be combined in Marble Diagrams and (2) which visualization elements can be added to explicitly show causality and lineage for events and show durations for subscriptions.
Edge visualization. In our graph visualization, the edges represent the dependencies and the path of the events. Nodes with multiple incoming edges merge the events, however users could falsely think that all event data ends up in the outgoing path: besides data flows, Rx also uses Observables for timing, as durations (window()), as stop conditions (takeUntil()), or as toggles (pausable()). Different visual representations for joining paths could be explored to distinguish between using Observables for data or for timing.
Graph scalability. Debugging large reactive systems over longer periods of time can result in significantly larger Observable graphs and Marble Diagrams than currently evaluated. During tests of RxFiddle with larger applications like RxFiddle itself and an existing Angular application, the graph became too large to render in real time. Besides rendering performance, a potentially even bigger issue is communicating large graphs to the developer. We propose several extensions to RxFiddle to remedy this issue: (1) pruning the graph of old flows to show only the active flows, (2) debugging data flows by operator or data values, and (3) supporting navigation between code & graph.
---
\(^8\)https://github.com/ReactiveX/rxjs/issues/2661
When a simulated breakpoint is reached, the execution resumes above a certain threshold of events, this high volume interface providing extension points for the language specific features.
Visualization should work for every RP collection abstracting over time, and would be directly applicable to languages such as REScala, debugging data flows by operator or data values, (5) support navigation between code & graph.
**Marble Diagram scalability.** Our experience shows that while Marble Diagrams are useful for small to medium amounts of events (< 20), both better performance and better functionality could be achieved by providing a different interface for high volume flows. Above a certain threshold of events, this high volume interface could be the default, offering features like (1) filtering, (2) watch expressions (to look deeper into the event's value), and advanced features like (3) histograms & (4) Fast Fourier Transform (FFT) views. Moreover, manually examining these distinct events could take a long time; a debugger could leverage the run-time information about the events that actually occur to provide a tailored UI. Advanced features like histograms could help the filtering process, while FFT could offer new ways to optimize the application by doing smarter windowing, buffering and sampling later on in the chain.
**Breakpoints.** Placing traditional breakpoints in a reactive program stops the system from being reactive, and can therefore change the behavior of the system. Breakpoints can be used by developers in two ways: (i) to modify the application state by interacting with the variables in scope, and (ii) to be notified of an event occurrence. While the first is arguably not desirable for reactive systems, the notification property might be a good addition to RxFiddle. BIGDEBUG [26], a debugging solution for systems like Spark [52], introduces simulated breakpoints for this purpose. When a simulated breakpoint is reached, the execution resumes immediately and the required lineage information of the breakpoint is collected in a new independent process. Implementing this for RxFiddle is a matter of creating the right UI, as the required lineage data is already available.
**Other RP implementations.** RxFiddle is specific to Rx, but the debugger design is applicable to other RP implementations. The visualization should work for every RP collection abstracting over time, and would be directly applicable to languages such as REScala, and various JavaScript RP implementations. Future work could investigate whether the debugger protocol can be generalized such that other RP semantics can be captured too, for example by providing extension points for the language specific features.
8 THREATS TO VALIDITY
**External validity.** For the interviews we selected 5 professional developers who were both available and working on projects involving RxJS. The online experiment was open to anyone who wanted to participate, and was shared publicly. These recruitment channels pose a threat to generalizability: different practices might exist in different companies, different developer communities, and for different RP implementations & languages. Future work is needed to validate the debugger in these different contexts.
Our code samples for the tasks are based on documentation samples and common use cases for Rx; RxFiddle might perform differently on real-world code, especially when the developer is familiar with the project or domain. The experiment consists of 2 small and 2 medium tasks; for larger tasks the effect of using the debugger could be bigger and therefore better measurable. Still, we chose these smaller tasks: in the limited time subjects had available, they could answer only so many questions.
**Construct validity.** We measure the time between the moment a question is displayed and the moment its correct answer is submitted. Even though our questions and code samples are short and were designed to be read quickly, some variation is still introduced by the different reading speeds of subjects. A setup where the question and code can be read before the timer is started could remedy this threat, but introduces the problem of planning when given unlimited time [31]: subjects can start planning their solution before the time starts. Furthermore, subjects might have different strategies to validate their (potentially correct) assumptions before submitting, ranging from going over the answer once more, to immediately testing the answer by submitting it. However, explicitly stating that invalid answers do not lead to a penalty might introduce more guessing behavior. Future studies could use longer tasks, with preparation time to read the sample software at hand, and a wizard-like experiment interface presenting one short question at a time.
**Internal validity.** As a result of the recruitment method of the experiment, a mixed group of developers took part, attracting even those without Rx experience. To reduce the variation in experience that this introduces, we separately examined the results of more experienced developers.
At the time of the experiment RxFiddle was already available online for use, and furthermore some of the experiment subjects had already used RxFiddle during piloting. We partially mitigate this issue by providing an instruction video at the start of the experiment; however, subjects with extensive experience with RxFiddle might bias the results.
The subject-expectancy effect [31] poses a validity concern, since subjects who expect a certain outcome may behave in a way that ensures it. Our subjects had the opportunity to learn the context of the experiment and thus could be more motivated to use RxFiddle than the traditional debugger. Our online experiment captures motivation to some extent through drop-out (defined as quitting before having started all tasks); the approximately equal drop-out in both groups (RxFiddle 56.3%, Console 63.4%) suggests no significant motivational differences. Future studies could offer subjects external motivation (e.g. by ranking contenders and gamification [16] of the experiment, or organizing a raffle among top contenders), to limit the threats introduced by motivation.
9 RELATED WORK
**RP Debugging.** REScala [47] is an RP library for Scala, based on Scala.React. Recently a debugger model was created for REScala, called “RP Debugging” [48], featuring a dependency graph visualization, breakpoints, a query language and performance monitoring. The debugger fully integrates with the Eclipse IDE and the Scala debugger facilities, creating a (Scala) developer experience and a feature RxFiddle currently does not offer: reactive breakpoints. Our debugger design supports multiple languages, and works outside
of the IDE, in the browser environment and/or connecting to a production system. Rx has different reactive semantics and arguably a more powerful, but also more extensive API, which includes operators acting in the time domain (delay, etc.). Therefore, we argue that seeing values in a flow over time is very valuable; RP Debugging shows the latest values at the selected time.
**RP Visualization.** RxMarbles [36] visualizes single Rx operators, for the purpose of learning and comprehension. Users can drag to modify (only) the timing of events and instantly see the changes reflected in the output. By using specific pre-coded inputs and timings the essence of the operator is made clear. In RxViz [41], Moroshko takes a similar approach, but provides a code editor instead of prepared inputs, and visualizes the output of the stream. RxMarbles does not support higher-order streams, while RxViz subscribes to the one outer and multiple inner streams when it detects a higher-order stream, showing them concurrently. In contrast to our work, these tools are not debuggers: the focus is on teaching the behavior of single operators or stream outputs, instead of full programs.
**Omniscient Debugging.** Omniscient debuggers [43] trace, store and query all events in a program execution. When storing vast amounts of program execution information, performance and efficiency become a problem, and research on omniscient debuggers focuses on this specifically. We also trace events of the entire execution; however, in contrast to omniscient debuggers, we only store trace events regarding RP data flows. The RP semantics allow us to create future optimizations, for example retaining only the active flow structure, while the flow's data is kept in a rolling buffer.
**Dynamic Analysis.** The study of program execution is called "dynamic analysis" [12]. In many cases, dynamic analysis involves a post mortem analysis, where first the program is run, collecting an execution trace, and then the trace data is analyzed to create a visualization. Derived visualizations, like class and instance interaction graphs, function invocation histories [32], invocation views and sequence diagrams [11], show the possibility of using trace information for debugging. Arguably, on-line analysis is more useful for debugging than the standard post mortem analysis. Reiss [45] mentions the compromises that have to be made for an on-line analysis: reduced tracing is required to not slow down the system (known as the observer effect), and fast analysis is required to lower the cost of getting to the visualization, so as not to discourage users. In our design we handle the same compromises, as they are relevant for RP debugging too, and our JavaScript trace implementation bears resemblance to that of Program Visualiser [32].
**Understanding Debugging.** Debugging for general purpose languages revolves around attaching a debugger, stepping through the code, attaching code or data breakpoints, navigating along different calls in the call stack, and examining variables and results of expressions [51]. However, existing research measuring how these different tasks are part of the developer's workday found that while developers spend much time on comprehending code, they do not spend much time inside the IDE's debugger [40]. Beller et al. [5] found that only 23% of their subjects actively use the IDE's debugger, with the most common action being adding breakpoints, followed by stepping through code. The automated tooling of these studies did not measure kinds of debugging other than using the IDE-provided tools, but Beller's survey indicates that 71% also use printf() statements for debugging. No indication was given of any RP languages and libraries used by the subjects in the study, but the observation that printf() debugging is common matches our experience with debugging reactive programs.
**Debugging for Program Comprehension.** Developers need to both comprehend and debug code almost daily. Initially, comprehension was seen as a distinct step programmers had to make prior to being able to debug programs [29]. This distinction is criticized by Gilmore: "debugging is a design activity" [25], part of creating and comprehending programs. Maalej et al. [34] interviewed professional developers and found that developers require runtime information to understand a program, and that debugging is frequently used to gather this runtime information. This supports our view that debugging is not only used for fault localization, but also for comprehension.
10 CONCLUSIONS
Through analyzing current RP debugging practices, this work shows that the prevalent method for RP debugging is printf() debugging. To provide a better alternative, we present an RP debugger design and its implementation for the RxJS library (RxFiddle), which enables developers to: (1) gain a high-level overview of the reactive data flow structure and dependencies, and (2) investigate the values and life-cycle of a specific data flow, at run-time.
Through an experiment, we show that RxFiddle is a viable alternative to traditional debugging and in some cases outperforms it in terms of time spent. There are several promising directions for improving our design. Specifically, scalability could be improved and different edge visualizations could be explored, to improve the usability of the tool. Furthermore, by leveraging already captured metadata about the timing of events, even more insight could be provided. At the implementation level, we plan to extend RxFiddle to other members of the Rx family of libraries.
In this paper, we make the following concrete contributions:
1. A design of a generic RP debugger, initially tuned for the Rx RP variant
2. The implementation of the debugger for RxJS, and the service RxFiddle.net
In the month after the release of RxFiddle.net the site was visited by 784 people from 57 different countries. The debugger was already used by 53 developers, excluding the use inside of the experiment. During that same period 42846 interactions with the visualizations of the debugger have been recorded, such as selecting Observables or inspecting values by hovering the mouse over the event.
The debugger and the platform are open source and are available online at [4].
This page summarizes all the changes you may find when you upgrade from Tiki9 'Long Term Support' (LTS) to Tiki12 LTS. You may also read the partial changes in each version Tiki10, Tiki11 & Tiki12. Please note that:
- Tiki12 was released on 2013-11-30.
- This is the last version that will support IE8
- It is an LTS version. It will be supported until 2018-11-30. (5 years). See version lifecycle
- It requires PHP 5.3.x. If you need pre-5.3 support, you can use Tiki9 which is an LTS version.
Page contents:
- 1.1. Activity Stream
- 1.2. Admin users
- 1.3. Admin wizard
- 1.4. Advanced Ratings
- 1.5. Articles
- 1.6. Auto TOC
- 1.7. Banning multiple registration IPs from user management
- 1.8. Batch Upload
- 1.9. BigBlueButton
- 1.10. Blogs
- 1.11. Categories
- 1.12. Check
- 1.12.1. Check Permissions
- 1.12.2. Check Server
- 1.12.3. Check WinCache
- 1.13. Code Review
- 1.14. Comments
- 1.15. Comments and Ratings
- 1.16. Composer added to manage external libraries
- 1.17. Console
- 1.18. Cookie Consent
- 1.19. Draw
- 1.20. elFinder
- 1.21. European Cookie compliance
- 1.22. File Galleries
- 1.22.1. Batch upload improved
- 1.22.2. Management with elFinder
- 1.22.3. Native indexing of .docx, .xlsx and .pptx
- 1.22.4. Page View
- 1.23. Forums
- 1.23.1. Show user rating on forum topic
- 1.23.2. Forum deliberations
- 1.24. Friendship Network
- 1.25. Google Analytics
- 1.26. Gravatar
- 1.27. HTML5
- 1.28. Inline editing
- 1.28.1. Wiki Inline editing
- 1.29. Kaltura
- 1.30. Layout Switching
- 1.31. Machine Translation
- 1.32. Mail
- 1.33. Mail Queue
- 1.34. Mail debug
- 1.35. Mail-in improved
- 1.36. Maps
- 1.37. Menu
- 1.38. Messages
- 1.39. Mobile
- 1.40. Modules
- 1.40.1. Modules can be loaded from static files
- 1.40.2. Modules can be hardcoded in templates
- 1.40.3. Module freetags_most_popular improved
- 1.40.4. Module last_youtube_playlist_videos improved
- 1.40.5. Module since_last_visit_new improved
- 1.40.6. Module users_list ported
- 1.40.7. New Facebook module
- 1.40.8. New Twitter module
- 1.40.9. New top_blog_posters module
- 1.41. Monitoring
- 1.42. Namespaces
- 1.43. OpenPGP
- 1.44. Override of memory and time limits for certain operations
- 1.45. Payment
- 1.46. Performance
- 1.47. Permission Check
- 1.48. Plugins
- 1.49. Profiles
- 1.49.1. Profiles wizard
- 1.50. Ratings
- 1.50.1. Option to toggle the detailed rating results
- 1.50.2. Rating on Articles from PluginArticles
- 1.50.3. Rating Language
- 1.50.4. Rating permission to view results
- 1.50.5. Show ratings in a forum thread list
- 1.51. References
- 1.52. Replacing rewrite rules with a routing file
- 1.53. Restore Database
- 1.54. Screencast
- 1.55. Search
- 1.55.1. Lucene Search
- 1.55.2. Search all database tables tool
- 1.55.3. Search Index statistics
- 1.55.4. Search stats support unified search
- 1.56. Server Check
- 1.57. Session collision protection
- 1.58. Setup.sh
- 1.59. Structures Drill Down menu
- 1.60. Smarty template engine
- 1.61. Switch user now has a way back
- 1.62. Syntax highlighter (Codemirror) upgraded
- 1.63. Themes
- 1.63.1. Admin Theme
- 1.63.2. New: Greenvalley
- 1.63.3. New: Utopias
- 1.63.4. New: Horizons option in teal from jqui
- 1.63.5. Updated in mods: many
- 1.64. Trackers
- 1.64.1. Change tracker field type after creation
- 1.64.2. Detect and remove orphan files
- 1.64.3. Inline editing
- 1.64.4. List Trackers: added autocomplete to the find field
- 1.64.5. Tracker Fields: Kaltura
- 1.64.6. Tracker Fields: Math
- 1.64.7. Tracker forms enhanced with library 'Chosen'
- 1.64.8. Tracker List with last comment author and date
- 1.64.9. Users can see just their own items (new setting)
- 1.65. Translations
- 1.65.1. Custom JavaScript translations
- 1.65.2. Bing Translate support
- 1.66. Unified index
- 1.67. User Encryption
- 1.68. User wizard
- 1.69. Version checker
- 1.70. Video
- 1.71. Windows Azure
- 1.72. Wiki
- 1.72.1. Argument Variables
- 1.72.2. Flagged Revisions
- 1.73. Wiki Plugins
- 1.73.1. Improved: Plugin Articles
- 1.73.2. Improved: Plugin FancyTable
- 1.73.3. Improved: Plugin Img
- 1.73.4. Improved: Plugin MediaPlayer
- 1.73.5. Improved: Plugin Proposal
- 1.73.6. Improved: Plugin Slider
- 1.73.7. Improved: Plugin TrackerList & TrackerFilter
- 1.73.8. New: Plugin Insert
- 1.73.9. New: Plugin ListExecute
- 1.73.10. New: Plugin Local Files
- 1.73.11. New: Plugin Pref
- 1.73.12. New: Plugin Sign
- 1.73.13. New: Plugin Together
- 1.73.14. New: Plugin TrackerCalendar
- 1.73.15. New: PluginTrackerQueryTemplate
- 1.73.16. New: Plugin WebDocViewer
- 1.74. User Watches
- 1.75. Wizards
- 1.75.1. Admin Wizard
- 1.75.2. Profiles Wizard
- 1.76. Upgrade Wizard
- 1.76.1. User Wizard
- 1.77. Workspace UI
- 1.78. Zoombox for images
Alphabetically sorted
1.1. Activity Stream
It allows creating social network activity streams within Tiki. In order to create them, you will need to define what the important events are in your system. Events like "tracker item created" or "wiki page modified" will rarely make sense to users looking at an activity stream. Instead, they may be interested when photos are posted by their friends. The activity stream feature allows intercepting system events, filtering them and triggering new events. These new events can be recorded and indexed, which allows them to be displayed in an activity stream.
See Activity Stream
1.2. Admin users
You have many more filtering options to select users from the users list. You can filter by a search string in the username, by exact email, by the fact that users didn't validate their account, etc. All of them using the jQuery Sortable Tables feature.
1.3. Admin wizard
See #Admin_Wizard below
1.4. Advanced Ratings
- Various enhancements to Dogfood Ease Importance Priority
1.5. Articles
Links are shown at the top of the page, as in other tiki features, to add a new article or submission, or view/list articles, provided that the user has the permission to do that action.
See Articles
1.6. Auto TOC
Automatic generation of Table of Contents (TOC) for all wiki pages. The page author doesn't have to do anything.
Auto TOC generates 2 tables of contents.
1. a static TOC, listed on the left, the top or the right of the page. The TOC is inserted into the page content at runtime.
1.7. Banning multiple registration IPs from user management
Since Tiki 12.3, admins can easily ban multiple IPs from spam registrations directly with just a few clicks. They can also optionally remove the user accounts and their user tracker items, as well as their user pages.
1.8. Batch Upload
It is now possible to integrate very large files into the Tiki File Gallery using Batch Upload. See Batch Upload for details.
1.9. BigBlueButton
- It is now possible for admins to delete recordings
- New explicit permission tiki_p_bigbluebutton_view_rec needed to view recordings:
- tiki_p_bigbluebutton_view_rec is no longer implicit if tiki_p_bigbluebutton_join is granted
1.10. Blogs
Private blog post links are filtered out for other users in the adjacent blog post navigation links. See Blogs
1.11. Categories
There is a new preference so that the object count can be disabled on tiki-browse_categories for large sites (over 40 seconds to load on one example site for instance). See Categories
1.12. Check
1.12.1. Check Permissions
Permission Check: if the Tiki installer and tiki-check.php fail, Tiki Permission Check can be used to figure out details about the filesystem permissions the webserver needs to make them work.
1.12.2. Check Server
- tiki-check.php checks the server is appropriately configured for Tiki. See Server Check
1.12.3. Check WinCache
Added check for WinCache ByteCode Cache in tiki-check. See Server check
1.13. Code Review
- Many feature enhancements to take it to Dogfood level. See Code Review and code.tiki.org
1.14. Comments
- Allow comments to be edited by the author during a grace period after initial post
1.15. Comments and Ratings
1.16. Composer added to manage external libraries
This has no impact on Tiki users but makes things better for developers. See [https://dev.tiki.org/Composer](https://dev.tiki.org/Composer), and the section called "#Upgrades" below, for more information.
1.17. Console
Tiki Console is to administer your Tiki instance via the command line. It is based on Symfony's Console Component. It can handle
- Tiki install, configure, update
- the equivalent to the former command 'php installer/shell.php'
- Profile install and forget
- Clear caches
- Rebuild Search cache
- console.php `mail-queue:send` (or `m:s`), for Mail Queue (added in Tiki 12.2)
- console.php `daily-report:send` (or `d:s`), for Daily Reports (added in Tiki 12.3)
- ...
See [Console](#)
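For example, the queue and report commands mentioned above can be invoked from the Tiki root directory like this (the path is an assumption; adjust to your installation):
```
cd /var/www/tiki                    # your Tiki root directory
php console.php                     # list all available commands
php console.php mail-queue:send     # process the Mail Queue (Tiki 12.2+)
php console.php daily-report:send   # send Daily Reports (Tiki 12.3+)
```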
1.18. Cookie Consent
See [#European_Cookie_compliance](#) below.
1.19. Draw
- It is now possible to restrict which tools are available in SVG-edit, thus offering a simplified experience. See [Draw](#)
1.20. elFinder
See [#Management_with_elFinder](#)
1.21. European Cookie compliance
See [Cookie Consent](#), to comply with "EU Privacy and Electronic Communications Regulations."
1.22. File Galleries
1.22.1. Batch upload improved
See #Batch.Upload
1.22.2. Management with elFinder
File galleries allow using elFinder, a new, more visual way to manage files and folders, with drag and drop within the file galleries, and also between a local desktop and the Tiki file gallery.
For more information, see File Gallery & elFinder
1.22.3. Native indexing of .docx, .xlsx and .pptx
- Search within files
1.22.4. Page View
New "Page View" added for images, which shows database and metadata information for each image.
1.23. Forums
1.23.1. Show user rating on forum topic
New forum setting to allow optional display, in each forum reply to a thread topic, of the Rating by each user to that forum thread topic.
Useful to ease the task of reaching consensus on deliberations (in forum threads) by identifying more clearly the position (topic rating) of each person on that topic at each moment of the discussion.
1.23.2. Forum deliberations
- See Deliberation, even if this is a highly experimental alpha version.
1.24. Friendship Network
Complete re-implementation of this previously neglected feature. Changes include:
- Configurable relationship types
- Followers (like Twitter)
- Followers require approval
- Friends (like Facebook)
- Friend's Activity Stream on the Friendship Network page (requires some configuration)
- Friend List module can be used anywhere to manage friends or followers
- Internal: Functionality exposed as services to allow lightweight integration into other features
See Friendship Network
1.25. Google Analytics
- Google Analytics is now a pref so you no longer need to use PluginGoogleAnalytics
1.26. Gravatar
Add option to use gravatar for all user avatars (https://en.gravatar.com)
1.27. HTML5
In each version, we progressively take advantage of HTML5, such as the footer, article and header elements.
1.28. Inline editing
1.28.1. Wiki Inline editing
Edit wiki pages inline, in a similar way to the Tracker inline edit feature that was added in Tiki11: fix a typo in 3 seconds. Inline editing is a fast and highly user-friendly way to edit wiki pages in WYSIWYG mode.
1.29. Kaltura
- Kaltura support has been revamped to be much easier to setup and PluginKaltura has several new parameters.
- Please see a screenshot of the new interface here: http://tv.tiki.org/Add+a+Webcam+recording
{{ kaltura id="1_cv33i4xj" }}
See also #Video below
1.30. Layout Switching
- http://sourceforge.net/p/tikiwiki/code/45793
1.31. Machine Translation
- Support updated for Google Translate version 2 (v1 is no longer free)
- Added support for Bing Translator
1.32. Mail
- Replaced htmlMimeMail with a Zend_Mail implementation
1.33. Mail Queue
There is a new feature to place all notification email messages in a queue, and send all those emails periodically through a Cron job using `./console.php` script with `mail-queue:send` parameter. (N.B. Historical note: Prior to Tiki 12.2 the command was `./sendmail.php`)
This requires setting up mail delivery with a SMTP server instead of just sendmail, and set it to use a Queue.
See Admin home > General > General Preferences > Mail
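A minimal sketch of such a Cron job, assuming a five-minute interval and a Tiki root of /var/www/tiki (both are illustrative):
```
# crontab entry: process the Tiki mail queue every 5 minutes
*/5 * * * * cd /var/www/tiki && php console.php mail-queue:send
```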
1.34. Mail debug
A new option **"File (debug)"** has been added to the "**General Admin Panel > General preferences (tab) > Mail > Mail Sender**" to allow the site admin to debug any potential issues with the sending of emails related to notifications, user or groupwatches, etc.
The emails are still recorded in the Tiki **System Log** as if they were sent, but they are stored as files on disk under this folder and file structure:
```
./temp/Mail_aaaammmddhhmss_randomstring.tmp
```
See **General Preferences**.
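To inspect the stored messages from a shell, something like the following works from the Tiki root (the file name pattern is the one shown above):
```
ls -lt temp/Mail_*.tmp    # list stored debug mails, newest first
less temp/Mail_*.tmp      # read the stored messages, headers included
```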
1.35. Mail-in improved
Mail-in service has been fixed and updated in **Tiki12**
The new things include
- Inline images (HTML email). These pages are written in HTML.
- Permission checking and ability to block anonymous and admin users.
- Users are required to have both edit and attach permissions to save a wiki page
- Possible to auto-assign new pages to a category and a namespace
- Possible to disable email sending by Mail-in system.
- Manual email check trigger in the mail-in admin panel
- Several fixes, including subject encoding
The Mail-in service is a fast way to generate wiki pages, if the content is already on email or can be emailed.
For more information, see Mail-in
### 1.36. Maps
- OpenLayers upgraded to 2.12
- Added MapQuest Open tilesets
- And many many other fixes and improvements which were made for the CartoGraf project, an interactive web-based mapping application to enhance learning in history and geography classes in high schools. CartoGraf is mainly based on Maps, Drawings, PluginAppFrame and Trackers. This is a great example of how to use profiles to use a general purpose app (Tiki) to make a very specific application (CartoGraf).
- allow import of map path/zone tracker data from a file instead of the existing SVG draw method of data entry. From 12.1 a new "Index As Map Layer" option (defaults to No) has been added to the Files tracker field for an uploaded file (scroll to the end of the Options list to find this new one). A drop down list allows the selection of the file format to be either geoJSON or GPX - however the map projection must (at present) be EPSG:4326. This new capability overcomes the previous limitations with the Geographic Feature field to import existing data. A tracker should either have the Files field or the Geographic Feature field.
- have more admin control over individual path/zone display characteristics ie line colour, type, thickness etc. From 12.1 the properties element of the XML-like structured file uploaded to a File Gallery and used in the tracker Files field as discussed above can have a wide range of parameters that can set the stroke-color, stroke-width etc for the individual display object.
- From 12.1 POI icons now positioned so that the location point is the bottom middle of the 'pin'
- a 'hand' cursor icon is shown when a POI/Zone is hovered over to indicate it is clickable
- there is 'admin' control over what is shown in the POI/Zone pop-up box: the bubble/dialog (popupstyle parameter options - in map plugin) content could already be highly customised using /templates/object/infobox.tpl and /templates/object/infobox/trackeritem.tpl - and these can be placed in the /templates/style/yourstyle/ folder so that they are just used with your theme and will not be overwritten during an upgrade. But from 12.2 the popup box width/height etc., can be controlled by a number of new parameters.
### 1.37. Menu
See #Structures_Drill_Down_menu
1.38. Messages
There is a new option to allow truncating internal message notification to a certain number of characters (you can set it up in Admin home > Messages)
See Inter-User Messages
1.39. Mobile
Mobile mode display has been extensively improved in Tiki12.
See Mobile
1.40. Modules
1.40.1. Modules can be loaded from static files
New option to load the modules from static files (in profile YAML format, like http://profiles.tiki.org/Module+Handler)
You can use this from admin -> profiles -> export
1.40.2. Modules can be hardcoded in templates
New option to hard code modules and module zones in templates
Displays all of the modules within a zone:
```
{modulelist zone=top}
```
Allows to hard-code the parameters of a module in a template:
```
{module module="search" title="xyz" ... }
```
This is useful for easy sync of Dev, Test, Prod: Configuration Management
- Allow for user-defined module zones
- ModuleList wiki plugin to display custom module zones in pages
- Allow to include module lists in a template using a smarty plugin
1.40.3. Module freetags_most_popular improved
There is a new parameter to select the type of object (wiki page, blog post, article, file gallery, etc) in module freetags_most_popular.
1.40.4. Module last_youtube_playlist_videos improved
The module accepts the param "orderby" to indicate the sorting order of the videos in the playlist shown; its default is back to 'position', which is the official YouTube default.
See Module last_youtube_playlist_videos
1.40.5. Module since_last_visit_new improved
Since Tiki 12.1, it also displays the new calendars and their events created since the last visit.
In addition, some icons have been slightly modified in the JQuery presentation mode, so that they can distinguish similar but different content (blogs from blog posts, file galleries from files, tracker items created from updated, etc).
See an example below. Same content is shown in both displays of the module for the same site:
Using "Fold sections by default"
In this example, only 4 sections are unfolded:
1. Wiki
2. Blog posts
3. Calendars
4. Calendar events
The other sections contain items but they are hidden under the section name.
When you click on the section name, you toggle the display of its contents.
Using "jQuery presentation mode"
In this example, the Wiki tab is selected (shown with grey background), listing the 7 wiki pages changed.
When you pass the mouse over another tab, its background is shown in blue color (in this case, the new calendar events icon, at the bottom right corner).
When you click, the content displayed below switches to the changed content for that other tiki section.
1.40.6. Module users_list ported
Former module users_list, only available in mods and for older tiki versions, has been ported to Tiki 12.1. It displays a list of users with many optional parameters such as Real Name, avatar (picture), member of groups, and links to user page and action log of that user, among others.
1.40.7. New Facebook module
New module to show Facebook wall (messages and stories) of a Tiki user.
See Module facebook
1.40.8. New Twitter module
New module to show public/friends Twitter timeline.
See Module twitter.
1.40.9. New top_blog_posters module
New module to list top bloggers.
See Module top_blog_posters
1.41. Monitoring
- Nagios/Icinga plugin for checking Tiki health parameters like correct db version, last search index rebuild and APC memory usage
1.42. Namespaces
Namespaces have been added, in order to facilitate the creation of different workspaces with common page names among them (e.g. "Introduction", "About", "Team members", etc).
See Namespaces
1.43. OpenPGP
OpenPGP support added.
1.44. Override of memory and time limits for certain operations
- http://sourceforge.net/p/tikiwiki/code/43870
- http://sourceforge.net/p/tikiwiki/code/43907
1.45. Payment
Example templates to create a basic, but functional, shopping site with Tiki 11. Used by the TikiKart profile, finally working!
(this is all still very experimental, still need to add custom search and lots more...)
1.46. Performance
1.47. Permission Check
See #Check_Permissions
1.48. Plugins
See #Wiki_Plugins
1.49. Profiles
Profiles have been improved to be more useful as a configuration management tool. Namely, profiles:
- can now be stored in the local filesystem as YAML files, allowing for version control along with the project changes without the need for an external repository. Among other things, this would allow Tiki's Featured Profiles to be bundled with the source.
- can now be installed during the upgrade process as patches.
- allow exporting advanced rating configurations individually and as a complete set
- allow exporting and importing RSS feed configurations along with article generators
- allow exporting articles, article types and article topics.
- allow exporting file gallery hierarchies
- allow exporting menus
- A set of commands are now available to export profiles.
- articles and blogs accept geolocation of their content from profiles.
Local Profiles consist of a single YAML file and an optional directory containing the references files.
This is part of: Configuration Management for Tiki Projects
1.49.1. Profiles wizard
See #Profiles_Wizard below
1.50. Ratings
1.50.1. Option to toggle the detailed rating results
Simple average of ratings has been added for Articles (Tiki 12.1), and a new setting has been added in "Admin home > Rating > User Interface" to toggle the display of the detailed results, as well as whether to include the explicit percentage or not.
1.50.2. Rating on Articles from PluginArticles
You can rate an article directly from PluginArticles if the article shows the whole content in the heading, and nothing is left in the article body.
1.50.3. Rating Language
The Advanced Rating language now permits rounding values and concatenation, and can read categories and tracker item fields.
1.50.4. Rating permission to view results
There is a new permission to grant groups of users to see the results: tiki_p_ratings_view_results
1.50.5. Show ratings in a forum thread list
When you use rating in a forum (see Rating), you can display the rating results in the thread list for the first message of every thread (the thread topic). If detailed results and rating smileys are both enabled, they are also included in the thread details.
1.51. References
References implements local references in Tiki. Tiki also has support for Zotero references, but they are stored externally on the Zotero server.
1.52. Replacing rewrite rules with a routing file
- [http://sourceforge.net/p/tikiwiki/code/44661/](http://sourceforge.net/p/tikiwiki/code/44661/)
1.53. Restore Database
For development environments that need to replicate a production environment, new directives allow the installer to restore the database "Clean Install" from a database dump instead of the default Tiki database.
This feature allows site administrators to quickly replicate a site, test changes locally and return to the original state. Combined with profile development, this allows to test the site upgrade path.
Documentation is available in the bundled db/install.ini.dist file.
This is part of: Configuration Management for Tiki Projects
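A minimal sketch of the workflow, assuming the directives are read from db/install.ini (created by copying the bundled .dist file) and that the dump is a regular MySQL dump; the user, database name and paths are illustrative:
```
cd /var/www/tiki
cp db/install.ini.dist db/install.ini                # then edit the directives to point at your dump
mysqldump -u tikiuser -p tiki_prod > dump/prod.sql   # dump taken on the production site
```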
1.54. Screencast
This permits capturing your screen and uploading it to Tiki.
- Still image
- or Video with sound
This is thanks to the inclusion of the jCapture applet in Tiki.
See: ScreenCast
1.55. Search
1.55.1. Lucene Search
Search results 'Default where' parameter changed from a single select drop down selector to a multiple checkbox selection so that custom 'mixes' of content types can be included in search results
1.55.2. Search all database tables tool
In the admin search panel, there is a new tool.
It enables a text search in all text columns in all tables.
1.55.3. Search Index statistics
Search Index statistics are added in command line interface, in a similar way to what is shown in the “Admin home > Search” panel, when using advanced search and rebuilding the unified search index.
1.55.4. Search stats support unified search
- Search stats support for unified search
1.56. Server Check
See #Check_Server
1.57. Session collision protection
- http://sourceforge.net/p/tikiwiki/code/45249/
1.58. Setup.sh
See the sections called "#Composer" above and "#Upgrades" below, for more information.
1.59. Structures Drill Down menu
A Drill Down menu for structures has been added, so that when the user passes the mouse over a node in the line indicating the path to that node in the structure hierarchy, the names of all the children of that node will be displayed below the path to that structure node as links for easier navigation.
See Structures Drill Down menu
1.60. Smarty template engine
New preference to allow addition of extra dirs to be used for custom icons etc., respected by the security checks.
1.61. Switch user now has a way back
As an administrator, after switching to a different user, returning to the login screen will propose switching back to your own user, avoiding the need to log in again.
1.62. Syntax highlighter (Codemirror) upgraded
The syntax highlighter (CodeMirror) has been upgraded from 2.x to the latest stable version (3.16). This brings new features such as right-to-left language support, a Smarty syntax mode, and many more.
See: http://codemirror.net/doc/releases.html
1.63. Themes
1.63.1. Admin Theme
- It is now possible to set a different theme (and/or option) for admin pages to reduce the workload when creating custom themes and options
- Admin -> Look & Feel -> Theme -> Admin Theme
1.63.2. New: Greenvalley
New nature-style theme, which also works fine with RTL languages.
1.63.3. New: Utopias
New Utopias theme & options: another, attainable, foundation, greycard, north, spaces, writer.
1.63.4. New: Horizons option in teal from jqui
New: jqui - Horizons option in teal
1.63.5. Updated in mods: many
Andreas08, Andreas09, CandiiClouds, Club Card, Faulkner, Fluid Index, Green Blog, Judy, Kubrick, LiteJazz, Milkyway, Mittwoch, Mollio, Planetfall, Smooth, Tikipedia, Twenty Ten, Underground.
1.64. Trackers
1.64.1. Change tracker field type after creation
The ability to change a tracker field's type after creation has been restored.
1.64.2. Detect and remove orphan files
Added the ability to detect and remove orphan files created through the tracker files field type
1.64.3. Inline editing
There is a new feature in trackers to allow Inline editing of items (using ajax_inline_edit), from the list of displayed items. Once enabled, the list can be edited from the tracker item listing itself.
You will see this icon ↗ next to each value that can be edited inline.
In addition, lists of items generated from PluginTrackerList or from PluginTrackerFilter can also be editable if the corresponding new param "editable" (with the list of fields to be editable) or "editableall=y" is used.
For more information, see Tracker Inline edit
1.64.4. List Trackers: added autocomplete to the find field
- List Trackers: added autocomplete to the find field
1.64.5. Tracker Fields: Kaltura
Since Tiki11, there is a Kaltura tracker field type that displays a series of attached Kaltura videos, and permits uploading (if you have permission).
- Kaltura tracker field to attach media to tracker items
1.64.6. Tracker Fields: Math
New 'Math' tracker field added to calculate a value from the other fields. -> Mathematical Calculation Tracker Field
1.64.7. Tracker forms enhanced with library 'Chosen'
You can choose a value from a dropdown box by selecting the items in the list through scrolling down, as usual, or you can now filter the list values based on the text you type at the top.
Similarly, a section allowing multiple selection of items can be shown in a small but enhanced dropdown box, which allows the user to select one or many of the options, or remove them from the list in the text field at the top.
Additionally, you can also filter the values displayed in the dropdown so that only those matching your typed text are shown in the list (only the ones starting with "D" in the example below: "Documentation" and "Dogfood a *.tiki.org site").
See Improve Tracker Forms
1.64.8. Tracker List with last comment author and date
You can display last comment author and date in the table column for comments, through a new option in the tracker edition > "Features > Allow comments > Display last comment author and date".
1.64.9. Users can see just their own items (new setting)
Added an option to allow displaying just the user's items to the user through PluginTrackerList with the param view=user, even if no extra permissions are granted to this user's groups.
1.65. Translations
1.65.1. Custom JavaScript translations
Custom JavaScript translations: you can place a file at lang/xx/custom.js for your language with any custom translation of the JavaScript related messages; the file can contain any valid JavaScript.
1.65.2. Bing Translate support
Added Bing Translate support for machine translation.
1.66. Unified index
Support for MySQL Full Text Search and Elastic Search as engines has been introduced for unified index. These engines are complete alternatives to the Lucene (PHP Implementation). All user interface components and plugins (such as PluginList, PluginCustomSearch, ...) will keep working and the documentation available in Unified Index still applies.
ElasticSearch requires a server to be installed. It provides several benefits:
- Faster indexing
- Lower memory usage within PHP
- Faster searches
- Scalable across multiple machines if required
- Better result highlighting
MySQL Full Text Search doesn't require a server to be installed. It provides several benefits:
- Faster indexing
- Lower memory usage within PHP
- Easy configuration
See: Unified Index Comparison
1.67. User Encryption
See User Encryption for more.
1.68. User wizard
See #User_Wizard below.
1.69. Version checker
- Version checker has been revamped and now deals better with LTS versions:
1.70. Video
- Kaltura tracker field to attach media to tracker items
- Kaltura plugin allows to display videos with an html5 player, so that they can be viewed with some browsers in standard smartphones (tested as ok in Firefox on Android, and Safari on iPhone).
1.71. Windows Azure
- Read the environment variables for DB autoconfiguration on Azure
- Using a MySQL SSL connection
1.72. Wiki
1.72.1. Argument Variables
A few new wiki argument variables have been added. Some of them were introduced to ease the task of creating simple templates for document management and revision approval systems, such as ISO 9001/ISO 14001. These variables allow defining a custom information layout in the page header (Author Name, Last edited on, Document Version, and the equivalent for revised and approved versions):
- pageid (id of a wiki page; added in Tiki 12.1)
- domain (site domain; added in Tiki 12.1)
- domainslash (site domain ending with a slash; added in Tiki 12.1)
- domainslash_if_multitiki (only in a multitiki installation, the site domain ending with a slash; if the page doesn't belong to a multitiki installation, nothing is returned; added in Tiki 12.1)
- lastVersion (last version of the wiki page; added in Tiki 12.2)
- lastAuthor (last editor of the wiki page; added in Tiki 12.2)
- lastModif (last modification date, in short format, of the wiki page; added in Tiki 12.2)
- lastItemVersion (last version of the tracker item indicated in the URL; added in Tiki 12.2)
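As a sketch of the document-header use case mentioned above, assuming the usual double-brace syntax for wiki argument variables, a page header could contain:
```
Author: {{lastAuthor}}, last edited on {{lastModif}} (version {{lastVersion}})
```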
1.72.2. Flagged Revisions
Many enhancements, including batch approval and reporting on status of Flagged Revisions.
1.73. Wiki Plugins
New and/or Improved Plugins below.
1.73.1. Improved: Plugin Articles
Improved Plugin. Shows a link at the bottom to facilitate adding a new article or submission if the user has permission to do so.
See PluginArticles
1.73.2. Improved: Plugin FancyTable
Improved plugin. As usual you can sort by one or more columns, and now you can also filter your results by searching for a string in one or more columns. In the example below, the table is sorted by one column ("Percentage") and filtered by content in another column ("Native name" containing "de"):
See PluginFancyTable
1.73.3. Improved: Plugin Img
Improved plugin. Image magnification has been added to plugin image. Full size image appears with zoom option in a "Colorbox" overlay when thumbnail is clicked.
See PluginImg
1.73.4. Improved: Plugin MediaPlayer
Improved plugin. Media player plugin uses its own mp3 and flv players.
1.73.5. Improved: Plugin Proposal
- PluginProposal: Added the ability to set custom weights to groups in the proposal plugin, which affects the stored attributes. This is useful for Code Review
1.73.6. Improved: Plugin Slider
New themes added in Tiki 12.1
See PluginSlider
1.73.7. Improved: Plugin TrackerList & TrackerFilter
There is a new feature in trackers to allow Inline editing of items (using ajax_inline_edit), from the list of displayed items. Once enabled, you can use some new params in these plugins, to allow some displayed fields to be editable (param "editable", with the list of fields to be editable), or the whole list of displayed items (with param "editableall=y")
In Plugin TrackerList you can also define some parameters to use the new version of jquery sortable tables library, allowing you to produce tables that can be sorted and filtered on the fly by one or more columns, in a similar way to what can be achieved in #Plugin_FancyTable shown above.
When the param "sortable=y" is added in Plugin TrackerFilter and "jquery sortable tables" feature is enabled, the list of displayed results show a field on top which allows filtering in real time the results shown in the table, in a similar way to what can be achieved in #Plugin_FancyTable shown above.
In addition, you can display last comment author and date in the table column for comments, through a new option in the tracker edition > "Features > Allow comments > Display last comment author and date".
See PluginTrackerList and PluginTrackerFilter for more information.
1.73.8. New: Plugin Insert
- New: PluginInsert
1.73.9. New: Plugin ListExecute
See PluginListExecute
1.73.10. New: Plugin Local Files
New Plugin. Assist in showing links to files or directories on local drives or shared file servers. Likely to only work fully on IE for Windows based intranets.
See PluginLocalFiles
1.73.11. New: Plugin Pref
Simple plugin to allow global preference check and display content depending on the condition. See PluginPref
1.73.12. New: Plugin Sign
- New: PluginSign
1.73.13. New: Plugin Together
New Plugin to use TogetherJS, the experimental service from Mozilla Labs, on your website; it makes it surprisingly easy to collaborate in real-time: share unique URLs, co-write, talk, and follow pages visited by your buddies. TogetherJS is alpha-quality software. We do not recommend using it in production at this time, even if it looks promising as a Real Time Collaboration (RTC) tool. Formerly known as TowTruck.
See PluginTogether and the profile Together
1.73.14. New: Plugin TrackerCalendar
New plugin in Tiki10 and improved in Tiki12. It allows managing tracker items as resources in a calendar view: i.e. it uses FullCalendar ResourceViews to render the content of a tracker. It does not use the Tiki Calendar feature, so you don't need "Calendar" enabled for this plugin to display data in a calendar view.
See PluginTrackerCalendar and the profile to easily add a working example to your site: http://profiles.tiki.org/Tracker_as_Calendar_12
1.73.15. New: PluginTrackerQueryTemplate
New plugin in Tiki10. It allows generating forms from a tracker. Currently only able to list data, the TrackerQueryTemplate plugin simply obtains data from a tracker and allows an editor to list the tracker data as they see fit.
See PluginTrackerQueryTemplate
1.73.16. New: Plugin WebDocViewer
New Plugin. It allows displaying many types of documents online, embedded in your Tiki pages. See PluginWebDocViewer
1.74. User Watches
There is a new tab in the user watches preferences screen, which allows the user to choose whether to receive email copies of the changes made by themselves to the different sections of the website. If you keep these options unchecked, you will not receive a copy of your own changes.
See User Watches
1.75. Wizards
This new feature helps Tiki admins or normal users set up their basic settings in a group of screens that show a reduced set of basic settings. The admin wizard is shown by default to all new admins, while as of Tiki 12.0 the User Wizard needs to be launched on purpose.
1.75.1. Admin Wizard
The admin wizard shows up for Tiki admins when they first log in, enabling them to easily configure the main features of Tiki without the need to navigate through all admin panels. It allows the admin to easily choose among a few options for the wiki editor (WYSIWYG in HTML, or wiki syntax), inline editing, etc.
See Admin Wizard
1.75.2. Profiles Wizard
This wizard shows the admin some information about the most recommended profiles to apply, either to set up the site with a featured configuration template, add some useful extra configuration, or show a demonstration of potentially interesting features, in just a few clicks.
Featured
- Collaborative_Community_12x
- Company_Intranet_12x
- Personal_Blog_and_Profile_12x
- Small_Organization_Web_Presence_12x
Useful
- Mobile
- Debug_Mode_Enabled/Debug_Mode_Disabled
- Together
- Time_Sheet
Simple Demos
- Dynamic_items_list
- Bug_Tracker
- Tracker_as_Calendar_10
- Voting_System
See Profiles Wizard
1.76. Upgrade Wizard
This Wizard will guide you through the most common new settings and information needed to upgrade your site:
- Use it if you are upgrading from previous versions of Tiki, especially if you come from the previous Long Term Support (LTS) version.
- Some of these settings are also available through the Admin Wizard, and all of them are available through Admin Panels
- But this wizard will let you learn about them as well as enable/disable them easily according to your needs and interests for your site.
See Upgrade Wizard
1.76.1. User Wizard
This wizard will help users fill in the main settings for their accounts on the website. Depending on the features enabled by the site admin, users will be offered more or fewer options.

See User Wizard
1.76.1.1. User Wizard: User Details (through User Tracker)
The User Wizard allows showing a new section called "User Details", provided that the admin has set up a User Tracker and defined the fields to be shown. Those fields can be the same ones shown at registration time (the default) or a different set of fields from the same User Tracker.
Since the User Tracker can include "Static Text" fields, users can be shown custom information specific to their own Tiki site. Some demonstration fields are included in the suggested profile User_Trackers to set it up easily.
1.77. Workspace UI
There is an interface to manage the creation and editing of Workspaces, so that batch creation of sets of Tiki objects with custom groups and associated permissions can now be handled more easily.
See Workspace UI
1.78. Zoombox for images
See #Plugin_Img
Known limitations
- The Chosen picker doesn't work well with jQuery Mobile mode. See wish4671
- Line numbers in the syntax highlighter (CodeMirror) don't work well: plenty of space is added between lines & text is hidden. See wish4840
Upgrades
Things to watch out for
- **Composer** Many externals have not yet been moved to Composer
- URL Rewriting Revamp
- CKEditor4
- jQuery, jQueryUI and jQuery Mobile to be updated to the latest stable versions.
- elFinder is optional but it could affect some things in File Gallery
- Blog posts with content containing HTML may not display properly after upgrading. See solution below at #Blog_posts_containing_HTML
Blog posts containing HTML
Blog posts with content containing HTML may not display properly after upgrading - place the blog post content within PluginHTML to solve this issue. PluginHTML needs to be approved - if you are converting many blog posts go to tiki-plugins.php to approve in bulk.
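A sketch of the fix, assuming the standard wiki-plugin syntax for PluginHTML; the original post body goes between the markers:
```
{HTML()}
<p>Original blog post content, with its <strong>HTML</strong> markup kept intact.</p>
{HTML}
```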
Composer
When installing or updating through Subversion, some external libraries are now handled differently (using Composer):
```
root@camagroup:/var/www/tiki_trunk# sh setup.sh
curl -s /usr/bin/curl
#!/usr/bin/env php
All settings correct for using Composer
Downloading...
Composer successfully installed to: /var/www/tiki_trunk/temp/composer.phar
Use it: php temp/composer.phar
php is a tracked alias for /usr/bin/php
Loading composer repositories with package information
Installing dependencies from lock file
Warning: The lock file is not up to date with the latest changes in composer.json.
You may be getting outdated dependencies. Run update to update them.
- Installing adodb/adodb (5.18)
Downloading: 100%
- Installing phpcs/phpcs (1.3.1)
Downloading: 100%
- Installing phpseclib/phpseclib (0.3.1)
Downloading: 100%
- Installing smarty/smarty (3.1.33)
Checking out /tags/v3.1.33
- Installing zetacomponents/base (1.8)
Downloading: 100%
- Installing zetacomponents/webdav (1.1.3)
Downloading: 100%
- Installing zendframework/zendframework1 (1.12.1)
Checking out /tags/release-1.12.1/@25165
```
Click to expand
This also means that you might have to install some extra packages, such as php5-gmp and php-compat, on your server (or request to have them installed for you) for phpseclib to work optimally. Otherwise, you might see this type of message:
```
phpseclib/phpseclib suggests installing ext-gmp (Install the GMP (GNU Multiple Precision) extension in order to speed up arbitrary precision integer arithmetic operations.)
phpseclib/phpseclib suggests installing pear-pear/PHP_Compat (Install PHP_Compat to get phpseclib working on PHP < 4.3.3.)
```
You can install them on a Debian-based server (adapt to your OS if different) with a command like:
```
Command in a console
sudo apt-get install php5-gmp php-compat
```
Installation on Mac OS X (Mountain Lion 10.8) needs a little trickery, as OS X doesn't come with some of the basic commands we usually use (like apt-get or wget).
You can install [http://www.macports.org/](http://www.macports.org/) (Xcode is required too; follow the install guide) to get the usual set of commands needed to install what Composer requires. Then you'll be able to do:
```
Commands in a Mac Terminal once MacPorts is installed
sudo port install php5-gmp
sudo port install php5-mcrypt
```
You also need to install PEAR PHP_Compat (since Mac OS X 10.5):
```
Commands in a Mac Terminal once MacPorts is installed, to check for and install PEAR/PHP_Compat
which pear (to check whether PEAR is already installed)
sudo wget http://pear.php.net/go-pear.phar
sudo php -d detect_unicode=0 go-pear.phar
```
You should have no more worries installing Tiki and Composer dependencies on Mac OS X.
**local.php**
If you had defined in your former ./db/local.php something like
```
$api_tiki='adodb';
```
and after the upgrade you notice that you can't edit some pages, or weird characters are displayed in some rare places, you can try removing that line from your ./db/local.php.
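For reference, a minimal ./db/local.php after the change might look like this (a sketch with placeholder credentials; the point is only the removed $api_tiki line):
```php
<?php
// ./db/local.php -- database connection settings (placeholder values)
$db_tiki        = 'mysqli';     // database driver
$dbversion_tiki = '12.0';
$host_tiki      = 'localhost';
$user_tiki      = 'tikiuser';
$pass_tiki      = 'secret';
$dbs_tiki       = 'tikidb';
// $api_tiki = 'adodb';         // removed: this line caused the symptoms above
```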
**Mobile permissions**
If you applied the Mobile profile in the past and you get a "permission denied" message when you attempt to view the site in mobile mode, you need to grant anonymous users the permission to view perspectives:
- tiki_p_perspective_view
**Search Index**
After the upgrade, the [Unified Index](#) may take longer to rebuild, at least the first time you run this new Tiki version. If the "Rebuild index" link in the "Admin search" panel doesn't produce a successful reindexing, you can do it server side in a terminal, setting a higher memory limit for the process and forcing an initial clean-up of index leftovers. For example, you could run something like:
```
root@server:/path/trunk# php -dmemory_limit=4G console.php i:r --force --log
Removing leftovers...
Started rebuilding index...
Rebuilding index done
```
For multitiki sites, you can rebuild with commands like:
```
root@server:/path/trunk# php console.php index:rebuild --site=site1.example.com
root@server:/path/trunk# php console.php index:rebuild --site=site2.example.com
...
```
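If you have many sites, a small shell loop saves some typing (a sketch; the domain names are placeholders for your own):
```
for site in site1.example.com site2.example.com; do
    php console.php index:rebuild --site=$site
done
```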
More information: [Unified Index](#)
**& Syntax for short links invalid**
In past Tiki versions such as Tiki9, incorrect syntax for pointing a URL to a specific tab was accepted. Examples:
1. http://example.com/tracker1&show=mod
2. http://example.com/tracker1&cookietab=2
In Tiki12, and probably in some earlier versions as well, the syntax needs to be like:
1. http://example.com/tracker1?show=mod
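If old links using the & syntax are still bookmarked or published elsewhere, you could redirect them at the web server level. A hypothetical Apache rule for an .htaccess file (an assumption for illustration, not something shipped with Tiki; adapt the path pattern to your own short links):
```
RedirectMatch 301 ^/(tracker[0-9]+)&(show|cookietab)=(.*)$ /$1?$2=$3
```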
**Themes**
Some CSS changes were introduced for themes between 11.x and 12.x:
**Site Logo & Site titles**
The former `#sitelogo`, `#sitetitle` and `#sitesubtitle` IDs have been changed to the classes `.sitelogo`, `.sitetitle` and `.sitesubtitle`. Please make these changes in your theme CSS after upgrading if you formerly styled using any of those CSS selectors.
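For example (the property values below are placeholders; keep whatever rules your theme already had and only swap the selectors):
```css
/* Tiki 11.x and earlier */
#sitelogo     { float: left; }
#sitetitle    { font-size: 2em; }
#sitesubtitle { font-style: italic; }

/* Tiki 12.x: same rules, class selectors instead of IDs */
.sitelogo     { float: left; }
.sitetitle    { font-size: 2em; }
.sitesubtitle { font-style: italic; }
```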
**Strasa.css login background color**
If you were using the strasa.css theme style, you might miss the background color of the login module in the header, temporarily getting a white text font over a white background.
Set the theme option to "cool", for instance, and the blue background of the login module will be back on your site.
---
Efficient Analysis Methodology for Huge Application Traces
Damien Dosimont, Generoso Pagano, Guillaume Huard, Vania Marangozova-Martin, Jean-Marc Vincent
To cite this version:
HAL Id: hal-01065783
https://inria.hal.science/hal-01065783
Submitted on 18 Sep 2014
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
Efficient Analysis Methodology for Huge Application Traces
Damien Dosimont∗†‡ §, Generoso Pagano∗†‡ §, Guillaume Huard†‡ §, Vania Marangozova-Martin†‡ §
and Jean-Marc Vincent†‡ §
∗Inria
†Univ. Grenoble Alpes, LIG, F-38000 Grenoble, France
‡CNRS, LIG, F-38000 Grenoble, France
§firstname.lastname@imag.fr
Abstract—The growing complexity of computer system hardware and software makes their behavior analysis a challenging task. In this context, tracing appears to be a promising solution, as it provides relevant information about the system execution. However, trace analysis techniques and tools fall short of providing the analyst with an efficient analysis flow, for several reasons. First, traces contain a huge volume of data that is difficult to store, load into memory and work with. Then, the analysis flow is hindered by the various result formats produced by different analysis techniques, which are often incompatible. Last, analysis frameworks lack an entry point for understanding the traced application's general behavior. Indeed, traditional visualization techniques suffer from time and space scalability issues due to screen size, and are not able to represent the full trace. In this article, we present how to do an efficient analysis by following Shneiderman's mantra: "Overview first, zoom and filter, then details on demand". Our methodology is based on FrameSoC, a trace management infrastructure that provides solutions for trace storage, data access, and analysis flow, managing analysis results and tools. Ocelotl, a visualization tool, takes advantage of FrameSoC and shows a synthetic representation of a trace by using time aggregation. This visualization solves the scalability issues and provides an entry point for the analysis by showing phases and behavior disruptions, with the objective of getting more details by focusing on the interesting trace parts.
Keywords—Application analysis, trace management, analysis tools, visualization tools, debugging, performance analysis
I. INTRODUCTION
Nowadays, computer systems are made of increasingly complex hardware and software components. Their hardware architectures are possibly multicore, heterogeneous and distributed. Their software stack is composed of numerous layers including, for example, middlewares to abstract the platform [1]. In this context, application debugging and performance optimization become tremendously difficult tasks.
By tracing the application, the analyst gathers low-level information on its execution (function calls, thread or process execution states, interruptions, CPU load, memory usage, hardware counters). When debugging, the objective is to find the cause of a perturbation or an undesirable behavior. In performance optimization, the analyst looks for bottlenecks, inefficient algorithms and slow code parts. Following Shneiderman's principle [2], an analysis starts with an overview of the trace, showing general information. Then, the analyst focuses on the interesting parts (a visible perturbation, a particular phase) and filters out noise. Finally, he gets details on demand, e.g., access to source code. This process can be iterative if necessary. However, several issues hinder this analysis flow:
Big Trace Management: Computer program traces may contain a large quantity of events (for example, we get several million events for a dozen seconds of G-Streamer video decoding). The high quantity of information in the trace translates into a large data volume to store and to load into program memory for analysis. In particular, access to trace data is slow because it is often sequential or mono-dimensional. In the worst cases, the analyst cannot even access the trace because of performance and memory constraints. Efficient trace storage is thus a mandatory first step towards an efficient analysis.
Analysis Flow Support: An effective analysis typically involves several treatments on traces, either on raw data, or within a flow where the result of one computation is reused as an input of another (for instance, filter the trace, process filtered data, then visualize the result of this processing). Usually, because of the variety of analysis techniques and tools, output data is not standardized. Thus, the analysis flow requires an adaptation to enable data sharing between tools. This leads to a strong software complexity, whereas output data standardization would provide a straightforward compatibility.
Trace Overview: The first analysis step requires a synthetic view of the trace. Traditional techniques, like the Gantt chart, are used to represent trace events over time and space. However, these techniques suffer from scalability issues, because the level of detail is too high. On a finite screen, representing one million events leaves only one pixel per event. This leads to cluttered drawings, inexact proportions or uncontrolled visual aggregation. Zooming or panning, to counter these issues, provokes context loss. Aggregating the events is another attempt to represent the full trace, but existing solutions cause an important information loss.
We solve these three main issues with two contributions. The first one is FrameSoC [3], a new trace management and analysis framework. With FrameSoC, we manage large traces by providing a database storage solution, where trace information is represented with a generic data-model. FrameSoC features an interface to get and filter trace information, which optimizes access time to data and avoids memory saturation. Regarding the analysis complexity issue, which represents our main challenge, we propose facilities to enable analysis flows, by expressing and storing analysis results using a common format. Moreover, we can plug various analysis tools into the infrastructure, like statistics modules, filters, data mining engines and visualizations, using a generic interface. Our second contribution is Ocelotl, a visualization tool employing time aggregation techniques to represent a trace synthesis. Its objective is to provide the analyst with an entry point to the analysis. By interacting with the visualization, the user gets information such as phases (initialization, steady states) or behavior disruptions. Moreover, compared to other aggregation techniques, Ocelotl gives the user control over the information loss. The tool is plugged into FrameSoC and takes advantage of its features, such as data queries, event filtering and result management. In this article, we present these two contributions successively (Sections II, III, IV), evoking for each one related works, theoretical aspects and implementation. In Section V, we detail a complete analysis flow, from the overview provided by Ocelotl to more detailed information, using case studies. This part has the objective of validating our analysis methodology on a real example. In particular, it highlights the synergy between both contributions and their respective features. We conclude in Section VI by proposing new features and improvements for the analysis.
II. FRAMESOC: TRACE MANAGEMENT FRAMEWORK
A. Existing Solutions for Storing Traces
Traditionally, raw trace data are stored in plain files (event logs), with no specific support for optimized random access or filtering. As a consequence, the analysis requires loading the whole file into main memory [4]. Other approaches propose the use of a structured trace file, more suitable for specific kinds of access. A frame-based file format [5], for example, enables fast time-guided navigation. Another structured format [6] optimizes accesses in both the time and space (processes) dimensions. These approaches help the access to trace information only in a fixed and limited number of dimensions and are not flexible for arbitrary selections. A different approach for storing traces is the use of a database, which ensures scalability while keeping flexibility for data access. Some of the database solutions proposed in the literature only provide support for a single trace format (e.g., [7], [8]), while other solutions are more open to different trace formats (e.g., [9], [10]).
B. Our Database Solution for Trace Storage and Management
FrameSoC addresses the issue of huge trace storage by using a relational database. Several pragmatic motivations led us to this choice. First, a database separates the logical data-model from the physical representation of the data. Furthermore, thanks to accurate modeling and normalization, information is stored with minimal redundancy. Then, we can easily access parts of the trace or filter noise using trivial queries. Search operations can be optimized by defining indexes: this mechanism is flexible and not limited to the time or space dimensions. Finally, complex computations on trace data can be performed in the database, instead of loading the whole trace in memory and doing such computations at the application level.
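As an illustration, a time-interval query over a hypothetical event table could look as follows (the schema and names are assumptions made for this sketch, not FrameSoC's actual data-model):
```sql
-- Hypothetical schema: event(id, timestamp, event_type_id, event_producer_id)
-- A flexible index; here on the producer and time dimensions.
CREATE INDEX idx_event_producer_time ON event (event_producer_id, timestamp);

-- Retrieve the events of one producer inside a time window; with the index
-- above, the cost depends on the result size rather than the trace size.
SELECT *
FROM event
WHERE event_producer_id = 42
  AND timestamp BETWEEN 280000000 AND 320000000;
```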
The core of our database solution is the generic data-model. It represents trace metadata, trace raw data and analysis results, with related tools metadata (Figure 1). The central entity of the model is the trace, which has metadata and can be related to files (e.g., configuration files, platform description).
A trace is composed of several events, each of them having an active entity producing it. Event producers can be organized in a hierarchy, reflecting, for instance, the execution hierarchy in the traced application (processes/threads). This model is innovative since, besides trace data, it provides some predefined but extensible types of analysis results, with the link to the corresponding analysis tools (Section III). Our data-model is actually a new self-defining trace format (like SDDF [11] or Pajé [12]), since the description of trace types and event types is part of the stored information. Using this approach we obtain a generic trace representation, with minimal semantics and suitable for representing any kind of trace format without information loss. At present, we have managed to represent KPTrace [13] and Pajé [12] traces with our model. A Java API (the FrameSoC library) is provided to easily interact with this data-model. Given the richness of our data-model, the role of the database is central in our solution. Indeed, we use the database to manage several traces, store the analysis results produced on such traces, and also organize the tools producing such results. None of the aforementioned existing database solutions consider multi-trace requests (e.g., to identify a subset of traces for a multi-trace analysis) or the generic storage of results, and analysis tools are not taken into account.
To be independent from a given DBMS technology, our infrastructure is designed to be able to work with different DBMS (DataBase Management System), provided that a simple adaptation module is implemented: at this time, support for MySQL and SQLite is provided. With the aim of providing a simple and scalable solution, we store each trace in a different database and all trace databases are coordinated using a central system database. When considering storage scalability issues, none of the supported DBMS limits the number of databases managed. Considering the database size, in the case of MySQL a table can grow up to the maximum file size (4 TB on ext3 file systems) and there are partitioning techniques to manage tables exceeding this limit. For SQLite the actual database size limit is fixed by the file system maximum file size.
C. Performance Measurements
To show that the proposed database solution is effective when analyzing data over several dimensions, we present...
in this section some performance results. The DBMS used is SQLite. We use synthetic traces, where different event producers and event types are uniformly distributed over time. The workstation used has a 3.30 GHz x 12 CPU, a 256 GB SSD and 16 GB of DDR3 RAM.
1) Importing Traces of Various Sizes Into the System: We imported traces of different sizes, ranging from 5.5 MB (100 thousand events) to 2.75 GB (50 million events), measuring the import time with and without indexing. Import time (Figure 2) grows linearly with trace size in both cases, as shown by a linear regression with a coefficient of determination $R^2$ of $1 - 10^{-4}$. Import times keep reasonable values even for huge traces (without indexes, about 7.5 minutes for a 2.75 GB trace). Using indexes, the import times grow by about 75%.

2) Querying a Given Trace over Different Dimensions: A great advantage of using a database for storing traces is the flexibility it offers when performing requests in various dimensions. Using a synthetic trace of 2 million events, we performed requests to retrieve events respectively in a given time interval (a), from a given producer (b), of a given type (c), or having a given value for a parameter (d). For each request, the result set has the same size (20000 events). No indexing was used in the databases. The time needed to filter trace events using each of the four different dimensions (Figure 3) remains within the same order of magnitude. This confirms that the joint use of a well-designed data-model and database technology lets trace analysts explore a given trace from different perspectives at a comparable cost. On the contrary, a structured-file trace format such as OTF [6] optimizes only the producer and time dimensions.

3) Evaluation of Trace Size on Request Time: One of the interests of putting huge trace data in the database is information retrieval that limits the effects of trace size. For this reason we retrieved a fixed number of events (10000) contained in a time interval from traces of different sizes (from 5.5 MB to 2.75 GB), measuring the request time (Figure 4). Ideally, we would like the retrieval time to be constant, given that the result set size is fixed; however, without any indexing, the retrieval time grows linearly with trace size (from less than 1 s to 60 s), as confirmed by a linear regression with an $R^2$ of $1 - 10^{-6}$. By performing careful indexing at the database level, we actually managed to get a near-constant retrieval time (less than 0.1 s), paying the indexing price only at import time.

4) Dealing with Gigantic Traces: To test the limits of our system, we imported a gigantic trace of 110 GB, containing almost 2 billion events. For reasons of disk space, we used a standard 1 TB hard disk drive for these experiments. Table I shows the whole-trace import time and the request time to retrieve 10000 events contained in a given time interval. We use both indexed and non-indexed databases. For comparison, we tried to do the same kind of filtering directly on the raw trace file (event log) using the `awk` program. Database import time is significant given the trace volume, especially with indexing. The results are however still consistent with the ones obtained in the first experiment (the small increase in time is due to magnetic disk performance). The interesting point is that, even for this gigantic trace, we manage to filter the events: without indexing the time is huge, but using indexes, filtering time is extremely small and similar to the results obtained for traces of smaller size (third experiment). On the contrary, manual filtering on the raw trace file has a duration not suitable for interactive analysis.
<table>
<thead>
<tr>
<th>Trace Size (MB)</th>
<th>Non-indexed DB</th>
<th>Indexed DB</th>
<th>Raw trace file</th>
</tr>
</thead>
<tbody>
<tr>
<td>5.9</td>
<td>9.9</td>
<td>42232</td>
<td>100</td>
</tr>
<tr>
<td>9.9</td>
<td>0.12</td>
<td>875</td>
<td></td>
</tr>
</tbody>
</table>
**TABLE I. GIGANTIC TRACE RESULTS**
This test shows that our framework, taking advantage of database features, enables fast access to trace data for analysis purposes even when dealing with gigantic traces.
III. ANALYSIS FLOW MANAGEMENT WITH FRAMESOC
A. Existing Solutions for Tool and Flow Management
The need for differentiated analysis of traces forces the analyst to face a situation of extreme tool heterogeneity, with consequent compatibility issues, since specific tools tend to work with specific formats [14], [15]. In the field of parallel systems, different solutions have been proposed to address this problem. The visualization tool Pajé [4] adopts a modular structure, where different modules can be plugged into the analysis flow using semantic-agnostic interfaces. However, the creation of a new analysis flow is static and requires reassembling the different modules into a new program. The Score-P [16] measurement infrastructure tackles tool heterogeneity by multiplexing/demultiplexing different instrumentation types to different output formats, without the notion of a shared data-model, neither for trace data nor for analysis results. With the same philosophy, Tau [17] provides a trace analysis environment where the interaction among different tools is obtained via trace translators. A shared data-model exists only for trace profiles. In the domain of embedded systems, existing frameworks for trace analysis are even more specific to given formats or hardware platforms, so that no actual support for generic tool interaction exists. Proprietary solutions (e.g., [18]) typically offer a closed set of functionalities tailored to specific hardware. Even open source solutions (e.g., [19]) do not easily enable the plugging of new tools and do not support tool interaction through a shared data-model for analysis results.
B. FrameSoC Tool Management and Workflow Support
FrameSoC facilitates the contribution of new tools to the framework with a clean plugin mechanism based on the Eclipse one. Indeed, the preferred way to add a tool to FrameSoC is to provide an Eclipse plugin that implements the interface we defined through an extension point. This extension point defines the metadata and the class that the tool plugin should provide in order to be integrated in FrameSoC. However, our infrastructure also supports the integration of external black-box tools. In both cases, tools deal with the same data-model for trace and result storage, and are launched using the same interface.
The prototype implementation of FrameSoC itself provides some framework tools to enable basic trace analysis (Figure 5): a structured trace explorer with details on trace metadata, an event-density chart to easily identify trace hot spots, a pie chart gathering some statistics about the trace, and a form for event querying using regular expressions. The infrastructure explicitly supports the plugging of trace importers, trace exporters and more general analysis tools. At this time, we have plugged in tools able to import real traces (KPTrace, Pajé formats) and to export to the Pajé format. As for analysis tools, we integrated a tool able to perform simple sequence search with result saving, and a filter for event producers, able to find and save the subset of producers being active (or idle) during a given time interval. Finally, we also propose an innovative visualization tool, Ocelotl, able to perform aggregation (Section IV).
IV. OCELOTL: TRACE OVERVIEW MODULE
This section describes Ocelotl, an innovative visualization tool plugged into FrameSoC. This tool is used to highlight FrameSoC's ability to support the analysis flow. Ocelotl aims at showing a trace overview, addressing both time and space scalability issues. The trace is cut into time slices and represented as a sequence of representative elements. This sequence is constructed using an aggregation algorithm that identifies consecutive parts of the trace showing a similar behavior, and aggregates them.
A. Trace Overview Existing Approaches
Existing analysis tools use different approaches to provide a trace overview. Statistics representations, such as graphs or bar charts, may represent metrics over time. These kinds of representations are proposed by KPTrace [15] with its Outline View, and are convenient for distinguishing CPU activity, for instance. However, the notion of software and hardware hierarchy is totally missing, so the space dimension cannot be studied with this technique. On the contrary, other KPTrace statistics techniques [15], and those provided by the LTTng Eclipse Viewer [19], show the activity time proportion of each event producer. But here, the drawback is the lack of a time dimension representation (aggregation is done over the full trace), and the analyst cannot observe process behavior over time.
Another approach is based on time views, like the Gantt chart representation [20]. It is classically used to visualize application behavior over time, thanks to its ability to represent causality relations. However, because of the amount of information to visualize (due to the event granularity, the platform heterogeneity or the execution duration), an analyst may be forced to zoom out or to pan, thus losing either the execution context or the representation fidelity. A partial solution to this problem is proposed by Pajé [4] and the LTTng Eclipse Viewer [19]. Both tools highlight the events that are too small to be correctly represented using pixels. They use a specific shape/color to represent an aggregation of these groups of events. However, even if such a technique shows the possible information loss, it lacks associated semantics that would help the analyst understand the trace.
Another major issue in providing trace overviews is the hierarchy representation. The space axis in Gantt charts, for example, may be used for this purpose, but the user may scroll and lose the context. In the KPTrace Gantt chart [15], the hierarchy associated with a given core can be collapsed and represented as part of the root of the hierarchy. Unfortunately, it is not possible to distinguish which child an event belongs to, which may be confusing. In the Vampir [14] task profile view, event producers are clustered using a proximity metric, like the function duration. This representation, however, fails to show causality relations. The Triva [21] treemap view uses multiple axes for hierarchy representation and shows the evolution of the execution over time by using animations. This visualization highlights network bottlenecks and unbalanced workloads, but is not suited to identifying problems related to synchronization (deadlocks) or scheduling.
B. Build a Macroscopic Description of a Trace
The contribution we propose is a temporal view, where trace areas having a "close" behavior are aggregated. This aggregation is materialized by a rectangular area of a given color. The theoretical background comes from Lamarche-Perrin's work [22], dedicated to the macroscopic analysis of Multi-Agent Systems. From a microscopic view, the analyst gets a macroscopic representation that has its own semantics and enables analyzing the system from a different point of view. The way to generate this macroscopic description of the system is data aggregation. This process involves three concepts: information loss, complexity reduction and macroscopic semantics. Information loss is useful to determine element proximity. It is calculated from the Kullback-Leibler divergence [23] (Eq. 1), a metric that represents the logical information lost by using an aggregated description instead of the microscopic one. Entropy reduction (Eq. 2), calculated from the Shannon entropy [24], represents the logical information saved by encoding the aggregated description instead of the microscopic one.
$$\text{loss}(A) = \sum_{e \in A} v(e) \times \log_2 \left( \frac{v(e)}{v(A)} \times |A| \right) \quad (1)$$
$$\text{gain}(A) = (v(A) \log_2 v(A)) - \sum_{e \in A} (v(e) \log_2 v(e)) \quad (2)$$
Knowing these two metrics enables computing a data aggregation while controlling information loss and complexity reduction. The stronger the aggregation (i.e., the more elements are aggregated), the more the information loss grows and the more the complexity is reduced. On the contrary, a weak aggregation keeps the amount of information but also keeps the complexity high. What is interesting is to find a compromise between information loss and complexity reduction to build a meaningful macroscopic description. This compromise can be explicitly defined by using the parametrized Information Criterion (Eq. 3) to find the desired aggregation (the one having the highest pIC).
$$\text{pIC}(A) = p \times \text{gain}(A) - (1 - p) \times \text{loss}(A) \quad (3)$$
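A minimal sketch of these three measures in Python (our illustration of Eqs. 1-3, not the paper's implementation; `values` stands for the list of positive microscopic values v(e) of an aggregate A):
```python
import math

def loss(values):
    # Eq. 1: Kullback-Leibler divergence between the microscopic values
    # and the aggregate v(A) spread uniformly over its |A| elements.
    total, n = sum(values), len(values)
    if total == 0:
        return 0.0
    return sum(v * math.log2(v * n / total) for v in values if v > 0)

def gain(values):
    # Eq. 2: Shannon-entropy reduction obtained by encoding the aggregated
    # description instead of the microscopic one.
    total = sum(values)
    if total == 0:
        return 0.0
    return total * math.log2(total) - sum(v * math.log2(v) for v in values if v > 0)

def pic(values, p):
    # Eq. 3: parametrized Information Criterion, the trade-off to maximize.
    return p * gain(values) - (1 - p) * loss(values)
```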
To adapt these concepts to trace analysis, we need to define a microscopic description. We chose to perform a time slicing of the trace. We generate an array whose index is associated with the temporal position. Each element of the array is a vector whose elements correspond to the event producers of the trace. The vector values are computed using a particular metric, for instance the activity time ratio of the associated event producers. However, the analyst may be interested in metrics with richer semantics. For this case, we provide a cubic matrix to perform the time slicing. One dimension is related to the time slice number, the second one to the event producers, and the last one is associated with a chosen metric, for example the activity time ratio of each state type (e.g., read, write, idle).
The macroscopic description is then generated by applying the Best Cut Partition algorithm [22] on the array. The principle is to aggregate only the temporally contiguous parts, by taking the values of each dimension into account. The first step consists in computing the quality measures (information loss and complexity reduction) for each combination of consecutive cuts. As an example, assume that, at the beginning, there are 4 slices (0, 1, 2 and 3). The algorithm computes a quality measure between 0 and 1 (i.e. aggregate 01), between 1 and 2 (12), between 2 and 3 (23) but also between 01 and 2, between 0 and 12, etc.
As the original algorithm works with scalar arrays, we need to adapt it to vector arrays. The gain and loss metrics associated to an aggregation in n dimensions are respectively the sum of aggregation gains and losses in each dimension. Hence, the new formula, where quality(A) corresponds to gain(A) or loss(A):
$$\text{quality}(A) = \sum_{i \in n} \text{quality}(A[i]) \quad (4)$$
The principle is the same for matrix arrays:
$$\text{quality}(A) = \sum_{i \in n} \sum_{j \in m} \text{quality}(A[i][j]) \quad (5)$$
The second step requires providing the gain/loss parameter p to compute the parametrized Information Criterion, and then getting the corresponding aggregation. For p = 0, maximizing the pIC is equivalent to minimizing the loss: a null loss will result in no aggregation, except for strictly identical contiguous vectors. For p = 1, the output array will be fully aggregated, resulting in a total loss of information. When p is between these extrema, different aggregation configurations will emerge according to the input vector values. A list of relevant values of p is computed using a search by bisection, which finds successive parameters that give different configurations. The objective is then to find the aggregation parameter corresponding to a meaningful macroscopic description. An example of aggregation applied to random vector data is shown in Table II. Vectors that are aggregated for a given p are represented with the same number.
TABLE II. EXAMPLE OF AGGREGATION APPLIED TO A VECTOR ARRAY DEPENDING ON THE GAIN-LOSS PARAMETER P
<table>
<thead>
<tr>
<th>Gain-loss parameter</th>
<th>Corresponding parts (aggregated if same number)</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>0.022</td>
<td>4 aggregates</td>
</tr>
<tr>
<td>0.035</td>
<td>3 aggregates</td>
</tr>
<tr>
<td>0.078</td>
<td>2 aggregates</td>
</tr>
<tr>
<td>0.223</td>
<td>1 aggregate</td>
</tr>
</tbody>
</table>
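The scalar version of this partition search can be sketched as a quadratic dynamic program (our illustration, not the paper's C++ code; it reuses pic() from the sketch above and relies on the additivity of the quality measures over parts, cf. Eq. 4):
```python
def best_cut_partition(series, p):
    # best[j] is the maximal total pIC over partitions of series[:j]
    # into contiguous parts; last_cut[j] is the start of the last part.
    n = len(series)
    best = [float('-inf')] * (n + 1)
    best[0] = 0.0
    last_cut = [0] * (n + 1)
    for j in range(1, n + 1):
        for i in range(j):
            quality = best[i] + pic(series[i:j], p)
            if quality > best[j]:
                best[j], last_cut[j] = quality, i
    parts, j = [], n  # backtrack the chosen cuts
    while j > 0:
        parts.append((last_cut[j], j))
        j = last_cut[j]
    return list(reversed(parts))

# p close to 0 barely aggregates; p close to 1 collapses everything
# into a single part, as described above.
parts = best_cut_partition([5.0, 5.0, 5.0, 1.0, 1.0, 9.0], p=0.5)
```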
C. Interaction to Find the Best Aggregation
The methodology we propose with Ocelotl involves finding the aggregation whose semantics are meaningful with regard to the analyst's objectives. To do that, we propose several interaction mechanisms. First, the user selects the number of time slices. This number should be chosen according to the screen resolution, but also adapted to the complexity of the Best Cut Partition algorithm. In fact, the original algorithm complexity is $O(n^2)$. Taking into account the vector and matrix adaptation, it becomes $O((n \times m \times l)^2)$, where $m$ and $l$ are the newly added dimensions. Empirical measurements show that $n \times m \times l$ should not exceed 10000 to avoid memory saturation (6 GB required for 10000 elements). After determining the number of time slices and getting a list of relevant values for the parameter $p$, the user starts by progressively disaggregating the representation, from the most aggregated to the least aggregated one. We provide the two quality curves as a function of the parameter value, which the user can interact with. By clicking, he gets the corresponding parameter and thus the related aggregated representation. The aim is to determine the quantity of information a new representation brings, compared to the previous one. This feature is interesting for spotting the appearance of disruptions during the disaggregation process. Indeed, a disruption is often related to a jump in the complexity and information quantity curves. After spotting an interesting trace part, the user can zoom and generate a new aggregation, until the representation is precise enough to determine the exact area to focus on with another tool.
D. Implementation in FrameSoC
We implemented the Best Cut algorithm in C++ for performance and memory management reasons. Our vector and matrix array management is generic, as it has no associated semantics. The code is compiled as a shared library and is accessed through JNI. The Eclipse Java module integrated in FrameSoC is divided into two parts. The core part is in charge of performing queries to the database, using the dedicated FrameSoC interface, and also of acquiring the parameters provided by the user and the best cut algorithm output from the shared library. The user interface part provides interaction mechanisms to set or select the different parameters for the queries and the computation. The result is visualized in a frame representing the trace as a one-dimensional array. The parts are emphasized by colors, which are identical for aggregated parts.
V. EXAMPLE OF AN ANALYSIS FLOW
In this section, we present a use case (Table III) based on a basic open-source G-Streamer video application\(^2\), displaying an MPEG video. We introduce an anomaly by using the stress tool\(^3\) in order to perturb the video streaming. The trace is then imported into the FrameSoC database. The workstation used for the test has a 2.40 GHz x 8 CPU, a 256 GB SSD and 8 GB of DDR3 RAM.
The first objective is to validate Ocelotl's synthetic visualization by relating the trace representation to the application perturbation timestamps. Moreover, we compare the behavior of the complexity and information curves with a reference case that is not perturbed. The second aim is to find a way to reduce the trace to the areas involved in the behavior disruption. More precisely, we want to remove the event producers that are not active during this moment (space dimension reduction), and focus on the perturbation timestamps (time dimension reduction). The goal of this step is to minimize further computations by saving the analysis result into the trace database. This result can then be reused by the overview tool, decreasing the initial processing time, or by another analysis tool, like a Gantt chart.
A. Overview of the Trace with Ocelotl
We start our analysis with an overview in Ocelotl. By applying the method evoked above (subsection IV-C), we progressively disaggregate the trace. We first discover a representation showing different phases: a short initialization phase at the beginning (0–10 s), and also a termination phase at the end (550–610 s), corresponding to the period where the application is still active but the video is over (Figure 6). By continuing the process, we reach an aggregation step that corresponds to a complexity and information jump, as shown in Figure 7. This curve behavior means that the representation semantics changes: we can indeed distinguish several big aggregates (10–550 s). With more disaggregation (Figure 8), we highlight a completely disaggregated area (around 300 s), while other trace parts are still represented by big aggregates. This area actually matches the perturbation timestamps, which validates our claim that Ocelotl's visualization represents problematic behavior. We also deduce that during the perturbation the trace behavior becomes unstable, which leads to a heterogeneous area.
\(^2\)https://code.google.com/p/gst-player/
\(^3\)http://weather.ou.edu/~apw/projects/stress/
TABLE III. G-STREAMER APPLICATION EXECUTION CONTEXTS
<table>
<thead>
<tr>
<th>Use Case</th>
<th>Perturbation: stress settings</th>
<th>Streaming behavior</th>
<th>Tracing duration</th>
<th>Trace size</th>
<th>Event producers number</th>
<th>Events number</th>
</tr>
</thead>
<tbody>
<tr>
<td>Reference</td>
<td>Not activated</td>
<td>Normal</td>
<td>10 min</td>
<td>8.7 GB</td>
<td>1507</td>
<td>30000000</td>
</tr>
<tr>
<td>Perturbed</td>
<td>After 5 min, 8 CPU workers, 8 Memory workers, during 12 s</td>
<td>Freeze at 5 min, during 12 s</td>
<td>10 min</td>
<td>8.7 GB</td>
<td>1535</td>
<td>2941591</td>
</tr>
</tbody>
</table>
Fig. 7. After passing complexity and information jump, we get several aggregates. Our representation becomes more precise.
Fig. 8. More disaggregation shows a heterogeneous area around 300 s, which matches our perturbation timestamps.
Fig. 9. Zooming on the perturbation (280–320 s). The perturbed area is the heterogeneous part composed of multiple aggregates.
B. Comparison with the Reference Case
We compare the perturbed case with the reference case using the same methodology. Here, we get the same initialization and termination timestamps. We also obtain a complexity and information jump. However, we go directly from a coarse, fully aggregated trace to a heterogeneous representation, without intermediary steps where the trace is progressively cut. This phenomenon is related to the application's stable behavior: the complexity suddenly grows but the new information brought by new aggregates is weak, so the threshold to disaggregate becomes very sensitive. By using both the overview and the quality measure curves, we are thus able to distinguish a perturbed behavior from a more stable execution.
C. Filtering the Space Dimension
The second analysis step is to reduce the event and event producer sets (i.e., the space dimension), to improve further analysis computation. The Ocelotl view shows us that there are initialization and termination phases. FrameSoC provides statistics views such as a pie chart, which gives the event distribution according to their event producers. In our perturbed case, the pie chart coarsely shows that only 20% of the event producers generate 80% of the trace events. We hypothesize that the 80% less active event producers are only active during the startup and termination steps, and can be removed without changing the aggregated representation behavior. We designed a filter that returns as an analysis result the set of event producers that are active during a given period. The objective is to select the event producers active during the behavior disruption, and to remove those active only during the initialization and termination phases. We thus hope to keep the 20% most active event producers. We filter event producers between 20 and 560 s. The result set now contains only 18% of the event producers, which confirms our hypothesis. When viewing the application behavior with Ocelotl again, the query time does not decrease, because it mainly depends on the number of retrieved events (which is almost the same here). However, our microscopic description size is reduced (the event producer dimension is now 18% of the full event producer set) and this enables working with 5 times more parts than before (the memory complexity of the aggregation is $O((n \cdot m)^2)$), which leads to more precision. Finally, the tool produces the same aggregation behavior as for the full trace.
D. Zooming and Filtering Time Dimension
We now focus on the perturbation part by zooming with Ocelotl. The aim is now time dimension reduction, i.e., determining the perturbation timestamps with the best possible precision. After several zoom and aggregation steps, we finally chose 280 and 320 s as bounds (Figure 9). Then, we use a second filter, which saves the set of events present during a time period. We now get only 93358 events that are actually relevant to understanding the trace behavior, i.e., almost 300 times fewer than at the beginning of the analysis.
E. More Details with Gantt Chart
The final analysis step is a detailed representation of the selected trace area. We visualize the application behavior between 298.4 and 299.6 s with a Gantt chart, using the filtering results (Figure 10). Because the event amount and the event producer number are now reduced, the Gantt chart does not suffer from time and space scalability issues as much as before.
F. Analysis Conclusion
Our analysis flow provides an overview of the trace, and then focuses on a precise trace area, with the help of the statistics views, filtering tools and result management provided by FrameSoC. The external perturbation we introduced is precisely detectable. The next step will be to introduce a perturbation directly inside the program, to go further in the analysis and, for instance, relate the trace behavior to the source code.
VI. CONCLUSION
FrameSoC manages large traces by storing them in a relational database. Traces are represented according to a generic data-model. The database choice enables filtering and searching in various dimensions, while keeping reasonable read and write performance. Experiments with huge and gigantic traces support this claim. Access to the data being crucial for analysis tools, our future research will consider the optimization of requests for specific use cases, and request partitioning. The use of alternative storage solutions, such as temporal or non-relational databases, is also a perspective. FrameSoC puts a strong emphasis on analysis tool management and interoperability. Our shared data-model is a basic building block for the creation of analysis flows, in which several tools can take part, possibly reusing other tools' results. Explicit support is given to tool pluggability: this has been validated by the various tools we have already added to the framework. Regarding the evolution of our framework, we expect to enlarge the family of tools working with FrameSoC. Another interesting perspective is to provide the final user with a convenient interface to define analysis chains.
The visualization module Ocelotl is used as an entry point to the analysis, thanks to its ability to coarsely describe the whole trace behavior over time. With the help of user interaction and a filtering tool, we can reduce the space and time dimension elements to focus on those related to a particular behavior, like a perturbation. The use of these different tools, combined with the statistics views and result management provided by FrameSoC, corresponds to a coherent and complete analysis flow. Our current work is about the extension of the aggregation technique, to also manage the space representation. Indeed, the space dimension is considered when computing the time aggregation, but it is not represented. Another interesting point would be the improvement of result management: to avoid useless and time-expensive recomputation, like retrieving the events and generating a microscopic description each time we open a trace, we could save these results in the database.
REFERENCES
---
User Interface Façades: Towards Fully Adaptable User Interfaces
Wolfgang Stuerzlinger, Olivier Chapuis, Dusty Phillips, Nicolas Roussel
To cite this version:
HAL Id: inria-00533595
https://inria.hal.science/inria-00533595
Submitted on 8 Nov 2010
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
User Interface Façades: Towards Fully Adaptable User Interfaces
Wolfgang Stuerzlinger†, Olivier Chapuis*, Dusty Phillips† & Nicolas Roussel*
†Interactive Systems Research Group
Comp. Science & Engineering, York University
Toronto, Canada
wolfgang | dustyp@cse.yorku.ca
*LRI (Univ. Paris-Sud - CNRS) & INRIA Futurs
Bâtiment 490, Université Paris-Sud
91405 Orsay Cedex, France
chapuis | roussel@lri.fr
ABSTRACT
User interfaces are becoming more and more complex. Adaptable and adaptive interfaces have been proposed to address this issue, and previous studies have shown that users prefer interfaces they can adapt to self-adjusting ones. However, most existing systems provide users with little support for adapting their interfaces. Interface customization techniques are still very primitive and usually restricted to particular applications. In this paper, we present User Interface Façades, a system that provides users with simple ways to adapt, re-configure, and re-combine existing graphical interfaces, through the use of direct manipulation techniques. The paper describes the user's view of the system, provides some technical details, and presents several examples to illustrate its potential.
ACM Classification: H.5.2 [Information interfaces and presentation]: User interfaces - Graphical user interfaces.
Keywords: Adaptable user interfaces.
INTRODUCTION
User interfaces are becoming more and more complex as the underlying applications add more and more features. Although most people use only a small subset of the functionalities of a given program at any given time [19], most software makes all commands available all the time, which significantly increases the amount of screen space dedicated to interface components such as menus, toolbars and palettes. This quickly becomes a problem, as users often want to maximize the space available for the artifacts they are working on (e.g. an image or a text document). One reason for this problem might be that most user interfaces are still designed by software programmers today, a fact that is only slowly changing. However, even trained interface designers cannot always foresee how a software package is going to be used in practice, especially if the package is used by a large variety of different users. This makes creating flexible user interfaces a major challenge.
Consider GIMP as an example. The latest version of this image manipulation program has 22 persistent dialogs for managing brushes, colors, fonts, etc. Although dialogs can be docked together in an arbitrary number of windows, this only increases the window management overhead and the average distance from the drawing area to the drawing tools and functions. Users adapt with various strategies, such as putting all dialogs on a secondary monitor, or overlapping the drawing area with dialogs. On the other hand, some applications use an all-in-one window logic, which provides less flexibility in terms of user interface layout.
One way of dealing with the growing number of application features and the desire to optimize screen space is to allow users or applications to customize the user interface. These two concepts have been studied for some time by the community (e.g. [17, 18]). Today, they are most often referred to as (user-)adaptable and adaptive (or self-adapting) interfaces [19]. Adaptive interfaces change their appearance based on some algorithm, such as a least-recently used criterion. One recent example is the menus of the Microsoft Office suite. Adaptable interfaces, on the other hand, can be configured by the user to suit his or her own criteria. Many applications, for example, make it possible to interactively customize their toolbars with simple drag-and-drop operations.
Adaptive interfaces can exhibit unpleasant side effects, such as surprising the user by moving or removing menu entries. Previous studies have also shown a desire for the user to be able to control and override the automatic system whenever needed [11]. Adaptable interfaces suffer from the problem that new ‘secondary’ interfaces and interaction techniques must be added to support the customization of the ‘primary’ interface. A comparison of static, adaptive, and adaptable menus showed that users could optimize their performance if they knew about the possibility of adapting and were able to adapt their menus with a simple interface [8]. Another interesting finding is that the adaptable user interface did not perform worse than the other two alternatives. Furthermore, participants greatly preferred the adaptable interface to the two other alternatives, a fact that we see as strong motivation for additional research in this area.
While the idea of adding adaptation functionality to user interface toolkits seems attractive at first glance, it has the drawback of making the already complex APIs of these toolkits even more complex, requiring yet more code to be written by application programmers. This would not speed the adoption of the fundamental paradigm of adaptable interfaces. Moreover, modifying the toolkits would leave it to programmers or interface designers to decide what can be configured and how. Yet again, these professionals cannot necessarily foresee all potential ways of adapting an application. Phrased differently, we believe that users should be in control of the adaptation process, not the original software authors.
In this paper, we present User Interface Façades, a system designed to address this issue. The rest of the paper is organized as follows. In the next section, we present an overview of previous work and motivate our research. After presenting the main ideas of User Interface Façades, we discuss how we implemented them. Then we present several examples to illustrate the concepts, followed by the conclusion.
**MOTIVATION**
Skins and themes are two of the simplest forms of user interface customization. The notion of a skin comes from video games such as Quake, which allow players to alter the appearance of their character, and has been adopted by many media players. Themes extend this notion by sharing a common visual style among different applications, as specified by the user at runtime. A skin, or a theme, can simply consist of a set of colors or textures used by existing drawing code. It can also partially or completely replace that drawing code, possibly adding complex output modifications. In addition to the visual style of interface elements, skins and themes can also specify the layout and, to a lesser degree, the behavior of these elements. Recent work has extended this approach to bridge the gap between appearance and semantic meaning [9, 6]. However, although these approaches allow visual designers to customize interfaces using off-the-shelf drawing tools such as Adobe Photoshop or Illustrator, they remain out of reach for end-users, who can only choose between predefined theme options.
One of the biggest obstacles for adaptable interfaces is the fairly substantial programming effort required to add this capability to a software package. Most user interface toolkits offer no support for implementing adaptable interfaces. This factor has certainly hindered the adoption of the idea of adaptable interfaces. As a notable exception, Apple’s Cocoa toolkit provides developers with a toolbar widget that users can customize at runtime using drag-and-drop operations. However, the customization interface is far from optimal, as it does not allow for undoing changes or reverting to previous versions, and employs a fixed window, which is inconvenient in many situations. Microsoft Office applications also allow users to customize their various toolbars and menus. But again, the customization interface has a number of serious flaws (Figure 1).
Bentley and Dourish [8] introduced an interesting distinction between surface customization, which allows users to choose between a predefined set of options, and deep customization, which allows them to customize deeper aspects of a system, such as integrating an external translation program with a word processor. They point out two problems that our above examples also illustrate. First, the level of customization provided by most systems lies above the functionality of the application, rather than within it. Second, these systems often require the learning of new languages to describe new behaviors.
Fujima et al. recently proposed the C3W system (Clip, Connect and Clone for the Web) to generate new HTML documents by cloning individual HTML elements from other documents and allowing for computation on these elements using a spreadsheet model [10]. While this approach supports deep customization, C3W is limited to Web technologies and does not allow the user to change or replace widgets, nor to add new widgets to existing documents. Hutchings and Stasko proposed the more generic notion of relevant window regions and suggested adding the ability to create copies of these regions that could be manipulated as independent windows [13]. Tan et al. implemented this idea in their WinCuts system [22]. However, this system is unable to merge several regions into a new window, which is clearly a limiting factor. Its implementation also has several problems that make it hard to use on an everyday basis (e.g., it relies on periodic polling of window content, and popup menus and dialog boxes appear on the source window). Berry et al. introduced a system that can selectively hide content, based on the users’ privileges, via various forms of blurring [4]. Internally, this system works similarly to WinCuts.
Hutchings and Stasko also suggested allowing users to remove irrelevant parts of windows [14]. The same idea was mentioned in [21] and partially implemented (windows could be cropped to a set of pre-defined shapes). Finally, Hutchings and Stasko proposed to replicate dialog boxes on multiple monitor configurations until the user interacts with one of the copies [15]. In this same paper, they concluded that window operations like these should be implemented within the window manager rather than using a separate application.
Based on the above discussion, we formulated the following criteria for adaptable user interfaces:
- **Fast, simple, just-in-time customization:** Users should be able to adapt interfaces without advance planning, whenever needed, and should be able to do this in a fast and simple way, e.g., with direct manipulation techniques.
- **Not only global customizations, but also local ones:** Most adaptable interfaces only support global changes, which forces users to undo them at some point. Global/local can be interpreted in different ways (e.g., persistent/temporary, all documents/this document). Users should be able to specify the scope of interface customizations. It should be possible, for example, to customize the toolbars of an application for a specific session only, or even for a specific document.
- **Deep customization:** Users should not be restricted to a set of pre-defined options but should be able to define new ones. Again, ‘set of options’ can be interpreted in different ways, e.g., a tool set or a set of specific locations where tools can be placed. Users should be able to select anything on the screen, change the way it operates (not only visual appearance), cut it out, duplicate it, or replace it with something else. The latter should be done in a manner that removes the ‘old’ user interface, or at least makes it invisible.
- **Cross-application customization:** Interface customizations should make it possible to combine or link together different applications.
### USER INTERFACE FAÇADES
This work focuses on applications with a graphical user interface, as opposed to command-line systems. We are more specifically interested in applications where the interaction focus is a single or, at most, a few documents. In such applications a large work area dominates the main window, with user interface elements clustered around it. Examples include drawing packages, text processors, spreadsheets, etc.
A user interface façade is a user-specified set of graphical interfaces and interaction techniques that can be used to customize the interaction with existing, unmodified applications. This section provides a general overview of how users interact with such façades. Implementation details and more specific usage scenarios follow in the next two sections.
#### Copying and pasting screen regions
A basic functionality of the Façades system is the ability to copy interface components from one window to another while maintaining a one-to-one functional relationship between the copy and the original. Using the mouse and a specific modifier key the user can select one or more rectangular source regions. A drag operation on these regions duplicates them. Dropping the duplicates on the desktop puts them in a new façade window. Façade window creation from source regions is also accessible through a menu that pops up when one clicks on one of the regions using the right mouse button. A new command also makes it possible to clone a complete window through its standard window menu.
Dropping duplicated interface components onto the side of a façade window automatically expands the façade to make room for the new duplicate at that side. Dropping components into free space inside a façade window simply adds them in that space. Duplicates can also be dropped on any existing window, which overlays the dropped component over the existing content. Figure 2 shows a user incrementally constructing a façade window by selecting widgets from three dialogs of the GIMP application. The scenario here is that the user wants to optimize the interface by packaging frequently used tools in an ad-hoc way, rather than using the GIMP developers’ pre-packaged toolsets. The upper row of images shows four selected regions in two GIMP dialogs (displayed as semi-transparent rectangles) and the resulting façade window, which contains the duplicated regions. The lower row illustrates the addition of a fifth duplicated component to this window.

The same source region can be used in several façades (i.e. it can be duplicated several times), and a façade can contain an arbitrary number of duplicates. After a façade has been created, the user typically hides or iconifies the source window(s) and the system transparently passes mouse movements and clicks over the façade to the appropriate source region. Conversely, source region updates are replicated in their corresponding duplicates. Overlay windows such as popup menus are correctly handled when triggered from a duplicate. The system also transparently manages the focus and stacking order according to standard window manager rules. In effect, the behavior of a duplicate is indistinguishable from the original source region to the user.
Parts of the above ideas have been previously presented by Tan et al. [22] (e.g. the ability to duplicate multiple screen regions into individual windows) and Hutchings and Stasko [14, 15] (e.g. the ability to duplicate windows). However, the ability to create new windows that seamlessly combine multiple screen regions and the ability to paste regions over arbitrary windows are unique to our work.
**Cutting screen regions**
In addition to supporting the creation of façade windows, the system also allows users to create holes in windows, via a context-sensitive menu that becomes active after a region on the screen has been selected. This can be used to remove uninteresting parts or to reveal other windows beneath. As an example, consider revealing a small utility, such as a calculator or calendar, inside an unused region of a primary application (Figure 3). As the keyboard focus follows the mouse position in Façades, the user can then simply interact via the keyboard with the partially covered calculator ‘through’ the hole. This is especially interesting if the primary application is run in full-screen mode, which is something that traditional window systems do not support. Holes created in a window with the Façades system can be deleted via a command in the window menu or with a keyboard shortcut.
**Using external components to interact with applications**
One idea that appears rarely in the literature on adaptable user interfaces is that the user can not only adapt the visual appearance of the interface, but also its interactive behavior. Façades allows the user to do this without any change to the code of the underlying application. One possible modification is to replace a component of a GUI with another GUI component, typically created by a third party. For example, with Façades the user can replace a dropdown list widget containing all countries of the world with a map widget, or alternatively with some radio buttons for the small set of countries that the user needs frequently in his or her work. Another modification allows the user to change the interaction with standard components. For example, the user can integrate scrolling and zooming by remapping how mouse movements on a standard scroll bar are interpreted. These and other examples will be discussed in more detail later in the paper.
**Managing Façades**
To enable the quick recall of a façade, the user can give it a name and save it through a specific command in the window menu. When saving, the user can set options in a dialog: automatic recall, automatic hiding of source windows at recall time and the use of the window title in the saved description of the façade.
At a later time, and if all relevant windows are open, the system can then recreate a façade automatically, or on user demand via the window menu. For this, Façades monitors all window-related events and identifies matching configurations via window geometry, class, and resource names. If applicable, replacement widgets are automatically instantiated. A sub-menu of the normal desktop menu also contains a list of all saved façades for all currently active window configurations.
**Contributions**
In summary, we present the following new techniques for adaptable user interfaces:
- The ability to seamlessly merge duplicated screen regions into new windows, enabling the creation of new user interfaces for existing applications.
- The ability to create holes in windows and to seamlessly overlay duplicated content over existing windows.
- The ability to seamlessly replace widgets with other (potentially customized) widgets.
- The ability to seamlessly change the interaction with widgets, including the composition of widget behaviors, as well as the creation of toolglasses and other advanced user interface techniques.
- Implementing all of the above in a way that does not require any coding, with a simple-to-use interface based on drag-and-drop.
The following implementation section provides the technical details that make the system efficient and reliable and discusses related issues such as resizing.
IMPLEMENTATION DETAILS
In this section we describe how we implemented Façades and how it is integrated into a windowing system. Conceptually, Façades acts as a transparent layer over the window system that redirects input events and duplicates window regions as specified by the contents of each façade window. For seamless duplication it uses the off-screen buffer capabilities of Metisse [5], as well as its input redirection facilities. Façades determines widget positions through the accessibility API of modern GUI toolkits. Finally, widget replacement and interaction modification is achieved via the instantiation of simple replacement applications that are again based on accessibility API calls. Figure 4 illustrates how the various components of Façades work together. The left-hand part shows a façade that composites two separate windows, whereas the façade for ‘App 3’ utilizes widget replacement. In the following subsections we first discuss how input and output are redirected, and then describe how we access and replace widgets.
**Basic input/output management using Metisse**
Façades is implemented based on Metisse [5]. The Metisse architecture uses a compositing approach, making a clear distinction between window rendering and the interactive compositing process. The Metisse server, an enhanced X Window server, renders applications off-screen. In Façades, window images are composited by a separate application, FvwmCompositor, which is based on the window manager FVWM. Mouse and keyboard events received by FvwmCompositor are usually sent to the appropriate applications through the Metisse server. In some cases, however, events are handled directly by FvwmCompositor itself, e.g. to implement façade region selection and window management commands such as ‘Alt-F4’. Specific façade commands in FvwmCompositor are accessible from FVWM to enable the creation of façade windows, holes, etc. Conversely, FvwmCompositor uses FVWM commands to handle popup menus or to indicate the real mouse focus when the pointer is over a duplicate.
Each façade window is managed by an instance of a simple program, façade-holder, that keeps track of the duplicate regions it contains and creates a new X window to hold them (duplicates are then displayed by FwvmCompositor in that window). This program is invoked each time one or more duplicates are dragged from a source window and dropped onto the desktop. Each duplicate is described in façade-holder by a tuple of the following form: \((XID, src_x, src_y, src_width, src_height, dst_x, dst_y)\) where XID identifies the source window, \((src_x, src_y, src_width, src_height)\) specifies the original region geometry relative to the source window, and \((dst_x, dst_y)\) specifies its position in the façade window.
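To make this concrete, the following sketch shows how such duplicate records could be stored and used to translate a click in a façade window back to source-window coordinates. The names (`Duplicate`, `route_click`) are ours, for illustration only; they do not appear in the actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Duplicate:
    """One duplicated region inside a façade window (illustrative names)."""
    xid: int        # XID of the source window
    src_x: int      # region geometry, relative to the source window
    src_y: int
    src_width: int
    src_height: int
    dst_x: int      # position of the duplicate inside the façade window
    dst_y: int

    def contains(self, x, y):
        return (self.dst_x <= x < self.dst_x + self.src_width and
                self.dst_y <= y < self.dst_y + self.src_height)

def route_click(duplicates, x, y):
    """Map a click at façade coordinates (x, y) to (source XID, source x, y)."""
    for d in duplicates:
        if d.contains(x, y):
            return d.xid, x - d.dst_x + d.src_x, y - d.dst_y + d.src_y
    return None  # click landed on empty façade space
```

This is, in essence, the translation FvwmCompositor must perform when forwarding mouse events over a duplicate to its source region.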
**Façade-holders** publish these tuples to other programs, including FwvmCompositor, through an X atom.¹ When a new duplicate is pasted into an existing façade window, FwvmCompositor sends an X client message with the source information for the duplicate to the façade-holder. Upon receiving this message, the façade-holder computes the local geometry of all its elements and updates its atom accordingly. FwvmCompositor catches this new layout and redraws the façade window.
FvwmCompositor maintains a list of duplicated regions for every window and handles updates for every content change. It also handles the necessary focus changes as the mouse moves from one duplicated region to another. Mouse and keyboard events for a façade window are normally sent to the appropriate source window. Similarly, clicking on a duplicate region raises the façade window, not the corresponding source window. FVWM handles these situations by distinguishing two types of focus: one for window management tasks, and one for interacting with window content.
Transient overlay windows, such as popup menus or tooltips, are rendered in the right place. When such a window is mapped, FvwmCompositor computes its ‘parent’ window, i.e. the source window that is most probably responsible for the appearance of the new window. If the mouse pointer is over an element of the parent, FvwmCompositor positions the overlay based on the parent location and the element position and geometry. If the parent window is invisible, the overlay window is placed close to the pointer. Transient dialogs are placed so that their center is aligned with that of their façade window.
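A rough sketch of this placement policy follows; this is our own simplification with invented attribute names, and the real logic would also have to account for screen edges and stacking order.

```python
def place_overlay(overlay, parent, pointer, facade):
    """Choose a position for a transient overlay window (simplified sketch)."""
    if overlay.is_dialog:
        # Transient dialogs: center on the façade window.
        return (facade.x + (facade.width - overlay.width) // 2,
                facade.y + (facade.height - overlay.height) // 2)
    element = parent.element_under(pointer) if parent.visible else None
    if element is not None:
        # Position relative to the parent window and the element geometry.
        return (parent.x + element.x, parent.y + element.y + element.height)
    # Parent invisible: fall back to a position close to the pointer.
    return (pointer.x, pointer.y)
```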
Iconification of source windows also poses specific problems. The usual way of iconifying X windows is to ‘un-map’ them in the server and replace them with a new graphical object. But unmapped windows do not get redrawn and cannot receive events. Consequently, when a source window is iconified in Façades, it is not unmapped; instead, it is treated as iconified by FVWM and simply not rendered by FvwmCompositor. When a source window is closed, FvwmCompositor notifies the corresponding façades by sending them an X client message that specifies the region(s) to be removed. When its last element is removed, a façade either remains empty on-screen for later use, or is automatically destroyed in the case of cloned windows. Façade and cloned windows are not resizeable by the user. Cloned windows are automatically resized to match the geometry of their source window. Duplicated regions are kept visible in façades only if they are still visible in their source window.
¹Atoms are an X Window-specific publish/subscribe mechanism.
For resizing there are two issues to consider. Any GUI application may resize its window or widgets at any time, and the user may also want to resize the façade window itself. While the Façades system can detect the first kind of resize event via the accessibility API, any automatic change to a façade might break the layout of the façade as constructed by the user. This is clearly undesirable. Hence, we currently warn the user in this case and require that he/she fix the problem manually. Second, a user can actively resize a façade window. While we could search for widgets that are resizable and try to adapt the layout accordingly, this would require an easy-to-use interface for specifying widget layout. As current layout methods typically have (too) many options, this is a research topic of its own. Hence, we currently choose to disallow resizing of façades.
All menus to manage façades are handled by FVWM. Some are statically defined in configuration files. Others are dynamically created by FvwmCompositor (e.g. the list of previously saved façades for a window). Saving a façade generates a human-readable description of its elements on disk. FvwmCompositor uses the geometry, class, resource names, and optionally the title of the source windows of a façade to create a heuristically-unique identifier. Widget-related information obtained from accessibility APIs can also be used to make this identifier more robust. FvwmCompositor loads all saved façade descriptions at startup; whenever windows are created or resized, it checks for matching façade descriptions and recreates the corresponding façades.
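As an illustration, such a heuristically-unique identifier could be built along the following lines; the exact fields and encoding used by FvwmCompositor may differ, and the function name is ours.

```python
import hashlib

def facade_identifier(windows, use_titles=False):
    """Derive a heuristically-unique identifier for a façade from the
    geometry, class and resource names of its source windows (sketch)."""
    parts = []
    for w in sorted(windows, key=lambda w: (w.wm_class, w.resource)):
        part = f"{w.wm_class}/{w.resource}/{w.width}x{w.height}"
        if use_titles:           # optional, as chosen at save time
            part += f"/{w.title}"
        parts.append(part)
    return hashlib.sha1("|".join(parts).encode("utf-8")).hexdigest()
```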
**Taking advantage of accessibility services**
Widget-related information is very useful for creating façades. Knowing the position, size, type, and current state of each widget as well as having access to its actions offers a number of interesting possibilities. As an example, knowing the boundaries for each widget can facilitate the selection of widgets via snapping. There are several ways to obtain widget-related information and control widgets from the outside. In the current implementation of Façades, we use the accessibility APIs supported by most modern GUI toolkits.
Apple defined Universal Access APIs for its Carbon and Cocoa toolkits, Microsoft the Microsoft Active Accessibility & System.Windows.Automation frameworks, and X Window the Assistive Technology Service Provider Interface (AT-SPI), a toolkit-neutral way of providing accessibility services supported by GTK+, Java/Swing, the Mozilla suite, StarOffice/OpenOffice.org and Qt. All of these APIs can query the current position, size, type, and state of all widgets of an application. Furthermore, all possible widget actions can be activated via these APIs (e.g. one can cause selection events, trigger buttons, etc.). The following pseudo-code segment illustrates this for the example shown in Figure 9 via the AT-SPI accessibility API.
```python
# Event handler for a click at (x, y) on the map widget.
# Inputs: x, y, app_name (application name),
#         comp_name (widget name), comp_type (widget type).

# Map the click position to the corresponding combobox list index.
index = get_province_for_point(x, y)

# Recursively find the accessible component in the widget tree.
application = desktop.find_app(app_name)
comp = application.find_component(comp_name, comp_type)

# Query the AT-SPI Selection interface and fire the selection event.
selector = comp.queryInterface("Accessibility/Selection")
selector.selectChild(index)
```
**Other possible implementations**

We have implemented the Façades system using Metisse and the accessibility API. The Metisse compositing architecture permits dynamic rendering of interface elements and handles input redirection. Furthermore, FvwmCompositor interprets window management activities directly, while it passes interaction with façade content to the original applications.

It should be possible to implement Façades on other systems, since accessibility APIs are now widely available. Moreover, the compositing approach is available under Mac OS X, Windows Vista and X Windows. However, neither OS X nor Vista have APIs flexible enough to freely redirect rendering output. For this reason, WinCuts [22] called the PrintWindow function every second to update cut contents. In X Windows the full rendering API is accessible and documented. Even though this API is very complex (compared to Metisse), it seems possible to implement the rendering redirection part of Façades with it.

For input redirection, Mac OS X and Windows have no public API. As a workaround, WinCuts [22] draws a cursor over the interface elements, and the source window is kept in front of the true cursor. X Window has the X Event Interception Extension (XEvIE), but this extension is not powerful enough: for example, it is not possible to send pointer events to a window that is covered by another. A future X extension [20] may provide enough control over input redirection to implement something similar to Façades.

There are several other alternatives to extract widget-related information and to activate widgets. For non-accessible GUI toolkits, one can extract information about widgets by modifying the dynamically linked toolkit library and adding functionality that returns (part of) the current widget hierarchy state on demand. Interaction with non-accessible widgets can be simulated via appropriate mouse and keyboard input events on the appropriate areas of a widget. E.g., to enter a particular string into a text field, the system selects the field via a simulated mouse click, selects all old text and erases it via appropriate key sequences, and then simulates entry of the new string. Most other widgets can be controlled with similar strategies. However, this is only a temporary workaround, as most GUI toolkits have already been or are being retrofitted with an accessibility API, due to the strong need to add accessibility to all applications.

Alternatively, we can implement Façades via an intermediate layer in a window system. Such intermediate layers already exist today, e.g. in the form of user interface description languages (UIDLs). These are used to describe the user interface and how it activates the functionality of the application. XUL and XAML are two recent examples. If this intermediate layer is accessible from the outside, it is possible to implement Façades as a ‘UIDL filter’, which selectively replaces or duplicates widgets in the UIDL stream and adapts the calls to the application as appropriate.
**DETAILED EXAMPLES / USAGE SCENARIOS**
In the following, we present several examples of useful façades and explain how they were created.
**Widget duplication**
One application of Façades is to change the UI of a software package designed for right-handed people into a left-handed version, e.g. by moving the scrollbar from the right to the left-hand side. Another interesting idea is to duplicate a toolbar on two sides of the work area (or even on all four sides), which has the potential to significantly decrease average tool selection time. Figure 5 shows a file browser - Konqueror - with an additional toolbar at the bottom.

Façades also supports the full duplication of whole windows, similar to [14, 15]. This functionality is activated via a titlebar menu. Duplication can be extremely useful in a multiple monitor setting, as it allows the user e.g. to duplicate the task bar or a panel with launch buttons on every monitor (with changes visible everywhere simultaneously). Another application of this idea is best illustrated with an example: Alice has two monitors on her desk, a laptop monitor and an external monitor, which can be turned in any direction. Paul arrives in Alice’s office and sits down on the other side of the desk. Alice turns the external monitor so that it faces Paul and duplicates her web browser onto the external monitor. Alice can then freely show her work while Paul is able to observe the demonstration.
Another example is the duplication of the GIMP toolbox window: toolboxes can be duplicated for each drawing window. We can even place two toolbox windows, one on each side of a drawing window, to accelerate access to tools. Figure 6 illustrates such a layout.

Another application of Façades is to duplicate useful notification areas into the area of an arbitrary window. As an example, consider duplicating the taskbar clock into the title bar or another unused area of a window (Figure 7). This is clearly interesting for full-screen applications and also for multi-monitor setups.

Widget duplication can also be used for the control of applications on secondary display devices. The main issue here is the reduction of mouse travel across large distances. We describe a two-monitor scenario that significantly extends an example from a technical report of Hutchings and Stasko [14]. Paul is a web developer and he edits a web page on his main monitor. On his secondary monitor he runs two different web browsers to test his work in real time. For this Paul first creates a façade consisting of the two reload buttons and the two vertical scrollbars of the browsers. Then he places this façade on his main monitor just to the right of the web editor. This allows Paul to quickly test his design by interacting with the façade and has the advantage that his mouse never needs to leave the main monitor.
We already presented an example of the power of combining elements above. Another example is the creation of a notification façade from different applications. Most e-mail programs display the ‘inbox’ as a list of one-line items containing information on the sender, subject, etc. Selecting (part of) this list and the two last lines of an instant messaging (IM) application allows the user to compose a novel ‘contact’ notifier façade. The advantage of such a notification application compared to the usual small notifiers in the taskbar is that it simultaneously gives information on new e-mails and new IM messages, including the sender name. Users can then use this information to decide whether to switch from their current work to answer a message. Moreover, the user can even answer an e-mail message without switching to the full mail reader window, as he/she can right-click on an e-mail’s header line. One disadvantage of such a notification window is that it uses more screen space than the rather minimal taskbar notifiers. However, Metisse has the ability to scale windows. Hence, such notification façades can also be scaled (e.g. reduced by 30%, which still maintains readability).
**Widget replacement**
Façades also targets the replacement of standard GUI widgets with other widgets. Consider a scenario where a user frequently uses a few options in a long list widget and only rarely uses other entries. A classical example is a call-center where data about each incident is recorded, and where the client base consists of many users in a small set of countries, but also a few others from around the world. Instead of having to choose every time from the list of all countries on the planet in the incident-entry form, it is much more efficient to have quick access to the subset of frequently used countries and provide a separate way to access the full list. As the call-center software developer cannot foresee which countries will be used frequently and how large that set will be, it is advantageous to give the user control of this GUI aspect.
Figure 8 depicts an address entry form application for specifying addresses in Canada, the dialog that lets the user specify the provinces that appear in the façade, and the façade itself.

In Façades, a user can access this functionality by first selecting a widget, then accessing a context-sensitive menu and selecting the appropriate entry. This will show a façade creation dialog with appropriate options. Once the user confirms their choice, Façades creates the custom replacement widget, which can be placed into a façade. The following pseudo-code illustrates the main parts of a generic combobox replacement widget. Code related to the dialogs for façade construction and ‘Other...’ functionality is not shown for brevity.
```python
def combo2radio(app_name, combo_name):
    # Locate the combobox widget through the accessibility API.
    app = desktop.find_app(app_name)
    combo = app.find_component(combo_name, "combo box")
    # Show a dialog to the user and return the selected entries on close.
    selection = SelectFromDialog(combo.items)
    # Create a new window with one radio button per selected entry; each
    # button forwards its original combobox index to selectCallback.
    radiobox = Window()
    for item in selection:
        radio = RadioButton(item)
        radio.on_select = lambda i=combo.items.index(item): selectCallback(combo, i)
        radiobox.add(radio)
    radiobox.display()

def selectCallback(widget, id):
    # Fire the selection event on the original widget via AT-SPI.
    selector = widget.queryInterface("Accessibility/Selection")
    selector.selectChild(id)
```
Another option is to replace the provinces combobox in Figure 8 with an interactive map that allows direct selection of provinces in a map of Canada (see Figure 9). This is achieved via a replacement widget that maps click locations to selection events on the combo box. While this replacement widget is not as generic as the one depicted in Figure 8, it offers a better visualization, which some users may find easier to use. Depending on the user’s needs, he or she may prefer one alternative or the other.
As a different example for the replacement of standard widgets, consider a text-area widget and its enhanced replacement that adds syntax highlighting to make the contents easier to comprehend. With this replacement widget the user interface of any application with un-enhanced text-area widgets can be improved via Façades.
Similar to the examples shown, one can imagine many other replacement widgets, and the code behind them will follow the general structure of the pseudo-code shown above, tailored to the specifics of each pair of source and replacement widget. Consider e.g. enhancing an existing date entry field with an automatic pop-up calendar widget that appears whenever the field is selected. Note, however, that not all possible widget replacements are ‘good’ from a UI design standpoint, but this topic is beyond the scope of this paper.
**Interaction composition**
Previous research has shown that toolglasses can improve user performance [16]. They are transparent UI elements whose position is controlled by the non-dominant hand. The user then ‘clicks-through’ the desired mode-icon of the toolglass with the dominant hand to activate a function at the current cursor location. In Façades, the user can associate another (extended) input device with a selected window or façade via the façade window menu to create a toolglass. This causes the window to become semi-transparent and to remain always on top.
Scrolling enhancements such as the OrthoZoom technique depend on the ability to query and control the application’s scrollbar via the accessibility API: Façades captures all events on the scrollbar and controls the application accordingly.
Accessibility APIs take some time to understand. However, once this has been mastered, replacement widget applications are very easy to generate. Modifying the interaction at the event level (e.g. remapping the action associated with a right click on a canvas), is also reasonably easy. The accessibility APIs provide all necessary data for Façades, but better access to graphical widget information could simplify some issues.
The ability to snap the selection to widgets is arguably the first thing that users notice positively about Façades. However, once users get used to the idea of freely adapting user interfaces of existing applications, they quickly come up with novel uses. One user, who uses a graphical editor in combination with command-line tools, has created a replacement widget with a ‘Save all’ button that he places adjacent to the terminal window. The functionality behind the button activates the save function for all open editor windows, to deal with the common problem of forgetting to save changes.
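Behind such a button one can imagine a small replacement application in the style of the earlier pseudo-code; `find_action` and `do_action` are illustrative stand-ins for the Action interface that accessibility APIs such as AT-SPI expose.

```python
def save_all_editors(app_name):
    # Find the editor application through the accessibility API.
    app = desktop.find_app(app_name)
    for window in app.windows:
        # Locate the window's 'Save' action and trigger it, exactly
        # as a click on the corresponding menu entry would.
        save = window.find_action("Save")
        if save is not None:
            save.do_action()
```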
Another application of Façades is to monitor a larger set of mailboxes. As the user is waiting for different kinds of messages at different times, he creates a façade that monitors only those that are currently ‘interesting’ and adapts that façade on demand to changed requirements. Yet another good use of Façades is to fix problems with suboptimally designed user interfaces. The search box of Thunderbird, for example, has a (barely visible) drop-down menu that allows changing between searching the subject, sender, and/or body. With Façades one can create a set of radio-buttons adjacent to the search box to make it easier to select the desired functionality.
Finally, Façades has the ability to make even static visualizations interactive by mapping mouse actions in certain regions to activations of other widgets, which is yet another way to enhance existing GUIs. However, we have to point out that not all modifications possible via Façades will improve the usability of a user interface. This is the trade-off faced by any user of a general-purpose tool.
CONCLUSION
In this paper, we presented a new approach to adaptable user interfaces. User Interface Façades allow end-users to quickly, flexibly and seamlessly change the interface of any application without coding. The system supports cutting, copying and pasting of screen regions, combined with the facility to overlay screen regions over other windows. We have shown how this approach supports both ad-hoc opportunistic customizations as well as persistent ones. Furthermore, we demonstrated that Façades also supports deep customizations, such as the modification of the interactive behavior of arbitrary applications, something that previous work has not supported. We also presented several examples that demonstrate and extend the basic concept in several interesting directions (e.g., window management, multiple monitors, cross-application customizations, new scrolling techniques).
From a global perspective, we believe that Façades offers a good complement to direct programming of user interfaces. From the user’s view, it greatly increases the flexibility of any GUI. From the programmer’s view, it is transparent, as no programming is required to give the user the ability to change the user interface. In the future, appropriate APIs to the Façades system may even enhance the interface programmer’s or designer’s ability to create good user interfaces.
The generalization from rectangular regions to more arbitrary regions is fairly simple from a high-level point of view and may increase the utility of façades even further. For future work, we plan to explore the Façades concept further and investigate how it can be integrated with UI description languages such as XUL & XAML. Furthermore, we will evaluate the adaptation facilities of Façades with user studies, similar to [8, 12].
In this context it is interesting to realize that User Interface Façades extend Apple’s vision of the window system as a ‘digital image compositor’ [13]. More precisely, we can say that the addition of Façades to the standard window management and user interface paradigms allows us to put forth the vision of the window system as a fine-grained interactive graphical component compositor.
ACKNOWLEDGMENTS
Many thanks to the reviewers for their insightful comments, which led us to improve the paper. The initial part of this research was performed while the first author was on a sabbatical stay at In Situ, and Michel Beaudouin-Lafon’s support is gratefully acknowledged. This work has been partially funded by NSERC and the French ACI Masses de données (Micromégas project).
---
Achievements, open problems and challenges for search based software testing
Mark Harman, Yue Jia and Yuanyuan Zhang
University College London, CREST Centre, London, UK
Abstract—Search Based Software Testing (SBST) formulates testing as an optimisation problem, which can be attacked using computational search techniques from the field of Search Based Software Engineering (SBSE). We present an analysis of the SBST research agenda, focusing on the open problems and challenges of testing non-functional properties, in particular a topic we call ‘Search Based Energy Testing’ (SBET), Multi-objective SBST and SBST for Test Strategy Identification. We conclude with a vision of FiFiVerify tools, which would automatically find faults, fix them and verify the fixes. We explain why we think such FiFiVerify tools constitute an exciting challenge for the SBSE community, one that may already be within its reach.
I. INTRODUCTION
Search Based Software Testing (SBST) is the sub-area of Search Based Software Engineering (SBSE) concerned with software testing [2], [85]. SBSE uses computational search techniques to tackle software engineering problems (testing problems in the case of SBST), typified by large complex search spaces [58]. Test objectives find natural counterparts as the fitness functions used by SBSE to guide automated search, thereby facilitating SBSE formulations of many (and diverse) testing problems. As a result, SBST has proved to be a widely applicable and effective way of generating test data, and optimising the testing process. However, there are many exciting challenges and opportunities that remain open for further research and development, as we will show in this paper.
It is widely believed that approximately half the budget spent on software projects is spent on software testing, and therefore, it is not surprising that perhaps a similar proportion of papers in the software engineering literature are concerned with software testing. We report an updated literature analysis from which we observe that approximately half of all SBSE papers are SBST papers, a figure little changed since the last thorough publication audit (for papers up to 2009), which found 54% of SBSE papers concerned SBST [56]. Many excellent and detailed surveys of the SBST literature can be found elsewhere [2], [4], [55], [85], [126]. Therefore, rather than attempting another survey, we provide an analysis of SBST research trends, focusing on open challenges and areas for future work and development.
II. A BRIEF HISTORY OF SBST
Since the first paper on SBST is also likely to be the first paper on SBSE, the early history of SBST is also the early history of SBSE. SBSE is a sub-area of software engineering with origins stretching back to the 1970s but not formally established as a field of study in its own right until 2001 [51], and which only achieved more widespread acceptance and uptake many years later [38], [43], [100].
The first mention of software optimisation (of any kind) is almost certainly due to Ada Augusta Lovelace in 1842. Her English language translation of the article (written in Italian by Menabrea), ‘Sketch of the Analytical Engine Invented by Charles Babbage’ includes seven entries, labelled ‘Note A’ to ‘Note G’ and initialed ‘A.A.L’. Her notes constituted an article themselves (and occupied three quarters of the whole document). In these notes we can see perhaps the first recognition of the need for software optimisation and source code analysis and manipulation (a point argued in more detail elsewhere [44]):
“In almost every computation a great variety of arrangements for the succession of the processes is possible, and various considerations must influence the selection amongst them for the purposes of a Calculating Engine. One essential object is to choose that arrangement which shall tend to reduce to a minimum the time necessary for completing the calculation.” Extract from ‘Note D’.
The introduction of the idea of software testing is probably due to Turing [115], who suggested the use of manually constructed assertions. In his short paper, we can find the origins of both software testing and software verification. The first use of optimisation techniques in software testing and verification probably dates back to the seminal PhD thesis by James King [67], who used automated symbolic execution to capture path conditions, solved using linear programming.
The first formulation of the test input space as a search space probably dates back seven years earlier to 1962, when a Cobol test data generation tool was introduced by Sauder [103]. Sauder formulates the test generation problem as one of finding test inputs from a search space, though the search algorithm is random search, making this likely to be the first paper on Random Test Data Generation. Sauder’s work is also significant because it introduces the idea of constraints to capture path conditions, although these constraints are manually defined and not automatically constructed.
¹This keynote was given by Mark Harman at the 8th IEEE International Conference on Software Testing, Verification and Validation (ICST 2015), but this paper, on which the keynote was based, is the work of all three authors.
The first paper to use a meta-heuristic search technique was probably the work of Boyer, Elspas and Levitt on the SELECT system [16]. The paper is remarkable in many ways. Consider the following paragraph, quoted from the paper:
“The limitation of the above algorithms to linear combinations is an unacceptable, and vexing, one. For example, they could not handle an inequality like $X^2 + Y + 10Z - W \geq 5$ among its constraints, unless one were prepared to assign to X a trial value, and then attempt a solution (assuming the other inequalities are linear). We therefore considered various alternatives that would not be subject to this limitation. The most promising of these alternatives appears to be a conjugate gradient algorithm (‘hill climbing’ program) that seeks to minimise a potential function constructed from the inequalities.” [16]
Here we can see, not only the first use of computational search (hill climbing) in software engineering, but also a hint at the idea (assignment of concrete values) that was subsequently to become Dynamic Symbolic Execution (DSE) [21]. Within this single paragraph we therefore may arguably find the origins of both DSE and SBST (and, by extension, SBSE too).
The SELECT paper is also remarkable in its sober and prescient assessment of the relative merits of testing and verification. Shortly after its publication, these two closely related research communities entered into a protracted and unhelpful ‘feud’ that generated a great deal more heat than light [29], [31], [35], [60]. Fortunately, we have more recently witnessed an accommodation between the two communities [61], and a greater degree of welcome collaboration at their intersection [59]. We really ought to ruefully reflect on the delay in this rapprochement, given the ‘understanding’ already set out by the SELECT paper in 1975. For example, speaking about the complementarity of testing and verification, the authors have this to say:
“Even after a mathematical proof of correctness, one cannot be certain that the program will run as intended on a given machine. Testing in the real machine environment on actual data would appear to be a useful complementary technique to formal verification since it is not contingent on [such] assumptions.” [16]
At about the same time² Miller and Spooner [86], were also experimenting with optimisation-based approaches for generating test data (which they refer to as ‘test selection’ in the sense that they ‘select’ from the input space, which, in the more recent literature we would refer to as ‘test data generation’).
²The Miller and Spooner paper was published in 1976, but was received by the journal on the 9th of September 1975. The acknowledgements of the 1976 journal paper indicate that it was one of the referees who pointed out the existence of the 1975 conference paper, which the 1976 paper cites. Although the conference was held in April 1975 and the proceedings appeared in the July 1975 issue of ACM SIGPLAN Notices, it is quite likely that Miller and Spooner were already working on their manuscript, which was submitted only a couple of months later.
Unlike Boyer et al. [16], Miller and Spooner used concrete execution of the program rather than symbolic execution, making their approach more similar to the techniques that ultimately became SBST, while the work of Boyer et al. followed a closely-related (but different) evolutionary path, which ultimately led to DSE. Current research develops both these techniques, and also hybrids that combine the best features of both [9], [63], [71], [110].
It appears that SBST research lay dormant for approximately a decade until the work of Korel [68], which introduced a practical test data generation approach, the Alternating Variable Method (AVM), based on hill climbing. The first use of genetic algorithms for software engineering problems is usually attributed also to the field of SBST, with the work of Xanthakis et al. [122], who introduced a genetic algorithm to develop whole test suites. Subsequent theoretical and empirical results tend to suggest that AVM outperforms genetic algorithms (in ‘non-royal road’ test data generation problems), at least for imperative programs in the C language [57]. Since the late 1990s, with a greater overall software engineering focus on SBSE, there has been an explosion in SBST publications as the analysis below indicates.
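To give a flavour of such techniques, here is a minimal sketch of AVM-style hill climbing for test data generation. It assumes a `fitness` function (e.g. a branch distance) that is minimised and reaches zero when the test goal is met; all names are ours.

```python
def avm(fitness, x, max_iters=10000):
    """Alternating Variable Method: hill-climb one input variable at a
    time, doubling the step size while a move keeps improving (sketch)."""
    best = fitness(x)
    for _ in range(max_iters):
        improved = False
        for i in range(len(x)):                 # alternate over variables
            for direction in (-1, 1):           # exploratory moves
                step = 1
                while True:                     # accelerated pattern moves
                    candidate = list(x)
                    candidate[i] += direction * step
                    f = fitness(candidate)
                    if f < best:                # minimise the fitness
                        x, best, improved = candidate, f, True
                        step *= 2               # accelerate while improving
                    else:
                        break
        if best == 0 or not improved:           # target covered, or stuck
            break
    return x, best
```

For branch coverage, the fitness is typically the approach level plus a normalised branch distance, which is zero exactly when the target branch is executed.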
**Analysis of Trends in SBST:** Figure 1 shows the growth in papers published on SBST. The data is taken from the SBSE repository [130]. The aim of the repository is to contain every SBSE paper, underpinned by regular and careful human-based update. Although no repository can guarantee 100% precision and recall, the SBSE repository has proved sufficiently usable that it has formed the basis of several other detailed analyses of the literature [27], [38], and is widely used by the SBSE community as a first source of information on related work.
We found a close fit to a quartic function, indicating strong polynomial growth. If the trend continues, there will be more than 1,700 SBST papers before the end of this decade.
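Such a fit is easy to reproduce from per-year paper counts; the figures below are purely hypothetical placeholders, as the real counts come from the SBSE repository [130].

```python
import numpy as np

# Hypothetical cumulative SBST paper counts (placeholder data only).
years  = np.array([1976, 1990, 1995, 2000, 2005, 2010, 2014])
papers = np.array([   1,    5,   15,   60,  200,  450,  718])

coeffs  = np.polyfit(years - 1976, papers, deg=4)  # least-squares quartic fit
quartic = np.poly1d(coeffs)
print(int(quartic(2019 - 1976)))  # extrapolated count at the end of the decade
```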
**SBST’s Industrial Applications and Tools:** SBST is now sufficiently mature that it has transitioned from laboratory study to industrial application, for example at Daimler [117], Ericsson [3] and Microsoft [111]. There are also publicly available SBSE tools for automated program repair [76], and SBST tools for popular languages, such as AUSTIN [69], an open source SBST system for the C language, and EvoSuite [36], an open source SBST system for Java.

EvoSuite has proved to be particularly effective as a tool for testing Java programs. It is provided as a plug-in to Eclipse that works ‘out-of-the-box’ (the user simply needs to click ‘run EvoSuite’). A great deal of engineering effort has been directed towards the usability of the tool for practical software testing. For example, most computational search algorithms are ‘anytime’ algorithms: they can be stopped at any time and yield the best result found so far. EvoSuite exploits this by ensuring that all executions complete within reasonable time.
For regression testing, the selection and prioritisation algorithms are easy to implement. For such regression testing tools the fitness function need not be part of the tool itself, as it is for test data generation. Instead, the search based regression test optimisation tool simply relies on recorded information concerning the properties of interest of the test suite. This makes these algorithms easy to deploy in a real world setting, provided the data is available. In our experience, the adoption effort is normally associated with data collection rather than tool deployment.
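For instance, a greedy ‘additional coverage’ prioritisation needs nothing beyond a recorded mapping from each test to the code elements it covers (a minimal sketch; names are ours):

```python
def prioritise(coverage):
    """Greedy 'additional' prioritisation: repeatedly pick the test that
    covers the most elements not yet covered (illustrative sketch).
    coverage: dict mapping test name -> set of covered elements."""
    remaining = dict(coverage)
    covered, order = set(), []
    while remaining:
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        gain = remaining[best] - covered
        if not gain and covered:
            covered = set()      # all elements seen: reset, as in the
            continue             # standard 'additional' strategy
        order.append(best)       # (tests adding nothing are appended last)
        covered |= remaining.pop(best)
    return order
```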
**Breadth of SBST Applications to Testing Problems:** SBST for structural coverage is the most well studied and well understood paradigm within SBST. This was true when last surveyed in 2009 [55], and it remains the case in the analysis we present in this paper, which covers the 718 papers published on SBST to date.
The structural code coverage achieved is not always as high as we might hope [70], with the result that, using currently available tools, we may need to rely on non-adequate test suites and all that this entails [39]. However, the principles are relatively well understood, and progress continues, with incremental advances on the state of the art regularly published.
The breadth and diversity of other testing paradigms, domains and applications attacked using SBST is a compelling testament to its general and widespread applicability. Any desirable property of good test data that can be captured as an adequacy criterion naturally reformulates as a fitness function. As has been known since at least 1962 [103], a system’s input space makes a very natural search space, in which we can automate the process of searching for test inputs that meet these test adequacy criteria.
Here is a long (yet partial) list of just some of the testing problems with citations to a few example papers (of many) that adopt an SBST approach to find suitable test data: functional testing [118], safety testing [11], [32], security testing [41], robustness testing [104], integration testing [18], [26], service-based testing [24], temporal testing [19], [113], [119], exception testing [114], Combinatorial Interaction Testing (CIT) [20], [25], [95], (and Software Product Line (SPL) testing [48]), state [77] and state-based-model testing [30], [78] (including popular modelling notations such as MATLAB Simulink [90], [129]), and mutation based test [37], [49] and mutant [65], [92] generation.
---
**Fig. 2:** The changing ratio of SBSE papers that are SBST papers. Initially, SBST dominated SBSE. Over the years, this ratio has decreased, stabilising at around 50%. This represents the growth in non-testing related areas of SBSE rather than any decline in the number of papers on SBST (as can be seen by comparing this figure with Figure 1).
---
**SBST’s Industrial Applications and Tools:** SBST is now sufficiently mature that it has transitioned from laboratory study to industrial application, for example at Daimler [117], Ericsson [3] and Microsoft [111]. There are also publicly available SBSE tools for automated program repair [76], and tools for SBST for popular languages, such as AUSTIN [69], an open source SBST system for the C language, and EvoSuite [36], an open source SBST system for Java.

EvoSuite has proved to be particularly effective as a tool for testing Java programs. It is provided as a plug-in to Eclipse that works ‘out-of-the-box’ (the user simply needs to click ‘run EvoSuite’). A great deal of engineering effort has been directed towards the usability of the tool for practical software testing. For example, most computational search algorithms are ‘anytime’ algorithms; they can be stopped at any time and yield the best result found so far. EvoSuite exploits this by ensuring that all executions complete within reasonable time.

For regression testing, the selection and prioritisation algorithms are easy to implement; a sketch of one such algorithm follows below. For such regression testing tools the fitness function need not be a part of the tool itself, as it is for test data generation. Instead, the search based regression test optimisation tool simply relies on recorded information concerning the properties of interest of the test suite. This makes these algorithms easy to deploy in a real world setting, provided the data is available. In our experience, the adoption effort is more often associated with data collection than with tool deployment.
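To make the preceding point concrete, here is a minimal sketch of a greedy ‘additional coverage’ prioritisation, assuming per-test coverage data has already been recorded; the test names and coverage sets are illustrative, not drawn from any particular tool.

```python
def prioritise_additional(coverage):
    """Order tests so that each next test adds the most uncovered branches.

    coverage: dict mapping test name -> set of covered branch ids.
    """
    remaining = dict(coverage)
    covered = set()
    order = []
    while remaining:
        # Pick the test contributing the most not-yet-covered branches.
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        if not remaining[best] - covered:
            order.extend(sorted(remaining))  # nothing new; stable tail order
            break
        order.append(best)
        covered |= remaining.pop(best)
    return order

tests = {"t1": {1, 2, 3}, "t2": {3, 4}, "t3": {4, 5, 6, 7}}
print(prioritise_additional(tests))  # ['t3', 't1', 't2']
```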
**The State of the Art:** SBST has made many achievements, and demonstrated its wide applicability and increasing uptake. Nevertheless, there are pressing open problems and challenges that need more attention, and to which we now turn.

Specifically:
1) We need to extend SBST to test non-functional properties, a topic that remains relatively under-explored, compared to structural testing (as revealed in Section III below). In particular, we need more work on Search Based Energy Testing (SBET).
2) We need Search Based Test Strategy Identification (SBTSI). Regression test process optimisation is well developed and understood, but techniques for finding test generation strategies remain under-developed.
3) We need more work on multi-objective test data generation techniques (MoSBaT). Previous work on search based test data generation has tended to focus solely on single objective optimisation (such as branch coverage), with comparatively little work on multi-objective test data generation. Unfortunately, real-world testing problems are messy, constrained, and unlikely to be captured by a single objective.
In the remainder of this paper, we present a roadmap of future work in these three areas of Search Based Energy Testing (SBET), Search Based Test Strategy Identification (SBTSI) and Multi-objective Search Based Testing (MoSBaT). We wish to conclude on a positive note, highlighting the exciting opportunities that arise because of the extraordinary progress in SBST in particular, and SBSE in general.
We therefore close the paper with an outline of ‘FiFiVerify tools’; tools that use SBSE and verification to automatically find faults, fix them and verify the fixes. Such FiFiVerify tools would be a fitting development and realisation of testing and verification complementarity, which was expressed so eloquently by Boyer, Elspas and Levitt in their 1975 SELECT paper (discussed earlier in this section).
III. SEARCH BASED ENERGY TESTING (SBET)
An excellent survey of the state-of-the-art in search based software testing for non-functional system-level properties was presented by Afzal, Torkar and Feldt [2]. We used the SBSE repository [130] to extend the quantitative analysis of publications contained in the paper by Afzal et al. to 2014.
The results are presented in Figure 3. As can be seen from this figure, there remains activity in this area. However, given the overall growth in papers on search based software testing, revealed by Figure 1, it is surprising (and perhaps disappointing) that more work is not focused on these properties.
The lack of work on non-functional properties is surprising because of the increasing importance of non-functional properties. It is disappointing because search based software testing techniques have the significant advantage that they can, theoretically, be applied to any testing problem for which the adequacy criterion can be captured as a fitness function. In principle, testing for execution time, quality of service, and energy consumption should be no more difficult than testing for branch coverage; we simply require a different fitness function. Of course, the measurements that inform fitness may come with their own sets of challenges, peculiar to each non-functional property of interest.
**Analysis of all Work on Non-Functional SBST:** In total, since the review by Afzal et al. (i.e., since 1st January 2008), there have been 44 SBST papers on non-functional properties (9% of the 484 in total on SBST over the same period). This compares to 35 papers (16% of the 221 published) over the period of the study by Afzal et al. Although the number of papers is steadily rising, this could be simply due to overall SBST growth; the proportion appears to be falling, a troubling finding when we consider the importance of non-functional properties. The proportion of SBST papers concerning non-functional properties ought to be closer to 50% than 10%, if research activity is to adequately reflect importance.
Analysing the sub-topic distribution between the two periods, we compared the results reported by Afzal et al. with those we obtained by extending their analysis. Afzal et al. identified 5 categories. We observed activity in all 5 of these, and new activity in a further 6. We thus conclude that SBST has been used to test at least 11 different non-functional attributes, with overall research output in the ratios given by Figure 4.
**The Startling Lack of SBET Work:** There is work on SBSE for improving energy consumption. For example, Li et al. [80] formulate energy optimisation as the problem of finding the mobile device screen colour choices that minimise energy consumption while maintaining colour contrast. Manotas et al. [83] also define a search space of energy optimisation choices. They currently use an exhaustive search, but plan to extend to full SBSE for scalability to larger search spaces. Both approaches are similar, in spirit, to Genetic Improvement [53], since they search a space of program improvements. However, we could find only a single paper published on Search Based Energy Testing (SBET) [15]. It is possible that our search has failed to find all papers. However, we remain confident that the overall trends we report are reasonably accurate, and we can be fairly confident in the finding that SBET is under-developed in the literature.
Energy optimisation has been a topic of interest for at least 20 years [112], and is gaining considerable recent interest because of its implications for the environment, and due to the dramatic increase in battery-powered computing. In order to make progress on search based software testing of non-functional properties, we need to measure the non-functional properties of concern with sufficiently computationally efficient fitness functions. This need for efficient fitness computation may mandate the use of surrogates or approximations to the true measurement [47]. In this section, we focus on Search Based Energy Testing (SBET), for which we believe immediate progress can be made and for which there are already potential measurement approaches [42], [94], and possible surrogates [88].
The problem of inadequate battery life is routinely bemoaned by many mobile device users [33] and the space occupied by the battery is becoming the predominant driver of device size. This clearly affects smart general purpose mobile devices, such as phones, notepads and laptops, for which the battery may occupy as much as 90% of the available space. However, it is also important for medical devices such as pacemakers, where the battery can typically occupy at least 50% of the device [91].
Estimates for the carbon footprint of computational energy consumption vary, but all accounts agree that the proportion of energy consumed by computation is rising and that it denotes a nontrivial fraction of global energy demand. Claims that a smartphone could consume more energy per year than a medium-sized refrigerator are deemed to be exaggerated by, perhaps, a factor of four [116], so there may be some degree of hyperbole at work.
Nevertheless, the total energy consumed by computation is undoubtedly rising. One study, conducted in 2009 and repeated by the same authors in 2011 [108], estimated that the proportion of global electricity consumption due to information and communications technology rose from 3% to 6% between the two assessments. Testing and optimising energy consumption is therefore an ecological imperative as well as a pressing user need [82].
It is a challenge to measure the amount of energy consumed by the execution of a software system in a reliable and accurate manner. However, if we can find suitable metrics that can measure energy consumption and that can be reformulated as fitness functions, according to the standard SBSE mantra ‘metrics are fitness functions too’ [46], then we can use these to search for worst-case and best-case energy consumption, and to find anomalies, ‘energy bugs’ and ‘hotspots’ [10].
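As a minimal illustration of the ‘metrics are fitness functions too’ idea, the following sketch hill climbs over an integer input space towards worst-case energy; `measure_energy` is a hypothetical stand-in for whatever platform-specific measurement (hardware rig, RAPL reading, or model-based estimate) is available.

```python
import random

def measure_energy(x):
    # Placeholder: in practice, run the system under test on input x and
    # return the joules consumed; here an arbitrary function for illustration.
    return -(x - 37) ** 2

def hill_climb_worst_case(lo, hi, restarts=10, steps=200):
    best_x, best_e = None, float("-inf")
    for _ in range(restarts):
        x = random.randint(lo, hi)
        for _ in range(steps):
            # Take a random neighbouring input, clamped to the search space.
            neighbour = min(hi, max(lo, x + random.choice([-1, 1])))
            if measure_energy(neighbour) > measure_energy(x):
                x = neighbour
        e = measure_energy(x)
        if e > best_e:
            best_x, best_e = x, e
    return best_x, best_e

print(hill_climb_worst_case(0, 100))  # converges towards x = 37 here
```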
This agenda would constitute a nascent subfield of SBST called ‘Search Based Energy Testing’ (SBET). In the remainder of this section we discuss some of the issues involved and outline potential solutions to problems in energy measurement for SBET.
**Efficiency:** We shall require that we can measure energy consumption quickly, because the overall search based approach will need to consider many different test cases in order to search for worst case or anomalous case energy consumption.
**Granularity:** The measurement of energy consumption can be fine grained (assessing the individual contribution of each line of code to energy consumed), mid-grained (focussing on energy consumed by a block of code or a method/procedure) or coarse-grained (simply reporting energy consumed by the program execution over a period of time).
Fine grained approaches, such as eLens [42] and Eprof [94], would be needed to profile for sensitivity analysis. Energy sensitivity information would be useful for SBSE applications such as genetic improvement: such techniques have been used for optimising energy usage [83], [120], for which sensitivity analysis is helpful [73]. For SBST, however, the primary need for measurement will be to capture the energy consumed by a test execution, which can be coarse-grained. This is important because coarse-grained energy measurement is likely to come with fewer technical challenges, compared to fine grained measurement.
**Hawthorne effect**: We have to be careful about potential ‘Hawthorne’-like effects, in which the property we seek to measure is affected by the measurement process. In particular, any non-functional property we measure by instrumenting the code will likely be influenced by the instrumentation code itself, thereby reducing the measurements’ reliability. If the influence of measurement on the non-functional property is minimal or constant, then we might choose to either ignore it or factor it out. However, since many non-functional properties will be interesting precisely because their effect is context-sensitive, we should not assume that the effect of instrumentation will be constant, and it may not be minimal.
One possible solution would be to create two versions of the system under test: one with normal instrumentation, and one with duplicated instrumentation. We can measure the non-functional property of interest for both, subtracting one from the other to determine the contribution due purely to instrumentation. This doubles the total amount of computation required, but it potentially provides a context-sensitive and more accurate way to factor out the influence of instrumentation.
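A minimal sketch of this two-version idea, assuming a hypothetical `run_and_measure` helper that executes a named build on a test input and returns the measured value:

```python
def corrected_measurement(run_and_measure, test_input):
    # One build with the normal instrumentation, one with every probe doubled.
    e_once = run_and_measure("instrumented_once", test_input)
    e_twice = run_and_measure("instrumented_twice", test_input)
    # The difference estimates the context-sensitive cost of instrumentation.
    instrumentation_cost = e_twice - e_once
    # Subtracting it out approximates the uninstrumented measurement.
    return e_once - instrumentation_cost
```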
**Specificity**: It is natural to design tools for search based software testing that are generally applicable, but non-functional properties such as energy are inherently device and platform specific. There will be a tension between the applicability of an approach and the degree of information that it can return. By being specific, we may not merely test the energy consumed, but may additionally give detailed assessments of where this energy is consumed. Such a detailed and specific assessment might highlight ways to reduce energy consumption. For example, the Running Average Power Limit (RAPL) approach [28] was developed by Intel to distinguish between the energy consumed in the CPU, the dynamic random access memory, and the so-called ‘CPU uncore’ (such as caches and on-chip graphics processing units). This specificity, so closely coupled to the hardware it assesses, gives more insight into the causes of energy consumption, but the insights it yields are naturally pertinent only to specific devices.
**Specialised Hardware Requirements**: Measuring the amount of energy consumed using specialised hardware can lead to a more accurate assessment of energy consumption, but requires specialised equipment [107]. Hardware-based energy measurement has been used for thread management [97] and to assess the energy implications of code obfuscation on the Android platform [102].

Hardware-based approaches typically consist of several phases. For example, the SEEP approach [62] uses symbolic execution to capture paths, which are subsequently executed with concrete values to give platform-specific energy consumption for basic blocks.
For SBST, the number of executions required by test generation may make the use of hardware-based approaches prohibitive, when no software API to the measurement hardware is provided. By contrast, for test case management, such as regression testing, there is a fixed pool of test cases, each of which needs to be assessed for the non-functional property of interest only once, prior to a subsequent optimisation phase. Once this is known, the optimisation problem consists of either prioritising, selecting or minimising the test suite according to the non-functional properties of interest [45], [126]. Therefore, for test management applications, such as regression test optimisation, it may be acceptable to build a specialised hardware test rig. The rig measures, once and for all, but with a greater degree of human effort, the non-functional properties of each test case. Hardware-based approaches, even those without a software API, may thus be applicable to test suite optimisation. Indeed, the LEAP node approach [107] has recently been used for just such a test suite optimisation [79].
IV. SEARCH BASED TEST STRATEGY IDENTIFICATION (SBTSI)
Most forms of test data generation have been concerned with finding specific inputs or sets of inputs (test suites) that have desirable properties. Other SBSE formulations, as yet underexplored, have more of the character of Test Strategy Identification (TSI) problems, as we outline in this section.
**Genetic Programming for SBTSI**: Genetic programming is increasingly finding applications in SBSE [54], [73], [76], [121]. The primary difference between genetic programming and other forms of evolutionary computation is that the search space is a space of programs expressed in some programming language. The programming language can be as general or as specific as the application demands. Suppose we formulate simple testing strategies in a formal language. Could we then use genetic programming to search the space of test strategies for those well adapted to a particular testing problem?
The idea of searching for testing strategies [98], rather than searching for test cases, is appealing because it may help us to raise our abstraction level: finding strategies for finding test cases, rather than finding the test cases themselves. It may also yield insight, which may ultimately prove to be more valuable than test suites. In the remainder of this section, we give one example of such insight, outlining how test strategy identification can be used to cluster programs and the faults they may contain.
**Using SBTSI to cluster programs**: Suppose we search for test strategies for a particular suite of programs that achieve high mutation score. Given a particular set of mutants and a particular set of programs, a particular strategy will emerge that is adapted to the set of programs concerned.
The difficulty in finding a suitable strategy will be partly governed by the degree to which the programs have some commonality, and the degree to which effective mutant killing submits to some particular strategy.
The difficulty of finding a solution can be measured quite naturally in terms of the fitness achieved for a given budget of computational search effort. One very desirable outcome is obviously the test strategy itself, if we can find a good one. However, even when TSI fails to identify good strategies, strategy identification difficulty can be used as a fitness function to help us to identify fault categories, and the programs which may contain them:
We can cluster programs with respect to a given set of faults. The cluster identification approach will, itself, be a multi-objective search problem: minimise the number of clusters, while simultaneously maximising the fitness achieved by TSI within each cluster. Programs residing in a given cluster exhibit related fault behaviour; there is a single unifying strategy for testing them in order to reveal these faults.
One possible formulation would be: given a set of programs $P$, find the largest subset $S$ for which TSI achieves a mutation adequacy (mutation score) above $\alpha$ on a set of mutants $M$. The fitness function could be the size of the subset $S$ (including more programs is better, because the strategy is then more widely applicable). This formulation seeks the most general strategy for achieving a mutation score of at least $\alpha$; a sketch follows below.
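A sketch of this fitness, assuming a hypothetical `mutation_score` that runs the tests produced by a candidate strategy on program `p` against its mutants:

```python
def tsi_fitness(strategy, programs, mutants, alpha, mutation_score):
    # Fitness is the number of programs on which the strategy reaches a
    # mutation score of at least alpha: larger subsets indicate a more
    # general strategy, matching the formulation in the text.
    return sum(1 for p in programs
               if mutation_score(strategy, p, mutants[p]) >= alpha)
```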
There is a great degree of choice available in the particular formulation we might adopt. For instance, we might fix the subset of programs, $S$, and search for a strategy that achieves the highest mutation adequacy on a given set of mutants, $M$. This formulation seeks the best possible strategy for finding a particular class of faults (captured by $M$) on a given set of programs, $S$.
**A co-evolutionary approach to SBTSI:** Suppose we vary the sets of faults considered (varying $M$). We might then formulate the problem as a co-evolutionary search over sets of programs, on the one hand, and sets of faults, on the other.
One possible co-evolutionary formulation would be to evolve both the subset $S$ and the set of mutants $M$. A co-operative formulation would use set size as the fitness for both $S$ and $M$, subject to the constraint that there is a strategy achieving 100% mutation adequacy with respect to $M$ on all of the programs in $S$. This more co-operative approach tries to find sets of faults and programs which ‘co-operate’ in the sense that the faults can easily be found with a particular strategy on a large set of programs.
A competitive formulation might define the fitness of $S$ to be the size of the largest such set for which a strategy exists that kills all mutants in $M$, while the fitness of $M$ is the size of the largest set of mutants that avoids being killed by the strategies found for the programs in $S$.
**Assignment problems:** Assignment problems are increasingly interesting in software engineering. They can often be formulated as systems that recommend engineers for particular tasks, such as debugging and testing [5], [14]. These recommender systems have an inherent optimisation flavour [99]: In general, we seek an assignment of solution techniques to problem instances that maximises the quality of solutions found.
In order for SBSE to be a viable approach, we need a representation, a fitness function, and a search space that is sufficiently large to make enumeration infeasible [58]. Assignment problems typically come with some form of representation, $r$, that captures the mapping between solutions and problem instances. There is guaranteed to be some method, $a$, for assessing solution quality, otherwise no intelligent assignment can be performed. It is reasonable to believe that the search space will be too large to be feasibly enumerable, since assignment problem search spaces grow exponentially. The open research problem is to find appropriate reformulations that use a computational search technique, guided by a fitness function defined in terms of $a$, to search the space defined by $r$.
When using SBSE to attack assignment problems in software testing, we need not restrict ourselves merely to the assignment of engineers. Since we have an array of different testing techniques, and a bafflingly complicated set of possible programs and test problems to which they might be applied, there is an important assignment problem for researchers and practitioners that has remained under-explored: how do we find the best assignment of test techniques to testing problems and particular programs? This is a problem for which hyper-heuristics have recently proved successful [64].
V. MULTI-OBJECTIVE SEARCH BASED TESTING (MoSBaT)
For problems concerned with test suite selection and prioritisation, multi-objective approaches are increasingly prevalent [8], [13], [17], [87], [105], [106], [125]. However, for test data generation problems, the large majority of existing approaches are single objective. Relatively few attack multi-objective test case generation [7], [34], [74], [84], [124], despite it having been proposed some time ago [52]. This is unrealistic, because practising software testers are unlikely to be concerned with only a single test objective [45]. Therefore, we believe that more work is required on multi-objective search based test data generation.
Perhaps one of the reasons why multi-objective techniques have not received the attention they deserve lies in the under-development of the field of SBST for non-functional properties (discussed in Section III). Certainly, many of the additional objectives that practising testers may seek to achieve are likely to concern non-functional properties. For example, a tester may be interested in achieving higher coverage, while also targeting unusually long execution times, security properties, or energy consumption (or all of these). Since the community seems sluggish in its uptake of non-functional properties, this may have had a concomitant effect on applications of multi-objective techniques.
Fortunately, search based techniques are readily available for multi-objective optimisation. Since many different test adequacy criteria have been captured as fitness functions, all that remains is to consider how to combine these in multi-objective frameworks, methods and tools.
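As one minimal way to combine criteria, the sketch below keeps the Pareto front of candidate test inputs over two illustrative objectives (branch coverage achieved and worst-case execution time observed), both treated as maximisation objectives:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (maximisation)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(candidates):
    """candidates: list of (test_input, objective_vector) pairs."""
    return [(t, v) for t, v in candidates
            if not any(dominates(w, v) for _, w in candidates)]

# (coverage fraction, observed execution time in ms), both to be maximised.
tests = [("t1", (0.9, 120)), ("t2", (0.7, 400)), ("t3", (0.6, 300))]
print(pareto_front(tests))  # t3 is dominated by t2; t1 and t2 survive
```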
**Multi-objective Understanding:** Such multi-objective test data generation need not be confined merely to the revelation of faults; it may also be used at a more strategic level, to understand, investigate and highlight problems at the level of policy formulation. For example, there is a well-known tension between usability and security [1], two non-functional properties that we might also seek to measure and test.
In order to investigate this phenomenon and its practical ramifications for a particular security policy, we can capture user behaviour in a simple language that defines the strategies that a user might take to increase usability. Suppose we can measure usability properties. We can now search for user strategies that maximise usability (using a similar approach to SBTSI), thereby investigating the limitations and shortcomings of security policies that sacrifice usability for security.
Furthermore, this approach could be extended to help identify potential security policies. We can formulate the trade-off between usability and security using a multi-objective approach. If we have a language for defining security policies, as well as a language for defining likely user behaviours, then we can co-evolve security policies and user behaviours. In this co-evolutionary framework, the fitness of a security policy is defined by the security level achieved with respect to the population of user behaviours, while the fitness of a user behaviour strategy is defined by its ability to maximise usability with respect to the security policies. Using variations on this theme, we may be able to find security policies that are well adapted to particular user behaviours, thereby balancing usability and security.
**The Path from Automated Testing to Automated Improvement:** In this discussion we have moved relatively seamlessly from searching for test cases to using testing to discover improved systems. This is one of the principles that underlies the recent upsurge in work on Genetic Improvement (GI) [6], [50], [53], [54], [72], [73], [75], [96], [109], [121]: if we can search for test cases that expose suboptimal system behaviour, can we not also search for versions of the system that improve this behaviour? We believe that there is a symbiotic relationship between SBST and GI: SBST can generate test cases to help guide GI [53], but GI also suggests intellectual routes through which we can make the technical and practical journey from automating testing to automating improvement.
VI. FIND, FIX, VERIFY (FiFiVerify)
We are tantalisingly within sight of exciting future testing tools that we would like to outline in this section; tools that will find, fix and verify the systems to which they are applied. Such near-future software engineering tools will take a program that may contain bugs (from some identified bug class, such as memory faults) and return an improved program.
The improved program has all bugs for the specified class fixed and is verified as being free from this class of faults. It may also come with a regression test suite that gives the engineer some degree of confidence that the improved system has not regressed and/or a proof that the improved version is ‘no less correct’ than the original.
We name this type of hypothesised tool a ‘FiFiVerify tool’ (short for ‘Find, Fix and Verify’). Though any FiFiVerify tool would be a giant leap forward from current testing and debugging technology, we believe that such tools are already within the grasp of the verification and testing community. In the remainder of this section, we outline the case that the techniques and algorithms required to build a FiFiVerify tool are already available and reported in the literature.
**Verification:** Verification techniques are sufficiently mature that they can verify that non-trivial systems are free from memory faults, scaling to complete verification (with respect to a given property) of device drivers (thousands of lines of code) [123] and partial verification of much larger systems [22], [23]. Where faults remain, we can use fault localisation [66], [127] to highlight likely ‘suspicious’ statements at which we can target automated repair [76].
**A New Application for Fault Localisation:** Fault localisation has known theoretical limits [128]. There has also been recent discussion of whether it offers real benefits to human programmers [93]. However, these practical concerns are pertinent only for applications involving human debuggers; fault localisation definitely offers benefits to automated repair techniques [89]. We believe that automated repair may prove to be a much more profitable use-case for automated fault localisation, and we hope for more work on fault localisation specifically tailored to automated repair (and, more generally, genetic improvement).
**Find and Fix:** Combining this work on test generation, fault localisation and repair will allow us to find and fix bugs automatically (a FiF tool).
**FiFi and Verify:** We can then alternate between find-and-fix and verification until the verification system is able to prove freedom from the class of faults of interest. This is a rather naive outline of a FiFiVerify tool. A more sophisticated approach would seek a more intimate combination of these technologies, so that testing can inform verification and vice versa, making each more efficient and effective. However, a simple iterative sequential composition would provide a proof-of-concept FiFiVerify tool.
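A naive sequential sketch of such a loop, where `verify`, `generate_tests`, `localise` and `repair` are caller-supplied, hypothetical wrappers around existing verification, test generation, fault localisation and program repair tools:

```python
def fifiverify(program, fault_class, verify, generate_tests, localise, repair,
               max_rounds=10):
    """Alternate find-and-fix with verification until the fault class is gone."""
    for _ in range(max_rounds):
        ok, counterexamples = verify(program, fault_class)
        if ok:
            return program  # verified free of the given fault class
        # Counterexamples seed test generation; tests drive localisation,
        # which in turn targets the automated repair step.
        tests = generate_tests(program, counterexamples)
        suspicious = localise(program, tests)
        program = repair(program, suspicious, tests)
    raise RuntimeError("verification budget exhausted before a fix was found")
```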
**FiFi and Verify Absence of Regression Faults:** Finally, as outlined in Section II, we have test data generation techniques that can achieve reasonable coverage, possibly augmented (or, where feasible, replaced) by verification [40], [81]. These can be used to help find the bugs, to guide the repair process, and to provide a regression test suite. Since there is no oracle problem for regression testing [12], the regression testing can also be entirely automated.
VII. CONCLUSION

In this paper we have reviewed work on Search Based Software Testing: its origins, trends in publication, and open problems. We showed that the area continues to grow, with a polynomial increase in publications, but that there are causes for concern. We presented evidence that the range of different non-functional properties being attacked using SBST is rising, but the proportion of papers on this topic is falling, which is troubling, given the increasing importance of non-functional properties to testers. Specifically, we highlighted the lack of work on Search Based Energy Testing (SBET), outlining energy measurement techniques that might be reused as fitness functions and some of the issues involved.
We also argue the case for multi-objective software testing, since we believe that most testers will have more than one objective in mind when they search for a test suite. Although multi-objective techniques have penetrated the regression testing problem space, they have yet to make a significant impact in the area of software test data generation. We give some examples of open problems and possible opportunities for multi-objective test data generation.
We conclude with an upbeat assessment of the exciting SBSE tools that may appear in the near future, posing the FiFiVerify tool challenge. To qualify as a FiFiVerify tool, a tool must automatically find faults in a given class, fix them, and verify that the faults have been fixed. We believe that rudimentary FiFiVerify tools are already within the current capabilities of the research community.
ACKNOWLEDGMENT
Mark Harman is partly supported by the EPSRC grants EP/J017515/1 (DAASE) and EP/I033688/1 (Gismo). Yuanyuan Zhang and Yue Jia are fully supported by the DAASE grant. The authors would like to thank Daniel Kroening, Bill Langdon, Phil McMinn and Peter O’Hearn.
REFERENCES
[22] Cristiano Calcagno, Dino Distefano, Peter W. O’Hearn, and Hongseok Yang. Compositional shape analysis by means of bi-abduction. In Zhong Shao and Benjamin C. Pierce, editors, 36th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL 2009). ACM, 2009.
---
Abstract—We propose a multiple-komi modification of the AlphaGo Zero/Leela Zero paradigm. The winrate as a function of the komi is modeled with a two-parameter sigmoid function, so that the winrate for all komi values is obtained at the price of predicting just one more variable. A second novel feature is that training is based on self-play games that occasionally branch, with changed komi, when the position is uneven. With this setting, reinforcement learning is shown to work on 7×7 Go, obtaining very strong playing agents. As a useful byproduct, the sigmoid parameters given by the network allow the score difference on the board to be estimated, and indicate how far the game is already decided. Finally, we introduce a family of agents which target winning moves with a higher score difference.
I. INTRODUCTION
The longstanding challenge in artificial intelligence of playing Go at professional human level has been successfully tackled in recent works [1–3], where software tools (AlphaGo, AlphaGo Zero, AlphaZero) combining neural networks and Monte Carlo tree search reached superhuman level. Such techniques can be generalised, see for instance [4–6]. A recent development was Leela Zero [7], an open source software whose neural network is trained over millions of games played in a distributed fashion, thus allowing improvements within reach of the resources of the academic community.
However, all these programs suffer from a significant limitation: it is impossible to target their margin of victory. They are trained with a fixed initial bonus for the white player (komi) of 7.5, and they are built to maximize the winning probability, without any knowledge of the game score difference.
This has several negative consequences for these programs: when they are ahead, they choose suboptimal moves, and often win by a small margin (see many of the games not ending in a resignation in [8]); they cannot be used with komi 6.5, which is also common in professional games; and they show bad play in handicap games, since the winrate is not an informative attribute in those situations.
In principle all these problems could be overcome by replacing the binary reward (win = 1, lose = 0) with the game score difference, but the latter is known to be less robust [9,10], and in general the strongest programs have used the former since the seminal works [9–12].
Indeed, letting the score difference be the reward in the AlphaGo Zero method, where averages of the value are computed over different positions, would lead to situations in which, during MCTS, a low probability of winning by a huge margin could overcome a high probability of winning by 0.5 points, resulting in weaker play.
An improvement that would retain the robustness of estimating winning probabilities, while overcoming these limitations, would be the ability to play with an arbitrary number of bonus points. The agent would then maximize the winning probability with a variable virtual bonus/malus, resulting in flexible play that adapts to positions in which it is ahead or behind, taking into account implicit information about the score difference. The first attempt in this direction gave unclear results [13].
In this work we propose a model to pursue this strategy, and as a proof-of-concept we apply it to 7×7 Go.
The source code of the SAI fork of Leela Zero and of the corresponding server can be found on GitHub at https://github.com/sai-dev/sai and https://github.com/sai-dev/sai-server.
II. GENERAL IDEAS
A. Winrate
The winrate $\rho$ of the current player depends on the state $s$. For the sake of generality we include a second parameter, i.e. a number $x \in \mathbb{Z}$ of virtual bonus points for the current player.
So we will have $\rho = \rho(s, x) = \rho_s(x)$, with the latter being our standard notation. When trying to win by some amount of points $n$, the agent may let $x = -n$ to ponder its chances.
Since $\rho_s(x)$ as a function of $x$ must be increasing and map the real line onto $[0, 1]$, a family of sigmoid functions is a natural choice:

$$\rho_s(x) = \sigma(x + \bar{k}_s, \alpha_s, \beta_s) \tag{1}$$
Here we set

$$\sigma(x, \alpha, \beta) := \frac{1}{1 + \exp(-\beta(x + \alpha))} \tag{2}$$
The number $\bar{k}_s$ is the signed komi, i.e. if the real komi of the game is $k$, we set $\bar{k}_s = k$ if at $s$ the current player is white and $\bar{k}_s = -k$ if it is black.
The number $\alpha = \alpha_s$ is a shift parameter: since $\sigma(-\alpha, \alpha, \beta) = 1/2$, it represents the expected difference of points on the board from the perspective of the current player. The number $\beta = \beta_s$ is a scale parameter: the higher it is, the steeper the sigmoid, generally meaning that the result is settled. The highest meaningful value of $\beta$ is of the order of 10, since at the end of the game, when the score on the board is set, $\rho$ must go from about 0 to about 1 as its argument increases by a single point. The lowest meaningful value of $\beta$ for the full $19 \times 19$ board is of the order of $10/2/361 \approx 0.01$, since at the start of the game, even for a very weak agent it would be impossible to lose with a 361.5 point komi in its favor.
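A direct transcription of equations (1) and (2), with `alpha`, `beta` and the signed komi assumed to come from elsewhere (e.g. the network described next):

```python
import math

def winrate(x, k_bar, alpha, beta):
    """rho_s(x): winrate of the current player given x virtual bonus points."""
    return 1.0 / (1.0 + math.exp(-beta * (x + k_bar + alpha)))

# E.g. alpha = 2 means the current player expects to be 2 points ahead on the
# board; with signed komi -7.5 and no virtual bonus, the winrate is below 1/2:
print(winrate(0.0, -7.5, 2.0, 0.5))  # ~0.06
```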
B. Neural network: duplicate the head
AlphaGo, AlphaGo Zero, AlphaZero and Leela Zero all share the same core structure, with neural networks that for every state $s$ provide
- a probability distribution over the possible moves $p_s$ (the policy), trained as to choose the most promising moves for searching the tree of subsequent positions;
- a real number $v_s$ (the value), trained to estimate the probability of winning for the current player.
We propose a modification of Leela Zero neural network that for every state $s$ gives the usual policy $p_s$, and the two parameters $\alpha_s$ and $\beta_s$ described above instead of $v_s$.
C. Branching from intermediate position
Training Go neural networks with multiple komi evaluation is a challenge in its own right. A supervised approach appears infeasible, since large databases of games typically have standard komi values of 6.5 or 7.5, and moreover it is not possible to estimate final territory reliably from them. Unsupervised learning requires the creation of millions of games even when the komi value is fixed. If the komi were made variable, then in principle millions of games would be needed for each komi value$^3$.
Moreover, games started with komi very different from the natural values may well be weird, wrong and useless for training, unless one is able to provide agents of different strengths. Finally, we are trying to train the two parameters $\alpha_s$ and $\beta_s$ from a single output, i.e. the game outcome. To this aim, it would be advisable to have at least two finished games, with different komi, for many training states $s$.
We propose a solution to this problem, by dropping the usual choice that self-play games for training always start from the initial empty board position. The proposed procedure is the following.
1) Start a game from the empty board with random komi close to the natural one.
2) For each state in the game, take note of the estimated value of $\alpha$.
3) After the game is finished, look for states $s$ in which $d := |\bar{k}_s + \alpha_s|$ is large: these are positions in which one of the sides was estimated to be ahead by $d$ points.
4) With some probability, start a new game from such a state $s$ with the komi corrected by $d$ points, in such a way that the new game starts with even chances of winning, but with a komi very different from the natural one.
5) Iterate from the start.
With this approach games branch when they become uneven, generating fragments of games with natural situations in which a large komi may be given without compromising the style of play. Moreover, the starting fuseki positions, which with the typical naive approach are greatly over-represented in the training data, are in this way much less frequent. Finally, not all but many training states are in fact branching points for which there exist two games with different komi, yielding easier training.

$^3$The argument that one can play the games to the end and then score under multiple komi does not work here, because it does not allow the $\beta$ parameter to be estimated. Moreover, that approach would rely on the self-play agents converging to score-perfect play, while the current approach is satisfied with convergence to winning-perfect play.
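The following schematic sketch restates the procedure; `play_game`, `store_training_data`, `estimate_alpha`, `signed_komi` and `even_komi` are stand-ins for the real engine calls, and the branching probability is an illustrative choice.

```python
import random

def self_play_with_branches(engine, start_state, start_komi, threshold=3.0,
                            branch_prob=0.05):
    """engine supplies play_game, store_training_data, estimate_alpha,
    signed_komi and even_komi (all hypothetical wrappers here)."""
    queue = [(start_state, start_komi)]
    while queue:
        state, komi = queue.pop()
        positions, winner = engine.play_game(state, komi)
        engine.store_training_data(positions, komi, winner)
        for s in positions:
            # d estimates how far ahead one side is at s (steps 2 and 3).
            d = engine.signed_komi(komi, s) + engine.estimate_alpha(s)
            if abs(d) > threshold and random.random() < branch_prob:
                # Branch with a komi the network deems even (step 4), giving
                # pairs of games through the same position with different komi.
                queue.append((s, engine.even_komi(s)))
```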
D. Agent behaviour
We incorporated in our agents the following smart choices of Leela Zero:
- the evaluation of the winrate of an intermediate state $s$ is the average of the value $v$ over the subtree of states rooted at $s$, instead of the typical minimax that is expected in these situations;
- the final selection of the move to play is done, at the root of the MCTS tree, by maximizing the number of playouts instead of the winrate.
However, we designed our agents to be able to win by large score differences. To this aim, we introduced a parametric family of value functions $\nu = \nu_\lambda(s)$, $\lambda \in [0, 1]$, defined as the average of $\sigma(x, \alpha, \beta)$ for $x$ ranging between the real komi $\bar{k}$ and a level of bonus/malus points $\bar{x}_\lambda$ that would bring the game closer to even: in other words, for $\lambda > 0$, $\nu_\lambda(s)$ under- or over-estimates the probability of victory, according to whether the player is winning or losing.
III. PROOF OF CONCEPT: 7$\times$7 SAI
A. Scaling down Go complexity
Scaling the Go board from size $n$ to size $\rho n$ with $\rho < 1$ yields several advantages:
- Average number of legal moves at each position scales by $\rho^2$.
- Average length of a game scales by $\rho^2$.
- The number of visits in the UC tree that would result in a similar understanding of the total game scales at an unclear rate; nevertheless, one may naively infer from the above two points that it may scale by about $\rho^4$.
- The number of residual convolutional layers in the ANN tower scales by $\rho$.
- The fully connected layers in the ANN are also much smaller, even if it is more complicated to estimate the speed contribution.
All in all it is reasonable that the total speed improvement for self-play games is of the order of $\rho^3$ at least.
Since the expected time to train $19 \times 19$ Go on reasonable hardware has been estimated to be in the order of several hundred years, we anticipated that for $7 \times 7$ Go this should be in the order of weeks. In fact, with a small cluster of 3 personal computers with average GPUs we were able to complete most runs of training in less than a week each. We always used networks with 3 residual convolutional layers of 128 filters, the other details being the same as Leela Zero. The number of visits corresponding to the standard value of 3200 used on the regular Go board would scale to about 60 for $7 \times 7$. We initially experimented with 40, 100 and 250 visits and then settled on the last, which we found to be much better. The Dirichlet noise $\alpha$ parameter has to be scaled with the size of the board, in inverse proportion to the typical number of legal moves, and we did so, testing with the (non-scaled) values of 0.02, 0.03 and 0.045. The number of games on which the training is performed was assumed to be quite a bit smaller than the standard 250k-game window used at size 19, and after some experimenting we observed that values between 8k and 60k generally give good results.
B. Neural network structure
As explained in Section II-B, Leela Zero’s neural network provides two outputs for each position: policy and winrate. SAI’s neural network should provide three outputs for each position: the policy as before, and the two parameters $\alpha$ and $\beta$ of a sigmoid function, which allow the winrate for different komi values to be estimated with a single computation of the net. It is unclear whether the komi itself should be provided as an input of the neural network: it might help the policy adapt to the situation, but it would nevertheless range over a wide interval of values.
With the above premises, the first structure we propose for the network is very similar to Leela Zero’s, with the value head substituted by two identical copies of itself, devoted to the parameter $\alpha$ and to a pre-parameter $\beta^*$, respectively$^4$. The latter is then mapped to $\beta$ by the equation $\beta_s = c \exp(\beta_s^*)$. The exponential transform imposes the natural condition that $\beta$ is always positive. The constant $c$ is clearly redundant when the net is fully trained, but the first numerical experiments show that it may be useful to tune the training process at the very beginning, when the net weights are almost random, because otherwise $\beta$ would be close to 1, which is much too large for random play, yielding training problems. The two outputs were trained with the usual $l^2$ loss function, but with the value $v_s$ substituted by $\rho_s(0) = \sigma(\bar{k}_s, \alpha_s, \beta_s)$.
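A small sketch of the head outputs and the loss under the stated transform (the constant and names here are illustrative):

```python
import math

def heads_to_sigmoid_params(raw_alpha, raw_beta_star, c=0.1):
    # beta_s = c * exp(beta_s^*) keeps beta positive; c damps the early
    # phase of training, when the weights are still near-random.
    return raw_alpha, c * math.exp(raw_beta_star)

def value_loss(alpha, beta, k_bar, outcome):
    # l2 loss against the game outcome, with v_s replaced by rho_s(0).
    rho0 = 1.0 / (1.0 + math.exp(-beta * (k_bar + alpha)))
    return (rho0 - outcome) ** 2
```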
We used two structures of network, type $V$ and type $Y$, which are described in detail in [4].

$^4$As will be explained soon, the training is done at the level of the winrate, so in principle, knowing the komi, the net could train $\alpha$ and $\beta$ to any of the infinitely many pairs that, with that komi, give the right winrate.
C. Branching from intermediate positions
To train the network we included the komi value in the training data used by SAI. The training is then performed in the same way as for Leela Zero, with the loss function given by the sum of a regularization term, the cross entropy for the policy, and the $l^2$ norm for the winning rate.
The winning rate is computed with the sigmoid function given by equations (1) and (2); in particular we set $\nu(s) = \rho_s(0)$ and backpropagate gradients through these functions.
To train the neural network it is clearly necessary to have different komi values in the data set. It would be best to have very different komi values, but when the agent starts playing well enough, only a few values around the correct komi$^5$ make the games meaningful.

$^5$The correct komi for $7 \times 7$ Go is known to be 9, in the sense that with that value both players can obtain a draw. Since we did not want to deal with draws, for $7 \times 7$ Leela Zero we chose a 9.5 komi, thus giving victory to white in the case of perfect play. In fact we noticed that with a komi of 7.5 or 8.5 (equivalent by Chinese scoring) the final level of play of the agents did not seem to be as subtle as it appears to be for the 9.5 komi.
To adapt the komi value range to the ability of the current network, when the server assigns a self-play match to a client, it chooses a komi value randomly generated with distribution given by the sigmoid itself. Formally,
$$K = 0.5 + \lfloor \rho_s^{-1}(U) \rfloor$$
where $\rho_s(x) = \sigma(x, \alpha_s, \beta_s)$, $s$ is the initial empty board state, $\alpha_s$ and $\beta_s$ are the values computed with the current network, and $U \sim \text{unif}(0, 1)$, thus giving $K$ an approximately logistic distribution.
As the learning goes on, we expect $\alpha_s$ to converge to the correct value of 9, and $\beta_s$ to increase, narrowing the range of generated komi values.
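A sketch of this sampling rule at the empty board, directly inverting the sigmoid (names are illustrative):

```python
import math, random

def sample_komi(alpha, beta):
    u = random.random()
    # Invert rho_s(x) = 1 / (1 + exp(-beta * (x + alpha))): as the network
    # sharpens (beta grows), sampled komi values concentrate near -alpha.
    x = math.log(u / (1.0 - u)) / beta - alpha
    return 0.5 + math.floor(x)
```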
To deal with this problem we implemented the possibility for the server to assign self-play games starting from any intermediate position.
After a standard game is finished, the server looks at each of the game’s positions, and from each one it may branch a new game (independently and with small probability). The branched game starts at that position with a komi value that is considered even by the network. Formally,
$$k' = 0.5 + \lfloor \pm \alpha_s \rfloor$$
where $s$ is the branching position and $\pm \alpha_s$ is the value of $\alpha$ at position $s$, as computed by the current network, with the sign changed if the current player was white.
The branched game is then played to the end, and all its positions starting from $s$ are stored in the training data, with komi $k'$ and the correct information about the winner of the branch.
This procedure should produce branches of positions with unbalanced situations, and values for the komi that are natural to the situation but nevertheless range over a wide interval of values.
D. Sensible agent
When SAI plays, it can estimate the winrate for all values of the komi with a single computation of the neural network. In fact, from $\alpha$ and $\beta$ it knows the sigmoid function that gives the probability of winning at different komi values for the current position.
We propose the generalization of the original agent of Leela Zero as introduced in Section II-D. Here we give further details.
The agent behaviour is parametrized by a real number \( \lambda \) which will be usually chosen in \([0, 1]\).
To describe the agent rigorously, we need to introduce some more mathematical notation.
a) Games, moves, trees: Let \( \mathcal{G} \) be the set of all legal game states, with \( \varnothing \in \mathcal{G} \) denoting the empty board starting state.
For every \( s \in \mathcal{G} \), let \( \mathcal{A}_s \) be the set of legal moves at state \( s \), and for every \( a \in \mathcal{A}_s \), let \( s_a \in \mathcal{G} \) denote the state reached from \( s \) by performing move \( a \). This clearly induces a directed graph structure on \( \mathcal{G} \) with no directed cycles (which are not legal because of the superko rule) and with root \( \varnothing \). This graph can be lifted to a rooted tree by taking multiple copies of the states which can be reached from the root by more than one path. From now on we will identify \( \mathcal{G} \) with this rooted tree and denote by \( \rightarrow \) the edge relation going away from the root.
For all \( s \neq \varnothing \), let \( \bar{s} \) denote the unique state (the parent of \( s \)) such that \( \bar{s} \rightarrow s \).
For all \( s \in \mathcal{G} \), let \( \mathcal{R}_s = \{ r \in \mathcal{G} : s \rightarrow r \} \) denote the set of states reachable from \( s \) by a single move. We will identify \( \mathcal{A}_s \) with \( \mathcal{R}_s \) from now on.
For any subtree \( T \subset \mathcal{G} \), let \( |T| \) denote its size (number of nodes) and for all \( s \in T \) let \( T_s \) denote the subtree of \( T \) rooted at \( s \).
b) Values, preferences and playouts: Suppose that we are given three maps \( P \), \( u \) and \( v \), with the properties described below.
- The policy \( P \), defined on \( \mathcal{G} \) with values in \([0, 1]\) and such that
\[
\sum_{r \in \mathcal{R}_s} P(r) = 1, \quad s \in \mathcal{G}.
\]
This map represents a measure of goodness of the possible moves.
- The value \( v \), defined on \( \{(s, r) : s \in \mathcal{G}, r \in \mathcal{G}_s\} \) with values in \([0, 1]\), which represents a rough estimate of the winrate at a future state \( r \). The estimate is from the point of view of whichever player is next to play at state \( s \).
- The first play urgency \( u \), defined for all pairs \( (s, T) \) such that \( s \in \mathcal{G} \) and \( T \subset \mathcal{G} \), with values in \([0, 1]\). This represents an "uninformed", flat winning rate estimate for all states in \( \mathcal{R}_s \backslash T \), i.e. actions which have not yet been visited. It may depend on the set \( T \) of visited states.
Then for any non-empty subtree \( T \) and node \( s \) not necessarily inside \( T \) we can define the evaluation of \( s \) over \( T \), as
\[
Q_T(s) := \begin{cases}
u(s, T) & \text{if } s \notin T \\
\frac{1}{|T_s|} \sum_{r \in T_s} v(s, r) & \text{if } s \in T
\end{cases}
\]
It should be noted here that the two proposed choices for \( u \) are the following:
\[
u(s, T) = 0.5 \quad \text{(AlphaGo Zero)}
\]
\[
u(s, T) = v(s, s) - C_{\text{puct}} \sqrt{\sum_{r \in \mathcal{R}_s \cap T} P(r)} \quad \text{(Leela Zero)}
\]
We can then define the UC urgency of \( s \) over \( T \) as

\[
U_T(s) := Q_T(s) + C_{\text{puct}} \, P(s) \, \frac{\sqrt{|T_{\bar{s}}| - 1}}{1 + |T_s|}
\]
Finally, the playout over \( T \), starting from \( s \in T \) is defined as the unique path on the tree which starts from \( s \) and at every node \( r \) chooses the node \( t \in \mathcal{R}_r \) that maximizes \( U_T(t) \).
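A sketch of one selection step under this rule, with tree nodes as plain dicts (`n` for the subtree size \( |T_s| \), `q` for \( Q_T(s) \)); the constant and the flat first play urgency are illustrative choices:

```python
import math

C_PUCT = 0.8  # illustrative constant

def urgency(parent_n, child, prior, u_fpu=0.5):
    # Children outside T get the flat first play urgency u(s, T).
    q = child["q"] if child else u_fpu
    n = child["n"] if child else 0
    return q + C_PUCT * prior * math.sqrt(max(parent_n - 1, 0)) / (1 + n)

def select_move(parent, priors, children):
    # priors: move -> P(move); children: move -> node dict, or None if the
    # move has not yet been added to the tree T.
    return max(priors, key=lambda m: urgency(parent["n"], children.get(m), priors[m]))
```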
c) Definition of \( v \): In the case of Leela Zero, the value function \( v(s, r) \) depends on \( s \) only through parity: let \( \hat{v}_r \) be the estimate of the winning rate of the current player at \( r \), i.e. the output of the value head of the neural network, passed through a hyperbolic tangent and rescaled into \( (0, 1) \). Then
\[
v(s, r) := \begin{cases}
\hat{v}_r & s, r \text{ with same current player} \\
1 - \hat{v}_r & s, r \text{ with different current player}
\end{cases}
\]
In the case of SAI, the neural network provides the sigmoid parameter estimates \( \hat{\alpha}_r \) and \( \hat{\beta}_r \) for the state \( r \). These allow the estimate \( \hat{\rho}_r \) of the winning probability for the current player to be computed at all komi values.

\[
\hat{\rho}_r(x) := \sigma(\hat{\beta}_r(\hat{\alpha}_r + \bar{k}_r + x))
\]
Here \( \bar{k}_r \) is the official komi value from the perspective of the current player, at state \( r \),
\[
\bar{k}_r := \begin{cases}
k & \text{if at } r \text{ the current player is white} \\
-k & \text{if at } r \text{ the current player is black}
\end{cases}
\]
the komi correction \( x \) is a real variable that allows one to impose an arbitrary virtual komi value, and \( \sigma \) is the standard logistic sigmoid,
\[
\sigma(x) := \frac{1}{1 + e^{-x}} = \frac{1}{2} + \frac{1}{2} \tanh(\frac{x}{2}).
\]
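For concreteness, the winrate estimate \( \hat{\rho}_r(x) \) is a one-liner once \( \hat{\alpha}_r \), \( \hat{\beta}_r \) and \( \bar{k}_r \) are known. The function names below are illustrative, not SAI's actual API.

```cpp
#include <cmath>

// sigma(x) = 1 / (1 + exp(-x)), the standard logistic sigmoid above
double sigma(double x) { return 1.0 / (1.0 + std::exp(-x)); }

// rho_hat_r(x) = sigma(beta_hat * (alpha_hat + komi_bar + x))
double winrate(double alpha_hat, double beta_hat, double komi_bar, double x) {
    return sigma(beta_hat * (alpha_hat + komi_bar + x));
}
```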
Then if we want SAI to simulate the playing style of Leela Zero, though with its own understanding of the game situations, we can simply let
\[
v(s, r) := \begin{cases}
\hat{\rho}_r(0) & s, r \text{ with same current player} \\
1 - \hat{\rho}_r(0) & s, r \text{ with different current player}
\end{cases}
\]
On the other hand, if we want SAI to play "sensibly", we may use values of \( x \) for which \( \hat{\rho}_r(x) \) is away from 0 and from 1, so that it can better distinguish the consequences of its choices, as they reflect more in the winrate. This means giving the agent a positive virtual komi correction if it is behind and a negative virtual komi correction if it is ahead.
A robust way to do this is to compute the average of the expected winrate at the future state $r$ over a range of komi correction values that depends on the current state $s$: an interval of positive numbers if the net believes that $s$ is losing, and of negative numbers if it believes $s$ is winning.
By deciding the interval at $s$, we avoid the following kind of situation: the current player is winning at $s$, explores a sequence of future moves containing a blunder, so that it is losing at $r$, and then evaluates the winrate at $r$ while granting itself a bonus which mitigates the penalization.
In fact, in this way blunders made when ahead are penalized more heavily during exploration, which seems a desirable feature.
Formally, we introduce the symbol $\mu_r(y)$ to denote the average of $\hat{\rho}_r$ over the interval $[0, y]$ or $[y, 0]$,
$$
\mu_r(y) := \begin{cases}
\hat{\rho}_r(0) & y = 0 \\
\frac{1}{y} \int_0^y \hat{\rho}_r(x) \, dx & y \neq 0.
\end{cases}
$$
Let the common sense parameter $\lambda$ be a real number, usually in $[0, 1]$, and let $\pi_\lambda$ be
$$
\pi_\lambda := (1 - \lambda)\hat{\rho}_s(0) + \lambda \frac{1}{2},
$$
so that $\pi_0 = \hat{\rho}_s(0)$, $\pi_1 = \frac{1}{2}$, and $\pi_\lambda$ is a convex combination of the two for $\lambda \in [0, 1]$. We introduce the extremum of the komi correction interval as the preimage of $\pi_\lambda$ under $\hat{\rho}_s$,
$$
\bar{x}_{s,\lambda} := \hat{\rho}_s^{-1}(\pi_\lambda).
$$
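Since $\hat{\rho}_s$ is an invertible sigmoid, the preimage has the closed form $\bar{x}_{s,\lambda} = \sigma^{-1}(\pi_\lambda)/\hat{\beta}_s - \hat{\alpha}_s - \bar{k}_s$, with $\sigma^{-1}(p) = \log(p/(1-p))$. A minimal sketch, with illustrative names:

```cpp
#include <cmath>

// Computes the extremum of the komi correction interval,
//   x_bar = logit(pi_lambda) / beta_hat - alpha_hat - komi_bar,
// obtained by inverting rho_hat_s(x) = sigma(beta_hat * (alpha_hat + komi_bar + x)).
double komi_interval_extremum(double alpha_hat, double beta_hat,
                              double komi_bar, double lambda) {
    const double rho0 = 1.0 / (1.0 + std::exp(-beta_hat * (alpha_hat + komi_bar)));
    const double pi_lambda = (1.0 - lambda) * rho0 + 0.5 * lambda;  // pi_lambda
    const double logit = std::log(pi_lambda / (1.0 - pi_lambda));
    return logit / beta_hat - alpha_hat - komi_bar;
}
```

Note that for $\lambda = 0$ this returns 0, so the averaging interval collapses and the agent falls back to the plain winrate $\hat{\rho}_r(0)$.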
Then, for a version of SAI which plays with parameter $\lambda$, we define the value by
$$
v(s, r) := \begin{cases}
\mu_r(\bar{x}_{s,\lambda}) & s, r \text{ with same current player} \\
1 - \mu_r(\bar{x}_{s,\lambda}) & s, r \text{ with different current player}.
\end{cases}
$$
Hence the value is computed at state $r$ but the range of the average is decided at state $s$.
Remark 1. We bring to the reader's attention that a simple rescaling shows that the quantity $\mu_r(\bar{x}_{r,\lambda})$ would be somewhat less useful, because it depends on $\hat{\alpha}_r$ and $\hat{\beta}_r$ only through $\hat{\rho}_r(0)$.
Remark 2. As shown in\textsuperscript{12}, the integral in equation (4) can be computed analytically and is easily implemented in software:
$$
\mu_r(y) = \frac{1}{2} + \frac{b - a}{2y} - \frac{1}{\hat{\beta}_r y} \log \sigma(\hat{\beta}_r b) + \frac{1}{\hat{\beta}_r y} \log \sigma(\hat{\beta}_r a)
$$
where $a := |\hat{\alpha}_r + \bar{k}_r|$ and $b := |\hat{\alpha}_r + \bar{k}_r + y|$.
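A direct transcription of this closed form, written with $\log \sigma(z) = -\log(1 + e^{-z})$ so that the exponentials stay bounded (here the argument is non-negative because $a, b \geq 0$); a sketch with illustrative names, not the actual SAI source:

```cpp
#include <cmath>

// mu_r(y): average of rho_hat_r over [0, y] (or [y, 0]), in closed form.
double mu(double alpha_hat, double beta_hat, double komi_bar, double y) {
    const double c = alpha_hat + komi_bar;
    if (y == 0.0)  // mu_r(0) = rho_hat_r(0)
        return 1.0 / (1.0 + std::exp(-beta_hat * c));
    const double a = std::fabs(c);
    const double b = std::fabs(c + y);
    // log sigma(z) = -log1p(exp(-z)); stable since beta_hat * a, beta_hat * b >= 0
    const double log_sig_a = -std::log1p(std::exp(-beta_hat * a));
    const double log_sig_b = -std::log1p(std::exp(-beta_hat * b));
    return 0.5 + (b - a) / (2.0 * y)
               + (log_sig_a - log_sig_b) / (beta_hat * y);
}
```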
d) Tree construction and move choice: Suppose we are at state $t \in \mathcal{G}$ and the agent has to choose a move in $\mathcal{A}_t$. This will be done by defining a suitable decision subtree $T$ of $\mathcal{G}$, rooted at $t$, and then choosing the move $s$ randomly inside $\mathcal{R}_t$ with probabilities proportional to
$$
\exp(|T_s| / C_{\text{temp}}), \quad s \in \mathcal{R}_t
$$
where $C_{\text{temp}}$ is the Gibbs temperature, which defaults to 1 for the first moves of self-play games and to 0 (meaning that the move with highest $|T_s|$ is chosen) for the other moves and for match games.
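A sketch of this sampling step, under the Gibbs form given above; the maximum visit count is subtracted before exponentiating for numerical stability, and $C_{\text{temp}} = 0$ degenerates to the argmax (names are illustrative):

```cpp
#include <algorithm>
#include <cmath>
#include <random>
#include <vector>

// Sample a move index with probability proportional to exp(|T_s| / C_temp).
// Assumes at least one legal move, i.e. subtree_sizes is non-empty.
std::size_t choose_move(const std::vector<int>& subtree_sizes,
                        double c_temp, std::mt19937& rng) {
    const auto max_it = std::max_element(subtree_sizes.begin(),
                                         subtree_sizes.end());
    if (c_temp <= 0.0)  // zero temperature: pick the most visited move
        return static_cast<std::size_t>(max_it - subtree_sizes.begin());
    std::vector<double> w;
    w.reserve(subtree_sizes.size());
    for (int n : subtree_sizes)  // subtract the max for numerical stability
        w.push_back(std::exp((n - *max_it) / c_temp));
    std::discrete_distribution<std::size_t> dist(w.begin(), w.end());
    return dist(rng);
}
```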
The decision tree $T$ is defined by an iterative procedure. In fact, we define a sequence of trees $\{t\} = T^{(1)} \subset T^{(2)} \subset \ldots$ and stop the procedure by letting $T := T^{(N)}$ for some $N$ (usually $N$ is the configured number of visits, or is determined by the available thinking time).
The trees in the sequence are all rooted at $t$ and satisfy $|T^{(n)}| = n$ for all $n$, so each one adds just one node to the previous one:
$$
T^{(n)} = T^{(n-1)} \cup \{t_n\}
$$
The new node $t_n$ is defined as the first node outside $T^{(n-1)}$ reached by the playout over $T^{(n-1)}$ starting from the root $t$.
E. Measuring playing strength
To provide a benchmark for the development of SAI, we adapted Leela Zero to the 7×7 Go board and performed several runs of training, from purely random play up to a level at which further improvement was not expected. More details on this step can be found in\textsuperscript{13}. A sample of 7×7 Leela Zero nets formed the panel used in the evaluation phase of the SAI runs.
When doing experiments with training runs of Leela Zero, we produced many networks, which had to be tested to measure their playing strength, so that we could assess the performance and efficiency of each run.
The usual, simple way to do so is to estimate an Elo/GOR score for each network\textsuperscript{4}. The idea behind this number is that if $s_1$ and $s_2$ are the scores of two nets, then the probability that the first one wins against the second one in a single game is
$$
\frac{1}{1 + e^{(s_2 - s_1)/c}}
$$
so that $s_1 - s_2$ is, apart from a scaling coefficient $c$ (traditionally set to 400), the log-odds-ratio of winning.
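In code the model is a single expression; a minimal sketch:

```cpp
#include <cmath>

// Probability that a player with score s1 beats a player with score s2
// under the Elo model above, with scaling coefficient c (traditionally 400).
double elo_win_probability(double s1, double s2, double c = 400.0) {
    return 1.0 / (1.0 + std::exp((s2 - s1) / c));
}
```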
This model is so simple that it is actually unsuitable for the complexity of Go and of Go playing ability. In fact, in several runs of Leela Zero 7×7 we observed that each training phase would produce at least one network which solidly won over the previous best, and was thus promoted to new best. This process would continue forever, or at least as long as we dared keep the run going, even if, from some point on, the observed playing style was not evolving anymore. When matches were tried between non-consecutive networks, we saw that the strength relation was not transitive, in that it was easy to find cycles of 3 or more networks that regularly beat each other in a directed circle, even with very strong margins.
We even tried to measure the playing strength in a more refined way, by performing round-robin tournaments between nets and then estimating Elo scores by maximum likelihood methods. This is much heavier to perform and still showed poor improvement in predicting match outcomes.
\footnote{In fact the neural network is just one of many components of the playing software, which depends also on several other important choices, such as the number of visits, FPU policies and all the other parameters. Rigorously, the strength should be defined for the playing agent (each software implementation of Leela Zero), but to ease the language and the exposition, we will speak of the strength of the network, meaning that the other parameters were fixed at some value for all matches.}
It must be noted that this appears to be an interesting research problem in its own right. The availability of many artificial playing agents with different styles, strengths and weaknesses will open new possibilities in collecting data and experimenting in this field.
**Remark 3.** This problem appears to be mainly due to the peculiarities of the game of Go and may be relevant only to it.
In the official 19×19 Leela Zero project the Elo estimation is done with respect to the previous best agent only, and it is known that there is some Elo inflation; however, tests against a fixed set of other opponents, or against further past networks, have shown that real playing strength does improve.
A different approach, which is both robust and refined, and easy to generalize, is to use a panel of networks to evaluate the strength of each new candidate.
We chose 15 networks of different strength from the first 5 runs of Leela Zero 7×7. Each network to be evaluated is opposed to each of these in a 100-game match. The result is then a vector of 15 sample winning rates, which contains useful multivariate information on the playing style, strengths and weaknesses of the tested net.
To summarize this information into one rough scalar score, we used principal component analysis (PCA). We performed covariance PCA once on all the match results of the first few hundred good networks, determined the principal factor and used its components as weights.
Hence the score of a network is the principal component of its PCA decomposition. This value, which we call panel evaluation, correlates well with the maximum likelihood estimation of Elo by round-robin matches, but is much easier and quicker to compute.
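Once the principal factor has been fixed, scoring a new net is just a projection onto it. The sketch below assumes the 15 winrates are centered with the panel means before projecting (illustrative names, not our actual analysis script):

```cpp
#include <numeric>  // std::inner_product
#include <vector>

// Panel evaluation: project a candidate net's vector of sample winning
// rates onto the (precomputed) first principal component of the panel.
double panel_evaluation(const std::vector<double>& winrates,
                        const std::vector<double>& mean,      // panel means
                        const std::vector<double>& weights) { // first PC
    std::vector<double> centered(winrates.size());
    for (std::size_t i = 0; i < winrates.size(); ++i)
        centered[i] = winrates[i] - mean[i];
    return std::inner_product(centered.begin(), centered.end(),
                              weights.begin(), 0.0);
}
```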
**IV. RESULTS**
1) **Obtaining a strong SAI:** The first runs of SAI failed to reach the performance of the reference 7×7 Leela Zero runs. A turning point was the 9th run, when we simplified the formula for the branching probability and assigned a constant probability of branching $C_{\text{branch}} = 0.025$ for all states, thus giving a higher chance of branching in balanced situations. This resulted in a steady and important improvement. In Table 1 we summarize the characteristics of the most representative runs performed after the 9th, together with their performance, measured as the panel evaluation of the 3rd best net of the run, and their efficiency, measured as the time to reach the plateau level.
Figure 1 shows the evolution of the performance of the same runs, across millions of nodes. Run 15 was the lowest point, showing that increasing the softmax temperature too much, while decreasing the random temperature, produced negative results. After run 15 we also settled on the Leela Zero form of first play urgency, as opposed to AlphaGo Zero's. Run 20 had the best balance between performance and efficiency. Increasing the maximum number of visits in run 22 resulted in a severe loss of efficiency, not adequately compensated by a gain in performance. In runs 23 and 24 the two temperature parameters were slightly modified again, and $\lambda$ was set to 0 and 0.5 respectively, without significant gains.
**Table 1:** Description of a representative sample of SAI runs. **Type:** shape of the network. **IP:** number of input planes. **MV:** max visits. **FPU:** first play urgency, with values AGZ (AlphaGo Zero) and LZ (Leela Zero). **ST:** softmax temperature. **RT:** random temperature. **$\lambda$:** parameter of the agent. **PE:** panel evaluation of the 3rd best net of the run. **TTP:** time to plateau, in millions of nodes.

In Figure 2 we compare, for five game positions, the winrate curves estimated by a sample of SAI nets and 7×7 Leela Zero's point estimates of winrate at standard komi (blue dots). Every one of these plots shows a sample of 63 7×7 Leela Zero and 13 SAI nets from different runs, chosen among the strongest ones.

It is important to observe that the distributions of the winrates of the two groups seem to agree at standard komi, indicating that SAI's estimates have similar accuracy and precision as 7×7 Leela Zero's.

The SAI nets also provide an estimate of the difference in points between the players. The variability that we observe shows that even strong nets do not have a uniform understanding of single complicated positions. However, we can observe that the wider the discrepancies among the estimates of $\alpha$, the lower the estimate of $\beta$, showing that the nets are aware that the estimate is unstable. This confirms the robustness of our approach.

Fig. 2: Evaluation of five positions by a sample of strong 7×7 Leela Zero and SAI nets.

We analyse each position separately, using human expertise.

Position 1. Black, the current player, is ahead by 13 points on the board; thus, with komi 9.5, his margin is 3.5 points. However the position is difficult, because there is a seki: a situation in which an area of the board provides points (is alive) for both players (quite uncommon in our 7×7 games), and which may be misinterpreted as white being dead (black ahead by 49 points on the board) or as black being dead (black ahead by 5 points on the board). In agreement with this analysis, the sample of SAI nets gives a low and sharp estimate for $\beta$, with average 0.566 and standard deviation 0.453, and a wild estimate for $\alpha$, with average 12.5 and standard deviation 6.0. The sample of Leela Zero nets gives winrate estimates which are almost uniformly distributed in $[0, 1]$: many of these nets have an incorrect understanding of the position and are not aware of this. SAI nets, on the other hand, are aware of the high level of uncertainty.

Position 2. White, the current player, is behind by 5 points on the board; thus, with komi 9.5, she is winning by 4.5 points. Following the policy, which recognizes a common shape here, many nets will consider cutting at F6 instead of E5, thereby losing one point. Accordingly, the estimate of $\alpha$ ranges approximately from −5.5 to −8.5, with average −7.1 and standard deviation 1.8. The sample of $\beta$ has average 3.401 and standard deviation 1.549, thus showing that $\alpha$ is to be considered precise up to two units.

Position 3. Here the situation is very similar to the previous one: white is behind by 7 points on the board; thus, with komi 9.5, white is winning by 2.5 points. Following the policy, which recognizes a common shape here, many nets will consider cutting at B2 instead of C3, thereby losing one point. Accordingly, the estimate of $\alpha$ ranges approximately from −5.5 to −9.5, with average −7.6 and standard deviation 1.9. The sample of $\beta$ has average 1.778 and standard deviation 1.529, thus showing that $\alpha$ is to be considered precise up to two units.

Position 4. White, the current player, is ahead by 5 points on the board; thus, with komi, she is winning by a larger margin of 14.5 points. Following the policy, white is facing the choice between B4 and A3, capturing the single black stone. There is a slight strategic difference between B4 and A3: A3 is better in case a ko fight emerges. Accordingly, we found a sharp estimate for $\alpha$, ranging from 4 to 5.5, with average 4.8 and standard deviation 0.8. The sample of $\beta$ has average 1.622 and standard deviation 0.642.

Position 5. White, the current player, is ahead by 5 points on the board; thus, with komi, she is winning by a larger margin of 14.5 points. The position is particularly easy to understand: white will win with every possible move on the board, including the pass, although only the move A3 gives white the largest possible victory. Accordingly, the estimate of $\alpha$ ranges from 4 to 7.5, with average 5.8 and standard deviation 1.7. The sample of $\beta$ has average 2.877 and standard deviation 0.843.

3) Experimenting with different agents for SAI: Finally, we experimented on how the parameter $\lambda$ of the agent affects the preference for the next move, from positions where at least two winning moves are available. This was done using the 5 positions shown in Figure 2 and asking the same 13 SAI nets to choose the next move. The parameter $\lambda$ was set to 0, 0.5 and 1, 1000 times each. The results are represented in Figure 3. In positions 1 and 5 the optimal move was chosen more than 90% of the time already for $\lambda = 0$, and increasing $\lambda$ did not affect the choice. In the other 3 positions, increasing $\lambda$ improved the choice of the optimal move, as expected.

**Fig. 3:** Probability that a net chooses the optimal move from each of the five positions, for increasing values of the $\lambda$ parameter.
V. CONCLUSIONS
We introduced SAI, a reinforcement learning solution for playing Go which generalizes the previous models to multiple komi values. The winrate as a function of komi is estimated by a two-parameter family of sigmoid curves. We performed several complete training runs on the simplified $7 \times 7$ goban, exploring parameters and settings, and proving that it is more difficult, but possible, to effectively train the net to learn two continuous parameters, in spite of the fact that the match outcome is a single binary value (win/lose). The generation of a suitable ensemble of game branches with adjusted komi appears to be a key point to this end.
The estimates of the winrate of our nets at standard komi are compatible with those of Leela Zero, but at the same time SAI’s winrate curves provide a deeper understanding of the game situation. As a side effect, a good estimate of the final point difference between players can also be deduced from the winrate curves.
In principle, the winrate curve estimation allows one to design sensible agents that aim to win by larger margins of points against weaker opponents, or that can play with handicap in points and/or stones. We propose such an agent, parametrized by a *common sense* parameter $\lambda \geq 0$. When $\lambda = 0$ the agent behaves like previous models and only tries to win. (We could obtain nets able to play at an almost perfect level at $\lambda = 0$.)
With $\lambda > 0$ the agent is designed to try to win by a high margin of points, while still focusing on winning. Due to the limitations of the $7 \times 7$ goban, it was not possible to assess whether our model could really target higher margins of victory against weak opponents, but we showed the expected effect of different values of $\lambda$ on the move selection.
We posit that it should be feasible to implement SAI on the $9 \times 9$ and full $19 \times 19$ boards. Although the configuration of the learning pipeline presents more difficulties than standard Leela Zero and the training could take longer, the experiments performed on the $7 \times 7$ board should be useful for making the right choices and for developing some understanding of the possible unwanted behaviours, in order to avoid them.
The development of a $19 \times 19$ board version of SAI with a distributed effort could produce a software tool able to provide a deeper understanding of the potential of each position, to target high margins of victory and play with handicap, thus providing an opponent for human players which never plays sub-optimal moves, and ultimately progressing towards the optimal game.
REFERENCES
RaftLib: A C++ Template Library for High Performance Stream Parallel Processing
Jonathan C. Beard
Peng Li
Roger D. Chamberlain
Original Article:
Jonathan C Beard, Peng Li, and Roger D Chamberlain
RaftLib: A C++ Template Library for High Performance Stream Parallel Processing
International Journal of High Performance Computing Applications 1094342016672542, first published on October 19, 2016
doi:10.1177/1094342016672542
RaftLib: A C++ Template Library for High Performance Stream Parallel Processing
Abstract
Stream processing is a compute paradigm that has been around for decades, yet until recently has failed to garner the same attention as other mainstream languages and libraries (e.g., C++, OpenMP, MPI). Stream processing has great promise: the ability to safely exploit extreme levels of parallelism to process huge volumes of streaming data. There have been many implementations, both libraries and full languages. The full languages implicitly assume that the streaming paradigm cannot be fully exploited in legacy languages, while library approaches are often preferred for being integrable with the vast expanse of extant legacy code. Libraries, however, are often criticized for yielding to the shape of their respective languages. RaftLib aims to fully exploit the stream processing paradigm, enabling a full spectrum of streaming graph optimizations, while providing a platform for the exploration of integrability with legacy C/C++ code. RaftLib is built as a C++ template library, enabling programmers to utilize the robust C++ standard library, and other legacy code, along with RaftLib’s parallelization framework. RaftLib supports several online optimization techniques: dynamic queue optimization, automatic parallelization, and real-time low overhead performance monitoring.
Introduction and background
Decrees touting the end of frequency scaling and the inevitability of a massively multi-core future are found frequently in current literature [21]. Equally prescient are the numerous papers with potential solutions to programming multi-core architectures [31, 39]. One of the more promising programming modalities to date, and one of the few to break out of the limiting fork-join model, is a very old one: stream processing [18]. The term “stream processing” is also synonymous with data-flow programming and is a natural superset of the more limited map-reduce modality. Until recently stream processing has garnered little attention. RaftLib aims to change that by enabling performant and automatically tuned stream processing within the highly popular C++ language.
Stream processing is a compute paradigm that views an application as a set of compute kernels (also sometimes termed “filters” [18]) connected by communication links that deliver streams of data. Each compute kernel is typically programmed as a sequentially executing unit. Each stream is abstracted as a first-in, first-out (FIFO) queue, whose exact allocation and construction is dependent upon the link type (and largely transparent to the user). Sequential kernels are assembled into applications that can execute in parallel. Figure 1 is an example of a simple streaming sum application, which takes in two streams of numbers, adds each pair, and then writes the result to an outbound data stream.
A salient feature of stream processing is the compartmentalization of state within each compute kernel [1], which simplifies parallelization logic for the runtime [19] as well as the programming API (compared to standard parallelization methods [2]). Stream processing has two immediate advantages: 1) it enables a programmer to think sequentially about individual pieces of a program while composing a larger program that can be executed in parallel, 2) a streaming runtime can reason about each kernel individually while optimizing globally [38]. Moreover, stream processing has the fortunate side effect of encouraging developers to compartmentalize and separate programs into logical partitions. Logical partitioning is also beneficial for the optimization and tuning process.
Despite the promising features of stream processing, there are hurdles that affect programmers’ decision to use the paradigm. First and foremost, before any technical issues are even considered, ease of integration becomes a bottleneck. To make stream processing, and RaftLib, successful, a path must be cleared to use streaming within legacy code. Most streaming frameworks require that applications be re-written or substantially modified to conform [34]. RaftLib skirts this hurdle by existing within one of the most popular languages, C++ (according to the TIOBE index [49]; future versions will include other language bindings to the library). A second hurdle is the perceived increase in communications cost. The costs of thread-to-thread communication (real or false sharing) are huge, and endemic to all types of thread-parallel processing (the issues themselves are too lengthy to discuss in detail; see relevant texts). Stream processing itself offers several substantive solutions, given the directed graph (often acyclic) nature of the communications pattern, whereas in a standard threaded application the ability to reason about these patterns is hampered by the randomness of the access pattern itself. The FIFO pattern of most streaming systems can be optimized using the pre-fetch instructions found in many modern multi-core processors, since the next access is quite regular. Optimizing the communications pattern further involves minimizing the path latency between compute kernels and maximizing the overall throughput through the application. In general, this requires solving the graph partitioning problem, which is NP-hard [22]; however several good heuristics exist.

Figure 1: Simple streaming application example with four compute kernels of three distinct types. From left to right: the two source kernels each provide a number stream ($a$ and $b$), the sum kernel adds pairs of numbers ($c = a + b$) and the last kernel prints the result ($c$). Each kernel acts independently, sharing data via communications streams depicted as arrows.
This work introduces RaftLib [11, 41], a C++ template library which enables safe and fast stream processing. By leveraging the power of C++ templates, RaftLib can be incorporated with a few function calls and the linking of one additional library. RaftLib aims to transparently parallelize an application in a way that is automatic to the programmer. RaftLib is an auto-tuned streaming system. As such it must mitigate communications cost, adaptively schedule compute kernels, and provide low overhead instrumentation to the runtime so that informed decisions can be made. RaftLib minimizes the communications overhead in a multitude of ways, drawing on research from past works [7, 8, 28, 29]. Machine learning techniques [6, 10] are used to model buffers within the streaming graph and select progressively more appropriate buffer sizes while the application is executing. The framework incorporates low overhead instrumentation, which can be turned on and off dynamically to monitor such metrics as queue occupancy, non-blocking service rate, and utilization. All of these pieces put together make a highly usable and adaptive stream parallel system, which is integrable with legacy code. In addition to being a performant and robust stream processing platform for general or commercial usage, RaftLib is also intended to contribute as a research platform. RaftLib’s modular construction enables empirical evaluation of a myriad of issues related to data-flow execution without extensive runtime modifications.
Related work
There are many streaming languages and runtime systems (both academic and commercial). StreamIt [48] is a streaming language and compiler based on the synchronous dataflow model. Storm [46] and Samza [43] are open-source streaming platforms that are focused on message-based data processing. Google Cloud Dataflow [40] is a proprietary stream processing system for Google’s cloud. Many of the full languages and frameworks have until recently been suited only to “niche” applications, often with steep learning curves. RaftLib’s advantage over them is that a C++ template library is compiled as optimized machine code from the outset, is easily integrable with legacy code, and has more general usage. ScalaPipe [51] and StreamJIT [12] are two similar language extensions for stream processing, with a Scala frontend and a Java frontend, respectively. A similar (but non-streaming) C++ parallelization template library is the open source Threading Building Blocks (TBB) [42] library originally from Intel. RaftLib differs from the last framework in several ways; first and foremost, it supports distributed and heterogeneous stream processing in addition to single node (multi-core) parallelism.
There has been considerable work investigating the efficient execution of streaming applications, both on traditional multi-cores and on heterogeneous compute platforms, from the early work on dedicated data-flow engines [19], to the synchronous data-flow programming model [30]. Lancaster et al. [29] have built low-impact performance monitors for streaming computations across FPGA accelerators and multi-core devices. Padmanabhan et al. [38] have shown how to efficiently search the design space given a model that connects tuning parameters to application performance. Beard and Chamberlain [7] showed how to approximate throughput through a streaming application efficiently using network flow models. RaftLib leverages and expands upon the above work, as it seeks to efficiently execute streaming/data-flow applications.
In order to dynamically “tune” RaftLib, online instrumentation is required. Much previous work has been done in this area as well, although not all specifically targeted towards streaming systems. Tools such as DTrace [14], Pin [33], and even analysis tools such as Valgrind [37] can provide certain levels of dynamic information on executing threads. Other, more modern performance monitoring tools of note are Paradyn [35] and Scalasca [23]. These toolkits provide a multitude of information for parallel systems, however not quite the same type of information that our instrumentation provides. Many of these past works pioneered things like trace compression for instrumentation, however we are interested in eliminating traces entirely. Using the data in real time, then throwing it away reduces the communications overhead dramatically while mirroring the streaming paradigm that it espouses. RaftLib’s instrumentation is geared specifically towards stream processing, while leveraging non-streaming instrumentation methods developed by others where possible.
Scheduling a streaming application is akin to partitioning a directed graph. Communication between kernels via streams is accounted for by edge weights (weights potentially calculated using information from libraries like hwloc [13]). More advanced partitioning algorithms can add additional degrees of freedom in the form of matching kernels to specific processing resources. Early work by Kernighan et al. [27] gave a heuristic to efficiently partition the graph into two highly communicating partitions. Later work by Sanchis [44] extended partitioning to multiple parts. Partitioning an application is but one part of scheduling. Once an application is set into motion, classic scheduling algorithms like Round Robin, FIFO, and work-stealing can be used to load-balance the application. RaftLib’s specific approaches are discussed in the following sections.
Design considerations
To be successful, stream processing systems must provide efficient ways of accessing data as the program needs it, while minimizing communications cost and maximizing the use of the given compute resources. The stream access pattern is often that of a sliding window [46], which is accommodated efficiently in RaftLib through a peek_range function. Streaming systems, both long running and otherwise, often must deal with behavior that differs from the steady state [8]. Non-steady state behavior is often also observed with data-dependent behavior, resulting in very dynamic I/O rates (behavior also observed in [48]). This dynamic behavior, either at startup or elsewhere during execution, makes the analysis and optimization of streaming systems difficult, however not impossible. RaftLib’s handling of dynamic behavior is demonstrated empirically through a text search application. Many text search algorithms have the property that, while the input volume is often fixed, the downstream data volume varies dramatically as the algorithm heuristically skips over non-matching patterns. Compute kernel developers should focus on producing the most efficient algorithm for an application, not on the burden of handling data movement or resource allocation. RaftLib dynamically monitors the system to eliminate data movement and resource allocation bottlenecks where possible, freeing the programmer to focus on application logic.
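As a sketch of the sliding-window style that peek_range enables, consider a moving-average kernel. The paper does not spell out peek_range’s exact signature, so the returned range type and the recycle call below are assumptions based on the description above, not RaftLib’s verbatim API:

```cpp
/** hypothetical sliding-window kernel: the peek_range return type and the
 ** recycle() call are assumptions, not RaftLib's verbatim interface **/
template < typename T > class moving_avg : public raft::kernel
{
public:
    moving_avg() : raft::kernel()
    {
        input.template  addPort< T >( "in" );
        output.template addPort< T >( "out" );
    }

    virtual raft::kstatus run()
    {
        /** peek at a window of 4 items without consuming them **/
        auto range( input[ "in" ].template peek_range< T >( 4 ) );
        T sum( 0 );
        for( const auto &v : range ){ sum += v; }
        output[ "out" ].push( sum / 4 );
        input[ "in" ].recycle(); /** consume one item, sliding the window **/
        return( raft::proceed );
    }
};
```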
At one time it was thought that programmers were probably best at resource assignment [17], whereas automated algorithms were often better at partitioning an application into compute kernels (synonymous to the hardware-software co-design problem discussed in [4]). Anecdotal evidence suggests that the opposite is often true. Programmers are typically very good at choosing algorithms to implement within kernels, however they have either too little or too much information to consider when deciding how to parallelize or where to place a computation. Understanding this information is critical to understanding the secondary effects that each decision has for the performance of an application. Within a streaming data-flow graph, it is often possible to replicate kernels (executing them in parallel) to enhance performance without altering the application semantics [32]. RaftLib exploits this ability to extract more pipeline and task parallelism at runtime (dynamically) without further input from the programmer. The next few sections discuss how these considerations are embodied within the programmer interface.
RaftLib description
The complexity of traditional parallel code (e.g., pthreads) decreases productivity, which can increase development costs [24]. This complexity also limits access to the performance benefits of modern chip multi-processors to more experienced programmers. RaftLib aims to bring simplicity to parallel programming, so that the performance gains promised by our multi-core future are accessible to novice programmers who would otherwise only write sequential code. The streaming compute paradigm generally, and RaftLib specifically, enables the programmer to compose sequential code (compute kernels) and execute it not only in parallel but also in distributed parallel fashion (across networked nodes) using the same source code.
RaftLib has a number of useful innovations as both a research platform and a programmer productivity tool. As a research platform, it is first and foremost easily extensible; it is modularized so that individual aspects can be explored without a full system re-write. It enables multiple modes of exploration: 1) how to effectively integrate pipeline parallelism with standard threaded and/or sequential code, 2) how to reduce monitoring overhead, 3) how best to algorithmically map compute kernels to resources, 4) how to model streaming applications quickly, so that results are relevant during execution. It is also fully open source and publicly accessible [41]. As a productivity tool it is easily integrable with legacy C++ code. It allows a programmer to parallelize code in both task and pipelined fashions.
Before diving into RaftLib as a research platform, we introduce a bit more of streaming through a concrete RaftLib example. The sum kernel from Figure 1 is an example of a kernel written in a sequential manner (code shown in Figure 2). It is authored by extending a base class: raft::kernel. Each kernel communicates with the outside world through communications “ports.” The base kernel object defines input and output port objects, accessible to the class; these are inherited by sub-classes of raft::kernel. Port container objects can contain any type of port. Each port itself is presented as a FIFO interface. The constructor function of the sum kernel adds the ports. In this example, two input ports of type T are added, as well as an output port of the same type. Each port gets a unique name, which is used by the runtime and the programmer to address specific ports. The real work of the kernel is performed in the run() function, which is called by the scheduler. The code within this function can be thought of as the “main” function of the kernel. Input and output ports can be accessed via a multitude of methods from within the run() function. Accessing a port is safe, free from data races and other issues that often plague traditional parallel code [5].
Figure 3 shows the full application topology from Figure 1 assembled in code. Assembling the topology can be thought of as connecting a series of actors. Each actor is sequential on its own, but when combined in a graph can be executed in parallel. Once the kernel “actors” are assembled into an application, the runtime starts to work parallelizing the application. Barrier operations are also provided so that sequential operations can be performed within the main function that interact with the parallel kernels, such as those described in Figure 4.
```cpp
template < typename T > class sum : public raft::kernel
{
public:
    sum() : raft::kernel()
    {
        /** two input ports and one output port, all of type T **/
        input.template  addPort< T >( "input_a", "input_b" );
        output.template addPort< T >( "sum" );
    }

    virtual raft::kstatus run()
    {
        T a, b;
        /** pop one element from the head of each input stream **/
        input[ "input_a" ].pop( a );
        input[ "input_b" ].pop( b );
        /** allocate the output element in place on the stream **/
        auto c( output[ "sum" ].template allocate_s< T >() );
        (*c) = a + b;
        /** c's destructor releases the sum to the downstream kernel **/
        return( raft::proceed );
    }
};
```
Figure 2: A simple example of a sum kernel which takes two numbers in via ports input_a and input_b, adds them, and outputs them via the sum stream. The allocate_s call returns an object which releases the allocated memory to the downstream kernel with the call of its destructor as it exits the stack frame.
```cpp
const std::size_t count( 100000 );
using ex_t = std::int64_t;
using source = raft::random_variate< ex_t, raft::sequential >;
sum< ex_t > sum_kernel; /** note: "sum_kernel()" would declare a function **/
raft::map m;
m += source( 1, count ) >> sum_kernel[ "input_a" ];
m += source( 1, count ) >> sum_kernel[ "input_b" ];
m += sum_kernel[ "sum" ] >> print< ex_t, '\n' >();
m.exe();
```
Figure 3: Example of a streaming application map for a “sum” application (topology given in Figure 1). Two random number generators are instantiated inline with the mapping (labeled as source), each of which sends a stream of numbers to the sum kernel, which then streams the sum to a print kernel. The += operator overload adds kernels from the current line to the map. The >> overload indicates a stream or link from one kernel to another. The kernel objects are created inline above for conciseness, however, the raft::kernel::make<type> syntax is preferred as it avoids additional copy overhead.
There are many factors that have led to the design of RaftLib. Chief amongst them is the desire to have a fully open source framework to explore how best to integrate stream processing with legacy code (in this case C/C++). In addition to being a productivity enhancing platform, it also serves as a research platform for investigating optimized deployment and optimization of stream processing systems. Scheduling, mapping, and queueing behavior are each important to efficient, high-performance execution. RaftLib is intended to facilitate empirical investigation within each of these areas. The following sections will discuss RaftLib’s programmer interface for authoring applications, its usage as a research platform, followed by a concrete benchmark compared to other parallelization frameworks.
**Authoring streaming applications**
RaftLib views each compute kernel as a black-box at the level of a port interface. Once ports are defined, the only observability that the runtime has is the interaction of the algorithm implementation inside the compute kernel with those ports, and the kernel’s interactions with the hardware. A new compute kernel is defined by extending raft::kernel as in Figure 2. Kernels can add named ports, through which they read data from incoming or write data to outgoing “streams.” Once defined, programmers have multiple methods to access data from each stream. The example in Figure 2 shows the simplest method (pop) to get data from the input stream, which, as the name suggests, pops an element from the head of the port’s stream and returns it to the programmer by copy into the variables a and b. A reference to memory on the output stream is returned by the allocate_s function (equivalently, for fundamental types it is just as efficient to incur a copy using the push operator). If the object is not plain old data, RaftLib constructs the object in place on the output port. The return object from the allocate_s call has associated signals accessible through the sig variable. There are multiple calls to perform push, pop, and range style operations; each embodies some type of copy semantic (either zero copy or single copy). All operators provide a means to send or receive synchronous signals that can be used by the programmer; kernels will receive the signal at the same time the corresponding
data element is received (useful for things like end of file signals). Asynchronous signaling (i.e., immediately available to downstream kernels) is also available. Future implementations will utilize the asynchronous signaling pathway for global exception handling.
Arranging compute kernels into an application is one of the core functionalities of a stream processing system. RaftLib links compute kernels via an operator overload of the right shift operator $\gg$ to mimic the pattern of the C++ stream operator. The $\gg$ operator has the effect of assigning the output port of the compute kernel on the left hand side of the operator to the input port of the compute kernel on the right hand side of the operator. Once kernels are linked, they are added to a map object of type `raft::map` to be executed via an overload of the `+=` operator. The return is an object containing iterators to the source and destination kernels added in the last add increment operation. Figure 3 shows our simple example application which takes two random number generating kernels, adds pairs of the random numbers from the source kernels using the `sum` kernel and prints them.
The graph itself is executed when the raft::map exe() function is called, or when a barrier is issued by the programmer, as shown in Figure 4. Before executing, all ports are checked to ensure that they are connected; if not, an exception is thrown. While type checking is performed at the time of port linking, allocation is performed lazily, right before actual execution. The runtime itself selects the type of allocation depending on where each compute kernel is mapped; currently the choices are POSIX shared memory, heap allocated memory, or a TCP link. Since mapping can place kernels at any resource for which an implementation is available, the allocation types themselves must follow. RaftLib supports type conversion through compatible types; as a consequence, the runtime can select the narrowest convertible type. Compression is possible as well, and future work will investigate how best to incorporate link compression. Each stream is monitored by the runtime and dynamically re-allocated as needed (this is beneficial both for performance and for device alignment requirements).
Streaming applications are often ideally suited for long running, data intensive applications such as big data processing or real-time data analytics. The conditions for these applications often change during the execution of a single run. Algorithms frequently use different optimizations based on differing inputs (e.g., sparse matrix vs. dense matrix multiply). The application can often benefit from additional resources or differing algorithms within the application, to eliminate bottlenecks as they emerge. RaftLib gives the user the ability to specify synchronous kernel groupings called submaps, that the runtime can swap out to optimize the computation. These can be kernels that are implemented for multiple hardware types, or can be differing algorithms. For instance, a RaftLib version of the UNIX utility `grep` could be implemented with multiple search algorithms, swapped out dynamically at runtime.
Integration with legacy C++ code is one of our goals. As such, it is imperative that RaftLib work seamlessly with the C++ standard library functions. Figure 4 shows how a C++ container can be used directly as an input queue to a streaming graph. It can be accessed in parallel if the out of order processing hint is given by the user. Just as easily, a single value could be read in. Output integration is equally simple. Kernels are available to assign data streams to standard library containers, or a reduction to a single value is also possible.
Copying of data is often an issue as well within stream processing systems. RaftLib provides a `for_each` kernel (Figure 5), which has behavior distinct from the `write_each` and `read_each` kernels. The `for_each` takes a pointer value and uses its memory space directly as a queue for downstream compute kernels. This is essentially zero copy, enabling behavior from a “streaming” application similar to that of an OpenMP [15] parallelized loop.
```cpp
using ex_t = std::uint32_t;
/** data source & receiver container **/
std::vector< ex_t > v, o;
ex_t i( 0 );
/** fill container **/
auto func( [&](){ return( i++ ); } );
while( i < 1000 ){ v.emplace_back( func() ); }
/** read from one kernel and write to another **/
auto readeach( read_each< ex_t >( v.begin(), v.end() ) );
auto writeeach( write_each< ex_t >( std::back_inserter( o ) ) );
raft::map m;
m += readeach >> writeeach;
m.barrier( writeeach );
/** data is now copied to 'o' **/
```
Figure 4: Syntax for reading and writing to C++ standard library containers from raft::kernel objects. The read_each and write_each kernels are reading and writing on independent threads.
```cpp
int *arr = { 0, ..., N };
int val = 0;
raft::map m;
m += for_each< int >( arr, arr_length ) >> some_kernel< int >()
  >> reduce< int, func /* reduction function */ >( val );
/** wait for map to finish executing **/
m.exe();
/** val now has the result **/
```
Figure 5: Example of the for_each kernel, which is similar to the C++ standard library for_each function. The data from the given array is divided amongst the output queues using zero copy, minimizing extraneous data movement.
Unlike the C++ standard library for_each, the RaftLib version provides an index to indicate the start position within the array. This enables the compute kernel reading the array to calculate its position within it. When this kernel is executed, it appears as a kernel only momentarily, essentially providing a data source for the downstream compute kernels to read. Data from arrays and C++ containers can be divided up dynamically to facilitate work stealing as a means of load balancing. Further reducing copying is a size specific allocation mechanism that passes by reference rather than copy when it is more efficient to do so.
Code verbosity is often an issue. Readily available in C++ are examples of full class and template declarations, when what is wanted is the ability to create a simple function without a full class declaration. C++11 has met the demand for this functionality with lambda functions. RaftLib brings lambda compute kernels, which give the user the ability to declare a fully functional, independent kernel, while freeing her from the cruft that would normally accompany such a declaration. Figure 6 demonstrates the syntax for a single output random number generator. The closure type of the lambda operator also allows for usage of the static keyword to maintain state within the function [16]. These kernels can be duplicated and distributed; however, they do induce one complication if the user decides to capture external values by reference instead of by value: undefined behavior may result if the kernel is duplicated, especially across a network link (an issue to be resolved in subsequent versions of RaftLib).
```cpp
using ex_t = std::uint32_t;
/** instantiate lambda kernel as source **/
auto lambda_kernel(
lambda< ex_t >( 0, 1, [] ( Port &input, Port &output )
{
auto out( output[ "0" ].allocate_s< ex_t >() );
(*out) = rand();
} /** end lambda kernel **/ )
);
raft::map m;
m += lambda_kernel >> print< ex_t, '\n' >();
m.exe();
```
Figure 6: Syntax for lambda kernel. The user specifies port types as template parameters to the kernel, in this example `std::uint32_t`. If a single type is provided as a template parameter, then all ports for this lambda kernel are assumed to have this type. If more than one template parameter is used, then the number of types must match the number of ports given by the first and second function parameters (input and output port count, respectively). The number of input ports is zero and the number of output ports is one for this example. Ports are named sequentially starting with zero. The third parameter is a C++11 lambda function, which is executed by the runtime.
**RaftLib as a research platform**
As a research platform, RaftLib is designed to enable the investigation of a number of questions that impact the performance of streaming applications. In addition to the open question of how best to blend parallel and sequential execution, RaftLib intends to be a platform for facilitating research on scheduling, resource mapping, and buffer allocation (queueing) within streaming/data-flow systems. Other research avenues abound; however, most of them stem from these core questions. Our focus here is not on solving each question but on facilitating further research.
Blending parallel code with sequential code often results in “spaghetti code” that is hard to debug [20]. Streaming requires that each kernel maintain user accessible state within the compute kernel, simplifying the reasoning process for the programmer. When building an application, all that is left is to string compute kernels together. The best way to manage the interface between code executing in parallel via streams and procedural code remains an open question. Likewise, what information can the programmer give the runtime to aid optimization decisions? Some applications require data to be processed in order, others are fine with data that is processed out of order, and yet others can process the data out of order and re-order it at some later time. RaftLib accommodates all of the above paradigms. Currently RaftLib supports the insertion of ordering information while linking streams (see Figure 7), but more hints can easily be incorporated in future versions (especially if user studies hint that they are useful).
Automatic parallelization of candidate kernels is currently accomplished by analyzing the graph for segments that can be replicated preserving the application’s semantics. As part of the graph
```cpp
raft::map m;
m += S[ "0" ] >> A >> T[ "0" ];
m += S[ "1" ] >> raft::order::out >> B;
m += E[ "0" ] >> C;
m += E[ "1" ] >> raft::order::out >> D;
m += D >> raft::order::out >> E[ "1" ];
m += E >> raft::order::out >> T[ "1" ];
```
Figure 7: Example of a compute kernel mapping, with multiple single entry, single exit (SESE) sections identified by the user through the raft::order::out enumerated value. These SESE out of order sections can be parallelized as the runtime sees fit (e.g., “D” or “B → E”).
Given an application topology to execute, the kernels need to be assigned to specific compute resources, and scheduled for execution. Scheduling of compute kernels within a streaming application has been the subject of much research. Conceptually it has two parts: initial resource assignment, or “mapping”, of kernels to compute resources, and then scheduling the kernels to actually execute temporally. RaftLib currently supports multiple schedulers, including OS level threads, and user space “fibers” or “threadlets” within each heavy-weight kernel thread using the Qthreads lightweight thread library [50]. Threadlets give the runtime yet another degree of freedom in scheduling, since within each kernel thread the scheduler can partition the time quanta of its threadlets in an application dependent manner. Different architecture and operating system combinations prefer different types of threading models, so an open question is how best to switch between these models on the same architecture. Even more complicated, some virtual memory systems perform better with combinations of heavy-weight processes and threads. Asking for the “best” combination is a loaded question. For applications that require it, RaftLib supports forcing a specific resource assignment for a particular compute kernel.
Fast partitioning itself is a vibrant area of research. RaftLib enables empirical evaluation of the partitioning problem in isolation from the scheduling problem. The act of partitioning kernels and threads of a streaming application to compute resources is nearly identical to the decades old problem of partitioning and mapping a circuit. Partitioning for RaftLib means finding the best
layout of compute kernels that minimizes the cost of communications between compute operations while attempting to maximize the match of the hardware to the operations being performed within each kernel. As mentioned before, the partitioning problem in general is NP-hard [22]. The default partitioner uses a variant of k-way partitioning similar to the work by Sanchis [44]. Separating the scheduling and partitioning enables researchers, and programmers to consider one problem (e.g., data locality in mapping) without necessarily having to dive into scheduling each kernel temporally (although tuning both knobs could lead to better overall performance).
As illustrated in Figure 8, the allocated size of each queue of a streaming application can have a significant impact on performance (the data in the figure are drawn from a matrix multiply application, as in [6], with performance based on overall execution time). One might assume that simply selecting a very large buffer would be the best choice; however, as shown, the upper confidence interval begins to increase after about eight megabytes. Instrumentation using the PAPI [36] toolkit shows that as the queue increases in size, the L1 and L2 miss rates increase dramatically, as do “soft” page-faults, and finally “hard” page-faults begin cropping up towards the extreme right side of the queue sizing for the platform utilized (note: while these trends hold in general, the exact shape and limits are architecture dependent). RaftLib currently uses a variety of approaches to optimize buffer allocation size, ranging from branch and bound search to queueing network models guided by machine learning (described in detail in [6]). The best solution to optimizing buffer allocation and placement is still an open question. RaftLib modularizes the interface to dynamically resize buffers, and buffer placement, so that new methods may be incorporated as they are developed.
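As a toy illustration of this kind of online search (not RaftLib’s actual algorithm, which mixes branch and bound with machine-learning-guided queueing models), consider a local hill climb over a single queue’s capacity:

```cpp
#include <cstddef>
#include <functional>

// Illustrative local search over a queue's capacity: keep doubling while
// the monitored throughput improves. throughput() is a stand-in for the
// runtime's instrumentation, not a RaftLib API.
std::size_t tune_buffer(std::size_t capacity,
                        const std::function<double(std::size_t)>& throughput) {
    double best = throughput(capacity);
    for (;;) {
        const std::size_t grown = capacity * 2;
        const double t = throughput(grown);
        if (t <= best) break;  // growing stopped helping; keep current size
        best = t;
        capacity = grown;
    }
    return capacity;
}
```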
Considering the application as a whole for optimization is also possible with RaftLib (i.e., tuning more than one knob across an entire application). Prior work by Beard and Chamberlain [7] demonstrated the use of flow models to estimate the overall throughput of an application. The flow-model approximation procedure can be combined with well-known optimization techniques such as simulated annealing, analytic decomposition [38], or other heuristics to continually optimize long-running, high-throughput streaming applications. In practice, search local to each compute kernel often works better, due to the reduced communication cost during dynamic optimization [6]. RaftLib currently uses flow-model-based optimization followed by localized heuristic search; its modularity enables easy expansion as more efficient methods are developed.
Performance monitoring is essential to optimization and tuning. In addition to performance data pertinent to tuning standard applications (e.g., performance counters), RaftLib provides instrumentation specifically useful for tuning an application structured as a streaming directed graph (an abstract arrangement is depicted in Figure 10). Specifically, RaftLib can monitor statistics such as queue occupancy (mean and full histogram), non-blocking service rate (online approximated rate and variance, as well as time-averaged; see Figure 9 for an example), and overall throughput. The data collection process and the instrumentation itself are optimized to reduce overhead and have been the subject of much research [6, 9, 29]. As new instrumentation methods are developed, they can easily be added to RaftLib, expanding the set of statistics available to the optimizer.

The non-blocking service rate, and the distribution of that rate, are of particular interest when using stochastic queueing models to optimize a streaming system. Stochastic models are desirable because they are much faster than the alternatives (e.g., branch-and-bound search, which requires many memory reallocations). Both the service rate and the process distribution can be extremely difficult to determine online without affecting the behavior of the application (i.e., degrading its performance). In previous work, Beard and Chamberlain [9] show that a heuristic approach can approximate the non-blocking service rate with relatively high accuracy and very low overhead; RaftLib incorporates this approach. Figure 9 shows the instantaneous approximations of service rate using this method for a microbenchmark implemented with RaftLib. This method is critical for dynamic optimization using stochastic models, and is available within RaftLib.
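The flavor of such an online estimator can be sketched as follows (illustrative only; the heuristic of [9] is more involved). Departures are timed only while the queue is non-empty, so the observed inter-departure gap approximates pure service time; mean and variance are tracked with an exponentially weighted moving average.
```
// Sketch: online non-blocking service-rate estimate via EWMA of
// inter-departure gaps observed while the queue is non-empty.
#include <chrono>

class ServiceRateEstimator {
    using clock = std::chrono::steady_clock;
    clock::time_point last_{};
    double mean_us_ = 0.0, var_us_ = 0.0;
    bool primed_ = false;
    static constexpr double alpha = 0.05; // EWMA smoothing factor
public:
    void on_departure(bool queue_was_nonempty) {
        const auto now = clock::now();
        if (primed_ && queue_was_nonempty) {
            const double gap =
                std::chrono::duration<double, std::micro>(now - last_).count();
            const double d = gap - mean_us_;
            mean_us_ += alpha * d;                         // EWMA mean
            var_us_ = (1.0 - alpha) * (var_us_ + alpha * d * d); // EWMA variance
        }
        last_ = now;
        primed_ = true;
    }
    double rate_per_sec() const { return mean_us_ > 0 ? 1e6 / mean_us_ : 0.0; }
};
```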
Figure 10: High level depiction of the abstraction layers coalesced around a simple streaming application with two compute kernels. An independent monitor thread serves to instrument the queue. Both the kernel threads and monitor threads are subject to the runtime and operating system (OS) scheduler.
One often-overlooked benefit of stream processing, from the programmer's perspective, is that data “streams” can be contiguous in memory. Within RaftLib, fundamental types are contiguous by default, with the exact memory alignment selected by the runtime. Vectorized mathematical operations are a stalwart of high-performance computation, yet the C++ compiler often cannot determine when a particular vector operation can be safely inserted. For machine architectures that support SIMD instructions, RaftLib therefore provides specialized kernels for basic operations (with more to be added) that perform vectorized addition, subtraction, and multiplication on input ports. The contiguous alignment of data on input ports, and indeed the regular access pattern of a FIFO communication paradigm, are also well suited to the cache-positioning hints provided by some architectures. FIFO patterns are likewise useful for deciding where to place memory within NUMA systems, since the reader and writer are in defined locations.
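The point about contiguity can be made concrete with a small sketch (illustrative, not RaftLib's kernel code): when both input ports are contiguous, a plain element-wise loop is trivially auto-vectorizable by the compiler, whereas strided or pointer-chasing layouts usually are not.
```
// Sketch: element-wise addition over two contiguous "port" buffers.
// With non-aliasing contiguous inputs, GCC/Clang emit packed SIMD adds
// at -O2/-Ofast; __restrict is a common compiler extension.
#include <cstddef>

void add_ports(const float* __restrict a, const float* __restrict b,
               float* __restrict out, std::size_t n)
{
    for (std::size_t i = 0; i < n; ++i)
        out[i] = a[i] + b[i]; // auto-vectorized when inputs are contiguous
}
```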
The “share-nothing” mantra of stream processing can introduce extra overhead compared to looser parallelization paradigms; however, this overhead is repaid by ease of parallelization. Each compute kernel can easily be duplicated on the same system, on different hardware across network links, or on any compute resource for which an implementation (or a translator) exists. As a research vehicle, RaftLib enables studies that explore how communication and resource placement can be optimized. As a productivity tool, we are more interested in how few lines of code it takes to produce a result. Mentioned but not yet described is the distributed nature of RaftLib. In many distributed systems the use of network connections is clunky at best; with RaftLib there is no difference between a distributed and a non-distributed program from the developer's perspective. A separate system called “oar” is a mesh of network clients that continually feed system information to each other in order to facilitate distributed RaftLib computation. This information is provided to RaftLib so that it can continuously optimize and monitor Raft kernels executing on multiple systems. Future work will see its full integration, as well as container integration facilitated through “oar.”
**Benchmarking**
Text search is used in a wide variety of applications. We focus on the exact string matching problem, which has been studied extensively. The stalwart of string matching applications (both exact and inexact) is the GNU version of the `grep` utility. It has been revised and optimized for more than 20 years, resulting in excellent single-threaded exact string matching performance (~1.2 GB/s) on our test machine (see Table 1). To parallelize GNU `grep`, the GNU Parallel utility is used to spread computation across one through 16 cores. Two different text search algorithms are tested and parallelized with RaftLib: one uses the Aho-Corasick string matching algorithm, which is well suited to matching multiple patterns; the other uses the Boyer-Moore-Horspool algorithm, which is often much faster for single-pattern matching. The realized application topology for both string matching algorithms implemented with RaftLib is conceptually similar to Figure 11; however, the file read exists as an independent kernel only momentarily, as a notional data source, since the runtime uses zero copy and the file is read directly into the in-bound queues of each `match` kernel.

For comparison, we contrast the performance of our Aho-Corasick and Boyer-Moore-Horspool implementations against the GNU `grep` utility and a text matching application using the Boyer-Moore algorithm, implemented in Scala and running on the popular Apache Spark framework.
using strsearch = raft::search< raft::ahocorasick >;
std::vector< hit_t > total_hits;
raft::map m;
/** capture an object holding source and destination iterators **/
auto kernels( m += filereader( file, offset ) >> strsearch( search_term ) );
/** get begin and end iterators to the destination kernels **/
std::tie( BEGIN, END ) = kernels.getDst();
m += (*BEGIN).get() >> write_each< match_t >( std::back_inserter( total_hits ) );
/** execute the map; blocks until all kernels (including dst) complete **/
m.exe();
Figure 12: Implementation of the string matching application topology using RaftLib. The search kernel is instantiated by constructing a raft::search kernel; the exact algorithm is chosen by passing it as a template parameter, which selects the corresponding template specialization.
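For reference, a compact sketch of the Boyer-Moore-Horspool algorithm used by one of the two search kernels (an illustrative stand-alone implementation, not RaftLib's kernel code): a 256-entry bad-character shift table lets the search skip up to pattern-length bytes per mismatch, which is why it excels at single-pattern matching over a large corpus.
```
// Sketch: Boyer-Moore-Horspool exact single-pattern search.
#include <array>
#include <cstddef>
#include <string_view>
#include <vector>

std::vector<std::size_t> bmh_search(std::string_view text, std::string_view pat)
{
    std::vector<std::size_t> hits;
    const std::size_t m = pat.size(), n = text.size();
    if (m == 0 || n < m) return hits;
    std::array<std::size_t, 256> shift;
    shift.fill(m); // default: skip the full pattern length
    for (std::size_t i = 0; i + 1 < m; ++i)
        shift[static_cast<unsigned char>(pat[i])] = m - 1 - i;
    for (std::size_t pos = 0; pos + m <= n; ) {
        if (text.substr(pos, m) == pat) hits.push_back(pos);
        // advance by the shift of the last character in the window
        pos += shift[static_cast<unsigned char>(text[pos + m - 1])];
    }
    return hits;
}
```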
We use a single hardware platform with multiple cores running Linux (see Table 1). We use version 2.20 of the GNU grep utility; to parallelize it, the GNU Parallel [47] application (version 2014.10.22) is used with its default settings. RaftLib (and all other applications/benchmarks used) is compiled with GNU GCC 4.8 using the “-Ofast” flag. For this set of experiments, the maximum parallelism is capped at the number of cores available on the target machine. A RAM disk is used to store the text corpus to ensure that disk IO is not a limiting factor. The corpus to search is sourced from the post history of a popular programming site [45] and is ~40 GB in size; the file is cut to 30 GB before searching. This cut simply affords the string matching algorithms the luxury of physical memory equal to the entire corpus if required (although in practice none of the applications came close to this amount). All timing is performed using the GNU time utility (version 1.7), except for the Spark application, which uses its own timing utility.
Table 1: Summary of Benchmarking Hardware.
<table>
<thead>
<tr>
<th>Processor</th>
<th>Cores</th>
<th>RAM</th>
<th>OS Version</th>
</tr>
</thead>
<tbody>
<tr>
<td>Intel Xeon E5-2650</td>
<td>16</td>
<td>64 GB</td>
<td>Linux 2.6.32</td>
</tr>
</tbody>
</table>
Figure 13 shows the throughput (in GB/s) for all tested string matching applications, varying the utilized cores from one through 16. A data point is shown for each of the 10 repetitions of each benchmark at each thread count. The single-threaded performance of GNU grep is quite impressive: it handily beats all the other algorithms on one core (when not using GNU Parallel, as shown in the figure). Perfectly parallelized (assuming linear speedup), GNU grep could be capable of ~16 GB/s; when parallelized with GNU Parallel, however, it falls far short of this.
The performance of Apache Spark given multiple cores is quite good, with nearly linear speed-up from a single core through 16 cores. The Aho-Corasick implementation using RaftLib performs almost as well, topping out at ~1.5 GB/s versus Apache Spark's ~2.8 GB/s. RaftLib can swap algorithms during execution, but this feature was disabled for this benchmark so that specific algorithms could be compared directly. Manually switching RaftLib to Boyer-Moore-Horspool improved performance drastically: the speed-up from one through 10 cores is linear, and the 30 GB file is searched in ~4.1 s, giving close to 8 GB/s of throughput.
Overall, the performance of the RaftLib Aho-Corasick implementation is quite comparable to the one built on the popular Apache Spark framework, while the Boyer-Moore-Horspool implementation outperforms everything else tested. The change in performance when swapping algorithms indicates that the algorithm itself (Aho-Corasick) was the bottleneck; once it is removed, the memory system becomes the bottleneck. All in all, RaftLib's performance is quite good: comparable with (arguably better than) one of the best current distributed processing frameworks (Apache Spark), and far better than the popular command-line parallelizing utility GNU Parallel for this application.
Conclusions & Future Work
RaftLib has many features that enable a user to integrate fast and safe streaming execution within legacy C++ code. It provides interfaces similar to those of the C++ standard library, which we hope will let users pick up the library quickly. We also described new ways to specify compute kernels, such as “lambda” kernels, which eliminate much of the boiler-plate code needed to write a full C++ class or template. The RaftLib framework enables massively parallel execution in a simple-to-use form: the same code that executes locally can execute in a distributed fashion with the integration of the “oar” framework, with no programming changes. This differs greatly from many current open-source distributed programming frameworks.
RaftLib is put forward as a tool that enables programmers to safely exploit parallelism from within a familiar environment, while also laying a foundation for future research. How best to integrate stream processing with sequential computation is still an open question. Pragma methods such as OpenMP work well for fork-join loop parallelism, but they are far from ideal, both in complexity and in the amount of parallelism they can extract from an application. RaftLib promises similar (or greater) levels of parallelism that are automatically optimized by the runtime. The framework provides a platform for safe and fast parallel streaming execution within the C++ language, serving as both a productivity tool and a research vehicle for exploring integration and optimization issues. Stream processing, and data-flow computing in general, has been a well-known concept for over four decades [18]; despite this long history, not a single streaming language has broken into the top ten programming languages (as tracked by TIOBE [49]). We hope that RaftLib serves as a catalyst that brings the stream processing paradigm more than a niche user base.
Funding
This work was supported by Exegy, Inc., and VelociData, Inc. Washington University in St. Louis and R. Chamberlain receive income based on a license of technology by the university to Exegy, Inc., and VelociData, Inc.
Abstract—NoSQL databases like Redis, Cassandra, and MongoDB are increasingly popular because they are flexible, lightweight, and easy to work with. Applications that use these databases will evolve over time, sometimes necessitating (or preferring) a change to the format or organization of the data. The problem we address in this paper is: How can we support the evolution of high-availability applications and their NoSQL data online, without excessive delays or interruptions, even in the presence of backward-incompatible data format changes?
We present KVolve, an extension to the popular Redis NoSQL database, as a solution to this problem. KVolve permits a developer to submit an upgrade specification that defines how to transform existing data to the newest version. This transformation is applied lazily as applications interact with the database, thus avoiding long pause times. We demonstrate that KVolve is expressive enough to support substantial practical updates, including format changes to RedisFS, a Redis-backed file system, while imposing essentially no overhead in general use and minimal pause times during updates.
I. INTRODUCTION
NoSQL databases, such as Redis [1], Cassandra [2], and MongoDB [3], are increasingly the go-to choice for storing persistent data, dominating traditional SQL-based database management systems [4], [5]. NoSQL databases are often organized as key-value stores, in that they provide a simple key-based lookup and update service (i.e., with "no SQL"). While these databases typically lack a formal schema specification, applications attach meaning to the format of the keys and values stored in the database. Keys are typically structured strings, and values store objects represented according to various formats [6], e.g., as Protocol Buffers (“Protobufs”) [7], Thrift [8], Avro [9], or JSON [10] objects.
Database schemas change frequently when applications must support new features and business needs. For example, multiple schema changes are applied every week to Google’s AdWords database [11]. Applications that use NoSQL databases also evolve data formats over time, and may require modifying objects to add or delete fields, splitting objects so that they are mapped to by multiple keys rather than a single key, or renaming keys or value fields [12]. (These changes are similar in concept to relational database schema changes, but the lack of a formal schema allows for a wider variety of less strictly specified changes.) When changes are not compatible with the old version of an application, a straightforward way to deploy them in the field would be to shut down the running applications, migrate each affected object in the database from the old format to the new format, and then start the new version of the application.
High availability applications would prefer to avoid the downtime of shutdown-and-restart upgrades, but evolving a database on-line is challenging. Thrift, Protobufs, and Avro provide some support for format changes by allowing alteration of the data encoding itself or by tracking the version of an object’s “schema” [13], [14], but there is still the task of updating each object in the database (e.g., by iterating over all of its keys [15]). For large amounts of data, this can create an unacceptably long pause. As an extreme example, Wikipedia was locked for editing during the upgrade to MediaWiki 1.5, and the schema was converted to the new version in about 22 hours [16]. Developers could avoid shutting down the application by making the new format backward-compatible with the old format, but this could impose a significant constraint on the future evolution of the application. It may also be possible to grant applications read-only access to the old database while the migration takes place, but applications that have even occasional writes will suffer.
A more general approach to evolve the database online is to migrate data lazily. When the updated application accesses an object in the old format, the object is converted to the new format on-the-fly. Thus, the long pause due to migrating the data is now amortized over the updated application’s execution, causing slower queries immediately after the update but no full stoppage. Currently, the task of implementing lazy data migration falls on the developer: applications are rewritten to expect data in both old and new formats and to migrate from the old format to the new format when the old one is encountered [12], [17], [18], [19]. This approach results in code that mixes application and format-maintenance logic. Since there is no guarantee that all data will ultimately be migrated, the migration code expands with each format change, becoming more confusing and harder to maintain.
To address these problems, this paper presents KVolve,¹ a NoSQL database that provides automatic support for online upgrades using lazy data migration. KVolve presents the logical view to applications that data is at the newest version of the format. Rather than convert all data at once, keys and values are converted as they are accessed by the application. Pleasantly, using KVolve requires almost no changes to application code: applications simply indicate the data version they expect when they connect to the database, and they are permitted to proceed if their expected version and the logical version match. When a data upgrade is installed, applications with an incompatible version must update themselves. They can do this with dynamic software updating (DSU) [20], [21], [22], [23], by concurrent application switching (as in parallel AppEngine [24]) to avoid lost application state and/or shorten pause times, or by simple stop-and-restart (to the new application version). This is straightforward, in our experience, and need not be disruptive to end users. For example, customer-facing clients in web browsers can maintain session permanence even as the backend servers, i.e., those connected to a KVolve DB, are upgraded. Such update patterns are common with load-balancing stateless servers [25].

*Work performed while at the University of Maryland.

¹KVolve stands for Key-Value store evolution.
KVolve triggers conversions automatically as data is accessed. To track its progress, KVolve attaches a version identifier to the value of each entry, allowing lost conversion state to be restored. Conversions are written by the developer. KVolve ensures updates are installed atomically in a way that supports fault tolerance. KVolve also automatically ensures that conversions take place atomically with the triggering database action; as such, KVolve avoids races that could clobber concurrent accesses. KVolve requires a conversion function to only access the corresponding old value/key, not several old key/values; to allow otherwise could violate logical consistency depending on the order that conversions are triggered. To support laziness, transformations to keys must be reversible and unambiguous. Examination of open-source software histories, and our own experience, suggests that realistic conversions typically satisfy these restrictions.
We describe a proof-of-concept implementation of KVolve as an extension to the popular Redis key-value store. We evaluate this implementation extensively, using both microbenchmarks (the standard Redis performance benchmark) and macro-benchmarks (two feature-rich applications, redisfs and Amico). Our experiments suggest that KVolve imposes essentially no overhead during normal operation and that complex applications can be upgraded with zero downtime. In particular, when upgrading redisfs we used KVolve to upgrade the filesystem data, and Kitsune [20], a whole-program updating framework for C, to dynamically update the redisfs driver. As a result, we could seamlessly maintain the file system mount point during the upgrade, resulting in zero downtime.
In summary, we make three contributions:
- We identify the challenges for evolving NoSQL databases without downtime (Section II) and, to our knowledge, we propose the first general-purpose, automatic solution to this problem (Section III).
- We describe a proof-of-concept implementation as an extension of the Redis key-value store (Section IV).
- We evaluate this implementation extensively, and we show how to combine KVolve with a dynamic program updating tool for zero-downtime upgrades (Section V).
II. THE PROBLEM WITH ON-LINE UPGRADES
This section details the problem of updating a NoSQL database on-line, and the drawbacks of prior solutions. Our approach, KVolve, is detailed in the next two sections.
A. NoSQL DBs and KV stores
NoSQL databases distinguish themselves from traditional relational database management systems (RDBMSs), by supporting a simple, lightweight interface. Our focus is on a NoSQL variant referred to as a key-value (KV) store which, as the name implies, focuses on mapping keys to values. There are two core operations: GET k, which returns the value v to which k maps in the database (or “none” if none is present); and SET k v, which adds (or overwrites) the mapping k → v in the database. Example KV stores include Redis (the most popular [1], and the target of our proof-of-concept implementation), Project Voldemort [26], Berkeley DB [27], and many others [28].
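As a concrete illustration of this two-operation interface, here is a minimal in-memory KV store sketched in C++ (illustrative only; real stores such as Redis add persistence, container types, and a network protocol on top of this interface):
```
// Sketch: the core GET/SET interface of a key-value store.
#include <optional>
#include <string>
#include <unordered_map>

class KVStore {
    std::unordered_map<std::string, std::string> map_;
public:
    std::optional<std::string> get(const std::string& k) const {
        auto it = map_.find(k);
        if (it == map_.end()) return std::nullopt; // "none"
        return it->second;
    }
    void set(const std::string& k, std::string v) {
        map_[k] = std::move(v); // adds or overwrites the mapping k -> v
    }
};
```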
While a KV store may place no formatting requirements on values (i.e., treating them as bytearrays), applications typically store values adhering to formats such as JSON [10], Avro [9], or Protobufs [7]. Some KV stores do expect a specific value format; e.g., Cassandra defines typed “rows” in “tables,” and MongoDB employs “documents.” Likewise, key formats may be unstructured (i.e., just strings) or have some structure added by the system (e.g., a notion of prefix, or namespace).
B. Example application and update
As an example (adapted from Sadalage and Fowler [12]), consider an on-line store which keeps track of purchase orders. The application stores these orders in a KV store, using keys of the form order:n, where n is a unique invoice number, and values formatted as JSON records describing the purchasing customer and what was ordered. In this key, order is a prefix to assist in key grouping, e.g., as part of the encoding of a table. An example JSON record is shown in Figure 1(a).²
Suppose we wish to upgrade the application to support differentiated pricing, which necessitates changing the data format in the KV store. Keys remain the same, but values change: we rename the field price to fullprice, and insert a new field named discountedPrice that is a possible reduction of the original price. The updated orderItems array (the last element of the JSON object) is shown in Figure 1(b).
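A sketch of this value transformation, written against the nlohmann/json C++ library for readability (the paper's actual transformer, shown later in Figure 4, uses the Jansson C library; applyDiscount is a hypothetical pricing helper):
```
// Sketch: rename price -> fullprice and add discountedPrice to each
// orderItems element of the JSON value from Figure 1.
#include <nlohmann/json.hpp>
#include <string>

// Hypothetical pricing helper (10% off), for illustration only.
static double applyDiscount(double fullPrice) { return fullPrice * 0.9; }

std::string upgrade_order(const std::string& old_value)
{
    nlohmann::json order = nlohmann::json::parse(old_value);
    for (auto& item : order["orderItems"]) {
        item["fullprice"] = item["price"];    // rename price -> fullprice
        item.erase("price");
        item["discountedPrice"] =
            applyDiscount(item["fullprice"].get<double>());
    }
    return order.dump();
}
```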
C. Past approaches to on-line data upgrades
**Eager, stop-the-world data upgrades:** One approach for implementing the data upgrade described above is to simply halt all client applications and use a script to convert all of the out-of-date data in the KV store. Once all the data is updated, the clients can be restarted. For our example, the conversion script would get each purchase order value, modify it to the new format, and set it back.
²JSON defines four primitive types: numbers, strings, booleans, and null. It also defines two container types: arrays, which are an ordered list of values of the same JSON type; and objects, which are an unordered collection of values of any JSON type, with field labels. We use JSON as an example only; other formats are also supported by KVolve.
KVolve aims to solve the on-line upgrade problem in a way that enjoys the best features of lazy and eager data upgrades. In particular, KVolve migrates data lazily, as it is accessed by applications, thus eliminating any long, disruptive pause. But KVolve presents a logically consistent view to applications, providing the appearance that all the data is instantly upgraded to the new version. As a result, programmers do not need to add any version-management code to their applications; they simply write the application assuming the most recent data version. Because the lazy migration is handled by KVolve, it can ensure there are no errors due to concurrent interactions.
A. KVolve design
Our approach is characterized by three techniques.

**Versioned data:** We associate logical version identifiers with the database content. Rather than having a global data version, we track separate versions for data associated with different key prefixes; e.g., data mapped from keys $p_1:x$ (for all $x$) has a separate version ID space from data mapped from keys $p_2:x$ (for all $x$). A version tag is stored with each data item indicating its actual version, which might be earlier than its logical one (i.e., if the data item has not been migrated yet). Version tags are invisible to applications accessing the database. When an application connects to the database, it indicates the version IDs of the key prefixes it will use, and KVolve compares them to the logical versions of those prefixes. If the two IDs match, KVolve accepts the connection.
**Update specifications and state transformer functions:** When the database is to be upgraded, the operator installs a specification describing the mechanics. In particular, the specification defines the new logical prefix versions and provides state transformer functions to be used to upgrade particular values. Each transformer function $f$ is associated with a key prefix $p$. If a key of the form $p:x$ (for some $x$) maps to a value $v$, then $p:x$ will be updated to map to $f(v)$. KVolve can handle key format changes too, as discussed below.
**On-demand (lazy) transformation:** Once the update specification is installed, applications connected to the database that are out of date must be disconnected. They will reconnect at the new code version (mirroring the situation with eager upgrades). Doing this is not onerous for most applications, as discussed in Section IV-B.
Once a new application version starts running, it will submit GETs and SETs to KVolve for handling. If a GET accesses a value that is out of date, KVolve first updates the stale item using the appropriate transformer function. If a data item is several versions out of date, transformer functions are composed and applied automatically.
We illustrate these three techniques in Figure 2. Here, ClientX initially connects at version v0 and is able to access the value mapped from k:x safely, since it is also at v0. Then ClientU updates the database to version v1, including a state transformer function for prefix k; this function concatenates the string “upd” onto an existing key’s value. The update causes ClientX to be disconnected, because its version v0 is now inconsistent with the database’s logical version v1. Finally, ClientY connects to the database at version v1 and performs a GET on key k:y. This key maps to a stale value at version v0, so KVolve remaps the key to the value produced by running the transformer function on the old value, and then returns the updated value to ClientY.
B. Ensuring logical consistency
KVolve’s goal is to provide a logically consistent view to applications. That is, any sequence of commands issued by up-to-date clients should produce the same results whether interacting with a fully (i.e., eagerly) updated database or with one whose data is being migrated lazily, as it is accessed. This goal imposes three requirements on KVolve’s implementation.
First, state transformations must occur atomically with the operation that induced them. Second, transformer functions may only reference the old version of the to-be-updated key/value, and key changes are restricted. Third, the update specification must persist once it is installed, so that logical consistency is maintained following recovery from a fault.
**Atomicity:** Upgrading data atomically ensures that clients accessing the data concurrently with the lazy transformations will not cause anomalies that cannot occur with eager upgrades or during normal operation. To see how an anomaly could be introduced, consider a trace with a GET k:x by client A and a SET k:x w by client B. Suppose that k:x maps to v in the old database, and the update’s transformer function f operates on a key’s old value to produce the new one.
In an eager update, k:x’s value v is updated to be f(v). Then there are two possible execution schedules: client A could retrieve f(v) and client B could set k:x to w, or client B could do the set, in which case A returns w (which is already up to date). In both cases, the final database maps k:x to w.
In a lazy update, a transformer must be invoked before returning the value to A. One way to implement this would be to convert client A’s GET into two commands when dealing with an out-of-date value: SET k:x f(v) (i.e., set it to the updated value) and then GET k:x (i.e., return the transformed value f(v) to the client). But in this case, client B’s SET could be scheduled after the old value v is read but before client A’s SET and GET execute. A’s SET would then overwrite B’s w with f(v), and A’s GET would return f(v). This final state, with k:x mapping to f(v) and B’s write silently lost, could never arise in the eager case. Effectively, improperly implemented lazy updates could cause client B’s operations to fail silently, without notifying B of the failure. This anomaly violates logical consistency. In contrast, this scheduling is not possible if client A’s read and update are always atomic.
One of KVolve’s benefits over by-hand modification of code to support lazy migration is that it can ensure atomicity automatically.
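A sketch of why placing the migration inside the command handler gives atomicity (types and the transformer argument are illustrative, not KVolve's API): the stale value is transformed and re-stored in the same handler invocation that serves the GET, so no other client command can interleave between the internal write of f(v) and the read.
```
// Sketch: atomic lazy transform-on-GET inside a single-threaded
// command loop (as in Redis, each command runs to completion).
#include <functional>
#include <string>

struct Entry { std::string value; int version; };

std::string handle_get(Entry& e, int logical_version,
                       const std::function<std::string(const std::string&)>& f)
{
    if (e.version < logical_version) { // stale: migrate in place...
        e.value = f(e.value);
        e.version = logical_version;
    }
    return e.value;                    // ...then serve the request
}
```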
**Limited domain of transformer functions:** KVolve restricts transformer functions f in two ways. First, a transformer may only operate over the old version of the key/value it is updating, and not any other items in the database. Second, a transformer may not change keys arbitrarily; it supports only unambiguous bijections on a key’s prefix.
The first restriction ensures that when f runs, it will operate on the same keys/values it would have if run when the update was installed. This is because only the first GET of a key could possibly see a stale value, and it will immediately update it. As such, it ensures a logically consistent view. On the other hand, if we allowed f to access other data items, it is easy to see how logical consistency is broken. For example, suppose the function f to update a key k:x’s value also examined m:x’s value v. If the new-version code executed a SET m:x w prior to a GET k:x, then f would read w, not v.
The second restriction ensures that lazy key updates can be implemented safely and efficiently. After an update is initiated, the new application version will issue commands using the new keys. For example, suppose an update changes the prefix from k to m:j, so that keys k:n would become m:j:n (for all n). After the update, applications will submit commands like GET m:j:n. If the key is present, we need to be sure that it is a new-version key, not an old one that has yet to be transformed; as such transformations may not map to key prefixes that are also present in the old database version. On the other hand, if the key m:j:n is not present, KVolve should look for the old version of the key, in case it is there and thus needs to be updated. To do this, KVolve will have to run the transformation backwards, i.e., on m:j:n to produce k:n. Limiting transformations to key prefixes helps make backward transformation efficient, since KVolve can match keys against (new-version) prefixes directly.
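A small sketch of the restricted key transformation (an illustrative helper, not KVolve code): because the change is an unambiguous bijection on prefixes, it can be run forward (old to new) or backward (new to old) by plain prefix substitution, which is what makes the backward lookup for not-yet-migrated keys cheap.
```
// Sketch: forward/backward prefix substitution for key updates.
#include <optional>
#include <string>

std::optional<std::string> swap_prefix(const std::string& key,
                                       const std::string& from,
                                       const std::string& to)
{
    if (key.compare(0, from.size(), from) != 0) return std::nullopt;
    return to + key.substr(from.size());
}

// Forward:  swap_prefix("k:n",   "k",   "m:j") yields "m:j:n"
// Backward: swap_prefix("m:j:n", "m:j", "k")   yields "k:n"
```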
Restricting transformation functions in this way is conceptually limiting, but not practically so, we believe. We analyzed 18 of the most active projects on GitHub that used Redis to store program data, and none of the programs contained value changes that were dependent on other value changes, and key changes were limited to prefix changes.
**Fault tolerance:** Many KV stores provide fault tolerance guarantees; i.e., there is a way to checkpoint the database so that it can be recovered after a crash. As such, if the database crashes during a lazy upgrade, KVolve’s implementation should ensure the logical view is retained following recovery. KVolve ensures this by (a) storing per-data version tags in the database, so they are made persistent; and (b) storing the update specification (and logical version) in the database, atomically, when the update is installed. This way, if the database crashes before the update is fully installed, then on recovery the database will still appear (correctly) to be at the old version. But once the update is fully installed, the database identifies as being at the next logical version, and lazy migration can pick up where it left off after recovering from a failure.
This section describes our implementation of KVolve as a modular extension to the popular Redis key-value store.
A. KVolve implementation overview
KVolve is implemented as a separate library compiled into Redis. It works by preprocessing commands coming in from the client before passing them along to Redis, as depicted in Figure 3. In Step 1, the client issues the command, e.g., `SET Kx my_val`. In Step 2, `kvpProcessCmd`, KVolve’s hook is called to preprocess the command (the dashed green box is the KVolve library). Once the KVolve preprocessing is complete (which might involve changes to data’s contents and version field), control returns to normal Redis. In Step 3, Redis’s `processCmd` function calls the function pointer shown in blue (which depends on the choice of command—here it is `procSet` because the client requested `SET`), and this adds the affected object to the database, including any changes to the version field set during KVolve’s processing. Finally, in Step 4 Redis responds to the client’s request, acknowledging to the client that it successfully executed the SET command.
All of this is sure to be atomic because Redis is single-threaded: it processes each command it receives in its entirety before moving to the next. Redis provides commands, such as `multi`, that can be used to execute a group of commands atomically; KVolve’s design works in concert with such commands. We also believe that KVolve’s basic “interceptor” architecture would work in multi-threaded KV store implementations by employing appropriate synchronization.
B. Describing and installing updates
An update consists of transformer functions that will convert the old version of a key and/or value to the new version. The programmer compiles the transformer functions into a shared object file that she can direct KVolve to install (using a repurposed Redis command). After installation, the shared object and metadata about it are stored persistently in Redis, meaning that the specification is restored in case of a crash.
There are two kinds of updates: key/value updates and key updates (only). As an example of the former, Figure 4 shows a transformer for the example from Figure 1. The old key (a string) and value (binary data) are passed in by reference, and the function will update them to the new versions via these references. In this case, the body of the function uses the Jansson library [30] to implement the change to the purchase order example from Figure 1 described in Section II-B; the last two lines update the value (the key is not changed). Writing this code is a bit tedious. As done in our prior work [20], [31], [32], we could easily implement a domain-specific language to simplify the process.
Along with the transformer functions, an update specification contains a function that is invoked when the shared object is loaded as part of an update. This function consists of a series of calls to install transformer functions. Our example above is installed by the following call:
```
kvolve_upd_spec("order","order", 0, 1, 1,test_fun_updval);
```
This call indicates that the order prefix does not change between version 0 and version 1, and that one transformer function, test_fun_updval, should be called for each key with the prefix order.
Key prefixes can be changed without requiring a transformer function. For example, in the Amico program described in Section V-C, the keys are renamed from the prefix `amico:followers` to the prefix `amico:followers:default`. To describe this update, the initialization function would include the call:
```
kvolve_upd_spec("amico:followers", "amico:followers:default",1, 2, 0);
```
where the version numbers are 1 and 2, and the final 0 indicates that there are no functions to manipulate the value.
KVolve will close the connection to all clients using the old version of the updated prefix(es). A disconnected client will not be allowed to reconnect until it is upgraded to the new version. Clients not using the updated prefix(es) will not be affected. To use KVolve, therefore, processes connecting to it must be coded to support disconnection, upgrade, and restart.
KVolve stores update specifications indefinitely. We find that the transformer functions take up a small amount of space relative to the rest of the data. However, if program updates are very large or very frequent, one could employ a background client or similar thread to force updates to outdated data by GETting them all; once done, all update information could be freed for that version. (This would essentially be a hybrid of the lazy and eager approach.)
C. Key lookup
After an update is installed, the database’s logical version is advanced. Because the new version might transform the format of keys, KVolve may need to look up the old version of a key specified in an application’s GET or SET commands, so that it can update that key (as per Section III-B). To support this, we use an update information hash table (UIHT). This table maps a key prefix to a record containing that prefix and pointers to records that describe the next and/or previous versions of the prefix. For example, after a key update from foo: to foo:bar:, the table would map prefix foo: to a record q whose next pointer refers to a record r for foo:bar:, which points back to q; the table would also map prefix foo:bar: to r. KVolve can thus trace through all current and former versions of a prefix when applying updates. The records also contain the transformation functions for moving forward between versions, and track the IDs of client connections using a particular prefix version, so those clients can be disconnected on an update to it.
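The UIHT can be sketched as follows (field names are illustrative, not KVolve's actual declarations): each prefix version is a record linked to its neighboring versions, and the table indexes every version of every prefix.
```
// Sketch: the update information hash table (UIHT).
#include <functional>
#include <string>
#include <unordered_map>
#include <vector>

struct PrefixRecord {
    std::string prefix;                          // e.g., "foo:bar:"
    PrefixRecord* prev = nullptr;                // earlier prefix version
    PrefixRecord* next = nullptr;                // later prefix version
    std::function<std::string(std::string)> fwd; // value transformer to next
    std::vector<int> client_ids;                 // connections at this version
};

// Maps every current and former prefix string to its record.
using UIHT = std::unordered_map<std::string, PrefixRecord*>;
```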
After a key update, client queries will use the new key format; e.g., after updating prefix foo: to foo:bar:, client commands will refer to keys foo:bar:n. KVolve first looks to see if a key exists under the issued name. If it is not there, an old, non-updated version of the key may be present, so KVolve looks up foo:bar: in the UIHT to see if a record maps to an old prefix. In this case, KVolve will see that prefix foo:bar: points back to prefix foo:, so it reissues the lookup with key foo:n instead. If it finds it, it renames the key to foo:bar:n and returns the value to the client. If no match is found, it continues to follow backpointers in the UIHT in case still earlier versions should be considered. If none are found, meaning that no key is present, KVolve returns control to Redis without further action.
Looking for keys under previous prefixes adds additional lookups only once (during the update) when the key is present under an old version. However, the case where there is no key present under any prefix version will add unnecessary additional lookups each time the non-present key is queried. An application that frequently queries keys that are not present where there has been a prefix change could negatively impact performance. In previous work [32], we experimented with adding a sentinel value to mark the key as absent, skipping the step of checking for the key under previous prefixes, thus saving on lookup time but adding a bit extra storage.
D. Getting and setting values
In the simplest case, keys map to string values; we consider this case first. Our implementation currently supports 36 Redis commands and all of the main Redis data structures (string, set, list, hash, sorted set); we discuss containers in the next subsection. We focused on implementing commands that modify data. The majority of the Redis commands that we did not implement do not affect updating the data (e.g., commands related to networking, such as pub/sub functionality or connectivity).
**GET**: If the client request involves getting a string, KVolve must first look up the existing robj value structure in the database to get the version information. An example key-value pair for string types is shown for key1:string in the first column of Figure 5. This lookup retrieves a pointer to the actual object structure stored in the database, so any modifications that KVolve makes to this object are automatically stored in Redis. Note that this requires an additional database lookup by KVolve, on top of the one that Redis will do later when it handles the client request; however, this is an O(1) operation and does not incur excessive overhead relative to the other operations that KVolve must already perform.
If the version field of the robj (in this example the version is 1 shown in red for key1:string in Figure 5) is current for the prefix of the key, or if the key is not present under the current or any former prefix (and therefore no robj exists for the key), then KVolve returns control to Redis and does no further processing. If the version is not current, either in the current prefix or a former prefix, KVolve will update the key and value, as specified. All of the necessary information to perform the update (the transformer functions themselves, and the meta-data about which prefixes and versions the updates apply to) is stored in the update information hash table, and KVolve uses that information to apply the update as follows:
- In the case of a key prefix change, KVolve uses the update information from the hash table to perform the key rename, leaving the value untouched.
- In the case of a value change, KVolve calls any applicable user-supplied functions and applies them to the value, starting from the oldest needed update and working forward to the current version. After all of the transformer functions have been applied, KVolve stores the updated value in the robj (which is a pointer to the actual structure stored in the database) and updates the version string to match the current version by setting the field in the robj.
If both actions (key prefix and value change) are necessary, KVolve will perform both. KVolve then returns control flow to Redis, and when Redis performs its own GET, it will retrieve and return the newly updated key to the client.
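The composition step for values that are several versions stale can be sketched as follows (illustrative types, as before): apply each registered transformer in order, oldest first, then stamp the object with the current version.
```
// Sketch: bring a stale value forward by composing transformers.
#include <functional>
#include <string>
#include <vector>

struct VersionedValue { std::string data; int version; };

void bring_current(
    VersionedValue& v,
    const std::vector<std::function<std::string(const std::string&)>>& fns,
    int current_version)
{
    // fns[i] upgrades a value from version i to version i + 1
    for (int ver = v.version; ver < current_version; ++ver)
        v.data = fns[static_cast<std::size_t>(ver)](v.data);
    v.version = current_version;
}
```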
**SET**: If the client request involves setting a string, KVolve first checks whether the request has any flags that would prevent the value from being set. These flags, **XX** and **NX**, respectively specify to set the key only if it already exists, or only if it does not already exist. If necessary, KVolve does a lookup in the database to determine whether the key exists, indicating whether the value will be set for the requested key. (As described in Section IV-A, if the prefix changed, KVolve will search for the key under the old prefix to see if it exists.) If the value will not be set due to the flags, KVolve does nothing and returns control to Redis. In this SET command, or any command where Redis will be adding the robj to the database, Redis deletes the old robj and replaces it with the new one from the client’s request. Therefore, all KVolve must do is set the most current version string for the key’s prefix in the robj. (There is no need to update the value itself, because the client’s provided value is guaranteed to be at the up-to-date version.) If this SET occurs after a key prefix change, KVolve must delete any old value for the key to ensure that deprecated-version keys do not unnecessarily bloat Redis. For example, in a change to redisfs (presented in Section V-B), an old key prefix was named skx:/, but after an update the new name inserts DIR such that the prefix is skx:DIR:/. If the user were to set the key skx:DIR:/root before getting (and thereby updating) it, the old key skx:/root would be left in the database. Therefore, KVolve checks whether a version of the key exists under the old prefix and, if it does, deletes it. It does this by first checking whether the prefix had any previous changes; if not, it does nothing, and if so, it checks for and deletes the old key as necessary. At this point KVolve returns control to Redis, which adds the robj structure (including the updated version string) to the database.
**E. Sets, hashes, lists, and sorted sets**
The other Redis value data structures are containers of sub-values. All Redis containers are built from robj structures, which store the actual data. Figure 5 shows examples, in columns two and three, of robjs containing a hash of strings and a set of integers, respectively. KVolve stores version information in the container, not in the contained values (to avoid more pervasive changes to Redis), so updates to containers happen all at once.
The process for doing a GET or SET on one of the container elements is the same as for the string type described in Section IV-D, except that if an update is necessary then all sub-elements are updated using a Redis-provided iterator.
**V. EXPERIMENTAL RESULTS**
This section considers the performance impact of KVolve, during normal operation and during an update. Our experimental results are summarized as follows:
- Using the standard benchmark that is included with Redis, we found that KVolve adds essentially no overhead during normal operation, and we determined that storing the version and update information in Redis adds only about a 15% overhead in space.
- We updated the redisfs file system, which included renaming some keys and compressing data stored in some keys, and found the operating overhead to be within the noise and the pause time to be close to zero, as opposed to 12 seconds for an offline data migration.
- We updated the Amico social network system and found no added overhead, with a pause time of close to zero as opposed to 87 seconds for an offline data migration.
In our experiments, we worked with read loads because they are the worst case: reads are where the lazy update takes place. In the write case, the old data is simply replaced by the new data, which is guaranteed to already be up to date due to version checking.
All experiments were performed on a computer with 24 processors (Intel(R) Xeon(R) CPU E5-2430 0 @ 2.20GHz) and 32 GB RAM, with GCC 4.4.7 on Red Hat Enterprise Linux Server release 6.5. All tests report the median of 11 trials, and communication was via localhost with ~0.03 ms latency.
**A. Steady state overhead**
First we report the steady-state overhead for KVolve as measured by Redis’s included benchmark, Redis-bench. Redis-bench acts as a client that repeatedly issues commands to Redis. The default settings for Redis-bench are 50 clients, 10,000 repetitions of a single operation at a time (only 1 request per round trip), and a single key (getting or setting the same key multiple times), but Redis-bench allows many different configurations. For a longer benchmark we increased the number of operations to 5 million, and for a more realistic benchmark we performed these operations over 1 million keys, leaving the rest of the default settings alone. We ran this experiment over localhost, which had a latency of ~0.03 ms. We chose three types of GET operations (string gets, set pops, and list pops) and three types of SET operations (string sets, set adds, and list pushes), as these were part of the default benchmark operations test suite.
Table I shows the steady-state overhead of this experiment. Column 3 shows unmodified Redis for comparison, and we break the overhead into separate categories: KVolve with no prefixes to update declared (causing KVolve to return immediately for each key) in column 4; KVolve with a single prefix declared (causing a hash lookup and a version check for each key) in column 5; and KVolve with a previous prefix declared but no keys remaining under the old prefix (causing a hash lookup, a version check, and a string concatenation to look for a non-existent previous key) in column 6. Each sub-column of Table I shows the total time for the test, the SIQR (semi-interquartile range) to show the variance, and the overhead relative to unmodified Redis. We ran this benchmark many times with various configurations (multiple key prefixes to track, more or fewer keys, more or fewer clients, etc.) and found that the overhead varied by roughly ±3%, with no consistent pattern between any of the tests, even repeated tests with the exact same setup. The numbers presented in the table show some negative and some positive overhead, reflecting this variation. Notice that the SIQR numbers show that the variance is relatively high, as high as 1.49 s for setting strings with KVolve and a prefix, shown in the fourth row of the fifth column.
The bottom half of Table I shows a modification of the original overhead experiment, using a pipeline to feed 10 instructions into each round trip of Redis-bench over localhost. This reduces the I/O overhead, putting more emphasis on KVolve’s own operations. These numbers show a bit more overhead, and allow us to bound the overhead at 5.74% for 10 pipelined instructions. This test demonstrates that although KVolve adds some overhead, in the non-pipelined and most common scenario (top of Table I) the overhead is mostly buried in I/O and very low overall. (In our test programs, described next, Amico pipelined at most 3 instructions per round trip, and redisfs did not use pipelining.)
In addition to time overhead, KVolve incurs some additional memory overhead due to tracking the version information. Table II shows the maximum resident set size as reported by ps. When empty, Redis and KVolve take up about the same amount of memory. With 1 million keys, each mapping to a 10-byte value, and with 5 separate prefixes declared, KVolve takes up about 16.5 MB (~15%) more memory than unmodified Redis. This includes the extra version field (4 bytes) on each value structure, the space needed to store the version lookup information and hash table, and any extra padding automatically added to the additional structures.
**B. redisfs**
redisfs [33] uses Redis as the backend to the FUSE [34] file system. The inode information, directory information, and all file system data are stored in Redis. On startup, FUSE mounts a directory with Redis as the backend, and a user can perform all of the normal operations of a file system, with the data silently stored in Redis. redisfs has 8 releases of ~22K lines of C code each. In redisfs.5, released March 4th, 2011, file data is stored in Redis as a binary string with no compression, and directory keys have the format skx:/path/to/dir. In redisfs.7, released March 11th, 2011, file data is compressed using zlib, and directory keys have the format skx:DIR:/path/to/dir. (Note that redisfs.6 contained an error and was retracted, so we use versions .5 and .7.) This change makes it impossible to view the directories, or any of the files created using redisfs.5, with redisfs.7.
In all versions, the inode data is stored across 12 Redis keys, including meta information such as modification time and file size. All file system information is represented in redisfs with four prefixes: the skx: prefix for directories (which is updated to skx:DIR in redisfs.7), the skx:NODE prefix for inodes (some of which are updated to add compression in redisfs.7), skx:PATH for paths to directories, and skx:GLOBAL to track internal structure; the last two are not updated. To make redisfs compatible with KVolve, we added only 6 lines of code to each version, consisting of an additional call to Redis on start-up to declare that we would be using those 4 prefixes at either version .5 or .7 (as sketched below), along with a few additional lines of error handling.
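As a rough illustration of what those added start-up lines might look like, the sketch below declares redisfs's four prefixes through the hiredis client. The KVOLVE.DECLARE command name and its argument order are placeholders of ours; KVolve's actual declaration interface is described earlier in the paper.

```c
/* Sketch only: declare redisfs's four namespaces at a given version on
 * start-up. KVOLVE.DECLARE is a placeholder command name, not KVolve's
 * real interface. */
#include <hiredis/hiredis.h>

static int declare_namespaces(redisContext *c, const char *ver /* "5" or "7" */) {
    const char *prefixes[] = { "skx", "skx:NODE", "skx:PATH", "skx:GLOBAL" };
    for (int i = 0; i < 4; i++) {
        redisReply *r = redisCommand(c, "KVOLVE.DECLARE %s %s", prefixes[i], ver);
        if (r == NULL || r->type == REDIS_REPLY_ERROR) {
            if (r) freeReplyObject(r);
            return -1;   /* the few lines of error handling noted above */
        }
        freeReplyObject(r);
    }
    return 0;
}
```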
We performed an update from redisfs.5 to redisfs.7, both by migrating the keys offline (referred to as the Eager version), and with KVolve automatically renaming the directory keys as they are accessed and adding compression to the files as they are accessed. In addition to updating the data with KVolve, we also used Kitsune [20], whole-program update software for C, to dynamically update the redisfs code along with the data so that users experience no downtime; the switchover from .5 to .7 is completely seamless. Normally, killing redisfs.5 and restarting at redisfs.7 also causes the mount point to be unmounted and then remounted (causing the user to have to
TABLE II: Memory overhead (maximum resident set size)
<table>
<thead>
<tr>
<th>Program</th>
<th>Max RSS</th>
</tr>
</thead>
<tbody>
<tr>
<td>Redis, empty</td>
<td>7.7MB</td>
</tr>
<tr>
<td>Redis, 1M 10-byte values</td>
<td>112.1MB</td>
</tr>
<tr>
<td>KVolve, empty</td>
<td>7.7MB</td>
</tr>
<tr>
<td>KVolve, 5 prefixes, 1M 10-byte values</td>
<td>128.6MB</td>
</tr>
</tbody>
</table>
switch back into the mounted directory after the remount), but with Kitsune, the mount point is not disrupted during the switchover. We used the file system benchmark PostMark [35] to generate a workload for redisfs, creating an initial 10,000 files ranging from 4–1024 bytes in 250 subdirectories plus the root directory, for a total of 251 directories. We ran PostMark outside the root directory mount point, accessing the files via full path names, to avoid having to change directories across the restart in the Eager (non-KVolve/Kitsune) version.
Figure 6 shows the results of the redisfs experiment. After about 60 seconds, PostMark switched from creating new files to reading from or appending to existing files. As shown on the left y-axis, both KVolve and the Eager version had very similar average queries per second (QPS), displayed by the solid and finely dashed lines. At 80 seconds, we initiated the update. For KVolve, we used Kitsune to dynamically update to redisfs.7 without pause, maintaining the mount point so that the benchmark never lost access to the files or the directory structure, and KVolve continued to process queries throughout the update. For the Eager version, we killed redisfs.5, halted all traffic to Redis, and migrated the data, performing the renames and compression as necessary. In this update, not all of the keys needed to be updated: only the 251 directory keys needed to be renamed and the 10,000 data keys needed to be compressed. However, the database contained 123,002 total keys, and the to-be-updated keys had to be searched for in the database, adding to the pause time. This offline update process took about 12 seconds, as shown in Table III.
In addition to the QPS lines, the green widely-dashed line in Figure 6 shows the number of lazy updates per second for KVolve, corresponding to the right y-axis. Immediately after the update, this number burst to ∼3K keys per second, then quickly trailed off as keys were lazily updated. KVolve renamed the 251 directory keys, updated the version on all 112,752 keys in the skx:NODE prefix, and compressed the data for the 10K keys in that prefix that contained file data.
Overall, the impact of the update on redisfs was minimal: the QPS dipped only slightly right after the update before quickly returning to full speed around the 120-second mark of the experiment. After the update, the overall QPS was lower for both the KVolve and Eager versions because files must now be compressed and decompressed as they are accessed.
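At its core, the lazy behavior measured above is a per-access version check. The C sketch below restates that logic under illustrative names of our own (kv_check_version, ns_current_version, and the transform hooks are not KVolve's actual source): when a command touches a key whose value was written under an old schema, the interposition layer renames and/or transforms it before the command proceeds.

```c
/* Illustrative sketch of the per-access lazy update described above.
 * All names are ours; the real interposition logic lives inside KVolve. */
typedef struct value { int version; void *data; } value;

extern int  ns_current_version(const char *prefix);       /* declared version  */
extern void rename_key_if_needed(const char *key, int to); /* e.g., add skx:DIR */
extern void transform_value(value *v, int to);             /* e.g., zlib compress */

void kv_check_version(const char *key, const char *prefix, value *v) {
    int cur = ns_current_version(prefix);
    if (v->version < cur) {              /* value written under an old schema */
        rename_key_if_needed(key, cur);  /* e.g., skx:/p -> skx:DIR:/p        */
        transform_value(v, cur);         /* e.g., compress file data          */
        v->version = cur;                /* now up to date; served normally   */
    }
}
```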
### C. Amico
Amico [36] maps relationships in the style of a social network, defining a set of users and the relationships between them. Amico provides an API that allows queries over a data set of users: a user may be following or be followed by any number of other users. Amico is backed by Redis, has 10 versions created between 2012 and 2013, and is written in ∼200 lines of Ruby code. Amico version 1.2.0, released Feb 22, 2012, stores these relationships in 5 different types of Redis keys with the following prefixes: amico:followers, amico:following, amico:blocked, amico:reciprocated, and amico:pending. In version 2.0.0, released Feb 28, 2012 (the next consecutive version after 1.2.0), the developers added the concept of a "scope" so that different graphs could be stored in Redis with prefixes to keep them separate, such as a "school" network and a "home" network. The default name for the scope is "default", so that all of the keys are prefixed with, for example, amico:followers:default. This change makes databases created with Amico 1.2.0 incompatible with Amico 2.0.0. To make Amico work with KVolve, we changed only the same 4 lines of code in each version to declare the prefixes right after Amico connects to Redis.
For this experiment, we used the LiveJournal data set from the SNAP [37] library. The LiveJournal data set has 4,847,571 nodes and 68,993,773 directed edges, each defined by an ordered pair of node id numbers ($A$ follows $B$), such as 186032 2345471, which we shuffled into two separate files for reading in a random order. To create a workload, we started two programs with calls to Amico 1.2.0: one program to read from the first random file and add nodes to the Amico network, and one program to read from the second random file and perform queries over nodes in the network, such as querying whether USER A followed USER B or querying the number of followers of USER A. After letting the programs run for 900 seconds (15 minutes), the Redis database was filled with 792,711 keys containing node and edge data.
At the 900 second mark, as shown in Figure 7, we stopped both of the Amico 1.2.0 programs. For the Eager case (finely dashed line), we then updated all 792,711 keys, renaming them to have the $\texttt{default}$ scope prefix in all of the key names. This migration took ∼87 seconds as shown in Table III. In addition to the pause, the Eager case shows a continued disruption until around the 1,000 second mark. After the migration was complete, we started the same writer/reader programs, this time using Amico 2.0.0. For the KVolve case (solid line), we immediately started the two Amico 2.0.0 programs after the update so that the keys could be lazily migrated. Right at the update point, there is a ∼2K drop in the QPS (left y-axis), before a brief spike and a return to the original rate. The widely-dashed green line corresponds to the right y-axis and shows the number of lazy updates that take place each second. Because this is a very large data set, many of the keys are not accessed immediately, taking full advantage of laziness. Although the lazy updates continue at a rate of about 500 per second at the 1,100 second mark, this does not significantly
TABLE III: Offline update pause times
<table>
<thead>
<tr>
<th></th>
<th>Pause (s)</th>
<th>Update Events</th>
</tr>
</thead>
<tbody>
<tr>
<td>Amico</td>
<td>87</td>
<td>792,711: rename</td>
</tr>
<tr>
<td>redisfs</td>
<td>12</td>
<td>10,000: compress; 251: rename</td>
</tr>
<tr>
<td></td>
<td></td>
<td>(123,002 total keys in database)</td>
</tr>
</tbody>
</table>
impact overall queries per second, as shown by the solid line maintaining a similar QPS before and after the update.
VI. RELATED WORK
In the realm of relational databases, the evolution of an application’s schema is characterized by the changes to the CREATE TABLE statements used to instantiate the schema in subsequent versions of the application. In practice, complex schema changes often require taking the application offline or locking the database tables, such as the update to Wikipedia that held a write lock for 22 hours [16]. Prior research has proposed supporting non-blocking schema changes by accepting out-of-date copies of database objects [38], or by implementing changes on-the-fly using triggers [39] or log redo [40]. Additionally, several professional tools can perform ALTER TABLE operations in a non-blocking manner [41], [42], [43], [44], [45]. Because these tools focus only on the database, the changes implemented must be backward compatible to avoid breaking the application logic. To avoid this limitation, the Imago system [46] proposed installing the new version in a parallel universe, with dedicated application servers and databases, which allowed it to perform an end-to-end upgrade atomically. This can be achieved in practice by deploying parallel AppEngine [24] applications at multiple versions. However, this approach duplicates resources and exposes the new version to the live workload only after the data migration has completed.
In contrast, the F1 database from Google implemented an asynchronous protocol [11] for adding and removing tables, columns, and indexes, which allows the servers in a distributed database system to access and update all the data during a schema change and to transition to the new schema at different times. This is achieved by having stateless database servers with temporal schema leases, by identifying which schema-change operations may cause inconsistencies, and by breaking these into a sequence of schema changes that preserve database consistency as long as servers are no more than one schema version behind. Google’s Spanner distributed key-value store [47] (which provides F1’s backend) supports changes to key formats and values by registering schema-change transactions at a specific time in the future and by utilizing globally synchronized clocks to coordinate reads and writes with these transactions. These systems do not address changes to the format of Protobufs stored in the F1 columns or Spanner values [12], or inconsistencies that may be caused by interactions with (stateful) clients using different schemas [48].
Schema evolution in NoSQL databases is less well understood, as these databases do not provide a data definition language for specifying the schema. However, many applications attach meaning to the format of the keys and values stored in the database, and these formats may evolve over time. In particular, the values often correspond to data structures serialized using JSON [10] or a binary format like Thrift [8], Protobufs [7], or Avro [9]. The latter formats have schema-aware parsers, which include some support for schema changes, e.g. by skipping unknown fields or by attempting to translate data from the writer schema into the reader schema [13]. However, orchestrating the actual changes to the data and the application logic is entirely up to the programmer.
One approach defines a declarative schema evolution language for NoSQL databases [49]. This language allows specifying more comprehensive schema changes and enables the automatic generation of database queries for migrating eagerly to the new schema. (While the paper also mentions the possibility of performing the migration lazily, which is needed for avoiding downtime, design and implementation details are not provided.) Other approaches use a domain-specific language (DSL) for describing data schema migrations for Python [31] and for Haskell datatypes [50]. Many other approaches [51], [52], [53], [54] have focused on the problem of synthesizing the transformation code to migrate from one schema version to the next; the transformation is then typically applied offline, rather than incrementally online. In this paper, we focus on how to apply a transformation without halting service rather than on synthesizing the transformation code.
In practice, developers are often advised to handle all the necessary schema changes in custom code, added to the application logic that may modify the data in the database [12], [17], [18], [19]. This approach burdens programmers with complex code that mixes application and schema-maintenance logic and does not provide a mechanism for reasoning about the correctness of schema changes performed concurrently with the live workload.
Our work is also related to the body of research on dynamic software updates [20], [23], [21], [22], which aim to modify a running program on-the-fly, without causing downtime. However, with the exception of a position paper [55], these approaches focus on changes to code and data structures loaded in memory, rather than changes to the formats of persistent data stored in a database.
VII. CONCLUSIONS AND FUTURE WORK
This paper has presented KVolve, a general approach to evolving a NoSQL database without downtime. KVolve adapts Redis to migrate data as it is accessed, reducing downtime that would otherwise result during a data upgrade, and minimizing required changes to applications. We find that KVolve imposes essentially no overhead when not performing an update, and minimal overhead when performing an update.
In the future, we would like to expand KVolve to work with Redis Cluster, a distributed implementation of Redis. We also would like to add direct support for programmer-specified, backward-compatible updates, which would support continued operation without restarting clients. Finally, we would like to streamline writing the transformation function with a DSL, simplifying the update planning process.
We plan to release our code and make it freely available.
REFERENCES
[36] Agora Games, “Relationships (e.g. friendships) backed by redis,” https://github.com/agoragames/amico.
The mobile apps industry: A case study
Thomas L. Rakestraw
Youngstown State University
Rangamohan V. Eunni
Youngstown State University
Rammohan R. Kasuganti
Youngstown State University
ABSTRACT
From its origins with the advent of Apple’s iPhone in 2007, to an industry that could potentially be worth as much as $100 billion by 2015, the mobile apps industry has experienced nearly unprecedented growth. The unique aspects of the industry are discussed in terms of how they have encouraged the widespread popularity of smartphones and other mobile devices and have transformed electronic gaming, internet retailing, and social networking. As major competitors in this arena, Apple and Google have endeavored to distinguish themselves in terms of their relationships with app developers, the numbers and uniqueness of apps available, as well as the marketplaces in which the apps are sold. While these battles are waged, others (Blackberry RIM, Facebook, and Amazon) have continued to find their loyal users and niches in the market. Forecasts unanimously paint a very bright future for the industry, but potential stumbling blocks remain in the form of monetization difficulties, accusations of exploiting children, and security and privacy issues.
Keywords: Industry Analysis, Porter’s Five Forces, High Velocity Industries
Copyright statement: Authors retain the copyright to the manuscripts published in AABRI journals. Please see the AABRI Copyright Policy at http://www.aabri.com/copyright.html.
EVOLUTION OF THE INDUSTRY
Since the advent of the iPhone in early 2007, users have been able to experience the functionality of personal computers on pocket-sized devices. These so-called “smartphones” and their associated mobile software “applications” or “apps” are becoming increasingly ubiquitous in our daily life. According to Mobilewalla.com, a website dedicated to cataloging and rating apps, the one millionth app was made available to users in December 2011. Even with many of these apps being duplicates or slight variations created for different devices (e.g., an app created for both the iPhone and the iPad would be counted twice), that is an incredible explosion of interest for such a new industry. The growth in mobile apps has shown no signs of slowing, with as many as 15,000 new apps being released each week (Frierman, 2011).
The proliferation of apps being developed can only be expected to continue as smartphone usage grows globally. In a 2011 study conducted jointly by Google and Ipsos MediaCT Germany, data were obtained via random telephone interviews from amongst the general populations of the United States, United Kingdom, Germany, France, and Japan. The highest reported smartphone ownership was found in the United Kingdom (45% of those interviewed) and the United States (38% of those interviewed). Even more telling is the 50% increase in ownership that occurred in the United Kingdom between the first phase of the research conducted in January and February of 2011 and the latter phase in September and October of that year (The Mobile Movement, 2011). There is clearly a shift in usage from computers to mobile devices.
In 2010, smartphones outsold personal computers, which caused tech analysts to shift their attention to the handheld platform. During the fourth quarter of 2010, 100.9 million smartphones were shipped worldwide, whereas only 53.9 million units had been shipped in the corresponding quarter of 2009. According to Flurry, a company that collects mobile-software data and provides consulting services to software developers, in 2011, smartphone and tablet shipments exceeded the shipments of desktop and notebook computers combined. Software developers are increasingly realizing that in the near future smartphones could replace many core functions of personal computers, such as e-mailing, instant messaging, web browsing, and even gaming (Smartphone Mobile Applications To Overtake Standard Websites in Near Future, 2012). Further, in comparing publicly available data pertaining to Internet usage with their own client data concerning mobile app usage, Flurry concluded that users are spending more time on mobile apps than on the Internet, as indicated in Table 1 (Appendix) (Newark-French, 2011).
Evidence also suggests that these devices are becoming more and more important in people’s lives. In another study conducted by Google in partnership with Ipsos OTX MediaCT, 5,013 adults in the United States who identified themselves as using a smartphone to access the Internet were interviewed in the last quarter of 2010. Eighty-nine percent of those interviewed reported using their smartphones throughout the day and 68% reported having used an app in the previous week. Seventy-nine percent of respondents reported using their smartphones to help with shopping, and 22% reported using apps on their smartphones to make purchases (The Mobile Movement, 2011).
The continued importance of smartphones and mobile apps was highlighted in President Obama’s order that all major federal agencies make at least two public services available on mobile phones by May 2013. The intention of the president’s order was to encourage innovation and stimulate employment in the field of mobile communications. Others have expressed hope that this initiative would lead to the U.S. government making information available to outside
developers that would facilitate the creation of applications to take full advantage of available government data. It is also anticipated that the increased demand created by those availing themselves of these governmental services would create pressure on the government to free up bandwidth for use by mobile carriers. In sum, President Obama’s efforts will greatly add to the groundswell behind the burgeoning field of mobile apps (Melvin, 2012).
BASICS OF THE MOBILE APPS INDUSTRY
Although the mobile apps industry began with Apple’s introduction of the iPhone, its phenomenal growth is due to the entry of several competitors into the marketplace, notably Motorola, LG, and Samsung. This competition has given rise to an entirely new product space known as smartphones. Smartphones have far greater functionality than normal mobile phones due to their ability to run mobile apps. These applications confer on smartphones the capabilities to send and receive e-mail, play music, movies, and video games, and even communicate remotely with computers from virtually anywhere in the world (Coustan & Strickland, n.d.).
Smartphones contain many of the same components as personal computers. Every smartphone has a processor, random access memory, USB ports, display adapters, and internal storage devices. Users may even customize and upgrade their devices to suit their individual needs. For example, a user who wishes to use the smartphone for gaming can purchase a device with a multi-core processor and additional storage to hold large games. Most smartphones are also equipped with a touchscreen, obviating the need for a physical keyboard. USB peripherals such as audio headphones and data transfer cables are also available for smartphones (Coustan & Strickland, n.d.).
The core software found in a smartphone is called the operating system. The operating system contains all the drivers necessary to carry out instructions between the software and hardware of the device. The operating system can be visualized as a software stack consisting of several layers. First, the kernel manages the drivers that manipulate a smartphone's hardware, such as its built-in camera or USB ports. Middleware contains software libraries which link to mobile applications. The application execution environment contains all the application programming interfaces (APIs) for developers to program new mobile applications for the operating system. Finally, the application suite contains core applications which are packaged with the operating system by default. These applications include phone call software, text messaging, menu screens, calendars, and more. A mobile app is software that a user can install on a smartphone to perform a particular task. For example, Android has a GPS app which allows the user to obtain travel directions in real time, or even track the locations of family members from anywhere in the country (Coustan & Strickland, n.d.).
IMPACT ON MOBILE GAMING
Before the dawn of smartphones, mobile gaming for most users occurred on handheld devices such as a Nintendo DS or Sony PSP. Now that smartphones have become commonplace and literally hundreds of low-priced games with high-quality graphics are available, mobile gaming has become very different. Apple’s iOS and Google’s open-source Android operating systems are capable of running some of the most innovative games in the market. As a result, Nintendo's and Sony's handheld devices are quickly losing ground to smartphones (iOS and Android Take Over Mobile Gaming Industry, 2011). In 2009, the Nintendo DS accounted for
70% of revenue generated by portable gaming software in the United States, with the iOS and Android at 19% and the Sony PSP at 11%. In 2010, the Nintendo DS dropped to 57% of the revenues while iOS and Android picked up 34%. By 2011, the Nintendo DS fell to 36% while iOS and Android claimed 58% of the revenues from portable gaming software. In 2009, the iOS and Android revenues from mobile gaming stood at $500 million. By 2010, these revenues spiked to $800 million, and continued to climb in 2011 when they hit $1.9 billion, demonstrating the speed with which mobile apps are revolutionizing the use of digital media and tools (iOS and Android Take Over Mobile Gaming Industry, 2011).
IMPACT ON TRADITIONAL WEBSITES
Many now believe that apps will eventually supplant standard Internet websites in the way that DVRs have replaced videotaping and cell phones replaced land line phones. Advances in technology have enabled web developers to not only program for standard web browsing but for mobile browsing as well. This trend of mobile apps taking the place of traditional websites is likely to accelerate for a number of reasons. First, a mobile application can be accessed from virtually anywhere without the need for a wireless hotspot or expensive and physically large piece of hardware. Additionally, many companies and other website owners have created mobile versions of their websites to provide faster loading times, and have optimized user interfaces and other features to add to the functionality of mobile browsers. Not surprisingly, as of 2011 the number of users accessing websites from their mobile phones exceeded those who did so from personal computers (Smartphone Mobile Applications To Overtake Standard Websites in Near Future, 2012).
NATIVE APPS VS. MOBILE WEB APPS
There are two main types of mobile applications: native and mobile Web. Native applications integrate directly with the mobile device's operating system and can interact with its hardware much like the software on a personal computer. Native applications are also capable of taking advantage of local APIs in order to maximize functionality while preserving efficiency. Mobile Web applications are apps that run directly from an online interface such as a website. These applications typically cannot manipulate a device's hardware and are limited to the web application's APIs rather than the programming packages found on the phone (Industry Innovations: A Mobile Applications Interview with Bob Evans, 2011). A mobile website is a series of web pages created for the sole purpose of being viewed on a mobile device's web browser. These pages are often created using HTML, but some operating systems such as iOS or Android are equipped with a webkit. These webkits enable web page rendering that extends functionality far beyond that of a typical mobile Web application; they allow hardware manipulation, user interface scaling, and more (Industry Innovations: A Mobile Applications Interview with Bob Evans, 2011).
Some applications are hybrids that combine the interface and coding components of a web-based interface with the functionality derived from native applications. This allows developers to update the application remotely while still affording a large amount of programming functionality. It also extends the number of platforms which can run the application, as the web-based nature means the application need not be platform-specific (Industry Innovations: A Mobile Applications Interview with Bob Evans, 2011).
Currently, the two dominant operating systems - Google’s Linux-based open-source Android Operating System and Apple’s iPhone Operating System (iOS) - both support their own marketplaces where users can purchase mobile applications. Some apps are packaged with the operating system by default, but most apps must be downloaded manually from an app marketplace (Coustan & Strickland, n.d.).
THE MARKETPLACE: APPLE VS. GOOGLE
A mobile application marketplace is software which allows the user to download or distribute mobile apps for their smartphone. Free applications may be found in these marketplaces alongside those offered for sale. In most cases, apps are programmed by third-party developers such as companies hoping to advertise or enhance their existing products, or by freelance programmers who sell their apps for revenue. The two leading operating systems, Apple's iOS and Google's Android, each have a corresponding dedicated marketplace, as indicated in Tables 2(a) and 2(b) (Appendix). However, third-party marketplaces also exist which may offer the same apps, often at different prices, and/or apps that are unique to that site (Coustan & Strickland, n.d.).
Two types of independent app stores exist for developers to publish their apps: a) full-catalog stores, which sell applications for multiple operating systems and are typically associated with higher-priced apps, and b) platform specialists, which are niche marketplaces that concentrate on only one operating system. These marketplaces tend to be more user-friendly and focus on a community-driven, socially-structured interface which gives customers the opportunity to compare prices between multiple, similar applications to find the best in price and quality. Full-catalog stores tend to distribute apps at higher average prices than those found in platform specialist stores. The prices users are willing to pay for apps appear to depend upon the marketplace. For example, Handango, a full-catalog app store, has an average app price of $9.10. On the other hand, the Amazon App Store, which is a specialist Android marketplace, has an average app price of only $2.52 (Mikalajunaite, 2011).
Restrictive policies of Apple concerning app development in the initial phase have had a demonstrable effect on the market for iOS apps. In mid-2010, a survey revealed that 54% of all mobile app developers preferred to develop apps for the Android operating system while only 40% preferred to do so for Apple’s iOS. Later that year, Google and Apple made several announcements regarding the future of their mobile operating systems, and Google was the clear winner. A subsequent survey revealed that 58.6% of these developers now preferred Android while support for iOS dropped to 34.9% (Cameron, 2010). In response to these findings, Apple eased some of the restrictions placed on iOS developers and publishers with a view to making its platform somewhat more open. Apple also released additional documentation to the public regarding the process by which applications are accepted for sale in the iOS app store. However, these changes apparently had little impact on the confidence mobile developers placed in the company’s operating system. Significantly, 62% of the developers surveyed revealed a preference to develop for Android-powered devices, as compared to only 58% for the iPad before its launch. With Google reaping a higher level of support from mobile developers across the board, Apple may face difficulty in gaining new apps to distribute in its mobile app store (Cameron, 2010).
Google, however, is not without concerns of its own. Amazon's new application marketplace decentralizes users’ acquisitions of mobile apps. Users visiting multiple app stores to compare prices and find exclusive apps may find the experience unwieldy, detracting from the level of convenience that Android has worked so hard to attain. However, on the upside, Amazon’s entrance into the Android market may bring in additional users, and ultimately bode well for the future of this operating system (What Developers Should Know About Amazon's Android App Store, 2010).
THE AMAZON MARKETPLACE
Online retail giant Amazon has developed a specialized marketplace to distribute mobile applications for the Android platform and serve as the main interface for Amazon’s Kindle Fire which runs a restricted version of the Android OS. This marketplace was created to provide a more organized, intuitive, and user-friendly alternative to the standard Android Market and is available for all Android users (What Developers Should Know About Amazon's Android App Store, 2010).
Like other mobile application marketplaces, Amazon splits revenue by paying developers 70% of the purchase price per sale, while retaining 30% for itself. However, Amazon requires an annual fee of $99 for publisher participation in this marketplace, compared to Android's one-time $20 fee. In addition, Amazon reserves the right to modify an application's code and even add its own DRM (Digital Rights Management, a system to prevent piracy in digital goods such as music and software) to the binary. In contrast to the largely unrestricted Android Market, Amazon also has a set of rules to which all publishers must adhere. For instance, applications on the Amazon marketplace cannot be sold at a lower price in competing marketplaces such as the Android Market. Amazon also reserves the right to modify the prices of apps without prior approval of the original publishers. Finally, developers must deliver any app updates to the Amazon market before doing so in other markets; for example, distributing an app update to the Android Market before it is uploaded to Amazon's marketplace violates these terms (What Developers Should Know About Amazon's Android App Store, 2010).
THE FACEBOOK MARKETPLACE
Facebook, working in tandem with one of its major partners, Zynga, has a huge stake in the future of the mobile apps industry. The success of Zynga’s online gaming apps has benefited both companies immensely. Nineteen percent of Facebook’s 2011 revenue and 15% of its 2012 first-quarter revenue were tied to Zynga, most of which came from the fees the company received for processing users’ purchases in Zynga’s gaming apps. However, there seems to be a consensus that the future of both companies depends upon their ability to extend that success to mobile applications. The growth of online social games has slowed as the growth of mobile games for iOS and Android devices has exploded (The Most Important Friendship: Facebook and Zynga, n.d.).
The need for Facebook to transfer its success to mobile devices may be the greatest in global markets into which it hopes to expand. From February to March of 2012, Facebook added 56 million users, most of whom were based in Asia. They seem to have been particularly successful in gaining mobile users in countries such as Japan. Their efforts there involved creating a mobile site that worked on Japanese phones and building relationships with local developers. According to Google data, Japan’s use of smartphones had tripled in less than a year.
Similarly, Facebook has added 2.5 million users in the first six months of 2012, accelerating the growth that was stimulated by the introduction of the iPhone there in 2010 (Wagstaff, 2012).
Facebook’s expansion plans are particularly challenging in India. Many Indians have mobile phones, and mobile usage is growing faster than web usage. Furthermore, as China is closed to them as a market, India represents the largest population of potential new users available. However, much cellphone service in India is provided over networks of less-than-3G (slower) quality, and users’ equipment may be antiquated. Facebook has penetrated the market to the point that 60 percent of the Internet population in India has used the service (representing 51 million users), but given the technological limitations it is extremely difficult for Facebook to reproduce the “large screen experience” on basic phones (What Developers Should Know About Amazon's Android App Store, 2010).
Facebook CEO Mark Zuckerberg has stated that improving Facebook’s mobile application, integrating it with other online apps, and creating a “transformative” advertising experience were top priorities for 2012. Numerous third parties have pointed out that being able to monetize its presence with mobile users will be essential to its future success. In a public meeting with investors in May 2012, Zuckerberg and COO Sheryl Sandberg pointed out that the key to Facebook’s success on mobile devices would be social ads that make use of information concerning the “likes” of users’ friends and that the collection of additional information such as users’ locations would be key to targeted advertising efforts (Barr, 2012).
One means by which Facebook hopes to improve its mobile presence is its new App Center. This will be a central location at which users will be able to access all apps (initially, 600) that have been reviewed and cleared by Facebook as having met their quality standards. Rather than happening upon apps randomly, users will have apps recommended to them by the App Center based upon their expressed interests or those of their friends. Links in the App Center will send users to the appropriate Apple or Google marketplace from where the apps could be downloaded (Barr, 2012).
THE BLACKBERRY ANDROID MARKETPLACE
In early 2011, Research In Motion (RIM) announced its new PlayBook tablet computer, which has the capability to run Android applications using an ‘app player’. This device can run BlackBerry Java apps as well, creating a very powerful piece of hardware that is capable of running apps from multiple platforms. All these applications are available through RIM’s BlackBerry App World, which is the company's dedicated marketplace for Android and BlackBerry apps (RIM's New Playbook Will Be Able to Run Android Mobile Applications, 2011).
However, Android applications which run on the PlayBook cannot be obtained anywhere other than the BlackBerry App World. This implies that apps from other marketplaces, such as the Android Market and third-party marketplaces like the Amazon App Store, are not compatible with this device. For Android developers, this means their apps must be compiled using specific rules, certificates, programming packages, and permissions designed to run on RIM’s PlayBook. This adds a new layer of complexity for developers which could potentially deter them from programming for the PlayBook. It also means that fewer apps will be available for PlayBook users than for native Android users. In 2011, for example, the Android Market had over 250,000 apps whereas the BlackBerry App World had only 20,000 apps (RIM's New Playbook Will Be Able to Run Android Mobile Applications, 2011).
RIM had also announced the release of an SDK, which would enable application programming for the PlayBook's operating system, Tablet OS. This will allow low-level customization of the tablet, including its user interface and other functionality that extends beyond the scope of standard applications. Additionally, tools from Ideaworks Labs and Unity Technologies are also capable of running on the PlayBook. Ideaworks is a C and C++ SDK for mobile platforms, which runs on iOS, Android, Symbian, webOS, and Windows Mobile. Unity Technologies offers a host of tools used for creating 3D games for iOS and Android, which may add appeal for potential game developers (RIM's New Playbook Will Be Able to Run Android Mobile Applications, 2011).
NICHE MARKETPLACES
Recently, a number of new third-party marketplaces have entered the mobile apps industry. Since many of these marketplaces are developed by companies far smaller than Google or Apple, they have been forced to target niche app user segments rather than engage in full scale competition with the bigger players. Since 2009, the number of niche app stores has doubled annually, while the number of general app stores has decreased. The number of general app stores entering the market peaked toward the end of 2010, and declined rapidly through 2011. These data clearly suggest that niche marketplaces are the preferred solution for smaller companies to penetrate the mobile apps industry (Gair, 2011).
Niche marketplaces provide users with applications targeting their specific needs, thus reducing much of the confusion created by the ever-increasing number of developers and apps. These marketplaces could also benefit developers by reducing the number of apps they compete with for attention in full-catalog stores. In general, there are three categories of niche mobile app marketplaces: 1) platform-oriented marketplaces, which offer applications for a specific operating system, such as AndroidPIT for Android or Crackberry for RIM devices; 2) target group-oriented marketplaces, which provide apps for a particular segment of app users, such as businesses or adults; and 3) “carve out” marketplaces, which are niche stores within a full-catalog store, such as “@work” by Apple (Gair, 2011).
CONSUMER PREFERENCES IN MOBILE APPS
By the year 2010, the mobile apps industry had become increasingly saturated as new competitors entered the market, flooding it with numerous varieties of utilitarian as well as lifestyle apps. A survey conducted by Nielsen in 2010 revealed the types of apps that were in greatest demand by users. A breakdown from the survey of the various categories of applications used within a span of 30 days is presented in Table 3 (Appendix) (The State of Mobile Apps, 2010). In addition, a chart of app popularity by users of specific operating systems is depicted in Table 4 (Appendix). The survey revealed that games, both free and paid, were the most downloaded application category. Facebook, Google Maps, and the Weather Channel were the most popular apps across all platforms. In social networking, Facebook was by far the most popular app, with MySpace trailing behind in part due to its continuing popularity with teenagers. LinkedIn also attracted a large number of users in the age group of 25 to 44 (The State of Mobile Apps, 2010). The news and weather application category was dominated by The Weather Channel, which was downloaded by 58% of the users surveyed. Amazon and eBay led the shopping category with 57% and 41% respectively. Finally, the music category was fiercely
competitive, with iTunes, Pandora, Sirius-XM, and Yahoo! Music all competing for the #1 position (The State of Mobile Apps, 2010).
Data collected by Flurry in May 2011 revealed that games and social networking apps, led by Facebook, continued to be the most popular app categories among users, as indicated in Table 5 (Appendix). Flurry also discovered that users not only accessed game and social networking apps more frequently but also for longer periods of time per session. That many users were accessing Facebook in order to play games available on that platform points to the overwhelming dominance of this category of smartphone apps (Newark-French, 2011).
In 2010, the preferred choice for app publishers was the iPhone’s iOS operating system. However, other operating systems like Android, iPad, Windows Mobile, and Symbian also enjoyed large spikes in usage as the devices associated with them became more popular and developers attempted to diversify their products accordingly. A breakdown of the major mobile operating systems and their utilization by app developers in 2010 is presented in Table 6 (Appendix) (State of the app industry 2010 (report), 2010).
In 2011, the emerging operating systems, especially Android and Microsoft’s Windows Phone 7, were expected to gain in usage. Microsoft has attempted to stimulate developer interest in its platform by offering incentives to programmers to create pre-release applications. Microsoft has also invested considerable resources in marketing its new product, especially by encouraging favorable reports by technology reviewers. A breakdown of the projected app developer support for 2011 is shown in Table 7 (Appendix) (State of the app industry 2010 (report), 2010).
Finally, a chart showing publishers’ expectations of revenue increases for the mobile app industry between 2010 and 2011 is presented in Table 8 (Appendix). Clearly, most publishers were highly optimistic about the industry, with 31% believing revenues would more than double, and 17% predicting revenues would increase by at least 50% (State of the app industry 2010 (report), 2010).
DEVELOPERS AS COMPLEMENTORS TO THE INDUSTRY
In order for companies like Google and Apple to compete effectively in the mobile application industry, they must attract innovative developers to create software for their operating systems and devices. Without the support of developers, the inflow of new apps will wane, leading customers to shift to more popular systems. Therefore, innovative business models must be put in place by these platform owners to remain attractive to developers and thereby sustain their competitive advantage (Power, n.d.).
A successful business model requires platform owners to offer as much cooperation to third-party app developers and publishers as possible. This involves providing support to the developers as well as creating a developer-friendly environment. Developers expect full and efficient documentation on using a particular operating system, and an active community to enhance further development. The faster and easier it is for them to create an application, the more likely it is to be developed for the operating system in question. This would also allow developers to allocate resources to the features and appearance of their applications rather than dissipating them in dealing with cumbersome coding and unclear documentation (Power, n.d.).
An obvious way for companies like Google and Apple to cultivate a developer ecosystem is to offer them APIs (Application Programming Interfaces) which are libraries of code that reduce the work involved in creating an application. These APIs greatly enhance programming
efficiency, reduce the chance of bugs, and greatly simplify an otherwise difficult task of programming for a mobile interface (Power, n.d.).
In early 2012, the Application Developers Alliance was formally launched at the Consumer Electronics Show in Las Vegas, Nevada. This organization was created to bolster the capabilities of mobile app developers by providing more educational opportunities to prospective developers, giving developers access to cloud hosting services, and enabling government lobbying. Currently, this alliance is aimed specifically toward the iOS, Android, and RIM/BlackBerry platforms (Essany, 2012). The ultimate goal of the organization is to develop a solid industry association for the mobile app sector. The core features of the organization include an online database for developers and publishers to collaborate and communicate, a plethora of development tools and application testing facilities, access to free or low-cost technological documentation, structured training and certification programs, and even discounted hosting opportunities via cloud services (Essany, 2012).
REVENUE GENERATION FROM APPS
There are various ways developers earn money from their apps. One common practice is to release an app for free and generate revenue by placing advertisements throughout the app's user interface. When a user clicks an ad, revenue is instantly generated for the app’s publisher. The advantage of this approach is that ad placement is easy to set up, and it gives the app access to a wider audience because it does not cost the user any money. However, the amount of revenue generated per click is typically very low. Moreover, users may refrain from using an app if the advertisements are too intrusive (Holbrook, 2011).
Developers may also sell their apps for a predetermined price in an online marketplace. In such cases, the platform owner, Apple or Google, charges a 30% royalty fee for each app sold while the remainder goes to the developer. However, no fees are charged by the platform owner for free apps. Some marketplaces also charge developers a one-time fee to establish a publisher account. Android's publisher accounts, for example, currently cost a one-time fee of $20. This revenue generation method is straightforward and requires minimal effort to set up. However, with so many apps available in the marketplace, competition is intense. Acquiring enough customers to create a significant revenue flow could be difficult if the app is not original, useful, or marketed creatively (Holbrook, 2011).
A more common business model to generate revenues from apps involves distributing an app in two forms: one a “for sale” version with no ads and full functionality, and a second version made available free of cost but with sponsored ads and limited functionality. This dual format allows potential customers to try the app risk-free while providing an incentive to eventually purchase the full version if a user finds that it delivers value for the money. However, in order to succeed in this model, developers must strike a balance in the number of features offered in the trial app. If too many features are offered free, the incentive for customers to purchase the full version may be reduced. On the other hand, if too few features are offered, customers may overlook the app's full potential (Holbrook, 2011).
Apps can also be used by businesses to complement or advertise their existing products. A high-quality app can potentially speak for the quality of the entire business, which in turn could attract new customers. Alternatively, an app can improve the way existing customers use a product. Insurance companies, banks, video game studios, and a plethora of other businesses are actively pursuing this business model with great success. However, if the app does not
integrate itself seamlessly with the business' agenda, it could have limited effectiveness (Holbrook, 2011).
Finally, app revenues could also be generated by creating an online store within the app itself. Many video games use this “freemium” model to generate revenue. For example, *Zenonia* by Gamevil is a free-to-play game that generates revenue by selling optional weapons, armor, and other virtual goods for real money. Other ways to employ this method include the creation of an e-store. *Fandango* uses this method with great success by selling movie tickets directly from an app that is ostensibly a source of information (movie reviews) and entertainment (movie trailers). When a ticket is purchased, a barcode appears on the smartphone's interface, which is scanned by the staff in movie theaters. The advantage of this revenue generation method is that, because the app itself is free, customers perceive the additional payment as separate from the app and therefore see little risk in downloading and using it. However, from a technical standpoint, this method is also one of the most difficult to implement because it is not directly supported by Google's or Apple's application development kit (Holbrook, 2011).
One effective way to increase revenue flows for app developers is to remain flexible in varying their business models. Each application must be analyzed and compared to the target market in order to determine the optimal marketplace for its distribution as well as the price structure. For example, some application marketplaces may be more suitable for distributing free apps, while others might be better for selling high-priced, high-quality apps. Through a careful analysis of customer trends, app developers and publishers can maximize their revenues (Mikalajunaite, 2011).
ROLE OF NETWORK PROVIDERS
A network provider is a host that allocates resources for developers to create new applications. There are two major methods by which a network provider can monetize assets. First, these providers can add value by granting app developers access to their APIs. The second method involves making investments in the network capabilities that have the most potential to create value. In the end, no one business model will work for every developer or every application. The best business model is determined by the network provider's business goals, competition, compensation policies and a host of other factors. Sometimes, a mix of different business models might be necessary in order to maximize revenue (Alcatel-Lucent, 2010).
Network providers may choose the optimal business model by analyzing a variety of factors. They must determine their primary source of revenue, whether it is from the end user or another party, and who would own the relationship with this revenue source. The number of developers to be supported, as well as how they would be supported are other crucial factors in identifying the most effective business model. The nature of the interaction between the network provider and the application developers and how the development ecosystem is fostered (such as with monetary incentives) are other important decisions pertinent to maximizing revenue (Alcatel-Lucent, 2010).
Traditionally, network providers prefer a 'pay-per-dip' business model. However, small developers may find it difficult to generate revenue with this method due to the low profit margins imposed by fierce marketplace regulations as well as the greater financial risk involved. In order to deal with these issues, network providers must work with developers to reduce development costs, maximize the efficiency of processes, and grant developers greater control. For example, network providers may consider requiring only a minimal upfront investment from
developers, adopting a revenue sharing ratio more favorable to independent developers, instituting transparent approval processes, and allowing developers freedom to set their own prices, branding methods, and means of interaction with their customers (Alcatel-Lucent, 2010).
To illustrate, Alcatel-Lucent supports network service providers through three main initiatives. These initiatives were launched to aid in the creation of new business models that bridge the gap between network providers and application developers. First, Alcatel-Lucent is equipped with an Application Exposure Suite which allows developers to gain access to the network provider's APIs securely and efficiently. This initiative is compounded through Alcatel-Lucent's Open API Service, which provides managed access to a web portal where developers can access the most up-to-date version of the APIs, as well as receive important documentation surrounding these programming libraries. Lastly, Alcatel-Lucent provides a vast portfolio of professional services, such as “business model consulting, the integration of multivendor systems and management of complex networks and service-layer operations”, support for developers to transition between open business models and third party development, and other applications and content (Alcatel-Lucent, 2010).
SECURITY AND PRIVACY ISSUES
Both Google and Apple, the mobile industry's top players, were frequently challenged by issues of privacy and security arising from third-party app developers who sought to exploit their operating systems for illicit gains. Although these market leaders worked hard to combat such misuse, the problem is far from permanent resolution. For instance, Path, a social networking app, was one application that infringed upon users' personal information. When a user downloaded Path, the application would send the entire contacts list, including names, e-mail addresses, and phone numbers, from the user’s mobile device to the company's database. Infringement of privacy is a serious violation of Apple's terms of service, and Path's chief executive officer was summoned for interrogation and reprimand by Apple’s top executives. Soon, Apple found that Path was among numerous applications that could mine users’ address books from the iPhone. Understandably, such incidents sparked controversy and frustration amongst iPhone enthusiasts (Satariano & MacMillan, 2012).
When Apple’s App Store was established in 2008, Steve Jobs' view of the relationship with developers was very different from that of Microsoft. While Microsoft allowed developers unrestricted freedom to create and distribute programs, Apple took a contrasting approach. Every app developed for the iPhone had to be submitted to Apple's servers, where a team of analysts would parse through its code to ensure that it met the company's quality standards and was free of bugs, malicious code, or scams. However, with the increasing popularity of the iPhone and iOS, and the exponential pace of app development, Apple could no longer sanitize and approve the apps in a timely manner. Developers grew frustrated when their applications had to wait in line for months before being allowed into the App Store, and eventually, Apple had to relax the severity of its vetting policies (Satariano & MacMillan, 2012).
Adverse publicity in the media and the resulting public outrage following the revelation of privacy violations by Path even drew the attention of the U.S. Congress. Senator Charles Schumer called upon the Federal Trade Commission to investigate both Apple and Google over claims that apps running on their mobile operating systems were violating user privacy. Senator Schumer voiced the general sentiment that personal information being accessed by mobile apps goes “beyond what a reasonable user understands himself to be consenting to when he allows an app to access data on the phone for purposes of the app’s functionality.” Representatives Henry Waxman (D-California) and G. K. Butterfield (D-North Carolina) publicized letters they had sent to CEO Tim Cook of Apple and 33 other companies with iOS apps published on Apple’s iTunes store, soliciting information on their privacy policies (Lowensohn, 2012; New York Senator Asks FTC to Investigate Google, Apple, 2012).
In February 2012, California’s attorney general, Kamala Harris, made Apple, Google, Microsoft, Amazon, Hewlett-Packard, and Research in Motion sign an undertaking promising to improve privacy protections in apps made available on their operating systems. Under this agreement, the companies would henceforth require developers to provide app users information concerning their privacy policies and disclose what user data the apps would access and share before the apps could be downloaded. The state of California already had in place a stringent legal framework to protect Internet users - the Online Privacy Protection Act, which applied to websites and online services. With this agreement, the Act ostensibly extended the protection to mobile apps too. Penalties for infringing the provisions of this act could be quite severe - fines up to $500,000 per use of the app in violation. Even though Google was a signatory to the agreement, the company spokesperson claimed that “from the beginning, Android has had an industry-leading permissions system which informs consumers what data an app can access and requires user approval before installation” (Mills, 2012).
The Federal Trade Commission (FTC) has been particularly concerned with protecting children’s privacy. In a widely discussed report of its findings, the FTC concluded that a vast majority of the apps meant for children that it had examined on both the Apple and Google marketplace sites displayed no privacy policies at all. In fairness to the developers, the report qualified that this is not to say that such policies did not exist, but rather that they were not readily accessible on the store’s promotions page or on the “landing page” that was accessed after a particular app was selected for downloading. A major role of the FTC is enforcement of the Children’s Online Privacy Protection Act (“COPPA”) and the FTC’s Implementing Rule. The summary result of these efforts is to “require operators of online services (including mobile apps), directed to children under age 13, to provide notice and obtain parental consent before collecting items of personal information from children.” The FTC settled its first case against a mobile app developer in 2012 and proceeded to issue a Notice of Proposed Rulemaking to amend the COPPA Rule. In so many words, the FTC enjoined all those involved in developing, selling, and managing apps targeting children to provide privacy-related information to parents. It further emphasized the need to disclose what information would be collected, how it would be used, and who would have access to it. Through its words and actions, the FTC has made it clear that it intends to stay involved in the rapidly growing mobile apps industry to ensure that the same public safeguards are in place as exist for other media (Poss & Hasty, 2012).
Many of the privacy issues arising from the apps created for Apple’s iOS revolve around the app developers’ ability to access the unique device identifiers (UDIDs) made available to them by the operating system. The UDID for a specific phone is a string of numbers which is meant to be anonymous. However, some believe that UDIDs have been used to identify individuals by combining them with other information available on the phone. In response to numerous lawsuits and the media scrutiny that followed, Apple announced that it would begin phasing out UDIDs, and followed up relatively quickly by rejecting apps that continued to use them. These actions have placed Apple squarely between the public’s calls for privacy and developers’ desires to effectively monetize their apps. By tracking UDIDs, advertisers could profile a user across multiple apps and thereby direct relevant ads to them more accurately. MoPub, an ad-serving company, has estimated that preventing access to UDIDs could lead to a 24% decrease in revenue for app developers, and believes the onus is on Apple to create an alternative means of identifying users (Aimonetti, 2012; Cooper, 2012; Vascellaro, 2012).
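One alternative that is often suggested in such discussions is an app-scoped random identifier: it still lets a single app recognize returning users, but it cannot be correlated across apps the way a device-wide UDID can. The sketch below, written in platform-neutral Java purely for illustration, is not Apple's eventual solution:

```java
import java.util.UUID;

// Illustrative sketch of an app-scoped identifier: generated once, stored
// only by this app, and meaningless to any other app. Unlike a device-wide
// UDID, it cannot be used to correlate one user's activity across apps.
public class AppScopedId {
    private static String cached;

    static synchronized String get() {
        if (cached == null) {
            // In practice this value would be persisted in app-private storage.
            cached = UUID.randomUUID().toString();
        }
        return cached;
    }
}
```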
Another common ploy in the mobile industry involves the manipulation of advertisement revenue. Some companies enter into agreements with app developers to generate app downloads through effective marketing procedures. In reality, however, some of these companies operate networks of computers called 'bots' which are designed to download applications thousands of times on command. Thus, while an application may show that it was downloaded many times, these downloads were artificially generated using fake accounts. An alternative to using bots involves paying workers in other countries, such as China, small amounts to download the applications manually. In response to this scam, Apple announced a new policy that would ban any developer who engages in such practices. This resulted in a 24% decrease in the number of downloads in Apple's App Store from January to February of 2012 (Satariano & MacMillan, 2012).
Some companies also attempt to manipulate the leaderboards in mobile app marketplaces by offering incentives for users to rate, review, or download additional apps in order to inflate the popularity of a specific app. Other developers seek to piggyback on truly successful apps by christening their own apps with slight variations on the names of games such as Angry Birds and Temple Run in order to attract more customers. Apple has worked vigorously to remove such applications from its marketplace, but the process is laborious and difficult to sustain. According to a former Apple executive, thousands of new apps are submitted to Apple every month, and each one is reviewed for only about 15 minutes. In other words, many malicious apps could potentially conceal their true intent and slip through the company’s review process undetected (Satariano & MacMillan, 2012).
In early 2012, Apple spent roughly $50 million to acquire Chomp, a search engine built to help smartphone and tablet users find new apps and thereby reduce reliance on the marketplace's leaderboard system. The search engine would also employ new algorithms to determine which apps best suit a user's needs. Normally, Apple's default algorithm based the leaderboard rankings solely on the number of times an app was downloaded. However, with Chomp, new criteria such as the frequency of an app's usage could be factored into the recommendations. This would allow the leaderboard to evaluate and signpost the usefulness and popularity of the apps more accurately (Satariano & MacMillan, 2012).
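To make the reported idea concrete, the following Java sketch shows one way a ranking score could blend raw download counts with usage frequency. The weights, field names, and scoring formula are hypothetical illustrations, not Apple's or Chomp's actual algorithm:

```java
import java.util.Comparator;
import java.util.List;

// Toy ranking score in the spirit of usage-aware recommendations:
// log-scale downloads so a viral spike cannot dominate the score, then
// mix in how often installed copies are actually launched.
public class AppRanker {
    static class App {
        final String name;
        final long downloads;         // lifetime downloads
        final double sessionsPerWeek; // average launches per user per week

        App(String name, long downloads, double sessionsPerWeek) {
            this.name = name;
            this.downloads = downloads;
            this.sessionsPerWeek = sessionsPerWeek;
        }

        double score() {
            return 0.4 * Math.log10(1 + downloads) + 0.6 * sessionsPerWeek;
        }
    }

    static void rank(List<App> apps) {
        apps.sort(Comparator.comparingDouble(App::score).reversed());
    }
}
```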
In early 2011, a team of programmers released a Trojan disguised as an official Android clean-up app. This malicious file, the DroidDream Trojan, was distributed in the Android Market by exploiting a vulnerability found in the operating system. Google quickly released a security patch to kill the Trojan, but the file continued to thrive on the Android Market by disguising itself as a number of popular gaming and other apps. Google pulled all such apps from the marketplace with alacrity and thereby halted the spread of this malware. Learning from this experience, Google soon thereafter released a companion tool for the Android Market called the Android Market Security Tool (Google Working Hard to Keep Android Safe From Viruses and Malware, 2011). Capitalizing on the release of this tool, the Trojan programmers created a false version of the application which further spread the malicious file to additional handsets. This time around, the file was distributed over third-party Android marketplaces wherein Google had virtually no power to stop the infection from spreading. The new Trojan was capable of extracting phone numbers and other contact details from infected Android phones and uploading them to a database controlled by the hackers. Using this database, the hackers could send remote text messages from the infected devices to perhaps extract information from other unsuspecting phones in the contacts list, and thereby further spread the malware (Google Working Hard to Keep Android Safe From Viruses and Malware, 2011).
MOBILE APP INDUSTRY TRENDS
Changes occur so rapidly in the mobile app industry that a new trend emerges every year. For instance, in 2010 smartphones and mobile apps were the trendiest products in the industry, while in 2011 tablet computers took the lead with a sharp increase in consumer demand. These changes in demand have led to corresponding increases in the supply of apps available in the app marketplaces. By analyzing previous trends, one may forecast the products that will be in high demand in the future (Viswanathan, 2011).
In 2010, the leading app categories were mobile gaming and social networking. However, in 2012, a growth area is expected to be business apps with practical utility. App developers must cater to users by creating apps to handle things such as time management and multitasking. One difficulty associated with the development of business apps involves the selection of the right platform to suit the business customers’ needs. Developers must determine the most popular platforms used by businesses and executives in order to maximize profits. Toward the end of 2011, the number of business apps downloaded had risen significantly, but in terms of the overall market there appears to be even greater potential for growth (Viswanathan, 2011).
Consumers have also begun using their mobile devices for online shopping as well as for everyday purchases. Therefore, another trending app category for 2012 might be the mobile wallet, wherein the mobile device serves as a virtual credit or debit card and an online banking terminal. Several banks have already begun to offer mobile apps to address this need. Such digital wallet apps are expected to become more common in 2012 (Viswanathan, 2011).
The recent dramatic increase in demand for cloud computing will continue as more industries shift toward cloud-based services. This will in turn create the need for cloud sync providers and cloud based apps. More apps that organize data through the cloud, both private and public, are expected to be created as developers gravitate toward web-based applications, as these are much more efficient for cloud-based software (Viswanathan, 2011).
Another significant trend for app development in 2012 will be the usage of location-based monitoring systems. This will allow app publishers to offer targeted advertising to consumers based on their GPS location. Targeted ads will be much more efficient and cost-effective, which will also increase the app revenues (Viswanathan, 2011).
In addition to the sale of apps to users, another profitable aspect of the mobile app industry expected to see growth in 2012 is the offering of marketing services to app developers. Because of the ever-growing supply of apps in all the major app marketplaces, it has become extremely difficult for app publishers to differentiate their products from the competition. It is difficult for developers to pitch the superiority of their products due to the over-saturation and lack of structure in the marketplace. Mobile marketing firms are very uncommon in the industry, which leaves room for a great opportunity to offer these services to developers (Viswanathan, 2011).
INDUSTRY FORECASTS
Mobile industry analysts predicted that by 2012 the industry would be worth $17.5 billion. The number of app downloads was projected to grow at a rate of 92% per year, which translates into a jump from 7 billion downloads in 2009 to nearly 50 billion in 2012. These figures imply an enormous opportunity for app developers as the industry grows exponentially (Floriceanu, 2010). In addition, studies forecasted that by 2012, off-deck applications (i.e., apps which are not approved by the phone carriers) would account for nearly 50% of all revenues in the mobile application industry. Conversely, on-deck applications (apps which must be approved by phone carriers before distribution is allowed), which accounted for 60% of all revenues in the industry in 2009, were projected to make up only 23% of revenues by 2012 (Floriceanu, 2010).
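As a quick consistency check of these projections: three years of 92% annual growth multiply the 2009 base by a factor of 1.92 × 1.92 × 1.92 ≈ 7.1, and 7 billion downloads × 7.1 ≈ 49.6 billion, which matches the "nearly 50 billion" figure cited for 2012.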
Intense competition in app development has led to a steep hike in failure rate among mobile app companies. In the iPhone App Store, the top 10% of the apps in popularity account for 80% of all the apps downloaded. In response to this trend, many app developers are now resorting to third party app development to boost their revenues. In the United States, third party app development is in greatest demand with up to 98% of application project revenues stemming from concept development, design, and coding. Analysts predict that this number will drop to around 70% by 2015. On the other hand, app maintenance, analytics, and distribution and extension services which generated only 2% of the industry revenue in 2010 will increase to nearly 30% by 2015 (Perez, 2011). Currently, third party app developers are thriving in the United States and Western Europe. However, emerging markets, such as China and India, have shown great promise in the mobile app industry (Perez, 2011).
It is anticipated that the average price of mobile apps will decrease by about 29%. In 2009, the number of mobile app stores increased from 8 to 38, and these will continue to increase through 2012 and beyond. The methods used to generate revenue from mobile apps are also changing. In 2009, advertising accounted for only 12% of the app revenues; this is expected to double by the end of 2012 (Floriceanu, 2010).
App revenues in Europe are estimated to surge from $1.5 billion in 2009 to nearly $8.5 billion by 2012. In North America, the revenues are predicted to rise from $2.1 billion in 2009 to $6.7 billion in 2012. Although Asia leads in the number of mobile apps downloaded globally, North America still generates the highest revenues, accounting for 50% of the entire mobile app industry revenues in 2009 (Floriceanu, 2010; What Developers Should Know About Amazon’s Android App Store, 2010). Revenue from apps is predicted to increase by 92%, from $7.3 billion in 2011 to $14.1 billion in 2012. The market for mobile apps is forecasted to increase at 50% compounded annually, resulting in downloads across all platforms of 182.7 billion and revenues of $36.7 billion by 2015. These numbers reflect not only apps which have a one-time purchase price but also those with additional fees for “in-app” premium content, and subscription-based pricing models. In other words, the projections aggregate direct purchases, in-app purchases, and advertising (Racoma, 2011). Another forecast estimates that the mobile app industry, including app development, management, distribution, and extension processes, will grow to $100 billion by 2015. As of 2011, 66% of the applications were being developed by third party vendors. Analysts predict that this number will grow over the next several years as the industry matures and more programmers gain familiarity with working on platforms like Android and iOS (Perez, 2011).
ABI Research, a market intelligence company focusing on global technology trends, forecasts that revenue from mobile apps will reach an estimated $46 billion in 2016. That projection includes income from downloading apps, in-app purchases, subscriptions, and advertising, and represents nearly a five-fold increase over the $8.5 billion generated in 2011. In 2012, income from in-app purchases was projected to surpass that from app downloads, but ABI does not expect this trend to continue. They report that the majority of in-app purchases are made by a relatively small percentage of players of mobile game apps. The percentage of such users is not expected to grow, so any increase in in-app income will have to come from an as yet unseen source. However, IHS iSuppli, another market research firm, reported that in-app purchases were 39% of total revenue from apps in 2011 and expects that share to increase to 64% by 2015.
Another limitation on growth of in-app revenue is Google’s relatively stringent restrictions on in-app purchase options for apps created for the Android operating system. As the number of free apps available for download increases, the ability of developers to offer apps for sale decreases. It therefore becomes increasingly important for them to find other sources of revenue (Mobile App Revenue Set to Soar to $46 Billion in 2016, 2012).
LOOKING AHEAD
In response to the growing competition in the mobile industry, Google and Apple announced several enhancements to their existing products and expanded their product lines in 2012. During the 2011 holiday season, sales of physical devices and app downloads rose to an all-time high. The number of app downloads exceeded one billion, demonstrating a clear boom in the industry. This milestone called attention to the future of the big players in the mobile industry – Google and Apple (Mathew, 2012).
Apple believes that smartphones will soon be replaced by super phones, which are more intelligent and responsive than their predecessors. Mobile apps are expected to become even more impressive with the introduction of these super phones, and competition is expected to intensify throughout the industry. Moreover, mobile commerce is expected to garner a great deal of attention from both businesses and consumers. In an attempt to dominate the mobile commerce sector, Google introduced Google Wallet in the summer of 2011, an Android app that allows customers to make purchases either in-store or online using their phones. It is expected to meet competition from Windows Mobile, which could take a significant share of the market. Following its Nokia deal, Microsoft now has access to the resources necessary to carve out a significant position in the mobile industry (Mathew, 2012).
With the launch of the Mobile Application Developer's Association, an alliance created to aid in the development of apps for iOS, Android, and RIM platforms, app developers will also gain strength as important stakeholders in the mobile app industry. The core functions of this alliance include a collaboration network, a plethora of platform-specific tools and testing modules, cloud hosting services, discounted and free certification and training programs, and more (Mathew, 2012). The future of the mobile app industry is indeed exciting and rich with possibilities.
APPENDIX
Table 1
Average Time Spent on the Web vs. Mobile Apps
<table>
<thead>
<tr>
<th></th>
<th>June 2010</th>
<th>December 2010</th>
<th>June 2011</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>WEB</strong></td>
<td>64 MINUTES</td>
<td>70 MINUTES</td>
<td>74 MINUTES</td>
</tr>
<tr>
<td><strong>MOBILE APPS</strong></td>
<td>43 MINUTES</td>
<td>66 MINUTES</td>
<td>81 MINUTES</td>
</tr>
</tbody>
</table>
Sources: comScore, Alexa, Flurry Analytics
Table 2(a)
Apple’s iOS App Store
Table 2(b)
Google's Android App Store
Table 3
Categories of Apps Used in a 30-Day Span
<table>
<thead>
<tr>
<th>Category</th>
<th>Smartphone</th>
<th>Feature Phone</th>
</tr>
</thead>
<tbody>
<tr>
<td>Music</td>
<td>46%</td>
<td>45%</td>
</tr>
<tr>
<td>Social Networking</td>
<td>36%</td>
<td>54%</td>
</tr>
<tr>
<td>Maps/Navigation/Search</td>
<td>30%</td>
<td>55%</td>
</tr>
<tr>
<td>Video/Movies</td>
<td>21%</td>
<td>23%</td>
</tr>
<tr>
<td>Entertainment/Food</td>
<td>21%</td>
<td>38%</td>
</tr>
<tr>
<td>Sports</td>
<td>20%</td>
<td>30%</td>
</tr>
<tr>
<td>Communication</td>
<td>15%</td>
<td>25%</td>
</tr>
<tr>
<td>Banking/Finance</td>
<td>15%</td>
<td>31%</td>
</tr>
<tr>
<td>Shopping/Retail</td>
<td>14%</td>
<td>29%</td>
</tr>
<tr>
<td>Productivity</td>
<td>12%</td>
<td>30%</td>
</tr>
<tr>
<td>Travel/Lifestyle</td>
<td>11%</td>
<td>21%</td>
</tr>
</tbody>
</table>
Base: Feature Phone (n=1,914), Smartphone (n=2,351)
Table 4
App Popularity by Operating Systems
**Most Popular Used Apps on the iPhone OS**
Past 30 Day App Downloaders (n=1,121)
- Facebook: 58%
- iPod/iTunes: 48%
- Google Maps: 47%
- Weather Channel: 46%
- Pandora: 27%
**Most Popular Used Apps on the BlackBerry OS**
Past 30 Day App Downloaders (n=665)
- Facebook: 51%
- Google Maps: 34%
- Weather Channel: 28%
- ESPN: 19%
- Pandora: 18%
**Most Popular Used Apps on the Android OS**
Past 30 Day App Downloaders (n=62)
- Google Maps: 67%
- Facebook: 50%
- Weather Channel: 38%
- Pandora: 26%
- Google Search: 26%
**Most Popular Used Apps on all Other Smartphones**
Past 30 Day App Downloaders (n=503)
- Facebook: 39%
- Google Maps: 33%
- Weather Channel: 21%
- Pandora: 20%
- YouTube: 19%
Table 5
Mobile App Consumption Time By Category
Table 6
Popularity of Mobile App Platforms with Publishers in 2010
Table 7
Popularity of Mobile App Platforms with Publishers in 2011
Sources: State of the Apps Industry 2010 and 2009 Surveys; Digiday, Stifel Nicolaus, Millennial Media.
Table 8
Publishers’ Expected Increase in Apps Revenue from 2010 to 2011
<table>
<thead>
<tr>
<th>Share of Publishers</th>
<th>Expected Revenue Increase</th>
</tr>
</thead>
<tbody>
<tr>
<td>31%</td>
<td>100% increase or more</td>
</tr>
<tr>
<td>17%</td>
<td>>50% increase</td>
</tr>
<tr>
<td>17%</td>
<td>>25% increase</td>
</tr>
<tr>
<td>17%</td>
<td>10-25% increase</td>
</tr>
<tr>
<td>6%</td>
<td>5-10% increase</td>
</tr>
<tr>
<td>2%</td>
<td>1-5% increase</td>
</tr>
<tr>
<td>2%</td>
<td><1% increase</td>
</tr>
<tr>
<td>2%</td>
<td>Flat</td>
</tr>
</tbody>
</table>
Sources: Millennial Media, Stifel Nicolaus, Digiday.
REFERENCES
MOBIUS: Mobility, Ubiquity, Security*
Objectives and progress report
Gilles Barthe1, Lennart Beringer2, Pierre Crégut3, Benjamin Grégoire1, Martin Hofmann2, Peter Müller4, Erik Poll5, Germán Puebla6, Ian Stark7, and Eric Vétillard8
1 INRIA Sophia-Antipolis, France
2 Ludwig-Maximilians-Universität München, Germany
3 France Télécom, France
4 ETH Zürich, Switzerland
5 Radboud University Nijmegen, the Netherlands
6 Technical University of Madrid (UPM), Spain
7 The University of Edinburgh, Scotland
8 Trusted Labs, France
Abstract. Through their global, uniform provision of services and their distributed nature, global computers have the potential to profoundly enhance our daily life. However, they will not realize their full potential, unless the necessary levels of trust and security can be guaranteed.
The goal of the MOBIUS project is to develop a Proof Carrying Code architecture to secure global computers that consist of Java-enabled mobile devices. In this progress report, we detail its objectives and provide a snapshot of the project results during its first year of activity.
1 Introduction
Global computers are distributed computational infrastructures that aim at providing services globally and uniformly; examples include the Internet, banking networks, telephone networks, digital video infrastructures, peer-to-peer and ad hoc networks, virtual private networks, home area networks, and personal area networks. While global computers may deeply affect our quality of life, security is paramount for them to become pervasive infrastructures in our society, as envisioned in ambient intelligence. Indeed, numerous application domains, including e-government and e-health, involve sensitive data that must be protected from unauthorized parties. Malicious attackers spreading over the network and widely disconnecting or disrupting devices could have devastating economic and social consequences and would deeply affect end-users’ confidence in e-society. In spite of clear risks, provisions to enforce security in global computers remain extremely primitive. Some global computers, for instance in the automotive industry, choose to enforce security by maintaining devices completely under the control of the operator. Other models, building on the Java security architecture, choose to enforce security via a sandbox model that distinguishes between a fixed trusted computing base and untrusted applications. Unfortunately, these approaches are too restrictive to be serious options for the design of secure global computers. In fact, any security architecture for global computing must meet requirements that reach beyond the limits of currently deployed models.

* Work partially supported by the Integrated Project MOBIUS, within the Global Computing II initiative.
The objective of the MOBIUS project is to develop the technology for establishing trust and security in global computers, using the Proof Carrying Code (PCC) paradigm [37, 36]. The essential features of the MOBIUS security architecture are:
- **innovative trust management**, dispensing with centralized trust entities, and allowing individual components to gain trust by providing verifiable certificates of their innocuousness; and
- **static enforcement mechanisms**, sufficiently flexible to cover the wide range of security concerns arising in global computing, and sufficiently resource-aware and configurable to be applicable to the wide range of devices in global computers; and
- **support for system component downloading**, for compatibility with the view of a global computer as an evolving network of autonomous, heterogeneous and extensible devices.
MOBIUS targets embedded execution frameworks that run third-party applications, which must be checked against a platform security policy. In order to maximize its chances of success, the MOBIUS project focuses on global computers that consist of Java-enabled devices, and in particular on devices that support the Mobile Information Device Profile (MIDP, version 2) of the Connected Limited Device Configuration (CLDC) of the Java 2 Micro Edition.
2 MIDP
CLDC is a variant of Java for the embedded industry, and stands between JavaCard and Java Standard Edition. CLDC is a perfect setting for MOBIUS because it has all the characteristics of a real language: true memory management, object orientation, etc., but applications developed for it are still closed: there is no reflection API, no C interface (JNI) and no dynamic class loading (class loading is done at launch time). Furthermore, CLDC is widely accepted by the industry as a runtime environment for downloadable code: on mobile phones (MIDP), set-top-boxes (JSR 242) and smart card terminal equipment (STIP).
The MIDP profile is a set of libraries for the CLDC platform that provides a standardized environment for Java applications on mobile phones (so-called midlets). Its wide deployment (1.2 billion handsets) has led to a consensus on security objectives. Moreover, MIDP promotes the idea of small generic mobile devices downloading services from the network and is an archetypal example of the global computing paradigm.
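For readers unfamiliar with midlets, the skeleton below shows the lifecycle interface that every MIDP application implements; the class name and displayed text are illustrative:

```java
import javax.microedition.lcdui.Display;
import javax.microedition.lcdui.Form;
import javax.microedition.midlet.MIDlet;

// Minimal midlet: the platform drives the lifecycle through
// startApp/pauseApp/destroyApp; there is no main method.
public class HelloMidlet extends MIDlet {
    protected void startApp() {
        Form form = new Form("Hello");
        form.append("Hello from a midlet");
        Display.getDisplay(this).setCurrent(form);
    }

    protected void pauseApp() { }

    protected void destroyApp(boolean unconditional) { }
}
```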
MIDP defines a simple connection framework for establishing communications over various technologies, with a single method to open a connection that takes as its argument a URL encoding the protocol, the target address, and some of the connection parameters. MIDP also offers a graphical user interface implementing the view/controller paradigm and provides access to specific mobile phone resources (persistent store, players, camera, geolocalisation, etc.).
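A minimal sketch of this connection framework in use; Connector.open and HttpConnection are part of the standard javax.microedition.io package, and the URL below is a placeholder:

```java
import java.io.IOException;
import java.io.InputStream;
import javax.microedition.io.Connector;
import javax.microedition.io.HttpConnection;

// One factory method, Connector.open: the URL string selects the
// protocol ("http"), the target address, and connection parameters.
public class FetchExample {
    static String fetch() throws IOException {
        HttpConnection conn =
            (HttpConnection) Connector.open("http://example.com/data");
        try {
            InputStream in = conn.openInputStream();
            StringBuffer sb = new StringBuffer();
            int c;
            while ((c = in.read()) != -1) {
                sb.append((char) c);
            }
            in.close();
            return sb.toString();
        } finally {
            conn.close();
        }
    }
}
```

In a real midlet this call would be made from a background thread, since opening the connection may itself trigger one of the security screens described below.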
The MIDP security policy is based on end-user approval of every method call that can threaten the user's security (such as opening a network connection). Depending on the API, the frequency of security screens varies (from once and for all to once per call).
This scheme, although simple, has several drawbacks: users accept dangerous calls one at a time and have no idea of the forthcoming calls necessary for the transaction; there can be too many screens to perform a simple transaction; moreover, even a clearly malicious action will be statistically accepted by some users if the customer base is large enough. To mitigate some of these risks, MIDP 2.0 proposes to sign midlets. Signing changes the level of trust of the midlet and reduces the number of mandatory warning screens. Signing moves the decision of accepting an API call from the end-user to a trusted entity (the manufacturer, the operator, or an entity endorsed by them), but it does not provide clues for taking that decision. One goal of MOBIUS is to develop the technology for allowing the developer to supply clues and proofs that can help operators validate midlets developed by third parties.
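The trust decision is reflected in the application descriptor: a MIDP 2.0 midlet declares up front which protected APIs it needs, and a signed midlet's declared permissions can be granted without prompting. The attribute names below are standard MIDP 2.0 descriptor attributes; the particular permissions listed are just an example:

```
MIDlet-Permissions: javax.microedition.io.Connector.http
MIDlet-Permissions-Opt: javax.microedition.io.Connector.https
```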
Finally, the MIDP dynamic security policy does not provide any control over information flow. This is in contrast with European legislation, which puts information control at the heart of its requirements for computerized systems [38]. The information flow analysis reported in Section 5.3 provides a first step toward a technical enforcement of those regulations.
Several factors, such as handset bugs, different handset capabilities, and the operational environment (language, network), lead to a fragmentation of MIDP implementations. As on-device resources (CPU, memory for heap, code, or persistent data) are scarce, code specialization is the only viable way to adapt applications to handsets. It is not uncommon to have hundreds of versions of a single application. Whereas some solutions exist for automating the development, management, and provisioning to the handset of so many variants, in practice validation [32] is still based on a technology which is unable to cope with multiple versions: black-box testing. Indeed, only the bytecode is available to test houses, as software companies refuse to disclose their source code to third parties to protect their intellectual property. The MOBIUS outcome should help to automate the validation process for operators. PCC can be used for the most complex properties, whereas type-based techniques could be sufficient for simple ones.
3 PCC Scenarios
Figure 1 shows the basic structure of all certificate-based mobile code security models, including Proof Carrying Code. This basic model, or scenario, comprises a code producer and a code consumer. The basic idea in PCC is that the code is accompanied by a certificate. The certificate can be automatically and efficiently checked by the consumer, and it provides verifiable evidence that the code abides by a given security policy. The main difference with respect to digital signatures is that the latter provide certainty about the origin of the code, whereas PCC provides certainty about the behaviour of the code. Different flavours of PCC exist which use different techniques for generating certificates, ranging from traditional logic-based verification to static analysis in general and type systems in particular.
In the context of global computing, this initial scenario needs to be extended in a number of ways to consider the presence of multiple producers, multiple consumers, multiple verifiers and intermediaries. We have identified a series of innovative scenarios for applying Proof Carrying Code in the context of global computers [23]; below we summarize the main scenarios and issues of interest within the context of MOBIUS.
3.1 Wholesale PCC for MIDP devices
Figure 2 depicts the MOBIUS scenario for MIDP devices. It involves a trusted intermediary (typically the mobile phone operator), code producers that are external to the phone companies, and code consumers (the end users). PCC is used by developers to supply phone operators with proofs which establish that the application is secure. The operator then digitally signs the code before distributing it to the user.
This scenario for “wholesale” verification by a code distributor effectively combines the best of both PCC and trust, and brings important benefits to all participating actors. For the end user in particular, the scenario does not add PCC infrastructure complexity to the device, but still allows effective enforcement of advanced security policies.
From the point of view of phone operators, the proposed scenario enables achieving the required level of confidence in MIDP applications developed by third parties through formal verification. Although this process is very costly, which often results in third party code not being distributed, PCC enables operators to reproduce the program verification process performed by producers, but completely automatically and at a small fraction of the cost.
From the software producer perspective, the scenario removes the bottleneck of the manual approval/rejection of code by the operator. This results in a significant increase in market opportunity. Of course, this comes at a cost: producers have to verify their code and generate a certificate before shipping it to the operator, in return for access to a market with a large potential and which has remained rather closed to independent software companies.

Fig. 2. The MOBIUS scenario
3.2 Retail PCC and on-device checking
Although our main MOBIUS scenario is for wholesale proof-checking by a trusted intermediary, we are also exploring possibilities for “retail” PCC where checking takes place on the device itself. Limited computing capabilities rule out full-blown proof-checking for the moment, but there are other kinds of certificates that support verification: MIDP already annotates code with basic type information for lightweight bytecode verification [40], and we aim to extend this with more sophisticated types to capture security properties, and with the results of other analyses as in abstraction-carrying code [1]. Complementary to digital signatures, these certificates maintain the PCC property that clients perform actual verification of received code, by providing rich type information to make it fast and cheap to do.
3.3 Beyond the MOBIUS scenarios
Though the MOBIUS scenario concerns networks of mobile devices, we believe that the concept of trusted intermediary and the use of off-device PCC can have a significant impact on the quality of the applications developed in other contexts. For the case of general-purpose computers, we believe that our scenario is also applicable, since the role of trusted intermediary can be played by other organizations such as end-user organizations, governmental institutions, non-profit organizations, private companies, etc. Note that this scenario is radically different from the situation today: though some organizations play the role of trusted intermediaries, they do not have the technology for formally verifying code and they have to resort to other techniques such as manual code inspection. Thus, we argue that PCC holds the promise of bringing the benefits of software verification to everyone. The fact that verified code becomes available at low cost will increase the demand for verified code, which will in turn encourage software companies to produce verified code with certificates.
4 Security requirements
A fundamental question in developing a security architecture for global computers is the inventory of the security requirements that we should be able to express and guarantee. Establishing this inventory was one of the first steps of the project.
The choice to focus on the MIDP framework was very helpful, as it allowed us to consider concrete examples of various kinds of security requirements. Moreover, as the framework has been actively used for some time, there is considerable experience with security requirements for MIDP applications. Although inspired by the concrete MIDP setting, or even by concrete MIDP applications, the range of security requirements we have found is representative of the requirements that are important for any distributed computing infrastructure.
We have considered two, largely orthogonal ways to analyse and classify security requirements. In a first deliverable [19], we investigated two important classes of security requirements, namely resource usage and information flow. In a second one [20] we considered general security requirements that apply to all applications for the MIDP framework, so-called framework-specific security requirements, and security requirements specific to a given application, so-called application-specific security requirements. Here we summarise the main conclusions of those reports.
4.1 Resources
Any global computing infrastructure naturally raises issues about identifying and managing the resources required by mobile code. This is especially true on small devices, where resources are limited.
Central issues for resource policies are: what resources they should describe; how resource policies can contribute to security; and what kinds of formalism are appropriate. Surveying different possible kinds of “resource”, we are looking to identify those that are both likely to be amenable to formal analysis by current technologies, and are also clearly useful to real-world MIDP applications. Some of these are classical instances of computational resources, namely time, where counting bytecodes executed can be a useful estimate of actual runtime, and space, of stack or heap, which may be rather limited on a mobile device. The focus on MIDP also allows us to address some platform-specific kinds of resource, namely persistent store, as file storage space will be limited, and billable events such as text messages (SMS) or network connections (HTTP), which have real-money costs for the user. Many of these platform-specific resources can be unified by treating particular system calls as the resource to be managed: how many times they are invoked, and with what arguments. This fits neatly into the existing MIDP security model, where certain APIs are only available to trusted applications.
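To illustrate what such a policy bounds, the sketch below counts a billable event (here, SMS sends) against a budget. This is a dynamic-enforcement caricature for exposition only; MOBIUS aims to establish such bounds statically, so that the counter never needs to run:

```java
// Illustrative resource policy: at most `limit` billable events.
// A static analysis would prove that every execution path performs
// at most `limit` sends, making this runtime check unnecessary.
public class SmsBudget {
    private final int limit;
    private int sent;

    public SmsBudget(int limit) {
        this.limit = limit;
    }

    public synchronized void charge() {
        if (sent >= limit) {
            throw new SecurityException("SMS budget of " + limit + " exhausted");
        }
        sent++;
    }
}
```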
Policies to control resources such as these are useful in themselves, but they also have a particular impact on security. First, some platform-specific resources are intrinsically valuable — for example, because an operator will charge money for them — and so we want to guard against their loss. Further, overuse of limited resources on the device itself may compromise availability, leading to denial of service vulnerabilities.
4.2 Information flow
Information policies can track integrity or confidentiality. We concentrated on the second, as the former is essentially just its dual. The attacker model is a developer who leaks sensitive information to untrusted parties, either intentionally (in the case of a malicious developer) or by accident. On the MIDP platform, sensitive information is typically information related to the user: sources include the address book, audio or video capture, the permanent store, and text fields where the user has typed in private data. Untrusted information sinks are network connections and the permanent store, especially if the store is shared between applications.
4.3 Framework-specific security requirements
Framework-specific security requirements describe generic requirements applicable to all the applications running on a given framework. In industry there is already considerable experience with framework-specific security requirements for MIDP. [20] provides a comprehensive listing of all of these requirements.
Many of these requirements concern critical API methods: both the use of certain methods (does the application use the network?) and possibly also the arguments supplied to them (for example, the URL supplied to open a connection defines the protocol used). Deciding these questions is already an issue in the current MIDP code-signing scheme: to decide if signing is safe, it is necessary to know statically which critical APIs are used and to compute an approximation of the possible values of their key parameters. There are already some dedicated static analysis techniques for this [16, 24], but there is a limit to what such automated analyses can achieve.
More complicated requirements on API methods are temporal properties that involve the sequencing of actions, such as a requirement that every file that is opened must be closed before the program exits. Checking these properties requires a deeper insight into the control flow of a program, which can be complicated by the possibility of runtime exceptions, the dependency on dynamic data structures, and the influence of thread synchronization. Finite state automata are a convenient formalism for specifying temporal requirements. Such automata can be expressed in the program specification language JML that we plan to use. Moreover, they are easily understandable by non-experts.9
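As an illustration, the open/close requirement just mentioned corresponds to a two-state automaton. The Java encoding below is a sketch of the monitor that such an automaton describes, not MOBIUS tooling; reaching program exit while a file is open violates the property:

```java
// "Every file that is opened must be closed before the program exits"
// as a finite state automaton with states CLOSED and OPEN.
public class FileAutomaton {
    enum State { CLOSED, OPEN }

    private State state = State.CLOSED;

    void open()  { require(state == State.CLOSED); state = State.OPEN; }
    void close() { require(state == State.OPEN);   state = State.CLOSED; }
    void exit()  { require(state == State.CLOSED); }

    private static void require(boolean ok) {
        if (!ok) throw new IllegalStateException("temporal property violated");
    }
}
```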
4.4 Application-specific security requirements
An individual application may have specific security requirements beyond the generic requirements that apply to all the applications. These application-specific security requirements may simply be more specific instances of framework-specific security properties, but can also be radically different. Whereas framework-specific requirements are often about the absence of unwanted behaviour, security requirements for a particular application may include functional requirements, concerning the correctness of some functional behaviour. Application-specific properties are usually more complex than framework-specific properties and less likely to be certified by fully automatic techniques.
We have selected some archetypical applications representative of classical application domains for which interesting security requirements can be expressed. These applications include a secure private storage provider, an instant messenger client, an SSH client, and an application for remote electronic voting. All of these have strong security requirements, including information flow requirements, that go beyond the framework-specific requirements.

9 In fact, the current industrial standard for testing MIDP applications, the Unified Testing Criteria [32], already uses finite automata for specification, albeit informally.
The final two applications selected are in fact core services of the MIDP platform itself rather than applications that run on the platform, namely a bytecode verifier and a modified access controller. Note that for these components functional correctness is one of the security requirements. The specification language JML that we will use in logic-based verification is capable of expressing such functional requirements, although extensions to conveniently use mathematical structures in specification, as proposed in [15], may be needed to make this practical.
5 Enabling technologies
A central component of the technology being developed by MOBIUS is a hierarchy of mechanisms that allow one to reason about intensional and extensional properties of MIDP-compliant programs executed on a Java Virtual Machine. The two enabling technologies that these mechanisms rely on are typing and logic-based verification. Depending on the security property, and the respective computational resources, code producer and consumer (or verifier in the case of wholesale PCC) may negotiate about the level at which the certificate is formulated. For example, the availability of a type system with an automated inference algorithm reduces the amount of code annotations, whereas expressive program logics may be applied in cases when type systems are insufficiently flexible, or when no static analysis is known that ensures the property of interest. In the sequel, we provide a short overview of the mechanisms developed during the first year of the project, namely the MOBIUS program logic for sequential bytecode, and type systems for resources, information flow, and aliasing.
In the following sections we summarise some of the formal systems which we have developed and outline possible verification approaches.
5.1 Operational model
The lowest level of our hierarchy of formal systems consists of an operational model of the Java Virtual Machine that is appropriate for MOBIUS. In particular, as a consequence of the choice to target the MIDP profile of the CLDC platform, features such as reflection and dynamic class loading may safely be ignored, as is the case for complex data types. In addition, our current model is restricted to the sequential fragment of the JVM and does not model garbage collection.
The operational model builds the basis for all program verification formalisms to be developed in MOBIUS: all formal systems considered within the MOBIUS project – and hence the validity of certificates – may in principle be given interpretations that only refer to the operational judgments defining the model. Like any mathematical proof, these interpretations may involve some abstractions and definitional layers, including some more abstract operational semantics which we have defined and formally proven compatible with the small-step relation.
In order to guarantee the utmost adherence to the official specification, we have implemented a small-step semantics. The corresponding judgement relates two consecutive states during program execution. We keep the same level of detail as the official description, but with some simplifications due to the fact that we concentrate on the CLDC platform.
The correctness of an operational model cannot be formally proved; we assert it axiomatically, and have developed a rigorous mathematical description of it, called Bicolano, in the Coq proof assistant [43]. In order to gain more confidence in our axiomatization, we have also developed an executable version of fragments of Bicolano, which can be used to compare evaluation results with other implementations of the official specification.
5.2 Program logic
The second layer of our reasoning infrastructure is a program logic. This allows proof patterns typically arising during the verification of recursive program structures to be treated in a uniform manner. Going beyond program logics with partial-correctness interpretations, the MOBIUS logic also supports the verification of non-terminating program executions by incorporating strong invariants [28].
The global specification structure is given by a table $\mathcal{M}$ that associates a partial-correctness method specification $\phi$ and a method invariant $\varphi$ with each defined method, where the latter relates each state occurring throughout the (finite or infinite) execution of the method to its initial state. In order to support the modular verification of virtual methods, the method specification table is required to satisfy a behavioural subtyping condition, which mandates that the specification of an overriding method declaration must be stronger than (i.e. imply) the specification of the overridden method. In addition, each program point in a method may be decorated with an assertion that is to be satisfied whenever control flow passes through that program point. All such annotations are collected in a global annotation table $\mathcal{Q}$.
The program logic employs proof judgements of the form $G \vdash \{A\} \ell \{B\} (I)$, where the program point $\ell$ (comprising a method identifier $M$ and a label in the definition of $M$’s body) is associated with a (local) precondition $A$, a local postcondition $B$, and a (strong) invariant $I$. The types and intended meanings of these components are as follows.
Whenever the execution of $M$, starting at label 0 in initial state $s_0$, reaches $\ell$ with current state $s$, and $A(s_0, s)$ holds, then
- $B(s_0, s, t)$ holds, provided that the method terminates with final state $t$,
- $I(s_0, s, H)$ holds, provided that $H$ is the heap component of any state arising during the continuation of the current method invocation, including invocations of further methods, i.e. subframes,
- $Q(s_0, s')$ holds, provided that $s'$ is reached at some label $\ell'$ during the continuation of the current method invocation, not including subframes, where $\mathcal{Q}(\ell') = Q$.
Moreover, the judgements are supplied with a proof context $G$. The latter contains assumptions typically associated with merge-points in the control flow graph. These
assumptions are used by the logic rules in order to avoid infinite cycling in the proof derivation. For the technical details of this the reader is referred to [22,9].
In order to give a flavor of what the proof rules look like, we show the rule for basic instructions (arithmetic operations, load/store, ...):
\[
\textsc{Instr}\quad
\frac{G \vdash \{\mathit{Pre}_{M,l}(A)\}\ M, \mathit{suc}_M(l)\ \{\mathit{Post}_{M,l}(B)\}\ (\mathit{Inv}_{M,l}(I)) \qquad \psi}
{G \vdash \{A\}\ M, l\ \{B\}\ (I)}
\]
Note that the correctness of the instruction at \( l \) depends on the correctness of its successor. Also, the rule uses predicate transformers \( \mathit{Pre}_{M,l}(A) \), \( \mathit{Post}_{M,l}(B) \), and \( \mathit{Inv}_{M,l}(I) \), which relate the assertions of the successor instruction to the assertions of the instruction at \( l \). For the definition of these transformers, see [9]. Finally, the side condition \( \psi \) states that the local precondition \( A \) implies the strong invariant \( I \) and any annotation that may be associated with \( M, l \) in the annotation table \( \mathcal{Q} \):
\[
\psi \;=\; \forall s_0\, s.\; A(s_0, s) \Rightarrow \Big( I(s_0, s, \mathit{heap}(s)) \;\land\; \forall Q.\; \mathcal{Q}(M, l) = Q \Rightarrow Q(s_0, s) \Big).
\]
In addition to rules of similar shape for all instruction forms, the logic is also supplied with logical rules, such as a consequence rule and an axiom rule that extracts assumptions from the proof context.
We have proven a soundness theorem for the proof system which ensures that the derivability of a judgement \( G \vdash \{ A \} \ell \{ B \} (I) \) entails its semantic validity. The latter is obtained by formulating the above informal interpretation in terms of Bicolano’s operational judgements.
This soundness result may subsequently be extended to programs. We first say that a program has been verified if each entry in the method specification table is justified by a derivation for the corresponding method body, and similarly for the entries of local proof contexts \( G \). The soundness result for programs then asserts that all methods of a verified program satisfy their specifications: whenever \( \mathcal{M}(M) = (\phi, \varphi) \) holds, any invocation of \( M \) is guaranteed to fulfill the method invariant \( \varphi \), with terminating invocations additionally satisfying the partial-correctness assertion \( \phi \).
In order to evaluate our logic experimentally, we have implemented a verification condition generator (VCgen) that applies the proof rules automatically and emits verification conditions stemming from side conditions such as \( \psi \) above, and from applications of the rule of consequence.
In the next period of the project, we will extend the logic with mechanisms for reasoning about the consumption of resources, and incorporate ghost variables and associated concepts. This will provide a platform for encoding some type systems that defy the current version of the program logic. Typical examples are type systems that track the number of calls to certain API methods, such as those for sending SMS messages or opening files.
### 5.3 Type systems
In this section we describe MOBIUS work on types for information flow, resources, and alias control. Classically, types in programming languages are used to check data formats, but we envisage much broader type-based verification, with specialised systems
to analyse individual security properties. Indeed, Java 5 has annotations that support just such *pluggable* type systems [11].
**Information flow** Work on information flow has focused on the definition of an accurate information flow type system for sequential Java bytecode and on its relation with information flow typing for Java source code, as well as on flexible analyses for concurrency.
**Policies** Our work mainly focuses on termination-insensitive policies, which assume that the attacker can only observe the input/output behavior of methods. Formally, the observational power of the attacker is captured by its security level (taken from a lattice $S$ of security levels) and by *indistinguishability* relations $\sim$ on the semantic domains of the JVM memory, including the heap and the output value of methods (normal values or exceptional values).
Then, policies are expressed as a combination of global policies, that attach levels to fields, and local policies, that attach to methods identifiers signatures of the form $k_v \xrightarrow{k_h} k_r$, where $k_v$ sets the security level of local variables, $k_h$ is the heap effect of the method, and $k_r$ is a record of security levels of the form $\{n : k_n, e_1 : k_{e_1}, \ldots, e_n : k_{e_n}\}$, where $k_n$ is the security level of the return value (normal termination) and each $e_i$ is an exception class that might be propagated by the method, and $k_{e_i}$ is its corresponding security level.
A method is safe w.r.t. a signature $k_v \xrightarrow{k_h} k_r$ if:
1. two terminating runs of the method with $\sim_{k_v}$-equivalent inputs and equivalent heaps yield $\sim_{k_r}$-equivalent results and equivalent heaps;
2. the heap effect of the method is greater than $k_h$, i.e. the method does not perform field updates on fields whose security level is below $k_h$.
The definition of heap equivalence adopted in existing work on information flow for heap-based languages, including [8], often assumes that pointers are opaque, i.e. that the only observations an attacker can make about a reference are those about the object to which it points. However, Hedin and Sands [29] have recently observed that this assumption is invalidated by methods from the Java API, and exhibited a Jif program that does not use declassification but leaks information by invoking API methods. Their attack relies on the assumption that the function allocating new objects on the heap is deterministic; this assumption is realistic, and it is satisfied by many implementations of the JVM. In addition to demonstrating the attack, Hedin and Sands show how a refined information flow type system can thwart such attacks for a language that allows references to be cast to integers. Intuitively, their type system tracks the security level of a reference as well as the security levels of the fields of the object it points to.
**Bytecode verification for secure information flow** We have defined a lightweight bytecode verifier that enforces non-interference of JVM applications, and formally proved its soundness against Bicolano [8]. The lightweight bytecode verifier performs a one-pass analysis of programs, checking for every program point that the instruction satisfies the constraints imposed by transition rules of the form
$$\frac{P[i] = \mathit{ins} \qquad \mathit{constraints}(\mathit{ins}, st, st', \Gamma)}{\Gamma, i \vdash st \rightarrow st'}$$
where $i$ is an index consisting of a method body and a program point in this body, and the environment $\Gamma$ contains the policies, a table of security signatures for each method identifier, a security environment that maps program points to security levels, and information about the branching structure of programs, which is verified independently by preliminary analyses. For increased precision, these preliminary analyses check null pointers (to predict null-pointer exceptions that cannot be thrown), classes (to predict the targets of throw instructions), array accesses (to predict out-of-bounds exceptions that cannot be thrown), and exceptions (to over-approximate the set of throwable exceptions for each method); the resulting information is then used by a checker that verifies control dependence regions (cdr), using the preliminary results to minimise the size of the regions.
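To give a concrete flavour of such one-pass, transfer-rule-based checking, the following toy Python sketch checks straight-line code over a two-point security lattice. It is an illustration only; neither the instruction set nor the program representation corresponds to the MOBIUS verifier.

```
# Toy one-pass information-flow check over a two-point lattice {LOW, HIGH}.
# The instruction set and program representation are invented for illustration.
LOW, HIGH = 0, 1

def lub(a, b):
    return max(a, b)  # least upper bound in the two-point lattice

def check(body, var_levels):
    st = []  # abstract operand stack holding security levels
    for ins in body:
        op, *args = ins
        if op == "push":                        # constants are public
            st.append(LOW)
        elif op == "load":                      # variable level flows to stack
            st.append(var_levels[args[0]])
        elif op == "binop":                     # result joins both operand levels
            st.append(lub(st.pop(), st.pop()))
        elif op == "store":                     # reject HIGH data in a LOW variable
            lvl = st.pop()
            if lvl > var_levels[args[0]]:
                raise TypeError(f"illegal flow into {args[0]}")
    return st

# Accepted: a HIGH expression stored in a HIGH variable.
check([("load", "h"), ("push", 1), ("binop",), ("store", "h")],
      {"h": HIGH, "l": LOW})
```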
**Relation with information flow type systems for Java** JFlow [34] is an information-flow-aware extension of Java that statically enforces flexible and expressive information flow policies through a constraint-based algorithm. Although the expressiveness of JFlow makes it difficult to characterize the security properties enforced by its type system, sound information flow type systems inspired by JFlow have been proposed for exception-free fragments of Java.
JFlow offers a practical tool for developing secure applications but does not address mobile code security as envisioned in MOBIUS, since it applies to source code. In order to show that applications written in (a variant of) JFlow can be deployed in a mobile code architecture that delivers the promises of JFlow in terms of confidentiality, [7] proves that a standard (non-optimizing) Java compiler translates programs that are typable in a type system inspired by [5], but extended to exceptions, into programs that are typable in our system.
**Concurrency** Extending the results of [8] to multi-threaded JVM programs is necessary in order to cover MIDP applications, but notoriously difficult to achieve. Motivated by the desire to provide flexible and practical enforcement mechanisms for concurrent languages, Russo and Sabelfeld [41] develop a sound information flow type system that enforces termination-insensitive non-interference for a simple concurrent imperative language. The originality of their approach resides in the use of pseudo-commands to constrain the behavior of the scheduler so as to avoid internal timing leaks. One objective of the project is to extend their ideas to the setting of the JVM.
**Declassification** Information flow type systems have not found substantial application in practice, in particular because information flow policies based on non-interference are too rigid and do not authorize information release. In contrast, many applications deliberately release some amount of sensitive information. Typical examples of deliberate information release include sending an encrypted message through an untrusted network, or allowing confidential information to be used in statistics over large databases.
Resource analysis In §4.1 we identified requirements for MOBIUS resource security policies, as well as some notions of “resource” relevant to the MIDP application domain. Here we survey work within the project on analyses to support such policies, with particular focus on the possibility of formally verifying their correctness: essential if they are to be a basis for proof-carrying code.
Memory usage The Java platform has a mandatory memory allocation model: a stack for local variables, and an object heap. In [9] we introduce a bytecode type system for this, where each program point has a type giving an upper limit on the number of heap objects it allocates. Correctness is proved via a translation into the MOBIUS logic, and every well-typed program is verifiable [21, Thm. 3.1.1]. Using the technique of type-preserving compilation we can lift this above the JVM: we match the translation from a high-level program $F$ to bytecode $\llbracket F \rrbracket$ with a corresponding translation of types; and again for every well-typed program its bytecode compilation is verifiable in the MOBIUS logic [21, Thm. 3.1.3]. Even without the original high-level source program and its types, this low-level proof can certify the bytecode for PCC.
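As a rough illustration of the idea (not the type system of [9], and ignoring branches and method calls), such per-point bounds can be computed back to front over straight-line code:

```
# Sketch: assign each program point an upper bound on the number of heap
# objects allocated from that point onward (straight-line code only).
def allocation_bounds(body):
    bounds = [0] * (len(body) + 1)            # bound after the last instruction
    for i in range(len(body) - 1, -1, -1):
        cost = 1 if body[i] == "new" else 0   # only `new` consumes a heap cell
        bounds[i] = bounds[i + 1] + cost
    return bounds                             # bounds[0] bounds the whole method

assert allocation_bounds(["load", "new", "store", "new", "return"])[0] == 2
```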
Work in the MRG project [4] demonstrated more sophisticated space inference for a functional language, using Hofmann-Jost typing [30] to give space bounds dependent on argument size, and with these types used to generate resource proofs in a precursor of the MOBIUS logic. We have now developed this further, into a space type system for object oriented programming based on amortised complexity analysis [31].
Billable events Existing MIDP security policies demand that users individually authorise text messages as they are sent. This is clearly awkward, and the series of confirmation pop-up screens is a soft target for social engineering attacks. We propose a Java library of resource managers that add flexibility without compromising safety [21, §3.3]: instead of individual confirmation, a program requests authorisation in advance for a series of activities. Resource security may be assured either by runtime checks, or a type system for resource accounting, such that any well-typed program will only attempt to use resources for which it already has authorisation.
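The following Python sketch illustrates the advance-authorisation pattern; the class and method names are invented here, and the actual managers of [21, §3.3] are Java library classes:

```
# Sketch of an advance-authorisation resource manager (names invented).
class SmsManager:
    def __init__(self):
        self.allowance = 0                      # messages authorised but unsent

    def request(self, n, user_confirms):
        """Ask the user once for permission to send up to n messages."""
        if user_confirms(n):
            self.allowance += n

    def send(self, message):
        if self.allowance == 0:                 # runtime check backing the policy
            raise PermissionError("no remaining SMS authorisation")
        self.allowance -= 1
        # ... hand the message to the platform messaging API ...

mgr = SmsManager()
mgr.request(3, user_confirms=lambda n: True)    # one confirmation, three sends
mgr.send("hello")
```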
We have also used abstract interpretation to model such external resources [10]. From a program control-flow graph, we infer constraints in a lattice of permissions: whenever some resourceful action takes place, the program must have acquired at least the permissions required. Automated constraint solving can then determine whether this condition is satisfiable.
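A minimal sketch of this style of analysis follows, with an invented graph representation and permission sets ordered by inclusion; the meet over paths is intersection, so an action counts as authorised only if the permission is held on every path reaching it:

```
# Sketch: forward propagation of acquired permissions over a control-flow graph.
def check_permissions(cfg, entry, acquires, uses):
    held = {n: None for n in cfg}          # None plays the role of "top"
    held[entry] = frozenset()
    work = [entry]
    while work:
        n = work.pop()
        if not uses[n] <= held[n]:
            return False, n                # resourceful action not authorised
        out = held[n] | acquires[n]
        for s in cfg[n]:
            new = out if held[s] is None else held[s] & out  # meet = intersection
            if new != held[s]:
                held[s] = new
                work.append(s)
    return True, None

cfg = {"a": ["b"], "b": []}                # node a acquires, node b uses
ok, _ = check_permissions(cfg, "a",
                          {"a": frozenset({"sms"}), "b": frozenset()},
                          {"a": frozenset(), "b": frozenset({"sms"})})
assert ok
```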
Execution time Static analysis to count instructions executed can be verified in bytecode logic using resource algebras [3]. We have recently developed a static analysis framework [2] which provides a basis for performing cost analysis directly at the bytecode
level. This allows obtaining cost relations in terms of the size of input arguments to methods. In addition, platform-dependent factors are a significant challenge to predicting real execution time across varied mobile platforms. We have shown how parameterised cost models, calibrated to an individual platform by running a test program, can predict execution times on different architectures [33]. In a PCC framework, client devices would map certified platform-independent cost metrics into platform-dependent estimates, based on fixed calibration benchmarks.
**Alias control** Alias characterisations simplify reasoning about programs [26]; they enable modular verification, facilitate thread synchronisation, and allow programmers to exchange internal representations of data structures. Ownership types [18, 17] and Universe types [35] are mechanisms for characterising aliasing in object oriented programming languages. They organise the heap into a hierarchical structure of nested non-overlapping contexts where every object is contained in one such context. Each context is characterised by an object, which is said to own all the objects contained directly in that context. Figure 3 illustrates the ownership structure of a linked list with iterator.

In the Universe Type System [35, 26], a context hierarchy is induced by extending types with Universe annotations, which range over `rep`, `peer`, and `any`. A field typed with a Universe modifier `rep` denotes that the object referenced by it must be within the context of the current object; a field typed with a Universe modifier `peer` denotes that the object referenced by it must be within the context that also contains the current object; a field typed with a Universe modifier `any` is agnostic about the context containing the object referenced by the field.
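To illustrate the intent of the modifiers, here is a toy dynamic model in Python; the real Universe Type System is of course a static discipline for Java, so this sketch merely mimics the ownership checks at run time:

```
# Toy dynamic model of Universe contexts: each object records its owner.
class Obj:
    def __init__(self, owner=None):
        self.owner = owner                  # None models the root context

def rep_ok(holder, target):
    return target.owner is holder           # target sits in holder's context

def peer_ok(holder, target):
    return target.owner is holder.owner     # same context as holder

list_head = Obj()                           # owned by the root context
node = Obj(owner=list_head)                 # a `rep` reference from the list
assert rep_ok(list_head, node)
assert peer_ok(node, Obj(owner=list_head))  # nodes are `peer`s of each other
```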
So far, we have concentrated on the following three areas:
- **Universe Java**: The formalisation and proof of soundness of a minimal object-oriented language with Universe Types.
- **Generic Universe Java**: The extension of Universe Java to Generic Java.
- **Concurrent Universe Java**: The use of Universe Types to control data races and atomicity in a concurrent version of Universe Java.
**UJ - Universe Java** As a basis for the other two work areas, we formalized Universe Java and proved the following key properties:
- **Type safety**: The Universe annotations `rep` and `peer` correctly indicate the owner of an object.
- **Encapsulation**: The fields of an object can only be modified through method calls made on the owner of that object (owner-as-modifier discipline).
**GUJ - Generic Universe Java** We extended Universe Java to handle generics, which now form part of the official release of Java 1.5. In Generic Java, classes have parameters which can be bound by types: since in Universe Java a type is made up of a Universe modifier and a class, GUJ class parameters in generic class definitions are bound by Universe modifiers *and* classes. Generic Universe Java provides more static type safety than Universe Java by reducing the need for downcasts with runtime ownership checks. We proved that GUJ is type safe and enforces encapsulation.
**UJ and Concurrency** The Universe ownership relation in UJ provides a natural way to characterise non-overlapping nested groups of objects in a heap. We therefore exploit this structure in a Java with multiple concurrent threads [25] to ensure atomicity and absence of data races.
### 6 Towards certificate generation and certificate checking
An important part of a PCC infrastructure is concerned with certificates. For the code producer one of the main tasks is to generate a certificate ensuring that his program meets the security policy of the client. In contrast, the code verifier/consumer needs to convince himself that the transmitted program respects his security policy.
In the scenario of Fig. 2 we assume that operators send compiled code, i.e. bytecode, to their customers, but this leaves the question of whether code producers will supply source code or bytecode to the operator. In MOBIUS, we concentrate on the latter, since this avoids the inclusion of the compiler in the trusted code base and does not require code producers to provide access to their source code.
#### 6.1 Certificate generation
The MOBIUS project focuses on two approaches for the generation of certificates, logic-based verification and type-based verification. By exploring both approaches, we hope to complement the rigour of our formalization with flexibility and automation.
The first technique (logic-based verification) is the concept of a proof transforming compiler [6], where properties can be specified and verified at the source code level and are then guaranteed to be preserved by compilation, analogously to the way that *type-preserving compilation* guarantees the preservation of properties in the context of type systems. In addition to a program written in the source language, such a compiler expects a proof that the source program satisfies a (high-level) specification. Its output consists of the bytecode program and a proof (*certificate*) that this program satisfies the translation of the original specification into a formalism appropriate for bytecode. Logic-based verification is particularly suitable for functional correctness properties, but we have already shown in previous work how to generate JML annotations for a large class of high-level security properties [39]. Interactive usage of the proof assistant, for example in order to discharge side conditions emitted by the VCgen, is also admissible. To be able to write such a proof transforming compiler for Java programs annotated with JML, we have developed a dedicated annotation language for Java bytecode: the Bytecode Modeling Language (BML) [13].
The second technique for the generation of specifications and certificates, *type-based verification*, rests on automated (and in general conservatively approximate) program analysis. Here, certificates are derived from typing derivations or fixed-point solutions of abstract interpretations, as outlined in the previous section and in the philosophy of lightweight bytecode verification.
#### 6.2 Certificate checking
For the code verifier/consumer, the goal is to check that the received program meets its specification (i.e. to check the validity of the certificate) and to ensure that the specification is compliant with his security policies. Both parts should be fully automatic, and the machinery employed for this task is part of the trusted computing base (TCB).

The size of the TCB is one of the main difficulties in a PCC architecture. Foundational PCC [2] minimizes the TCB by modeling the operational semantics of the bytecode in a proof assistant, and by proving properties of programs w.r.t. the operational semantics. Deductive reasoning is then used to encode program logic rules or typing rules. FPCC allows the VCgen and the type checkers for the application type systems to be removed from the TCB, but the deductive encoding of proof rules or typing rules leads to bigger certificates than using a VCgen or a type checker.
One ambitious goal is to merge both approaches, obtaining a small TCB and small certificates. Ultimately, a MOBIUS certificate is always a Coq proof of the desired property, phrased in terms of the semantics. Apart from the proof assistant itself, Bicolano represents the trusted computing base of the MOBIUS reasoning infrastructure. By representing formal systems in a proof assistant, we firstly increase the confidence in the validity of our checkers. Secondly, these representations allow us to exploit the infrastructure of the proof assistant when verifying concrete programs and their certificates.

Based on this, and complementing FPCC, the following two proof methodologies for type-based verification are considered within MOBIUS.
**Derived Assertions** The derived-assertions approach pioneered in MRG associates with each typing judgement an assertion in the program logic, the derived assertion. For each (schematic) typing rule one then proves a derived program logic proof rule operating on these derived assertions, possibly involving semantic (e.g. arithmetic) side conditions to be discharged by the proof assistant. Given a concrete typing derivation, a proof of the derived assertion corresponding to its conclusion can then be obtained by a simple tactic which invokes these derived rules, mirroring the typing derivation. The typing derivation itself will typically be obtained using an automatic type inference, which then need not be part of the TCB.
**Reflection** Recent versions of Coq come with a powerful computational engine [27] derived from the OCaml compiler. This allows computationally intensive tasks to be carried out within the proof assistant itself; a prominent example is Gonthier and Werner’s self-contained proof of the four-color theorem within Coq. This feature can be harnessed for our purposes in the following way, using the reflection mechanism:
– we encode a type system T as a boolean-valued function typable_T on programs, and prove that the type system is sound in the sense that it enforces some expected semantic property interp_T. Formally, soundness is established by proving the lemma
\[
\text{TypeCorrect} : \forall P : \text{prog}, \text{typable}_T(P) = \text{true} \implies \text{interp}_T(P)
\]
– to prove that interp_T(P₀) holds for a particular program P₀, we just have to apply the TypeCorrect lemma, and prove that typable_T(P₀) = true holds.
– if the proof checker allows reasoning by computation (i.e. two propositions are considered equal if they are computationally equal) and the program P₀ is typable, then the proposition
\[
\text{typable}_T(P₀) = \text{true}
\]
is equal (i.e. reduces) to true = true which is trivial to prove.
The Coq proof assistant supports exactly this reasoning mechanism. In Coq, the representation of such a proof is TypeCorrect P₀ (refl_equal true), where (refl_equal true) is a proof of true = true.
Similar to this reflective approach to PCC is the technique we presented in [14], where lattice abstract interpretation is used to verify bounded memory use. Significantly, here both the algorithm and its correctness proof are expressed within the Coq proof assistant, such that we may extract a certified checker from the proof itself. This allows a novel realisation of proof-carrying code, where a fast program verifier is trusted because it is obtained from its own proof of correctness.
7 Next steps
After a year of activity, the MOBIUS project is well on track. Scientific progress is proceeding as expected: security requirements and the PCC scenarios for global computing have been defined, and significant advances in enabling technologies have been reported in deliverables and scientific publications. For further information, please consult http://mobius.inria.fr.
References
HookTracer: A System for Automated and Accessible API Hooks Analysis
By
Andrew Case, Mohammad M. Jalalzai, Md Firoz-Ul-Amin, Ryan D. Maggio, Aisha Ali-Gombe, Mingxuan Sun, and Golden G. Richard III
From the proceedings of
The Digital Forensic Research Conference
DFRWS 2019 USA
Portland, OR (July 15th - 19th)
DFRWS is dedicated to the sharing of knowledge and ideas about digital forensics research. Ever since it organized the first open workshop devoted to digital forensics in 2001, DFRWS continues to bring academics and practitioners together in an informal environment.
As a non-profit, volunteer organization, DFRWS sponsors technical working groups, annual conferences and challenges to help drive the direction of research and development.
https://dfrws.org
HookTracer: A System for Automated and Accessible API Hooks Analysis
Andrew Case a, Mohammad M. Jalalzai b, c, Md Firoz-Ul-Amin c, Ryan D. Maggio b, c, Aisha Ali-Gombe d, Mingxuan Sun e, Golden G. Richard III b, c,*
a Volatility Foundation, USA
b Center for Computation and Technology, Louisiana State University, USA
c School of Electrical Engineering & Computer Science, Louisiana State University, USA
d Department of Computer and Information Sciences, Towson University, USA
Keywords:
Memory forensics
Malware
Memory analysis
API hooks
Unicorn
Emulation
Abstract
The use of memory forensics is becoming commonplace in digital investigation and incident response, as it provides critically important capabilities for detecting sophisticated malware attacks, including memory-only malware components. In this paper, we concentrate on improving analysis of API hooks, a technique commonly employed by malware to hijack the execution flow of legitimate functions. These hooks allow the malware to gain control at critical times and to exercise complete control over function arguments and return values. Existing techniques for detecting hooks, such as the Volatility plugin apihooks, do a credible job, but generate numerous false positives related to non-malicious use of API hooking. Furthermore, deeper analysis to determine the nature of hooks detected by apihooks typically requires substantial skill in reverse engineering and an extensive knowledge of operating systems internals. In this paper, we present a new, highly configurable tool called hooktracer, which eliminates false positives, provides valuable insight into the operation of detected hooks, and generates portable signatures called hook traces, which can be used to rapidly investigate large numbers of machines for signs of malware infection.
© 2019 The Author(s). Published by Elsevier Ltd on behalf of DFRWS. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
1. Introduction
The last decade has seen the rise of memory forensics from a research-grade idea to a standard procedure in digital forensics workflows. This adoption has largely been driven by the widespread creation and use of memory-only malware and malware components that require little-to-no interaction with the local filesystem. To detect such threats, investigators must rely on analysis of the data structures and artifacts contained within volatile memory. Fortunately, significant open-source memory forensics research and tool development has been performed that enables a wide variety of analysis tasks, including malware detection, insider threat investigations, system audits, and more (The Volatility Framework, 2017; Rekall, 2016; Ligh et al., 2014). One of the most significant drawbacks of all of these tools, however, is the inaccessibility of several critical analysis tasks to less experienced investigators, especially those with little previous background in operating system internals and malware reverse engineering. One of the most glaring examples of this is the detection and analysis of API hooks by userland malware on Windows systems. The use of API hooks by malware allows it to inspect, filter, and modify any data being passed to and returned by functions within running programs, including any associated libraries (Branco et al., 2012). By placing such hooks, malware is then able to perform a wide variety of tasks, such as keystroke logging, password stealing, hiding processes and files, hijacking network connections, preventing security tools from loading, and nearly anything else that it wishes to perform on the system. Due to the power that API hooks gives malware over a system, detection of such threats is a high priority for digital investigators (Case and Richard, 2016; Peter, 2018).
The current inaccessibility of API hook triage and analysis to all but the most experienced investigators significantly reduces the scalability of memory forensics and presents a significant bottleneck within the workflow of organizations. In this paper, we demonstrate these issues through the use of the industry-standard apihooks (Ligh, 2013) plugin in Volatility and our newly developed
Volatility plugin, hooktracer. Our plugin performs post-processing of apihooks-generated output in conjunction with our own memory analysis algorithms. The goal of our plugin is to automate significant portions of API hook triage, make the analysis results accessible to novice investigators, and generate data that can be fed into other automated analysis engines, such as machine learning and security analytics systems. Our plugin is intended to benefit investigators in enterprise environments, where a significant number of 3rd party applications and security monitors are installed after the initial Windows installation, which populates memory with many disparate artifacts. In these environments, whitelists of memory-resident data and system-wide instrumentation are generally not deployed or realistically even possible, making the large amount of noise generated by certain memory forensic techniques untenable.
This paper begins by providing an overview of API hooks and how Volatility's existing apihooks plugin detects them. It then illustrates the specific deficiencies in the existing apihooks plugin that make it largely unusable in real-world, enterprise environments. This discussion is followed by presentation of the algorithm that drives our new analysis plugin along with the results of our plugin against a variety of operating system versions, security software, and malware samples.
2. API hooks background
2.1. Code injection
As mentioned in the previous section, the use of API hooks allows malware to have nearly complete control of a running system. To place API hooks within target processes, malware must first be able to run code inside a process. A variety of code injection techniques are available to malware to accomplish this goal (Hosseini, 2017). These techniques allow injection of blocks of code, commonly known as shellcode, or entire library files (DLL files) into foreign processes. In nearly all modern investigations, these blocks of shellcode or DLL files will be entirely memory-resident. Detection of code injection techniques can be accomplished with Volatility's existing malfind, messagehooks, and eventhooks plugins, among others (Case, 2016).
Once malware is injected into a victim process, it often inserts API hooks (Bremer, 2012) within the victim's address space. The hooks effectively replace the implementation of an existing function with one implemented by the malware. Such hooks can take one of two forms, both of which are detected by Volatility's apihooks plugin, explained next.
2.2. IAT and EAT hooks
Portable executable (PE) files are the native executables for Windows environments (Matt, 2010). At compile time, generated PE files specify which libraries and external functions are needed for the application to operate correctly. When a Windows application is loaded, the runtime loader will then load and initialize these libraries from the file system using the LoadLibrary API (Galkovsky, 2009) and resolve the runtime addresses of needed functions through the GetProcAddress API (MSDN, 2018). As these addresses are resolved, they are stored in optimized lookup tables so that future calls will not require loader-related overhead. For functions that an application or library imports, the resolved addresses are stored in the module’s import address table (IAT). For functions that are exported for use by other modules, the resolved addresses are stored in the module’s export address table (EAT).
Malware can effectively hijack the operation of resolved functions by overwriting the corresponding entries within these lookup tables. Once addresses are overwritten with the addresses of malicious functions, all future calls to the victim function are completely under the control of the malware.
Volatility’s apihooks plugin detects such hooks by first enumerating every module (the main application and its dependent DLLs) in a process’ address space and then verifying that every entry in the IAT and EAT for each module points back into its owning module or, if it points outside the module, that it matches a whitelist of known redirected functions. Otherwise, any entry whose implementation points to an address outside the owning module is reported as hooked. Fig. 1 shows how IAT and EAT hooks are reported in Volatility. In this output, the type of the hook (IAT), the process that is hooked (svchost.exe) and the function (SLGenerateOfflineInstallationId) that was hooked inside of the victim DLL (slc.dll) are shown. Additional information includes the module responsible for the redirection (sppc.dll) and a disassembly of the first few redirected instructions.
2.3. Inline/trampoline hooks
The second technique used for API hooking is known as inline or trampoline hooks. These hooks work by overwriting the first few instructions of a function to redirect control flow to a malicious implementation. This type of hook has two advantages for malware authors compared to IAT/EAT hooks. First, inline hooks are stealthier in memory as automated disassembly is required to detect them, instead of a verification of the IAT and EAT. Second, inline hooks can target any function within a module, not just those that are directly imported or exported.
To detect these types of hooks, Volatility’s apihooks plugin performs some relatively simple static analysis. The plugin enumerates all functions within all loaded modules of a process, and then disassembles the first few instructions to see if control flow leaves the containing function. If such a control flow change occurs, the plugin will report output as shown in Fig. 2. This catches most inline hooks, but may miss hooks inserted deeper into a function.
3. Drawbacks of current memory forensic detection of API hooks
While existing memory forensic algorithms for enumerating API hooks are capable of detecting most hooking mechanisms, the amount of data produced by such algorithms on modern operating systems is too much for even subject matter experts to handle. To make matters worse, analyzing a reported hook to determine if it was placed by legitimate software or malware requires reverse engineering of in-memory code and understanding the context of
```
Hook mode: Usermode
Hook type: Inline/Trampoline
Process: 420 (IEXPLORE.EXE)
Victim module: mswsock.dll (0x71a50000 - 0x71a8f000)
Function: mswsock.dll!WSPStartup at 0x71a5c35b
Hook address: 0x27000a
Disassembly(0):
0x71a5c35b e9aa3c818e     JMP 0x27000a
0x71a5c360 81ec24010000   SUB ESP, 0x124
0x71a5c366 a12c72a871     MOV EAX, [0x71a8722c]
0x71a5c36b 8945fc         MOV [EBP-0x4], EAX
0x71a5c36e 8b4550         MOV EAX, [EBP+0x50]
0x71a5c371 53             PUSH EBX
0x71a5c372 8b             DB 0x8b
Disassembly(1):
0x27000a e9b36fffff       JMP 0x26365a
```
Fig. 2. An inline/trampoline hook.
each hook within the process. Manual examination of each hook clearly doesn’t scale without refining how apihooks operates, as we discuss in the next section.
3.1. Overwhelming number of legitimate hooks
When the memory forensics algorithms for detecting API hooks were originally developed (circa Windows XP), there were almost no hooks present on systems not infected with malware. This meant that any reported hooks were likely malicious and deserved investigation. Unfortunately, this situation has drastically changed in modern versions of Windows, as API hooks are explicitly used by Windows to support backwards compatibility. Specifically, hooks are used to ensure that applications will execute the required version of some function. Many investigators are familiar with the Compatibility Cache, more commonly referred to by the digital forensics community as the shimcache (Parisi, 2015).
To illustrate this problem, Table 1 documents the number of API hooks present in a clean/default install of various Windows versions. For our testing, the state of each system was a clean install of the 32-bit version of the operating system, followed by the default user logging in and launching the default browser (either Internet Explorer or Microsoft Edge). Each install was done in a new VMware Fusion virtual machine. The memory capture was acquired by suspending the virtual machine and copying the produced vmem and vmss files (Volatility Foundation, 2014).
Starting with Windows 7, hooks placed by the backwards compatibility engine, browser engine, and other operating system components make the number of hooks to manually analyze completely impractical. Furthermore, as we discuss in the related work section, to date there has not been any effort to effectively whitelist such hooks in a scalable and accessible manner. In Section 5, we discuss our efforts to implement effective API hook whitelisting, as well as document how the usability of apihooks becomes far worse when anti-virus applications are installed on a system.
3.2. Diagnosis requires manual reverse engineering
The overwhelming number of API hooks present in default installs of modern versions of Windows and particularly, systems with anti-virus enabled, would not be such a burden for experienced investigators if existing algorithms were able to produce better indicators of which hooks were actually suspicious. Instead, if an investigator wishes to examine an API hook, they must use a combination of the apihooks, volshell, and vadinfo plugins. As discussed in (Ligh, 2013, 2016; Tyler, 2014), the volshell plugin allows programmatic exploration of memory samples, including disassembling arbitrary regions of process memory. The vadinfo plugin maps addresses within a process’ address space to a file path on disk or the anonymous memory region that backs it. Using these plugins in combination allows an investigator to determine the source of a single API hook, but again, this is a very labor intensive, manual process. Even ignoring the tedium, this procedure is realistically only accessible to experienced reverse engineers.
4. Automating analysis of API hook behavior
To provide automated analysis and filtering of API hooks within a memory sample, we developed a new Volatility plugin, hooktracer. Algorithm 1 illustrates hooktracer internals at a very high level. First, a set of API hooks is gathered by executing apihooks from Volatility (line 1). Emulation is then performed on each API hook to determine the basic blocks that are executed. Each basic block is then mapped to its hosting memory region. Finally, the traversed regions are displayed in one of several accessible formats (lines 2–8).
Algorithm 1: Hooktracer
```
1. ApiHookSet ← API hooks from Volatility’s apihooks
2. foreach hook in ApiHookSet do
3. BasicBlocks ← Emulate(hook)
4. CodeMemRegs ← Map(BasicBlocks)
5. foreach Region in CodeMemRegs do
6. Display(Region)
7. end
8. end
```
The following sections describe the implementation of Algorithm 1 in more detail.
4.1. Gathering API hooks
The set of API hooks present within each process can be gathered using the techniques employed by the existing Volatility apihooks plugin. This process is relatively slow, as it must check thousands of functions to be thorough. The current implementation of our tool consumes the output of apihooks formatted using JSON.
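A sketch of that front end is shown below; the "rows" field layout is an assumption, as the exact JSON structure emitted by Volatility varies between versions:

```
import json

# Illustrative loader for apihooks output rendered as JSON; the field
# names here are assumptions, not a specification of Volatility's format.
def load_hooks(path):
    with open(path) as f:
        data = json.load(f)
    return data.get("rows", [])

hooks = load_hooks("apihooks.json")
```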
4.2. Hook emulation engine
To determine the code paths that a particular API hook takes, we rely on runtime emulation (Stevens, 2008). Emulation is a technique for “executing” code in a software environment that mimics physical hardware. The use of emulation has a long history in the security and malware analysis communities (Bartholomew, 2006; Bilzor et al., 2011; Kimball and Baldwin, 2012; Barrantes et al., 2003; Portokalidis et al., 2006; Yin and Song, 2010), with QEMU being perhaps the most well-known emulator. We chose to leverage emulation to avoid the pitfalls of the current apihooks plugin, which statically analyzes instructions and uses several hard-coded patterns to detect control flow redirection outside of the hosting module. Not only is this brittle, but it also makes analysis of more than a few instructions per function extremely difficult. Our choice of emulator for our plugin was unicorn (Nguyen and Dang, 2015), which is used in a variety of security and forensics software (Unicorn showcase, 2018), and has Python bindings that allow complete control of its emulation environment from Volatility.
4.3. Initializing the emulation environment
Before emulation using unicorn can begin, the emulator environment must be initialized. This is left largely to the developer and provides a great deal of flexibility. To be useful, code using the emulator needs to register callbacks within the emulated environment to monitor the emulated code’s behavior. Our Volatility plugin currently registers emulator callbacks for the following events exposed by unicorn:
- Instruction tracing
- Basic block tracing
- Memory reads and writes
- Memory accesses (read, write, or execute) to invalid or unmapped memory regions
After registering our callbacks, our plugin initializes a virtual address space for analysis of each API hook.
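In unicorn’s Python bindings, this setup looks roughly as follows; the callback bodies are elided here and would contain the plugin’s monitoring logic:

```
from unicorn import (Uc, UC_ARCH_X86, UC_MODE_32, UC_HOOK_CODE,
                     UC_HOOK_BLOCK, UC_HOOK_MEM_READ, UC_HOOK_MEM_WRITE,
                     UC_HOOK_MEM_UNMAPPED)

def on_code(uc, address, size, user_data):      # instruction tracing
    pass

def on_block(uc, address, size, user_data):     # basic block tracing
    pass

def on_mem(uc, access, address, size, value, user_data):  # reads and writes
    pass

def on_unmapped(uc, access, address, size, value, user_data):
    return False                                 # False halts the emulation

emu = Uc(UC_ARCH_X86, UC_MODE_32)
emu.hook_add(UC_HOOK_CODE, on_code)
emu.hook_add(UC_HOOK_BLOCK, on_block)
emu.hook_add(UC_HOOK_MEM_READ | UC_HOOK_MEM_WRITE, on_mem)
emu.hook_add(UC_HOOK_MEM_UNMAPPED, on_unmapped)
```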
4.4. Implementing the emulated stack
The first aspect of the virtual address space that our plugin initializes is the stack. By default, unicorn provides no stack and the programmer must initialize a memory region within the emulated address space and set the stack pointer register to point to it. Implementing a fake stack and maintaining correct operations presented two main challenges.
First, the stack region chosen must live within a region not currently in-use by the application and one which would not be inadvertently overwritten by the emulated code. To avoid this issue, we chose a region within the kernel virtual address space to place our emulated stack. When running on a real Windows system, userland code can never access kernel ranges so this does not break any operations. We also implemented our read, write, and execution monitor callbacks to stop emulation if they detect access attempts to kernel memory ranges that are not within our chosen stack region. The effect of this setup and associated monitors is that the emulated code can store and retrieve data on the stack as usual, and we can ensure that data within the process’ memory is not trampled by our stack emulation.
The second challenge we faced related to the stack was how to correctly determine when an emulated hook finished executing. This was essential to ensure that we let the entire API hook call chain be emulated without letting execution branch to incorrect locations after completion. To meet this goal, we instrumented our read and write operation callbacks to monitor access to the stack base address. Since our emulated stack starts ‘empty’, the plugin’s initialization code sets a global flag to False and only updates it if the stack base is written to by emulated code. Our memory read callback is set to monitor for reads to the stack base and halts emulation if the stack base is read from before being written. The motivation behind this monitoring is that when an API hook executes its final ret instruction to return control flow from itself, the ret will attempt to read from the initial stack base to gather the address to continue execution. We know that the ret will be pointed at the stack base as the API hook handler is the initial function emulated, and any/all sub-procedures called by the hook will have already adjusted the stack pointer before returning.
With these challenges dealt with, our plugin is able to provide a fully functional stack to the emulated code.
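A sketch of the stack setup and of the stop-on-read monitor follows; the concrete addresses are illustrative kernel-range values, not the ones used by the plugin:

```
from unicorn import (Uc, UC_ARCH_X86, UC_MODE_32,
                     UC_HOOK_MEM_READ, UC_HOOK_MEM_WRITE)
from unicorn.x86_const import UC_X86_REG_ESP

# Kernel-range addresses, never touched by 32-bit userland code.
STACK_LO   = 0x80000000
STACK_SIZE = 0x10000
STACK_BASE = STACK_LO + STACK_SIZE - 0x100   # initial ESP; the hook's final
                                             # ret reads its return address here
emu = Uc(UC_ARCH_X86, UC_MODE_32)
emu.mem_map(STACK_LO, STACK_SIZE)
emu.reg_write(UC_X86_REG_ESP, STACK_BASE)

base_written = False

def on_write(uc, access, address, size, value, user_data):
    global base_written
    if address == STACK_BASE:
        base_written = True                  # legitimate data now lives there

def on_read(uc, access, address, size, value, user_data):
    if address == STACK_BASE and not base_written:
        uc.emu_stop()                        # read-before-write: the hook returned

emu.hook_add(UC_HOOK_MEM_WRITE, on_write)
emu.hook_add(UC_HOOK_MEM_READ, on_read)
```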
4.5. Emulating an API hook
Once the emulator environment is initialized, we use unicorn to begin emulation at the starting address of the API hook. Since this address is not yet mapped into the emulated address space, the initial execution attempt will trigger a call to our invalid memory access callback with the address and size of the access set as parameters. If the address is within a valid memory region of the analyzed process, then our plugin will attempt to read it from the memory sample. When the accessed page is present within the memory sample, our callback will first read the data out of the memory sample and then copy the data to the corresponding address in the emulated address space. This allows the emulator to continue processing and for our plugin to fill the emulated address space on demand. The same procedure occurs when control flow pivots to previously unmapped pages or when data is read from or written to pages for the first time.
In situations where a needed page is not accessible, our plugin’s callback will optionally “patch” in data where possible to allow execution to proceed for as long as possible. When enabled for write operations, the plugin maps a blank page into the emulated address space and then allows the write to occur on the new page. For execution attempts on new pages caused by a CALL instruction, our plugin maps in the target page and fills the target address with the opcodes corresponding to the MOV EAX, 0; RET; instruction sequence. These instructions set a return value of zero, which mimics the usual error condition of Windows APIs. The calling function can then branch based on the error condition and continue execution.
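A sketch of such a patching callback is shown below; the helper read_from_memory_sample is hypothetical and stands in for a read through Volatility’s process address space, while the x86 opcodes b8 00 00 00 00 (MOV EAX, 0) and c3 (RET) are standard:

```
from unicorn import UC_MEM_WRITE_UNMAPPED, UC_MEM_FETCH_UNMAPPED

PAGE = 0x1000
STUB = b"\xb8\x00\x00\x00\x00\xc3"   # MOV EAX, 0 ; RET -> mimic an API error

def read_from_memory_sample(page, size):
    """Hypothetical helper: fetch bytes from the analyzed process's address
    space in the memory sample; returns None when the page is unavailable."""
    return None

def on_unmapped(uc, access, address, size, value, user_data):
    page = address & ~(PAGE - 1)
    data = read_from_memory_sample(page, PAGE)
    if data is not None:                      # page exists in the sample:
        uc.mem_map(page, PAGE)                # fill the space on demand
        uc.mem_write(page, data)
        return True                           # retry the faulting access
    if access == UC_MEM_WRITE_UNMAPPED:
        uc.mem_map(page, PAGE)                # blank page absorbs the write
        return True
    if access == UC_MEM_FETCH_UNMAPPED:
        uc.mem_map(page, PAGE)
        uc.mem_write(address, STUB)           # called function "fails" cleanly
        return True
    return False                              # give up; emulation halts
```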
4.6. Gathering and analyzing basic blocks
As an API hook is being emulated, unicorn triggers a callback event when new basic blocks are reached. Basic blocks are units of code (instructions) that execute linearly and in an unconditional manner. The hooktracer plugin leverages this callback to record every basic block executed by a particular API hook. Once emulation of a hook is complete, the plugin leverages Volatility’s API to map every basic block to its containing memory region. By gathering these regions in the order of their execution, a wide variety of analysis can be performed as described in the following section.
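The recording side reduces to a small block callback plus an address-to-region lookup; the region list below stands in for Volatility’s VAD interface:

```
# Sketch: record executed basic blocks in order, then fold consecutive
# blocks into (region, count) pairs.
trace = []

def on_block(uc, address, size, user_data):
    trace.append(address)                    # unicorn fires this per basic block

def region_of(addr, regions):                # regions: (start, end, path_or_None)
    for start, end, path in regions:
        if start <= addr <= end:
            return (start, end, path)
    return None

def region_trace(block_addrs, regions):
    out = []
    for addr in block_addrs:
        r = region_of(addr, regions)
        if not out or out[-1][0] != r:
            out.append([r, 0])               # control flow entered a new region
        out[-1][1] += 1
    return out                               # e.g. [((start, end, path), 18), ...]
```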
5. Automated analysis with hooktracer
Fig. 3 shows the output of our plugin against an API hook inserted by the Coreflood malware (U.S. Government Takes, 2011). In this output, the plugin reports that a process with PID 2044 and name IEXPLORE.EXE has an API hook on the GetMessageA function inside user32.dll. This information comes directly from the JSON data generated by apihooks. The rest of the information is generated by our analysis algorithm. Each subsequent line lists, in order, a memory region where at least one basic block was executed.
In interpreting this output, we first see that control flow of our hooked API was redirected to a non-file backed region starting at 0x7ff80000. We also see that the permissions of the region are
executable, readable, and writable. This raises several red flags, the first being that legitimate code should be mapped from a file on disk, not stored and executed directly from memory. Second, having all three permissions bits enabled is a common sign of malware that is utilizing memory-only code, as these permissions allow injection of shellcode. In legitimate applications that do not contain self-modifying code, executable regions should be readable and executable, but not writable. The permissions also assist in detecting hollowed processes. As described by Cysinfo (Monnappa, 2016), DLL files loaded through normal APIs, such as LoadLibrary, will have their permissions set to PAGE_EXECUTE_WRITECOPY. For hollowed processes, the permissions will always be something else, generally PAGE_EXECUTE_READWRITE. Finally, we note that the paths displayed by our plugin are derived from the in-kernel data structures (VADs) that track the memory region. This prevents name-overwriting attacks against the userland loader from affecting our output (Powershell-suite, 2016).
The remainder of the output in Fig. 3 illustrates that the legitimate ntdll.dll and user32.dll handled the actual API request and then later returned control back to the malicious handler. The number in parentheses after each region is the number of basic blocks that were executed in that memory region before control flow was transferred outside the region. This numbering makes the output more concise and helps to focus attention on regions in which significant numbers of instructions were executed.
The usefulness of grouping memory regions becomes even more clear when examining API hooks inserted by one of the most prolific pieces of malware in history, TDSS (Microsoft Security Intell, 2010). An API hook related to TDSS is illustrated in Fig. 4. In the beginning of this output, we see that the API hook initially begins executing in the memory region starting at 0x270000 but then later transfers control to a second malicious region starting at 0x260000. Based on this output, the investigator can quickly deduce that there are two regions hosting suspicious code, as opposed to just the original one. No reverse engineering was required to gain this insight. Furthermore, Volatility provides several plugins that permit extraction of memory regions once the base address is determined (Wiki, 2012).
5.1. Hook analysis with security tools present
Based on the previous figures, readers may draw the same conclusion that many investigators do, which is that any API hook that initially starts execution in non-file backed memory is illegitimate. Unfortunately, this is often an incorrect conclusion, as nearly all anti-virus and endpoint security monitors employ malware-like tactics to gain visibility into system activity as well as to remain as hidden as possible. Visibility is often gained by utilizing API hooks to monitor parameters passed to functions as well as for system events, such as a process starting or a DLL loading. Stealthiness is enhanced by using non-file backed regions to dissociate executing endpoint security code from files that might be identified and flagged by malware. Unfortunately, these hooks are detected by the apihooks plugin, potentially creating a large number of false positives for an investigator looking for malware.
As an example, after we installed the free edition of AVG Anti-Virus (AVAST Software) in our previous default Windows 7 install, the number of API hooks reported went from 296 as shown in Table 1 to 1625. This occurred because AVG places numerous hooks in every process to monitor activity. Fig. 5 shows the output of Volatility’s apihooks plugin against one of the AVG hooks. Obviously, the apihooks plugin does not provide any indication that the hook is associated with AVG. Instead, it simply lists the first two hops in the control flow chain, with the second hop transferring control to an unknown third destination. For an investigator to determine the hook’s source, they must load volshell, as previously discussed, to begin reverse engineering the hook’s code and manually following the jumps. The investigator might then use Volatility’s vadinfo plugin to map the jump destinations to memory regions.
In comparison, Fig. 6 shows this hook as reported by hooktracer. In this output, the investigator can see that control flow transfers from the API hook at 0x776a22b8 to the non-file backed region at 0x74c60000, and then to several DLLs inside of the AVG Program Files subfolder. Given that the hook has likely been placed by a well-known security product, the investigator can instead dedicate time to looking for other signs of malware infection.
5.2. Filtering legitimate DLLs
Even with the accessibility of API hook analysis provided by hooktracer, the sheer number of API hooks present on even non-infected systems makes manually scrolling through the output time-consuming. To help alleviate this burden, we added filtering support to the plugin. Two of these filters are described in this section and the third filter is described in the next section.
The first filter allows excluding an API hook from the output if every memory region accessed during emulation matches a given file or folder path.
```
420 IEXPLORE.EXE mswsock.dll\WSPStartup at 0x71a5e35b
PAGE_EXECUTE_READWRITE <Non-File Backed Region: 0x270000 0x270fff>
PAGE_EXECUTE_WRITECOPY <Non-File Backed Region: 0x260000 0x26efff> (2)
PAGE_EXECUTE_WRITECOPY <Non-File Backed Region: 0x270000 0x270fff> (18)
PAGE_EXECUTE_WRITECOPY \device\HarddiskVolume1\WINDOWS\system32\mswsock.dll (2)
```
Fig. 4. Hooktracer output for TDSS malware.
```
2044 IEXPLORE.EXE user32.dll\GetMessageA
PAGE_EXECUTE_READWRITE <Non-File Backed Region: 0x7ff80000 0x7ff8afdf> (2)
PAGE_EXECUTE_WRITECOPY \Device\HarddiskVolume1\WINDOWS\system32\user32.dll (10)
PAGE_EXECUTE_WRITECOPY \Device\HarddiskVolume1\WINDOWS\system32\vcredist.dll (5)
PAGE_EXECUTE_WRITECOPY \Device\HarddiskVolume1\WINDOWS\system32\user32.dll (3)
PAGE_EXECUTE_WRITECOPY \Device\HarddiskVolume1\WINDOWS\system32\user32.dll (1)
PAGE_EXECUTE_WRITECOPY \Device\HarddiskVolume1\WINDOWS\system32\user32.dll (1)
```
Fig. 3. Hooktracer output for Core Flood malware.
The most common use of this filter is to exclude API hooks where all code paths are handled by file-backed regions originating under the System32 directory. This is possible because modern Windows versions protect DLLs in this directory from modification, which prevents malicious overwriting of these files. As an example, Fig. 7 shows our plugin's output against a legitimate API hook from our clean Windows 10 system.
In the output, an API hook of the CryptUninstallCancelRetrieval function is shown, and every code path for the hook lies inside DLL files under System32. This is precisely what thousands of hooks look like in memory when shimcache and other built-in hooks are active, which is the default starting with Windows 7.
To exclude such hooks from the output of hooktracer, investigators can re-run the plugin with an “All Containing” filter of \Windows\System32. For “All Containing” filters, our plugin compares the path of every memory region found during basic block tracing to the path(s) specified in the filter. If every region matches the filter (e.g., they are all in the Windows System32 directory), then information about the API hook is suppressed. By applying this filter to our clean Windows 10 sample, the number of hooks reported drops from 32,458 to only 178. This shows that by simply filtering every API hook whose implementation exists solely in DLLs stored under System32, we have removed over 99% of the plugin’s default output.
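A minimal sketch of the "All Containing" logic follows; the data representation (one path string per traced region, `None` for non-file-backed memory) is an assumption, not the plugin's real implementation.

```python
def all_containing(hook_regions, filter_paths):
    r"""Suppress a hook if *every* traced region matches a filter path.

    `hook_regions` holds the on-disk path of each region the hook
    executed (None for non-file backed regions); `filter_paths` are
    substrings such as "\Windows\System32".
    """
    return all(
        path is not None and
        any(f.lower() in path.lower() for f in filter_paths)
        for path in hook_regions
    )
```

A hook whose trace includes even one non-file-backed region (path `None`) is never suppressed by this filter, which matches the behavior described above.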
When examining the remaining 178 hooks, two hook patterns emerge, as illustrated in Figs. 8 and 9. These hooks are related to the Visual C++ runtime and to Microsoft's OneDrive application, and the files associated with these components are not stored under the System32 directory. If we re-run the plugin with filtering added for these DLLs, the number of hooks reported goes from 178 to zero.
Thus, by starting with "All Containing" filters for System32 DLLs, the runtime DLLs from the Windows Side-by-Side directory, and the AcLayers.dll shim component, an investigator can suppress the legitimate hooks present on a clean default system.

The second filter, "Any Containing", suppresses an API hook if any memory region in its control flow matches a given path. Returning to our AVG example, AVG's DLLs are stored under the \Program Files\AVG\Antivirus directory. To exclude AVG's hooks from the plugin's output, we can use an "Any Containing" filter configured with the AVG directory path. As mentioned previously, apihooks found 1625 userland API hooks in our memory sample with AVG active as compared to 296 before it was installed. By using an "Any Containing" filter set for AVG in conjunction with our previous "All Containing" filters for System32 and vcruntime, the number of hooks is reduced to 175, an 89% reduction. Examining the remaining hooks shows that 122 of them are inside of Internet Explorer processes and are browser compatibility hooks that redirect into IEShims.dll or ieframe.dll, as shown in Fig. 10.
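Under the same assumed representation as the earlier sketch, the "Any Containing" variant and a combined filtering pass might look like this (reusing `all_containing` from above):

```python
def any_containing(hook_regions, filter_paths):
    """Suppress a hook if *any* traced region matches a filter path."""
    return any(
        path is not None and
        any(f.lower() in path.lower() for f in filter_paths)
        for path in hook_regions
    )

def keep_hook(hook_regions, all_filters, any_filters):
    """Keep a hook in the report only if no filter suppresses it."""
    if all_filters and all_containing(hook_regions, all_filters):
        return False  # e.g. every region resolves to System32
    if any_filters and any_containing(hook_regions, any_filters):
        return False  # e.g. control flow passes through the AVG folder
    return True
```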
5.3. Grouping hooks across processes
Another powerful capability of hooktracer is the ability to group sets of hooks across processes. This allows investigators to understand the full scope of infections on a single system as well as build simple and reliable indicators of compromise that can be used on any number of memory samples across a number of systems. For this case study, we will analyze our previously clean Windows 7 system, which we infected with the infamous Zeus malware (IOActive, 2012; James, 2011).
Executing apihooks against this memory sample produces 480 API hooks compared to the 296 present in our clean sample. This large increase is due to Zeus' aggressive behavior of injecting code into every process that it has permission to access, as well as hooking 41 functions within each victim process. Without any filters, hooktracer will produce many similar blocks of output per Zeus hook, as shown in Fig. 11. Note that the permission indications have been removed for readability.
The hook's control flow starts with two anonymous regions, followed by a DLL file under System32, and then exits through the original anonymous memory region. All of the hooks placed by Zeus follow this same pattern: two anonymous regions to start, followed by the legitimate API being handled by a varying number of DLLs inside of System32.
To allow investigators to avoid manually examining 41 of these hooks per process, we implemented a grouping capability in hooktracer that allows filtering the output to include only the processes and victim functions hooked by the same malware code. To generate a grouping, an analyst runs hooktracer and specifies the process ID and victim function name of the hook to be grouped. As shown in Fig. 11, the PID is 2384 and the function is ntdll!NtCreateUserProcess. This instructs hooktracer to create an ordered record of the first three memory regions executed by the hook so that they can later be re-identified. This record includes the size for non-file-backed regions and the full path on disk for file-backed ones. We chose the size as the identity marker for non-file-backed regions because it is highly consistent across injections. Other attributes, such as a region's starting and ending address or its content, are not reliable, due both to address space layout randomization (ASLR) and to code and data changes that occur within a region at runtime and across processes.
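As a rough illustration of how such a record could be built (field names are assumed for the sketch, as before):

```python
def build_hook_record(traced_regions):
    """Build an identity record from the first three regions a hook executes.

    File-backed regions are identified by their on-disk path; anonymous
    regions by their size, which stays stable across injections even
    though their addresses (ASLR) and contents do not.
    """
    record = []
    for region in traced_regions[:3]:
        if region.file_path is not None:
            record.append(("path", region.file_path.lower()))
        else:
            record.append(("size", region.end - region.start + 1))
    return tuple(record)
```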
Once the grouping record is generated, the analyst can re-run hooktracer with the record specified. This will instruct the plugin to display only processes and hooked functions that match the record's pattern of region sizes and file paths. As shown in Fig. 12, hooktracer's grouping capability uses the record from one hook in one process to identify every other process and function infected with Zeus. This figure has some of the output truncated for brevity's sake, but in total hooktracer was able to automatically find and report the 41 hooked functions across all 8 infected processes.
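Matching is then a direct comparison of records across every hooked function in every process; a sketch reusing `build_hook_record` from above:

```python
def find_infected(hooks_by_process, record):
    """Map each process to its hooked functions matching the record.

    `hooks_by_process` is assumed to map (pid, process_name) to a dict
    of {hooked_function: traced_regions}.
    """
    hits = {}
    for proc, hooks in hooks_by_process.items():
        matched = [fn for fn, regions in hooks.items()
                   if build_hook_record(regions) == record]
        if matched:
            hits[proc] = matched
    return hits
```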
Investigators can also use hook records when analyzing other Windows memory samples. In real-world investigations, where numerous machines may need to be investigated quickly, being able to rapidly determine which are infected and which are not is key. By integrating hooktracer's grouping capability into their investigative workflow, an investigator can whittle an entire investigation's worth of systems down to only the infected ones within minutes.
6. Related work
6.1. Emulation for malware analysis
The use of emulation to analyze the behavior of malware is a powerful technique with over a decade of research behind it (Yin et al., 2008; Kang et al., 2009; Lutz, 2008; Kruegel, 2014). Until recently, however, all of these emulation efforts required access to an original malware executable file as well as the ability to emulate that executable in a heavyweight environment, such as Bochs (Lawton, 1996) or QEMU (Bellard, 2005), to instrument and observe execution. While these techniques are powerful, such approaches are not directly applicable to memory analysis, as executables in memory go through substantial transformations from the time they are loaded from disk until a memory capture is taken. This generally prevents the executables from being later extracted from memory and then natively executed. Furthermore, the rise of memory-only malware means that much of the malware found in modern investigations cannot be easily encapsulated into a functional executable file at all. This prevents existing whole-system emulators from being able to analyze the malware. Finally, existing hook detection architectures require substantial instrumentation and specialized lab setups that are not realistically feasible in incident response handling across diverse enterprise infrastructures. Other modern techniques for live analysis, such as virtual machine introspection (libvmi, 2019), face many of the same challenges and are not applicable in post-compromise scenarios.
6.2. Memory forensics and emulation
After the introduction of unicorn and its accessible Python bindings, there have been two recent research efforts besides ours that integrated unicorn with Volatility. The first, ROPEMU (ROPEMU, 2016; Graziano et al., 2016), uses unicorn to automatically detect ROP chains (Maloney, 2012) within memory. ROP is used by system-level exploits to perform code-reuse attacks. Such attacks are necessarily memory-only and can be difficult to detect with traditional Volatility plugins.
The second project (Hammond) also hunts for ROP chains and was specifically developed to detect the “Gargoyle” attack (Lospinoso, 2017) that hides executable code using permission changes and timers. Detection of Gargoyle is implemented by emulating the handler of each registered timer found by Volatility and checking if calls are made to any Windows API functions leveraged by the Gargoyle attack.
Although neither of the referenced projects is related to API hooks, we consider them to be important related work, as they both leverage unicorn in conjunction with Volatility to significantly expand the state of the art in memory forensics.
6.3. Analysis of in-memory API hooks
The difficulties of analyzing API hooks on enterprise systems without a filtering capability led to a research project and Volatility plugin named apihooksdeep (Volatility PluginDeep, 2014). This plugin applies ssdeep fuzzy hashing to the code at each detected hook so that hooks matching previously seen, known-good code can be whitelisted and suppressed from the output.
Fig. 11. Hooktracer output for a Zeus API hook.
Fig. 12. Hooktracer grouping Zeus' API hooks.
References

Microsoft, 2010. Microsoft Security Intelligence Report.

Parisi, Timothy, 2015. Caching out: the value of shimcache for investigators. DFRWS.
Supported in part by NSF grant “SATC: CORE: Medium: Robust Memory Forensics Techniques for Userland Malware Analysis”, Award # 1703683.
Towards Runtime Behavior Generation in Games
Nicholas Jennings
Björn Hartmann, Ed.
Electrical Engineering and Computer Sciences
University of California, Berkeley
Technical Report No. UCB/EECS-2024-153
http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-153.html
August 2, 2024
Towards Runtime Behavior Generation in Games
Nicholas Jennings
Research Project
Submitted to the Department of Electrical Engineering and Computer Sciences, University of California at Berkeley, in partial satisfaction of the requirements for the degree of Master of Science, Plan II.
Approval for the Report and Comprehensive Examination:

Committee

Björn Hartmann, Research Advisor (July 22, 2024)

Eric Paulos, Second Reader (Aug 2024)
Abstract
Procedural content generation (PCG), the process of algorithmically creating game components instead of manually, has been a common tool of game development for decades. Recent advances in large language models (LLMs) enable the generation of game behaviors based on player input at runtime. Such code generation brings with it the possibility of entirely new gameplay interactions that may be difficult to integrate with typical game development workflows. We explore these implications through GROMIT, a novel LLM-based runtime behavior generation system for Unity. When triggered by a player action, GROMIT generates a relevant behavior which is compiled without developer intervention and incorporated into the game. We create three demonstration scenarios with GROMIT to investigate how such a technology might be used in game development. In a system evaluation we find that our implementation is able to produce behaviors that result in significant downstream impacts to gameplay. We outline a future work agenda to address these concerns, including the need for additional guardrail systems for behavior generation.
Acknowledgement
For a more complete account of this project including results from interviews with game developers, please refer to the UIST 2024 Paper by the entire research team: What’s the Game, then? Opportunities and Challenges for Runtime Behavior Generation [15]
I would like to thank my research advisor Björn Hartmann for his guidance and support throughout the entire research process. James Smith, Han Wang and Isabel Li all worked alongside me for this project, and were instrumental to the development and analysis of GROMIT. I’d additionally like to acknowledge Cathy Wang, Roshan Nagaram, and Alvin Bao for their help in developing an early prototype of the GROMIT system.
## Contents

1. Introduction
2. Related Work
   2.1 Procedural Content Generation
   2.2 Runtime Generative AI in Video Games
   2.3 Scene Manipulation
3. Runtime Behavior Generation
   3.1 System Design
   3.2 Demo: Sandbox
   3.3 Demo: Escape Room
   3.4 Demo: Adventure Game
4. System Evaluation
   4.1 Explicit Scene Manipulation
   4.2 Implicit Rule Generation
5. Discussion and Future Work
6. Conclusion

Bibliography

Appendix
Figure 1: Example of runtime behavior generation in an adventure game. When the player initiates an interaction with no developer-defined output, the generation system is invoked, creating the name, description, and code defining the behavior of a new object to complete the interaction. The behavior code is compiled without developer intervention and the resulting object is incorporated into the game.
1 Introduction
In game development there are well-established practices to generate game content algorithmically rather than manually, in a process known as Procedural Content Generation (PCG). PCG use cases range from relatively straightforward tools for speeding up game development to game-defining systems enabling novel player experiences. PCG tools have been developed to generate most types of game content, from textures to animations to entire virtual worlds. A major exception is game behaviors: the programmed rules, mechanics, and actions that define how the game itself is played. Due to their open-ended nature, game behaviors have so far largely stayed beyond the purview of generative systems. Traditional methods of procedural generation require the design space to be parameterized in some way, and parameterizing the design space of game behaviors requires severe restrictions that limit the scope of the generation. For instance, there is no clear way to parameterize "power-ups for a platformer game", but LLMs have no trouble creating designs for such a feature.
Recent advancements in the capabilities of Large Language Models (LLMs) promise to solve the technical side of this problem. LLMs don't require a properly defined design space, and so are promising candidates for navigating the semantic ambiguity inherent in game behavior requests.
An important distinction in generative systems is whether the output content is first viewed by developers during the game's initial development, or by players afterwards. In this paper we use the terms devtime and runtime to describe each scenario respectively\(^1\). There is nothing inherently limiting this sort of behavior generation to tools used by developers in a devtime context. Implementing **Runtime Behavior Generation (RBG)** would allow for more personalized interaction with individual players. Compared to a devtime system, runtime systems can incorporate player input, and can cover a larger potential design space than could be manually checked by a developer. Optimistically, this method of behavior generation can enable entirely new forms of gameplay and player agency, and offer a new dimension of exploration. What would a runtime gameplay behavior generation system look like, concretely? How might game developers go about creating and incorporating such a system?
In this paper we begin exploring these questions through a design probe. We created a system capable of runtime gameplay behavior generation, and used it to create three demos of possible use-cases. We then ran these demos through quantitative tests to outline the efficacy of such a system, and highlight certain development pitfalls.
## 2 Related Work
### 2.1 Procedural Content Generation
Procedural Generation, the act of creating data algorithmically rather than manually, is a common approach to efficiently creating a large amount of content. PCG systems have been used in many stages of game development and for most game systems [13]. Tools like Material Maker allow developers to quickly generate textures and materials [28]. A large library of systems exists for generating foliage [25, 31]. While these tools can be used while a game is being developed, many of the most notable PCG systems are run without direct developer oversight. Brewer analyzes the 30-year legacy of *Rogue*, whose procedurally generated dungeons and items
\(^1\)These terms heavily overlap with the definitions of ‘online’ and ‘offline’ used in prior PCG work, which describe whether the content generation occurs after or prior to the game being shipped to players, although they differ in some key edge cases. For example, in No Man’s Sky the game universe was generated prior to the game’s release, but is far too large for any significant portion to have been manually checked by developers prior to the shipping. In this case the world generation is technically offline, but is still considered runtime by our definition.
enhanced its exploration and replay potential and inspired the massively popular "roguelike" video game genre [2]. Games such as Minecraft [21] and No Man's Sky [12] use procedural generation to create entire virtual worlds for players to discover. World generation systems that additionally account for player challenge have also been developed [5]. Nitsche et al. demonstrate a world generation system capable of combining player input with procedural methods [22]. Beyond these straightforward examples, Compton et al. note that generative methods have been used for many systems that may not typically be considered "Content" [6].
Procedural Content Generation has also been applied to the rule sets and behaviors of a game. The design spaces of full game genres or mechanics are too large to be reasonably parameterized, so prior work has explicitly chosen sub-spaces to work with [27]. Togelius and Schmidhuber used a discrete 15x15 grid populated with a player-controlled agent and various colored objects, then used an evolutionary system to create games based on the grid layout and object behavior [30]. Browne et al. developed the Ludi system, which generates board games in the style of tic-tac-toe or Go [3]. Chu et al.'s BPAlt system allows a developer to parameterize a design space in the Unreal engine, and then explore that space in a structured manner [4]. In all cases, explicit restrictions on game rules are given which allow for a structured search of the design space. This requires some level of human designer involvement, so these tools are devtime. A grid organizing some of these prior works along with the prototype RBG system used in this paper can be seen in Table 1. By exchanging explicit restrictions for semantic requests, we can develop systems that interpret user input for design restrictions in a runtime setting.
Khaled et al. provide a set of metaphors describing the uses of PCG systems. Systems can be treated as a Tool, used by a developer as part of the game design process. Systems can also be seen as Designers, which undertake design tasks alongside human developers, and as Materials, which are dynamically generated [16]. Which metaphors are used affects how a PCG system is thought of by designers and developers, and in this paper we note how these metaphors can be applied to an RBG system.
|         | Assets                              | Behaviors |
|---------|-------------------------------------|-----------|
| Devtime | Speedtree [31], Material Maker [28] | Ludi [3]  |
| Runtime | Rogue [2], Minecraft [21]           | GROMIT    |
Table 1: Generation types of a sample of prior work in game development contexts, along with the GROMIT RBG system introduced in this paper.
2.2 Runtime Generative AI in Video Games
In game development, use of devtime Generative AI systems is already somewhat common. The GDC 2024 State Of The Game Industry Report shows 31% of developers use some form of Generative AI tools such as ChatGPT, DALL-E, and GitHub Copilot [10]. Tools are also being built specifically for game development. Muse is a suite of AI tools for Unity, which allows developers to prototype code, 2D art, animations, and conversation text-trees [29].
Systems also exist for the runtime scenario. Rieder demonstrated that a machine learning based system could be used as a game mechanic to power a runtime generative material [26]. In the industry space, Infinite Craft is a sandbox game where the player combines object icons to form an endless number of new objects [1]. In a similar vein, Suck Up! is a comedy adventure game where the player takes the role of a vampire convincing AI-powered townsfolk to let the vampire into their house [24].
Volum et al. use a large language model to write API code for piloting a character in the popular video game Minecraft [32]. The agent is able to chain API function calls in response to user prompting, and can coordinate with human-controlled characters to perform complicated in-game tasks such as mining for specific items and solving escape-room puzzles. Inworld.ai is a commercial software product for creating AI non-playable characters (NPCs) for video games [14]. While Inworld gives developers a high degree of control over how the NPCs will behave narratively by prompting the agent's personality and knowledge of the virtual environment, the NPC's ability to interact with the in-game world is largely left up to the developer to implement. These are all examples of runtime content generation that follow behaviors manually created by the developers; RBG differs in that the possible actions themselves can also be generated.
While not specific to games, Park et al.'s Generative Agents system also tackles the problem of piloting characters through a virtual environment called Smallville, and takes the approach of using LLMs to directly manage Smallville as well as the agents [23]. The state of virtual objects is determined by natural language prompts, which allows the generative agents to interact with their environment in the same format they run on internally. This has the benefit of making Smallville robust to changes made by the agents, so most actions made by agents can have a true effect on the game state. This makes Smallville an example of a true RBG system. However, since each interaction equates to at least one LLM query, this approach would not scale to full-sized real-time games. To be playable on a personal computer, RBG systems still require most moment-to-moment gameplay to be handled by traditional code.
Figure 2: System Diagram for GROMIT. The input system, scene understanding, LLM, and output system are highlighted in orange, blue, green, and purple respectively. Once triggered, the input system combines implicit and manual input. This is then combined with the semantic scene graph to create an input form, which is sent to the LLM. Code from the LLM output is compiled and added to the virtual environment. If compilation fails, the error is sent to the LLM and the request is retried. Depending on the application, manual input may not be used and miscellaneous output may change.
2.3 Scene Manipulation
Behavior generation depends upon the context of the scene inside which objects exist. Attempts to achieve scene understanding and manipulation date back to 1968 with the SHRDLU system [34], where its users could move colored blocks using natural language. A shared approach to the representation of virtual environments is through the use of a Semantic Scene Graph (SSG), which structures the environment in terms of nodes and links, representing spatial relations. SSGs provide adaptivity to 3D interaction tasks [8], allow semantic control over generated content [9], and function in a way to generate and manipulate 3D scenes [33]. We use SSGs to interface an LLM with a 3D environment.
LLMR is a complete GenAI scene manipulation tool, including object and animation generation and behavior generation [7]. They focus on the explicit prompts to the generative system; we are interested in how our RBG system affects game developers’ workflows.
3 Runtime Behavior Generation
Our analysis of related work indicates that Runtime Behavior Generation systems suitable for realtime games have been under-explored. As such, we elected to take a research-through-design approach to investigate systems of this nature. We built an example of a runtime behavior generation system, and used it to construct three demo experiences that sample the space of possible use cases.
The ways in which runtime behavior generation systems can be used in a game can be partitioned into three main types:
1. **Fully Generative Games**: Generating a significant portion of all behaviors at runtime, resulting in a game that largely adapts itself to the wants of any particular user. In this case, the generative system has a large degree of control over the whole game.
2. **Partially Generative Games**: Manually creating key game behaviors, but generating edge-case behaviors as they are encountered by the player. In this case, the generative system has a small degree of control over the whole game.
3. **Games With Generative Mechanics**: Some combination of the first two cases, choosing a particular section of the game to utilize behavior generation, and keeping the rest handcrafted. In this case the generative system has a large degree of control over a small portion of the whole game.
Each of our demos embodies one of these use cases. We use the demos both to evaluate our RBG system's capabilities and as demonstrative tools to help communicate these use cases in interviews with game developers.
3.1 System Design
To gain insights into the specific properties of a runtime behavior generation system we built an example of such a system, which we call GROMIT². GROMIT generates behaviors by making requests to an LLM for program code, which it then compiles and runs.
Where prior behavior generation systems restrict their design space through explicit parameterization, GROMIT’s restrictions come from a combination of prior context of the game’s code/setting and the plain-text request prompt. This is an important distinction from filling out a
---
2Named after the scene in the Wallace and Gromit animated series where Gromit lays tracks for a train as the train is running.
pre-formatted API. The generality of this format is what allows the system to truly generate novel behaviors and is also the root cause of many of the issues that will be discussed in section 4.
GROMIT is built for the Unity3D game engine and uses GPT-4 as its LLM. These choices were made primarily due to our team's prior experience with these tools. In Unity, game scenes are built up of a collection of entities, called GameObjects, each with an attached set of components. Most components are typed as MonoBehaviours, which typically express a single behavior/attribute associated with the GameObject. At a high level, GROMIT works by recording a prompt for a desired behavior, combining this prompt with contextual scene information, and then sending the request to GPT-4. Depending on the use case, the initial prompt may be created directly by the player, or indirectly based on their in-game actions. The request is always framed as requiring a JSON string containing C# code as part of the response, but may also require secondary information. GROMIT then takes the response from GPT-4 and compiles the C# code. The method of prompt recording, and the way in which the compiled code is linked to the rest of the project, differs depending on the use-case scenario.
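At a schematic level, the request/compile/retry loop can be summarized as below. GROMIT itself is a Unity/C# system; this Python sketch only illustrates the control flow, and the callables it takes are hypothetical placeholders for the real GPT-4 client and C# compiler.

```python
MAX_RETRIES = 3  # assumed retry budget, for illustration

def generate_behavior(prompt, query_llm, extract_code, compile_code):
    """Assumed GROMIT-style loop: prompt -> LLM -> C# code -> compile,
    re-prompting with the compiler error on failure.

    All arguments are caller-supplied, since the real implementations
    live inside Unity.
    """
    error = None
    for _ in range(MAX_RETRIES):
        full_prompt = (prompt if error is None
                       else prompt + "\nCompiler error:\n" + error)
        code = extract_code(query_llm(full_prompt))
        ok, result = compile_code(code)
        if ok:
            return result  # compiled behavior, ready to attach to the scene
        error = result     # feed the error back into the next attempt
    return None            # give up; gameplay continues without the behavior
```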
The C# compilation system was adapted from a project by Sebastian Lague [17], and contains several compilation tricks to compensate for programming errors GPT-4 consistently makes. For instance, GPT-4 regularly does not include import statements and class names in the code snippets it returns. Rather than attempting to prompt engineer the LLM to add these features, which produces inconsistent results, we simply detect if the features are missing before script compilation, and add default imports and dummy class wrappers if necessary. Additionally, if a compilation error occurs GROMIT will re-prompt the LLM and include the compilation error. The general system diagram for GROMIT is shown in Figure 2, details of the input and output systems change depending on how the behavior generation is being used.
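The import/class-wrapper fix-up amounts to a small pre-compilation pass; a sketch of the idea, with the default import list and wrapper name as assumptions:

```python
DEFAULT_IMPORTS = "using System;\nusing UnityEngine;\n"  # assumed defaults

def normalize_snippet(code):
    """Patch common omissions in LLM-returned C# before compiling it
    (simplified: assumes a snippet containing using-directives also
    declares its own class)."""
    if "class " not in code:
        # Wrap loose method definitions in a dummy class so the compiler
        # sees a well-formed compilation unit.
        body = "\n".join("    " + line for line in code.splitlines())
        code = "public class GeneratedBehavior\n{\n" + body + "\n}\n"
    if "using " not in code:
        code = DEFAULT_IMPORTS + code
    return code
```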
Scene information is formatted using a Semantic Scene Graph (SSG), as shown in Figure 3. The SSG represents each object in the Unity3D scene as a node, encompassing information such as the object's name, description, and spatial data. These nodes are arranged in a hierarchy to reflect spatial and relational aspects of the scene. In the GROMIT implementation, nodes are manually defined by the developer, and are organized in their hierarchy based on a combination of their size/shape and their position in the existing Unity transform hierarchy. As objects move around during gameplay, events are triggered which recompute local portions of the SSG. Automatic methods for generating SSGs exist [19]; however, for our purposes of exploring behavior generation we found a simple manual implementation was sufficient.
This structure facilitates the conversion of the 3D scene into a JSON format, making it a text-based representation that is compatible with LLMs. A significant feature of the SSG is its ability
Figure 3: A sample game scene, and its resulting semantic scene graph. Each vertex in the graph contains additional data regarding its coordinates, behaviors, and text description. Vertices are labeled manually by the designer, and their position within the scene graph is calculated at runtime.
to filter nodes of its graph based on keywords derived from user input, streamlining the data processed by the LLM. For example, in the scene shown in Figure 3 if the initial prompt only mentions the table, then objects outside the house may be trimmed from the SSG before it is added to the prompt. This is done in the spirit of Retrieval-Augmented Generation, adding only the relevant information necessary for the request [18]. The scene graph is dynamic and capable of updating in real-time to reflect changes within the 3D scene, whether due to user interaction or LLM-driven modifications.
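A keyword-based trim of the scene graph before prompting might look like the following sketch; the node fields and the rule of keeping ancestors of relevant nodes are assumptions about the representation.

```python
import json

def trim_scene_graph(node, keywords):
    """Return a pruned copy of the SSG, keeping nodes that mention a
    keyword in their name/description or that have a surviving
    descendant, so the pruned graph stays connected."""
    kept_children = [c for c in (trim_scene_graph(child, keywords)
                                 for child in node["children"])
                     if c is not None]
    text = (node["name"] + " " + node.get("description", "")).lower()
    relevant = any(k.lower() in text for k in keywords)
    if not relevant and not kept_children:
        return None
    return {**node, "children": kept_children}

# The trimmed graph serializes to JSON for inclusion in the LLM prompt.
scene = {"name": "house", "description": "a small house", "children": [
    {"name": "table", "description": "wooden table", "children": []},
    {"name": "tree", "description": "oak tree", "children": []}]}
print(json.dumps(trim_scene_graph(scene, ["table"]), indent=2))
```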
Depending on its use case, GROMIT can be seen through any of Khaled et al.'s Tool, Designer, and Material metaphors. Additionally, GROMIT can be used in an Explicit setting where a player deliberately invokes the generation, or in an Implicit setting where the generation is triggered by regular gameplay. Crucially, in implicit scenarios it may be possible to obfuscate that any generation is occurring at all.
Figure 4: Screenshots of each demo scene, including written responses from GROMIT explaining newly generated behaviors. 4a Shows the Sandbox blockland scene, and the generation system’s response to the verbal request “Make the apple float around and tell me how you did it”. 4b shows the Traffic scene for the Sandbox demo and GROMIT’s response to the request ‘Shrink the buildings and tell me how you did it’. 4c shows the result of interacting the torch with one of the bookshelves in the library, revealing the bookshelf behind it. 4d shows the player using a ‘firestorm’ spell that was created by GROMIT.
3.2 Demo: Sandbox
In the Sandbox demo, GROMIT is used as a general tool for manipulating the environment. A screenshot of one of the scenes used for the Sandbox demo is shown in Figure 4a. Players click a button to begin recording input, and then provide a command to the system. For example, if a player says "make the apple spin", GROMIT will attach a script to the apple that makes it rotate over time. The primary medium for these instructions is spoken audio processed with a Whisper audio-to-text model. We utilized the Whisper-for-Unity asset [11], although direct calls to any speech-to-text API would suffice. The audio input can be supplemented with pointing gestures. Pointing is an incredibly common type of gesture, and has been well explored in prior work [20]. By casting a ray from the user's reticle we can determine which object the user was pointing to and include this in the action. We also highlight the object for visual feedback to the user.
The LLM is prompted to complete the request by either writing code for a static method to be run once, or by writing a MonoBehaviour component to be added to a Unity GameObject. If the LLM writes a MonoBehaviour, it also provides the name of the GameObject it must be added to. From the player’s perspective, they request a behavior change, wait a few seconds, and then that behavior change is manifested in the scene.
Two scenes were made for the Sandbox demo, a simple box scene called "Blockland" and a larger city scene called "Traffic". These scenes were used for investigating GROMIT’s ability to work in differently constructed games, and are discussed in more detail in section 4.1.
This demo shows how GROMIT can be used as an Explicit Tool for players. There is a specific action, in this case a button press, that triggers the system, and the system is used to directly carry out a request by the player.
3.3 Demo: Escape Room
To explore the ability for GROMIT to generate new interactions from implicit input, we built the Escape Room demo. In the Escape Room demo, players are placed in a library and told to find a way to escape. The intended solution for the game is to search the library for a unique book. Behind the book is a key that unlocks the door to the library. Besides the door, key, bookshelves, and unique book, there are an additional 11 objects in the library room. While the only human-programmed interaction in the demo is the key opening the door, the UI action to trigger the interaction (holding an object and pressing the 'e' key) can be performed between nearly any pair of objects in the scene.
GROMIT is triggered when the player attempts to interact with a pair of objects that don’t already have a defined interaction. The LLM is prompted to write a method to be run when the objects interact, as shown in Figure 5. This prompt is generated based on the names and descriptions of the interacting objects with no direct input from the player. Once the method is compiled and run it is linked to the rest of the demo such that subsequent interactions between the objects will call the method without triggering GROMIT. Each interaction consists of the method itself, and a text description. For example, interacting a torch with a bookshelf may generate a method that destroys the bookshelf object along with the description "Burns the bookshelf down with the torch", as seen in Figure 4c.
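Conceptually, the demo maintains a per-pair interaction registry and invokes GROMIT only on a cache miss. A small sketch (class and method names are assumptions; objects are identified by hashable names):

```python
class InteractionRegistry:
    """Cache generated interactions keyed by an unordered object pair."""

    def __init__(self, generate_interaction):
        # `generate_interaction` stands in for a call into GROMIT and is
        # assumed to return (callable, text_description).
        self._generate = generate_interaction
        self._table = {}

    def interact(self, obj_a, obj_b):
        key = frozenset((obj_a, obj_b))
        if key not in self._table:
            # First time this pair meets: generate and link the behavior.
            self._table[key] = self._generate(obj_a, obj_b)
        method, description = self._table[key]
        method()            # run the (possibly newly generated) behavior
        return description  # e.g. "Burns the bookshelf down with the torch"
```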
In this case, GROMIT is used as an Implicit Designer. The player never experiences the system being explicitly invoked as in the Sandbox demo, instead the system is triggered as necessary.
3.4 Demo: Adventure Game

The Adventure Game demo takes deliberate inspiration from the classic adventure game series The Legend of Zelda, which often sees the player navigating through a dungeon (shown in Figure 4d) by fighting enemies and solving puzzles. In the demo, the combat system has the player cast "spells", and any two spells can be combined to form a new spell in the style of Infinite Craft [1]. The initial set of spells was manually created by the authors, and spell combinations are created at runtime by GROMIT. The generation process can be seen in Figure 1. The puzzle system consists
of switches and keys that change the world state and allow the player to reach different parts of the dungeon. Unlike the combat system, GROMIT has no direct control over the puzzle system.
In this demo, GROMIT can be seen as an Implicit Material. Similar to the Escape Room demo, the system is never invoked directly by the user. Where in the Escape Room demo GROMIT can define interactions between any two objects and these interactions can have any effects, in the Adventure Game demo GROMIT can only write the behaviors of new spell objects. Spells created by GROMIT can be fed back into GROMIT, so the entirety of the behavior generation in the demo can be abstracted as a property of the spell objects. In this sense, spells in the demo are a generative material powered by GROMIT.
4 System Evaluation
4.1 Explicit Scene Manipulation
To evaluate GROMIT’s ability to manipulate virtual scenes, we conducted a quantitative evaluation consisting of making various behavior requests in the Sandbox demo. Two scenes were used in the study. The first scene, Blockland, was a simple scene designed for testing GROMIT’s functionality and was built with GROMIT in mind. Blockland is shown in Figures 3a and 4a. The second scene, Traffic, was a larger scale traffic simulation imported from another project and is shown in Figure 4b. Traffic was not built with GROMIT in mind.
Behavior requests were collected through a Mechanical Turk survey. Survey respondents were instructed to provide 6 requests of varying complexity for GROMIT to perform for each of the
two scenes: 2 simple, 2 medium, and 2 complex. 25 surveys were given, which resulted in 145 unique requests after nonsensical or partial entries were removed.
Each request was run through GROMIT in its relevant scene and was marked by the authors as "Successful" or "Unsuccessful" based on whether the script written by GROMIT eventually compiled without errors and whether the effect of GROMIT’s output could reasonably be said to complete the request. The length of GROMIT’s output for each request was also recorded.
Overall, across the 145 unique requests, GROMIT achieved a success rate of 54%, successfully executing 78 requests while failing in 67. Comparing success rates by the complexity assigned in the survey submissions shows no correlation, as seen in Fig. 6a. A Chi-Squared test between the complexity groups did not show a significant difference (for all pairs $p > .25$). Linear regression shows a negative correlation between success rate and the line length of code outputted by GROMIT (Spearman’s $r = -.6369, p < .005$, see Figure 6b). These results suggest that GROMIT has a harder time completing requests that are complicated to implement, but that humans use different internal metrics to judge complexity.
Comparing the success rates of requests by test scene shows a clear difference ($p < .0005$, see Fig. 6c). Requests made in Blockland, which was designed with GROMIT in mind, tend to succeed, with an 85% success rate, whereas requests made in the Traffic scene tend to fail, with a 30% success rate. These results suggest that designing a program to be easily manipulable is necessary for GROMIT to be effective, and that programs designed without GROMIT in mind are unlikely to work well with it. For example, a number of requests in both scenes were some variation of "Change the color of X to C". In the Blockland scene all objects used standard Unity materials, which can have color filters applied in code. In the Traffic scene all objects used a mobile diffuse material, where the only way to change the color of the object is to edit the image file of the texture. This is comparatively very complicated to do in code. All of the color change requests succeeded in the Blockland scene, and failed in the Traffic scene. On average, GROMIT completed all requests in 10.6 seconds. The average time drops to 5.2 seconds when using the gpt-4o model.
### 4.2 Implicit Rule Generation
To determine whether GROMIT can support implicit use cases, we ran a system evaluation measuring the success and failure rates of interactions in the Escape Room and Adventure Game demos.
To generate data for the Escape Room demo, we populated the room with additional items. In the MTurk survey we also asked participants for 10 additional items to include in an escape room.
The results were a sword, a wizard’s hat, a potion, a relic, a stationary kit, a stray frog, a sheet of paper, a newspaper, an old walking stick, and a pen. We then used GROMIT’s auto prompting to generate interactions between each pair of items. Besides the 10 items from the survey, there were 5 items (a torch, bookshelves, the special book, the key, the door) which we implemented for the intended puzzle solution. Since each item was not allowed to interact with itself, there were 105 interaction item pairs. We implemented the 1 interaction necessary for the intended solution (the key opens the door). Additionally, 2 objects (the door and the bookshelf) were stationary so could not interact with each other. This left 103 potential interactions for GROMIT to generate. We used GROMIT to generate interactions between each viable object pair.
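As a quick check of the pair arithmetic: 15 items yield C(15,2) = 105 unordered pairs, and removing the one pre-implemented pair (key/door) and the one stationary pair (door/bookshelf) leaves 103.

```python
from itertools import combinations

pairs = len(list(combinations(range(15), 2)))
print(pairs)          # 105 unordered item pairs
print(pairs - 1 - 1)  # 103 left for GROMIT after the key/door and
                      # door/bookshelf pairs are excluded
```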
To generate data for the adventure game, we started with 11 spells created by the paper authors. These were combined in breadth-first order until 205 spells had been generated by GROMIT. We categorized each of the behaviors generated by GROMIT based on whether the generation was successful, meaning the generated behavior ran without errors and aligned with its text description. If the generation was unsuccessful, the category of error was also recorded. The results are summarized in Figure 7.
In the Escape Room demo, none of the combinations resulted in compiler errors and 6 resulted in runtime errors. Another 10 had text descriptions that suggested an event should happen, but the written code did not produce this event. GROMIT responded that 14 of the pairs shouldn't have any interaction. The remaining 73 pairs gained some form of successful interaction. 68 of these interactions appear to be mainly visual changes, such as a torch heating up a sword by changing its color to red, but 5 interactions resulted in alternate solutions to the escape room. These alternate solutions either destroyed the bookshelves in some way, allowing a faster way to find the key, or created new "magical" objects such as wands or staffs which could open the door.
In the Adventure Game Demo, 30 interactions resulted in continued compilation errors after re-prompting, 1 resulted in runtime errors, 6 did not produce an effect related to their description (in most of these cases the spell simply deleted itself on use), and 12 did not produce any effect. The remaining 156 attempts produced new functional spells. Of these, 9 could be used in some meaningful way outside of combat. These spells either allowed the player to trigger the puzzle switches from new locations, or enhanced player movement through various kinds of teleportation.
In the implicit rule generation study, all failure cases resulted in no change to game behavior, usually due to the generated scripts not interacting with the scene. In the explicit scene manipulation study, 62 of the 67 failure cases similarly resulted in no visible changes to game behavior. The 5 remaining cases resulted in visible errors, but these fell short of crashing the game or preventing continued gameplay. One of the more dramatic failures occurred when the request "Implement a day-night
Figure 7: Results of automatically generating interactions with GROMIT. A result was considered a compiler or runtime error if the behavior threw an error that prevented it from compiling or running properly. A result was categorized as an "Inconsistent Description" if it ran without errors but had a significant mismatch between how the behavior was described and what it actually did. Results where the LLM responded that the behavior should produce no effect were marked as "no interaction". If the behavior did produce some effect it was labeled a "Novel Interaction". If a Novel Interaction was found to cause significant gameplay changes it was recategorized as a "High-Impact Interaction".
cycle" resulted in a script which disabled the light source in the Blockland scene. We have created some requests that can cause larger-scale errors than were seen in the study. For example, in the Traffic scene the request "Make each building spin individually" can instead cause all buildings to spin around a central point. In theory, generated code could crash the entire game, though we have not observed this in our testing. No behavior generated for either demo resulted in the game crashing in any way. This is largely due to GROMIT handling compilation errors internally, and runtime errors being limited in scope. On average, GROMIT completed all requests in 5.6 and 14.1 seconds for the escape room and adventure game scenes respectively. The average times drop to 2.6 and 6.1 seconds when using the gpt-4o model.
5 Discussion and Future Work
The complete version of this project also includes an interview study with n=13 game developers using GROMIT as a probe to elicit their current opinion on runtime behavior generation tools, in order to enumerate the specific themes curtailing the wider use of such tools.
This paper has demonstrated RBG in several small-scale scenarios. We expect that incorporating common LLM scaling strategies, such as adding a planning stage to the behavior generation pipeline, could allow GROMIT-like systems to scale to larger tasks. Properly assessing scalability would ideally involve creating a large-scale game system incorporating RBG, which we leave to future work.
Future work should also investigate player perspectives on RBG, exploring, among other things, whether and how players can detect RBG, how their gameplay changes with knowledge of RBG, and player opinion on the general use of Generative AI. Comparing those findings with developer opinions would help both to characterize the relationship between developers and players with this technology and to inform developer decisions on making games with RBG.
Finding a process to better control RBG systems is a primary concern, which is unsurprising when comparing RBG to commonly used PCG systems. Devtime PCG systems are often experienced as tools used to develop a part of the resulting game, which gives developers two avenues of control. They can develop the PCG system itself, and/or they can verify the output of the system before it is included in the main game. In this sense content created by a devtime system can still express developer intent even if the developer was not responsible for the PCG software itself, such as when using a GenAI system. In terms of ownership, the output of a devtime system acts similarly to content purchased from an asset store. The asset itself might not have been created by the developer, but its inclusion is still a vector of developer intent.
Runtime systems, in contrast, only have the first avenue of control. Without the ability for developers to verify output before it is shown to users, the developer impact on the PCG output can only come from their effect on the PCG system itself. This makes PCG systems that are both runtime and GenAI-based problematic, since they have no direct method of developer control. The high-level choice to use the runtime tool at all is certainly still made by the developer, but their level of control is significantly reduced.
LLMs already provide several methods of developer input. Prompts can be partially engineered by the developers. Few-shot examples can be provided to demonstrate intended output. Indeed, we used both these methods in GROMIT to achieve basic functionality. However, these methods don’t necessarily fit with the requirements expressed by developers. Depending on the desired guardrails, there may be better interfaces for expressing the requirements.
Additionally, based on the results of section 4, as well as the general quality concerns expressed by developers, restricting only the input to the model may be insufficient. We identify that GROMIT and behavior generation systems with a similar implementation have 4 main avenues for restriction implementations. These are:
1. Modified input to the LLM
2. Static analysis of code generated by the LLM
3. Dynamic analysis of LLM-generated code in a sandboxed environment
4. Rollback/Undo functionality if restriction violation occurs
A “Guardrail System” that maps the constraint descriptions from a vocabulary developers already use to a set of implementations from the above list could improve on both the usability and effectiveness of an approach using only existing LLM control methods. Ideally, this could restore the avenue of direct control present in traditional PCG systems. We conclude that there’s a need for a set of such Guardrail Systems, although the degree to which such tools can/should be generalized between games is unclear. Future work should explore the efficacy of specific guardrail systems, both in their ability to control a RBG system and in their alignment with developer needs.
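One way to picture such a guardrail system is as a mapping from a developer-stated constraint to a bundle of enforcement mechanisms drawn from the four avenues above. The sketch below is purely illustrative; none of these names come from GROMIT.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Guardrail:
    """Bundle the four assumed enforcement avenues for one constraint."""
    constraint: str                                             # developer vocabulary
    prompt_clauses: List[str] = field(default_factory=list)     # 1: modified LLM input
    static_checks: List[Callable[[str], bool]] = field(default_factory=list)      # 2
    dynamic_checks: List[Callable[[object], bool]] = field(default_factory=list)  # 3: sandbox
    on_violation: Callable[[], None] = lambda: None             # 4: rollback/undo

    def vet_code(self, code):
        """Run the static checks; invoke rollback handling on failure."""
        if all(check(code) for check in self.static_checks):
            return True
        self.on_violation()
        return False

# Example: a hypothetical "never destroy player objects" constraint.
no_destroy = Guardrail(
    constraint="never destroy player objects",
    prompt_clauses=["Generated code must not call Destroy on the Player."],
    static_checks=[lambda code: "Destroy(player" not in code],
)
print(no_destroy.vet_code("transform.Rotate(0, 1, 0);"))  # True
```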
6 Conclusion
In this paper, we solidified the emerging concept of Runtime Behavior Generation as it applies to the games industry. Through three concrete examples, we explored possible ways RBG can be used for games. We conducted a system evaluation and found that, using our current system, generated behaviors can achieve a high success rate if other game systems are designed to be easily manipulable through code. We also found that some generated behaviors can have significant effects on gameplay. We highlight potential future work that could address these challenges.
**Appendix**
Figure 8: Diagram of an example prompt from the Adventure Game demo with labeled components. When the player combines the Fire and Air spells, a prompt is generated to request a new spell. Although they were manually created by a developer, the existing spells are included in the prompt as if the LLM had generated them so that the JSON format can be reused. The LLM output JSON is then interpreted and the code compiled. The compiled behavior is then combined with the emoji and plaintext output to generate the resulting spell.
|
{"Source-Url": "https://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-153.pdf", "len_cl100k_base": 8557, "olmocr-version": "0.1.53", "pdf-total-pages": 27, "total-fallback-pages": 0, "total-input-tokens": 60316, "total-output-tokens": 12159, "length": "2e13", "weborganizer": {"__label__adult": 0.00145721435546875, "__label__art_design": 0.0010671615600585938, "__label__crime_law": 0.0013446807861328125, "__label__education_jobs": 0.00238800048828125, "__label__entertainment": 0.000469207763671875, "__label__fashion_beauty": 0.0008449554443359375, "__label__finance_business": 0.0005855560302734375, "__label__food_dining": 0.0012340545654296875, "__label__games": 0.053192138671875, "__label__hardware": 0.002895355224609375, "__label__health": 0.001613616943359375, "__label__history": 0.0011129379272460938, "__label__home_hobbies": 0.0002090930938720703, "__label__industrial": 0.0011720657348632812, "__label__literature": 0.001068115234375, "__label__politics": 0.0008168220520019531, "__label__religion": 0.0015401840209960938, "__label__science_tech": 0.08465576171875, "__label__social_life": 0.0002264976501464844, "__label__software": 0.0082244873046875, "__label__software_dev": 0.83056640625, "__label__sports_fitness": 0.0013647079467773438, "__label__transportation": 0.0015363693237304688, "__label__travel": 0.0005669593811035156}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 48318, 0.04082]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 48318, 0.38508]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 48318, 0.91018]], "google_gemma-3-12b-it_contains_pii": [[0, 289, false], [289, 974, null], [974, 1466, null], [1466, 2600, null], [2600, 3290, null], [3290, 4421, null], [4421, 5916, null], [5916, 8823, null], [8823, 11626, null], [11626, 14454, null], [14454, 16014, null], [16014, 18263, null], [18263, 21363, null], [21363, 22673, null], [22673, 24250, null], [24250, 26873, null], [26873, 27453, null], [27453, 29015, null], [29015, 31829, null], [31829, 35022, null], [35022, 36848, null], [36848, 39825, null], [39825, 41290, null], [41290, 43666, null], [43666, 46369, null], [46369, 47788, null], [47788, 48318, null]], "google_gemma-3-12b-it_is_public_document": [[0, 289, true], [289, 974, null], [974, 1466, null], [1466, 2600, null], [2600, 3290, null], [3290, 4421, null], [4421, 5916, null], [5916, 8823, null], [8823, 11626, null], [11626, 14454, null], [14454, 16014, null], [16014, 18263, null], [18263, 21363, null], [21363, 22673, null], [22673, 24250, null], [24250, 26873, null], [26873, 27453, null], [27453, 29015, null], [29015, 31829, null], [31829, 35022, null], [35022, 36848, null], [36848, 39825, null], [39825, 41290, null], [41290, 43666, null], [43666, 46369, null], [46369, 47788, null], [47788, 48318, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 48318, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 48318, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 48318, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 48318, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 48318, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 48318, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 48318, null]], 
"google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 48318, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 48318, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 48318, null]], "pdf_page_numbers": [[0, 289, 1], [289, 974, 2], [974, 1466, 3], [1466, 2600, 4], [2600, 3290, 5], [3290, 4421, 6], [4421, 5916, 7], [5916, 8823, 8], [8823, 11626, 9], [11626, 14454, 10], [14454, 16014, 11], [16014, 18263, 12], [18263, 21363, 13], [21363, 22673, 14], [22673, 24250, 15], [24250, 26873, 16], [26873, 27453, 17], [27453, 29015, 18], [29015, 31829, 19], [31829, 35022, 20], [35022, 36848, 21], [36848, 39825, 22], [39825, 41290, 23], [41290, 43666, 24], [43666, 46369, 25], [46369, 47788, 26], [47788, 48318, 27]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 48318, 0.03297]]}
|
olmocr_science_pdfs
|
2024-12-06
|
2024-12-06
|
2ee07b446870b6ebf97741fc7670cabbbc95341d
|
[REMOVED]
|
{"Source-Url": "https://inria.hal.science/hal-01964222/file/main.pdf", "len_cl100k_base": 9967, "olmocr-version": "0.1.49", "pdf-total-pages": 24, "total-fallback-pages": 0, "total-input-tokens": 53377, "total-output-tokens": 14081, "length": "2e13", "weborganizer": {"__label__adult": 0.0005431175231933594, "__label__art_design": 0.0006108283996582031, "__label__crime_law": 0.00290679931640625, "__label__education_jobs": 0.0009517669677734376, "__label__entertainment": 0.00018596649169921875, "__label__fashion_beauty": 0.00023055076599121096, "__label__finance_business": 0.00024700164794921875, "__label__food_dining": 0.0004124641418457031, "__label__games": 0.0016813278198242188, "__label__hardware": 0.0024547576904296875, "__label__health": 0.0005574226379394531, "__label__history": 0.00042724609375, "__label__home_hobbies": 0.0001475811004638672, "__label__industrial": 0.0005846023559570312, "__label__literature": 0.0005240440368652344, "__label__politics": 0.0004897117614746094, "__label__religion": 0.0005507469177246094, "__label__science_tech": 0.1517333984375, "__label__social_life": 0.00016188621520996094, "__label__software": 0.057342529296875, "__label__software_dev": 0.7763671875, "__label__sports_fitness": 0.0002677440643310547, "__label__transportation": 0.0003998279571533203, "__label__travel": 0.0001852512359619141}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 51599, 0.04168]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 51599, 0.71278]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 51599, 0.86201]], "google_gemma-3-12b-it_contains_pii": [[0, 1117, false], [1117, 3580, null], [3580, 6641, null], [6641, 9147, null], [9147, 11506, null], [11506, 13541, null], [13541, 15389, null], [15389, 16740, null], [16740, 18633, null], [18633, 20885, null], [20885, 24193, null], [24193, 26045, null], [26045, 28096, null], [28096, 30425, null], [30425, 33359, null], [33359, 35474, null], [35474, 36763, null], [36763, 38577, null], [38577, 40343, null], [40343, 42152, null], [42152, 44702, null], [44702, 48094, null], [48094, 51108, null], [51108, 51599, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1117, true], [1117, 3580, null], [3580, 6641, null], [6641, 9147, null], [9147, 11506, null], [11506, 13541, null], [13541, 15389, null], [15389, 16740, null], [16740, 18633, null], [18633, 20885, null], [20885, 24193, null], [24193, 26045, null], [26045, 28096, null], [28096, 30425, null], [30425, 33359, null], [33359, 35474, null], [35474, 36763, null], [36763, 38577, null], [38577, 40343, null], [40343, 42152, null], [42152, 44702, null], [44702, 48094, null], [48094, 51108, null], [51108, 51599, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 51599, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 51599, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 51599, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 51599, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 51599, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 51599, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 51599, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 51599, null]], 
"google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 51599, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 51599, null]], "pdf_page_numbers": [[0, 1117, 1], [1117, 3580, 2], [3580, 6641, 3], [6641, 9147, 4], [9147, 11506, 5], [11506, 13541, 6], [13541, 15389, 7], [15389, 16740, 8], [16740, 18633, 9], [18633, 20885, 10], [20885, 24193, 11], [24193, 26045, 12], [26045, 28096, 13], [28096, 30425, 14], [30425, 33359, 15], [33359, 35474, 16], [35474, 36763, 17], [36763, 38577, 18], [38577, 40343, 19], [40343, 42152, 20], [42152, 44702, 21], [44702, 48094, 22], [48094, 51108, 23], [51108, 51599, 24]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 51599, 0.0]]}
|
olmocr_science_pdfs
|
2024-11-24
|
2024-11-24
|
1acd112ce340499b36a140e22775c660e7506396
|
Small Data: Applications and Architecture
Cheng-Kang Hsieh*, Faisal Alquaddoomi†, Fabian Okeke‡, John P. Pollak§, Lucky Gunasekara¶ and Deborah Estrin∥
* UCLA CSD; Los Angeles, CA, USA (changun@cs.ucla.edu)
† UCLA CSD; Los Angeles, CA, USA (faisal@cs.ucla.edu)
‡ Cornell CSD; Ithaca, NY, USA (fno2@cornell.edu)
§ Cornell Tech; New York, NY, USA (jpp9@cornell.edu)
¶ Cornell Tech; New York, NY, USA (llg24@cornell.edu)
∥ Cornell Tech; New York, NY, USA (destrin@cornell.edu)
Abstract—Small data are the digital traces that individuals generate as a byproduct of their daily activities, such as: communicating through email or text; buying groceries or ordering delivery; or going to work on foot or by car. These traces can empower individuals to gain insights into their behavior, personalize their care, improve their relationships, motivate achievement of goals, and broadly improve their quality of life. As such, small data are both byproducts of today’s and drivers of tomorrow’s ubiquitous computing applications. The contributions of this paper are twofold: we motivate the requirements for a small data ecosystem and supporting architecture, and present a critical component – Lifestreams Database (DB) – which is evaluated using three exemplar apps. Lifestreams DB extracts, processes, and models diverse traces from data silos and enables various small data applications through simple SPARQL queries. Its soft-state design provides storage-efficiency, robustness, and query performance for processing small data.
Keywords—small data; linked data; knowledge representation.
I. INTRODUCTION
Small data are “digital traces”, records of our activities that are stored as we interact with the world around us. These traces are passively produced when we use tools and services that maintain logs: credit cards, grocery receipts, websites and other streaming content services, browsers themselves, etc. They can also be intentionally produced and tracked by wearable sensors, including mobile phone applications. It is well-known that service providers derive value from this information – usage metrics and demographic information, all personal data, are routinely employed to help direct advertisement and optimize products. We argue that these data can and should provide value for the end users as well.
For example, a small data app may promote healthier eating by coaching users to take the planning actions needed to prepare meals at home. The app would utilize grocery and online food delivery history, browser history, and Moves or Foursquare data to build a model of meal preferences. The user could then receive prompts at their desired frequency about which recipes they are likely to enjoy, and suggestions for additions to their grocery shopping list to enable them to prepare these meals at home. The app could incentivize this with informative comparisons of calorie and cost savings, or could be tied to more intentional gamification. Another small data app could allow independent-living elderly to share how they are doing without sharing every detail of what they are doing. The app would make use of passively collected small data streams such as email, activities, and mobile phone usage to create a personalized model of the user’s activity, well-being, and degree of social engagement. Rather than exposing the model itself, the app would expose deviations from the model to make family and friends aware of changes to a person’s state without divulging detailed information. Such an app can support many types of relationships, including family and friends separated geographically, or other support-network relationships such as social workers, caregivers, and coaches. We describe these concepts in greater detail in section III.
The central role of a small data architecture is to facilitate application-level access to a person’s diverse information sources on their behalf. While individual service providers, such as Google, Facebook, and Amazon each have information about many aspects of our behavior, they are limited in how specifically they personalize by the terms of their end-user licensing agreements and a need to preserve users’ trust. They also do not each have access to all data of interest. Because of this, there is an opportunity in the market for providers to give users access to their individual data in various forms (application programming interfaces, downloads, email receipts), and for third-party products to emerge that integrate with that user’s data in the same way that third party mobile apps make use of mobile-device data. These third party apps would serve the end user without degrading the large-service provider’s position, and in fact have the potential to solidify the user’s sense of the service provider’s utility and trustworthiness. Note that we are promoting that users be given access to their data and not making any statement about data ownership. We are also not addressing the very important policy question regarding service providers making user data available to third parties directly.
As mentioned, service providers have difficulty providing apps that cut across multiple data sources or mine too deeply into their users’ data. In contrast, a small data app leverages the user as the common denominator, and can take advantage of the trend for service providers to support application programming interfaces (APIs) that give individuals access to their data. The user has both the access and authority to collect and aggregate data across these providers, allowing for powerful and comprehensive insights that, because they are initiated and consumed by that same user, can be much more focused in their oversight and suggestions. We anticipate and favor broad provision and adoption of systematic programmatic access to personal data for end users. However, the need for a small data application architecture need not wait for, nor will it be obviated by, future developments. Already, today, users can obtain access to their data, albeit through idiosyncratic and sometimes ad-hoc channels: e-receipts, diverse APIs, browser plug-ins, etc. Even with access to these data, infrastructure is still required to process these traces into formats that are useful and actionable to the individual. Since most individual users do not develop their own software, we are targeting support for small-data app developers who will implement apps on behalf of this growing user base, just as they have driven the development of third party apps for smartphones [1]. This approach is aligned with the emerging Social Web activities in W3C [2].
Our vision is to create a small data ecosystem in which small data apps can be readily developed and deployed atop an infrastructure that standardizes their inter-operation and addresses concerns that are common across apps, such as helping to ensure security and reducing redundancy in storage and computational resources, as well as resolving policy/legal questions that are outside the scope of this paper. The vision is, again, driven by the individual as the common denominator, and rightful beneficiary, of access to their data.
We describe the core components of a small data architecture using three exemplar applications, and present a specific system-design for the most central of these components – Lifestreams Database (hereafter “Lifestreams DB”). Lifestreams DB is designed to extract and process diverse digital traces from various sources and make them available to the client applications for further analysis or visualization.
Data interoperability is an important requirement for such a system, as it allows one to gain insights from combinations of data that were originally locked in their own data silos. Lifestreams DB extracts raw data from these silos and transforms them into a standardized Resource Description Framework (RDF) representation that allows these digital traces to be joined against each other and against external RDF data sources (e.g., fusing nutrition information with a user’s online shopping records).
Unlike data in many enterprise settings, most small data are already persisted by their original sources (e.g., Google, Facebook, etc.) in their own databases, which individually provide security and access control. Therefore, it may be wasteful, or even harmful to users’ security and privacy, for Lifestreams DB to permanently replicate these data in one place. Motivated by this distinction, we propose a soft-state design that, while providing client applications with virtual access to all the data, only caches a part of it locally and reproduces the rest on demand. Such a design introduces two important advantages in the context of small data. First, our soft-state model discourages our system from becoming a data “honeypot” that attracts attacks from malicious entities, since only a limited amount of information is cached in the system at any given time. Second, it requires much less storage and allows the system to scale to serve a large number of users or integrate with more diverse information beyond its storage capacity. We also provide an encryption mechanism that encrypts sensitive data to further protect the user.
After introducing related work in section II, we present three small data applications in III and use them to identify cross cutting application requirements. We provide a brief overview of our architecture in section IV, then go into depth on the main contribution of this work, Lifestreams Database (DB), in section V. Section VI contains the results of performance analyses for simulated workloads on a sample of simple and complex query types. Finally, section VII provides some observations and outlines future work.
II. Related Work
Small data are fueling a new genre of personalization technologies. Recommender systems have been some of the most successful applications in this domain to date as evidenced by recommendations for music in Pandora, consumer goods in Amazon [3], articles in Wikipedia [4], and locations in Foursquare [5]. These systems rely heavily on the users’ application-specific histories, such as queries, clicks, ratings, and browsing data that result from interacting with their product. Small data can enable far more immersive recommender systems that take into account a larger space of user needs and constraints. In particular, they can benefit from user models derived from both more diverse and longitudinal data (e.g., features and dynamic patterns in: daily travel patterns, consumption from gaming to dining, interests and sentiment expressed in personal communication, etc.). General-purpose recommendation frameworks such as MyMediaLite [6] and LensKit [7] (to name a few) could make use of small data to learn these kinds of broad user models, but they require a front-end component to fetch user’s data and drive the framework with appropriately-formatted inputs.
Small data’s goal of providing individuals with transformative insights into their behavior is aligned with that of the Quantified Self (QS) movement [8]. In QS studies, individual experimenters engage in the self-tracking of biological or behavioral information using commercial devices such as Fitbit and myZero sleep trackers, or personal testing services such as 23AndMe, and many systems have been developed to help integrate and visualize QS data [9]. Even prior to QS’s popularity, research projects such as Ubifit and BeWell demonstrated the potential of making personal data actionable [10][11]. More recent work, e.g., EmotionCheck [12], has demonstrated that not only QS data itself, but also a user’s trust in the tool, can serve as effective leverage for behavioral change. Small data, however, differs from earlier studies in its focus on harnessing data that are (a) generated as byproducts of interacting with services and (b) readily available, versus having to be manually collected or otherwise procured. These data can be complementary to or serve as a proxy for some of the data that QS studies collect.
Small data are also related to Personal Information Management (PIM) systems [13]. This line of work covers a broad range of environments from desktops [14][15], to connected-devices in the home [16][17], to e-learning [18] and health information management systems [19]-[22]. Our work is complementary to these systems’ focus on information organization and retrieval, by providing support for third party applications that would generate additional inputs to these systems through the processing of small data streams that are not yet accessible.
Small data shares similar data inputs with Personal Automation Engines. For example, Atomate [23] is a system that integrates individuals’ social and life-tracking feeds into a unified RDF database, and automatically carries out simple tasks (e.g., messaging) when the incoming feeds satisfy user-defined rules. The service “If-This-Then-That” (IFTTT) [24], expanding on the same idea, compiles a large set of feeds that monitor various online and offline activities and can trigger a wide set of actions when a user-defined condition on a feed is satisfied. On a more application-focused and user-local level, PrefMiner [25] monitors on-device notifications from numerous sources to identify which notifications are important to the user. Small data differs from these services in its emphasis on providing insights that require longer-term observation, rather than performing transient event-driven actions. This fundamental distinction results in rather different system requirements, particularly in resource management and security, as mentioned in the introduction. That said, our small data application architecture could enable a richer set of inputs to both of these systems.
Our aims are similar to existing systems that provide a modular computational infrastructure and mediate the release of processed personal data, such as openPDS and Virtual Individual Servers [26][27]. While these systems do provide personal data acquisition, storage, and release, they do not explicitly address the problem of normalizing and joining disparate data streams under a shared ontology. Our work complements these systems in providing data modeling and interoperability required to join multiple data streams, as opposed to simply providing analysis of individual data streams.
III. SMALL DATA APPLICATIONS
A small data application is an application that operates on multiple personal data streams, produces some kind of analysis of these streams, and presents the result to the user via an interface. Personal data can include static data, for instance the individual’s genome or family lineage. We focus particularly on temporal data, either regular or episodic, that must be continually collected and analyzed. The reason for this focus is twofold: first, these information-rich data sources will be most transformative in creating detailed user models and feedback for diverse applications; second, temporal data are more difficult to manage since they are constantly accumulating. Of course, our focus on temporal data does not obviate the value of joining the user’s data with other non-temporal data sets – e.g., summarizing nutritional exposure using temporal grocery receipts and relatively static nutritional databases.
Below, we motivate the requirements of our software architecture using three exemplar small data apps. These applications comprise two data access modes – background and foreground. In the background mode, the application may periodically access a long history of user data to build or update the user’s behavioral model. In the foreground, the user experience tends to be based on a more recent window of time, interpreted in the context of the behavioral model.
Figure 1. Small Data Architecture: illustrates the flow of data between Data Storage Units (DSUs), Data Processing Units (DPU), and Data Visualizations Units (DVUs, e.g., apps).
A. Ora
Ora (Figure 2) is a tool for sharing how you are doing – without sharing the details of what you are doing – with family, friends, or other people who might be part of your support network (counselors, coaches, etc.) Users interact with Ora via a mobile-optimized website, where they authorize the app to connect to their Gmail and Moves accounts using an OAuth2 grant. Ora extracts descriptive numeric features from these data sources and uses them to build a baseline model that represents the user’s usual values for each feature. Deviations from this model are calculated on a per-day basis and summarized into a single numeric value, referred to as a pulse, that acts as an opaque indicator of the degree to which the user is deviating from the model.
Specifically, the pulse is computed from 20 features extracted from the users’ data, including their geodiameter (the distance between the furthest two points in their location trace for the day), exercise duration (the number of minutes the user was walking or running), time not at home (the amount of time not spent at their primary location, typically their home), and the number of emails sent in a day. Then, for a set of features $F$, the baseline for each $f \in F$ is computed as a tuple consisting of the sample standard deviation and mean over a two-month sliding window. For a given day, the pulse ($P$) is then computed as the sum of the numbers of standard deviations from the mean for each feature.
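Read literally, this corresponds to the following computation; the text does not state whether deviations are signed or absolute, so the absolute value below is an assumption. With $x_f$ the day’s value of feature $f$ and $(\mu_f, \sigma_f)$ its two-month baseline,

\[ P = \sum_{f \in F} \frac{\lvert x_f - \mu_f \rvert}{\sigma_f} \]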
B. Pushcart
Pushcart (Figure 3) uses receipts from services such as FreshDirect or Peapod to determine the nutritional value of the food that a household purchases. This information is provided to a “Wizard of Oz” system in which a clinician, masquerading as a learning algorithm, reviews the purchasing habits of each household and suggests substitutions of more nutritional items during future purchases.
The system’s primary source of input is email – after opting in, users register the system to automatically receive a copy of their receipt email, from which the list of items is extracted and then joined against a database of nutritional information for each food item. The user interacts with the system through email as well: the user interface is a weekly “report email”...
The architecture is composed of three layers, as depicted in Figure 1. There are three main entities: **Data Storage Units (DSUs)**, **Data Processing Units (DPUs)**, and **Data Visualization Units (DVUs)**. These terms mirror the open mHealth standard [30]. DSUs include service provider APIs, e.g., Google’s numerous service APIs and Facebook’s Graph API. DSUs can be accessed directly from DPUs/DVUs, but are often accessed through a “transforming” DPU that converts the API’s often proprietary data format into the schemas we use in small data apps. Data flows from DSUs through arbitrary compositions of DPUs – so long as their input and output types are compatible – and terminates in the DVUs. Lifestreams DB acts as a container for DPUs, and provides caching, data modeling, access control, and a unified query interface. Its outputs can be directly consumed by DVUs, or by other DPUs that provide additional data processing capability.
This modular pipeline approach is necessitated by the fact that our system will never be complete; there will always be new data sources and means of processing and displaying data, which the architecture should readily accommodate. Further, the implementation of its components is a collaborative effort and we wish to encourage developers to reuse and build upon existing components.
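To make the composition rule concrete, the sketch below shows one way type-checked DPU chaining could look. The DSU/DPU/DVU roles come from the text, but this interface and the type tags are assumptions for illustration only:

```python
# Hedged sketch: chaining DPUs whose input/output types must be compatible.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class DPU:
    in_type: str                      # e.g., "moves/raw-json"
    out_type: str                     # e.g., "schema.org/TravelAction"
    fn: Callable[[Any], Any]

def compose(*dpus: DPU) -> DPU:
    """Chain DPUs end to end, rejecting incompatible input/output types."""
    for a, b in zip(dpus, dpus[1:]):
        if a.out_type != b.in_type:
            raise TypeError(f"cannot feed {a.out_type} into {b.in_type}")
    def run(x: Any) -> Any:
        for d in dpus:
            x = d.fn(x)
        return x
    return DPU(dpus[0].in_type, dpus[-1].out_type, run)

# A "transforming" DPU pair: proprietary API payload -> schema.org actions.
extract = DPU("moves/raw-json", "moves/segments", lambda raw: raw["segments"])
model = DPU("moves/segments", "schema.org/TravelAction",
            lambda segs: [{"@type": "TravelAction", **s} for s in segs])
pipeline = compose(extract, model)    # a DVU would consume pipeline.fn(...)
```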
### V. DPU Containers: Lifestreams DB
Lifestreams DB is an important component in our architecture. Positioned between data sources and small data apps, Lifestreams DB is designed to be the “narrow waist” of the small data ecosystem that provides a unified interface for querying, combining, and fusing diverse small data streams. Lifestreams DB contains a pipeline of DPUs that Extract, Transform and Load (ETL) an individual’s digital traces from different sources using common software APIs and Schemas to enable diverse small data applications. Figure 4 illustrates the architecture of Lifestreams DB. On the left is **Lifestreams Pipeline**, a data processing pipeline that contains a set of reusable DPUs that extract raw data from different small data sources and transform raw data into structured, readily usable information. For example, raw actigraphy and geolocation sensor samples from a mobile app are transformed into structured data that describe the time, location, speed, and distance of each activity episode. These extracted data are loaded into **Lifestreams Triplestore**, an RDF datastore built on top of Jena TDB [31], that exposes an integrated view of all the diverse RDF data for apps to query. We made two principal design decisions when designing Lifestreams DB: 1) to model data using RDF, and 2) to utilize a soft-state system design. The rationales behind these design decisions are described in the following.
a) **Using RDF for interoperability:** Data interoperability is key to the success of such a system. Raw data extracted from different data silos need to be transformed into a compatible form to allow one to derive knowledge from them. In Lifestreams DB, we utilize RDF to enable data interoperability. Each DPU outputs data in JavaScript Object Notation (JSON), and the DPUs at the final stage generate RDF data in the JSON-LD format, which is transformed into RDF triples (i.e., subject-predicate-object) before being stored in the Triplestore. The advantages of using RDF are as follows. First, it eliminates the need to define a database schema, unlike, for example, in a Structured Query Language (SQL) datastore. Data generated by different DPUs are inherently interoperable if the DPUs follow the same ontology to model the data. This property is of significant benefit to a small data ecosystem, since it allows DPUs developed by different people to be plug-and-play without the need to modify the system’s database schema. Also, any client application developer, given the ontology, can compose queries to filter, join, and aggregate various types of data generated by different DPUs without knowing specific implementation details such as table and column names, etc.
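As an illustration, a final-stage DPU might emit JSON-LD like the following for one sent email. The @type and property choices follow schema.org, but this exact shape is an assumption rather than a listing from the system:

```python
# Hedged sketch: JSON-LD for a single schema.org SendAction, built in Python.
import json

send_action = {
    "@context": "http://schema.org",
    "@type": "SendAction",
    "agent": {"@type": "Person", "email": "user@example.com"},
    "object": {"@type": "EmailMessage", "headline": "weekly report"},
    "startTime": "2016-03-14T09:26:00Z",
}
print(json.dumps(send_action, indent=2))  # next step: JSON-LD -> RDF triples
```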
b) **A Soft-State System Design:** Architecturally, one major difference between an individual’s digital traces and an enterprise’s operational data is that an individual’s data are mostly persisted and protected in each original data source’s databases (e.g., Google, Facebook). In many cases there is no need for Lifestreams DB to replicate all these data in one place; doing so is actually wasteful and harmful to users’ security and privacy. Thus, we propose a soft-state design that, while providing the client applications with virtual access to all the data, only caches a small portion of it in the system. Data which the user owns (e.g., sensor data from the user’s phone or wearable) can be treated in the same way, except that they reside on a personal DSU instead of in an external organization.
The advantages of this design are three-fold. First, a soft-state design requires much less storage to serve requests, and thus allows the system to scale more effortlessly to serve a larger number of users and integrate with more diverse information beyond its storage capacity. Further, it enables elastic storage provision, where a service provider can run the service with less storage (at consequently lower cost) and increase the storage provision only when better performance is needed. Second, it makes the system more robust, since there are fewer points where critical data loss can occur. If the system needs to be brought down, this can be done without concern over maintaining important state. Third, a soft-state design inherently has better security properties. Since only a small amount of information is cached in the system at any given time, the exposure of any single data breach is limited. In addition, the fact that the data can be repopulated into the database on-the-fly allows us to encrypt sensitive data and only decrypt them when they are demanded.
These advantages do not come without a price. A soft-state system tends to incur considerable overhead in indexing, reproducing, and reloading data. In Lifestreams DB, we reduce these overheads with a chunk-based data management strategy that generates and manages data in chunks. Our design is particularly suitable for applications that perform timeseries-based analysis with temporal locality, where subsequent accesses tend to touch records that are near in time (in our scheme, in the same chunk). Within these assumptions, we have improved Lifestreams DB’s query performance by multiple factors (compared to the base Jena TDB triplestore) and made it perform even better than a hard-state system that stores all the data, while using only a fraction of the storage space.
In the following, we first describe our RDF-based data modeling approaches and demonstrate its advantages using the SPARQL queries for the real-world small data applications we are developing. Then, we describe the chunk-based management strategy and the techniques we used to realize the proposed soft-state design.
### A. Data Modeling
When modeling data using RDF, one needs to follow a certain ontology. In small data, the concepts we come across most often are the various actions performed by users, such as sending emails, making purchases, etc. We chose schema.org [33] as the main ontology rather than competing candidates, such as Activity Streams [34], for its semantic action type system. Schema.org defines a hierarchical type system that describes different (sub)categories of actions. At the root is Action, a generic type that describes the common properties of an action (e.g., agent, time, etc.). It is then subclassed by more specific types, such as MoveAction, which are in turn subclassed by still more specific types, such as ArriveAction, DepartAction, etc. This hierarchical structure enables one to write queries that reason across different types of actions within specific categories. For example, an app that encourages better sleep hygiene may analyze users’ before-sleep routines by querying certain action categories (e.g., ExerciseAction and all its subclasses) that occurred before the sleep period.
Table I summarizes eight different kinds of data we have extracted and modeled from four different data sources, based on schema.org’s ontology. The purchase records are derived from email receipts on an opt-in basis. The phone-based data are uploaded to ohmage, a mobile sensing DSU. In the following, we demonstrate how our data modeling approaches can satisfy the requirements of the small data applications described previously with simple SPARQL queries.
**Ora:** Listing 5 shows a snippet of Ora Query that computes the geodiameter and the number of emails sent in a day. For brevity, the snippet omits the part that limits the time range to a single day. The first part of the snippet computes the geodiameter by selecting the maximum distance between any pairs of places at which the user stayed. The second part of the query counts the number of SendAction’s of which the targeted object is an email. This example is intended to
---
**TABLE I. DATA MODELING TYPE ASSIGNMENTS**
| Data | Source | Subject Types | Object Types |
|------|--------|---------------|--------------|
| Location/Mobility | Moves API [32] | Stay/Travel | Place |
| Email | Gmail API | Send/Receive | EmailMessage |
| Purchase | Gmail API | Buy | Product |
| Calendar | gCal API | Join | Event |
| Web Browse | Android API | Browse | WebPage |
| App Usage | Android API | Use | MobileApp |
| Phone Call | Android API | Call/Receive | Person |
| Message | Android API | Send/Receive | SMSMessage |
---
Figure 5. A short snippet from the Ora query that computes the geodiameter and the number of emails sent.
Figure 6. The Pushcart query joins an individual’s food purchase records with the corresponding nutritional information contained in the USDA nutrient database.
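The listings for Figures 5 and 6 did not survive extraction. In their spirit, a minimal query counting SendActions whose object is an EmailMessage could look like the sketch below; rdflib is used purely for illustration, and the prefixes, file name, and data layout are assumptions rather than the paper's actual listing:

```python
# Hedged sketch: counting emails sent, in the spirit of the Ora snippet.
from rdflib import Graph

g = Graph()
g.parse("lifestreams_sample.ttl", format="turtle")  # hypothetical RDF export

COUNT_SENT = """
PREFIX schema: <http://schema.org/>
SELECT (COUNT(DISTINCT ?a) AS ?sent)
WHERE {
  ?a a schema:SendAction ;
     schema:object ?msg .
  ?msg a schema:EmailMessage .
}
"""
# schema.org's action hierarchy also allows category-level queries, e.g.
# matching any exercise via:  ?a a/rdfs:subClassOf* schema:ExerciseAction .
for row in g.query(COUNT_SENT):
    print("emails sent:", row.sent)
```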
---
B. Chunk-based Data Management
As mentioned, Lifestreams DB’s soft-state design is made possible by a chunk-based strategy. The basic idea behind this strategy is as follows: The DPUs in Lifestreams Pipeline generate data in chunks and load them into Lifestreams Triplestore, which maintains an index to all the chunks (including the ones that are not cached in the system). When a client application submits a query, it will additionally submit a meta-query that selects the chunks it desires. If a chunk selected by the meta-query is not currently available in the system, Lifestreams Pipeline will re-run the corresponding DPUs and reproduce the chunk on the fly from the source. The chunks that contain sensitive data (determined from the data source and the user’s preferences) will be encrypted and decrypted on the fly when requested by a query. The chunks are encrypted with 256-bit Advanced Encryption Standard (AES).
Our strategy allows the system to maintain only a small amount of information (i.e., the chunk index) while providing access to a much larger amount of data than its storage capacity. In the following, we describe the three major design elements that realize this strategy and discuss several query optimization techniques, enabled by chunking, that can be utilized to provide a better user experience.
1) Chunk Index Design: The chunk index needs to be carefully designed to avoid unnecessary chunk reproduction. For each chunk of data, we extract the following features as its index:
- Distinct object types in the chunk.
- Start time and end time of the aggregate timespan.
- Geo-coordinates of a convex hull that covers all the spatial features in the chunk.
The rationales behind these choices are as follows. First, most of our applications are interested in certain types of actions or objects (e.g., CommunicationActions or ExerciseActions), so object types are a natural choice for indexing. Also, most small data are time-tagged, and the applications we focus on tend to involve analysis of time series and aggregation based on time or location; it is therefore important for the chunk index to satisfy these requirements.
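The three features above imply a per-chunk index record along the following lines; the field names and the meta-query test are assumptions for illustration:

```python
# Hedged sketch: one chunk index entry and the meta-query relevance test.
from dataclasses import dataclass

@dataclass
class ChunkIndex:
    object_types: frozenset          # distinct object types in the chunk
    start: float                     # aggregate timespan, epoch seconds
    end: float
    hull: list                       # convex hull of spatial features
    cached: bool                     # False => must be reproduced on demand
    state_var: dict                  # Acquirer state needed to reproduce

def relevant(idx: ChunkIndex, wanted_type: str, t0: float, t1: float) -> bool:
    """Meta-query test: can this chunk contain data the query needs?"""
    return wanted_type in idx.object_types and idx.start < t1 and idx.end > t0
```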
2) Lifestreams Pipeline: a reproducible pipeline: We adopt a functional approach to allow Lifestreams Pipeline to reproduce arbitrary chunks of data from the original sources. The Lifestreams Pipeline consists of two types of DPUs: **Acquirers** acquire raw data from the sources, while **Transformers** transform data from one form to another. These DPUs are treated as passive functions invoked by the system. Consider a simple pipeline where one Acquirer and one Transformer are linked in sequence. In each iteration, the system invokes the Acquirer with a state variable that indicates the chunk we want the Acquirer to fetch. After fetching the corresponding chunk, the Acquirer returns the chunk along with a new state variable that indicates the subsequent chunk to be acquired in the next iteration. The system then invokes the Transformer to transform the chunk, and stores the output chunk along with the state variable. When the chunk is removed, the state variable is preserved in the system; to reproduce the chunk, we simply re-run the pipeline with the preserved state variable.
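A minimal sketch of this iterate-and-reproduce protocol follows; the callables and store interface are assumptions mirroring the description above:

```python
# Hedged sketch: one pipeline iteration and on-demand chunk reproduction.
def iterate(acquirer, transformer, store, state):
    raw_chunk, next_state = acquirer(state)   # fetch the chunk `state` denotes
    store.save(transformer(raw_chunk), state) # persist chunk *and* state var
    return next_state                         # input for the next iteration

def reproduce(acquirer, transformer, preserved_state):
    """Re-run the pipeline with the state variable kept after eviction."""
    raw_chunk, _ = acquirer(preserved_state)
    return transformer(raw_chunk)
```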
An assumption we make here is that the raw data are permanently persisted in the original data sources (i.e., DSUs), and can be re-acquired by the Acquirer anytime. If this is not the case, a shim can be implemented to transfer the data to a DSU with such properties (such as Amazon S3). Unlike some chunk-based systems where the chunk sizes are pre-determined, Lifestreams DB allows each Acquirer to decide the chunk sizes according to the characteristics of the APIs it acquires data from. A typical chunk size is daily as it is supported by most data sources. However, as the state variable is updated by the Acquirers themselves, Acquirers can have state variables with different formats or granularity (e.g., hours, weeks.). This feature is important for small data where one usually needs to work with a large variety of external data sources whose APIs it has no control over.
3) Two-Level GDS Chunk Replacement Policy: Like many cache systems, Lifestreams DB requires a replacement policy to select chunks for replacement when the available space is low. Our replacement policy minimizes the overall expected query latency by selecting the chunks that are larger, less likely to be used again, and faster to reproduce. There are two ways to make space in Lifestreams DB: (1) compress a chunk, or (2) evict a chunk entirely. Compression results in a 7.2x size reduction on average, and a compressed chunk can be restored more efficiently than one reproduced from the source. Considering this difference, as well as the varying chunk sizes and reproduction costs of different kinds of chunks (see Table II), we developed a Two-Level Greedy-Dual-Size (Two-Level GDS) replacement policy that is both cost- and size-aware and chooses appropriately between the two space reduction methods. The basic Greedy-Dual (GD) algorithm assigns each chunk a cost value \(H\). Each time a replacement needs to be made, the chunk with the lowest \(H\) value \(H_{min}\) is replaced first, and all the other chunks reduce their \(H\) values by \(H_{min}\). Only when a chunk is accessed again is its \(H\) value restored to its initial value. Greedy-Dual-Size (GDS) incorporates the different chunk sizes by assigning \(H = cost/size\) for each chunk [37]. On top of that, our Two-Level GDS algorithm additionally considers the different characteristics of compression and eviction. When a chunk is first inserted into the cache, its cost is set to the estimated decompression latency, and its size to the estimated space reduction after compression. When this chunk is selected for replacement, it is compressed and re-inserted into the cache with its cost increased to the estimated latency of reproducing it from the source, and its size decreased to its size after compression. Only when this chunk is selected again is it completely evicted. Similarly, after a chunk is reproduced, it is first stored in its compressed form; when it is accessed again, it is promoted to its decompressed form with a certain probability (0.2 by default). In this way, our algorithm uses compression as the default way to make space, for its efficiency, but still removes compressed chunks to reduce cache clutter if they have not been used for a long time.
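A minimal sketch of the two-level policy follows, using the standard "inflation value" formulation of Greedy-Dual (a running offset \(L\) stands in for decrementing every chunk's \(H\)); the access-time \(H\) restoration and the probabilistic 0.2 promotion are omitted, and the cost/size estimators are stubs standing in for measurements like those in Table II:

```python
# Hedged sketch: Two-Level GDS with compress-then-evict victims.
import heapq

L = 0.0      # inflation value emulating "reduce all H by H_min"
heap = []    # entries: (H, chunk_id)
meta = {}    # chunk_id -> {"cost": ..., "size": ..., "compressed": bool}

def insert(cid, decompress_latency, space_saved_by_compression):
    meta[cid] = {"cost": decompress_latency,
                 "size": space_saved_by_compression, "compressed": False}
    _push(cid)

def _push(cid):
    m = meta[cid]
    heapq.heappush(heap, (L + m["cost"] / m["size"], cid))

def make_space(reproduction_latency, compressed_size):
    """Level 1 compresses the victim; level 2 evicts it entirely."""
    global L
    h, cid = heapq.heappop(heap)
    L = h                                        # advance the inflation clock
    m = meta[cid]
    if not m["compressed"]:
        m.update(compressed=True,
                 cost=reproduction_latency(cid),  # now costlier to bring back
                 size=compressed_size(cid))       # but smaller on disk
        _push(cid)                                # stays, in compressed form
    else:
        del meta[cid]                             # second selection: evict
```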
4) Chunk-Assisted RDF Query Evaluation: The flexibility of RDF is not without its drawbacks: compared to many SQL datastores, an RDF datastore tends to be slower in query evaluation, due mainly to the difficulty of constructing an effective data index [38]. Our chunk-based strategy has several desirable side benefits that mitigate this problem. First, chunk indexes can be utilized as a multi-column index that allows the query engine to take a shortcut by skipping data that do not belong to the requested chunks. Second, chunking enables a more effective result cache, which stores query results and returns the cached result when the same query is issued again. Unlike a record-based system, where any modification can potentially invalidate a cached result [38], a chunk-based system only needs to track modifications of the chunks that generated a cached result to ensure the result’s validity. This technique is particularly effective in our system, as most chunks do not change after they have been generated.
VI. PERFORMANCE EVALUATION
In this section, we evaluate the feasibility and performance of our system using Gmail and Moves data. Using Jena TDB as a baseline, we first evaluate the system performance in different scenarios and with different kinds of data. Then, we evaluate the overall system performance with a real-world query with a workload simulation based on an assumed application usage. The experiment was conducted on an Amazon Web Services (AWS) instance with 8 Intel Xeon E5-2680 processors and 15GB of memory.
A. Dataset
A dataset of 180 days’ worth of Gmail and Moves data is used to evaluate the system performance. The data are from three authors of this paper who are regular users of these services. There are in total 360 chunks in the dataset, each of which contains a single day’s Gmail or Moves data. Table II summarizes the differing characteristics of Gmail and Moves data. For example, while smaller in size, a Gmail chunk requires many more HTTP requests to be issued and thus has a longer (re)production time. A Moves chunk, on the other hand, can be (re)produced in a much shorter time, but is usually much larger in size due to the high-resolution location traces. These differences result in the different performance characteristics shown below, and must be taken into account to achieve efficient resource utilization.
TABLE II. GMAIL AND MOVES DATA-SIZE AND REPRODUCTION-TIME CHARACTERISTICS

| Avg. Values of 180 Chunks | Gmail | Moves |
|---------------------------|-------|-------|
| Chunk Size (KB) | 20.32 | 392.44 |
| Compressed Chunk Size (KB) | 3.08 | 54.12 |
| Required HTTP Requests | 14.24 | 1 |
| Reproduction Time (msec) | 1423.63 | 182.17 |
B. Query Performance
We compare the query performance of our system with our baseline, Jena TDB, based on the following scenarios:
1) The demanded chunks are readily available.
2) The chunks need to be decompressed.
3) The chunks need to be decompressed and decrypted.
4) The chunks need to be reproduced from the data source.
The results suggest up to a 14x performance improvement over Jena TDB for both a simple query and a complex real-world query. The experiment was conducted with all 360 chunks preloaded into the triplestore. Each data point presented below is an average of 30 runs of the experiment. The error bars in the figures are 95% confidence intervals.
1) Simple Query Performance: We first evaluate the performance with a simple query that counts the number of distinct Action subjects. Figure 8a and Figure 8b show the results for Gmail and Moves data respectively, where the x-axis is the number of chunks demanded in the query, and the y-axis is the mean query evaluation time. When the demanded chunks are cached in the system, our system outperforms Jena TDB by up to 14x and 10x for Gmail and Moves respectively. This performance gain is mainly attributed to the chunk-skipping optimization mentioned in the Chunk-Assisted Evaluation section. For Gmail data, decompressing shows up to 36x better performance than reproducing, and decryption adds only negligible overhead (less than 1.3%). The difference is less significant for Moves: Moves data can be reproduced in a relatively short time, but incur a larger overhead when inserted into the triplestore in either scenario.
2) Real-World Query Performance: Next, we use a real-world query to demonstrate the system performance in a more realistic setting. A query from one of our small data applications, Ora, is used. It consists of 211 lines of SPARQL, extracting 20 features from Gmail and Moves data (see the Applications section). Since this more complex query requires a larger number of scans over the search space, the performance gain of our chunk-skipping technique becomes more evident (up to 14x), as shown in Figure 9. In addition, due to the longer overall query time, the overhead of decompression and decryption becomes less significant. Reproducing is still the slowest among the four scenarios, but even it outperforms Jena TDB by up to 1.8x.
C. Performance with Simulated Workload
The varying performance for different types of data and scenarios stresses the need for a chunk replacement policy able to incorporate these discrepancies. We evaluate the effectiveness of the proposed Two-Level GDS algorithm using a simulated workload of Ora. Based on Ora’s UI, we assume a binomial-process usage pattern where each page shows one week’s worth of data and pages can be browsed in reverse chronological order. We assume the user uses the app daily and, after viewing a page, leaves the app with probability \( p \). We set \( p = 0.7 \) and compare our approach with the well-known Least-Recently-Used (LRU) policy, as well as with a Jena TDB baseline that retains all the data.
Figure 10. Query Performance with Simulated Workload: our Two-Level GDS approach shows superior performance over LRU, and outperforms Jena TDB that retains all 50.44MB of data, by up to 4.7x using only about 1/10th the storage.
The results suggest that overall, our system outperforms LRU and Jena TDB by up to 4.7x while using only a fraction of the storage.
We generate 120 days worth of data for the workload based on an assumed usage pattern of Ora. We only consider the performance of the last 60 days when the cache space has become saturated. To allow a fair comparison, we modify the traditional LRU in a way that the chunk chosen for replacement will be first compressed and re-inserted into the LRU list. Only if it is chosen again will it be entirely evicted. We refer to this variant of LRU as Two-Level LRU. In addition, for the baseline, Jena TDB, we assume it retains all the 120-day worth of data in the system, which is 50.44MB in size.
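The assumed session model can be made concrete as follows; chunk ids as day indices and the exact paging arithmetic are assumptions for illustration:

```python
# Hedged sketch: the assumed Ora usage pattern — one session per day, weekly
# pages browsed backwards in time, probability p = 0.7 of leaving per page.
import random

def session_chunks(today: int, p: float = 0.7, page_days: int = 7):
    """Yield the day-chunks touched by one session starting on day `today`."""
    page = 0
    while True:
        newest = today - page * page_days
        yield from range(max(newest - page_days, 0), newest)
        if random.random() < p:   # the user leaves the app
            return
        page += 1

# Workload: one session per day over the evaluated 60-day window.
accesses = [list(session_chunks(day)) for day in range(60, 120)]
```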
Figure 10 shows the performance of the different approaches with cache sizes varying from 5MB to 20MB. Our Two-Level GDS shows superior performance over Two-Level LRU, especially with smaller cache sizes. This advantage comes from the fact that our approach takes both the cost of reproduction/compression and the size of each individual chunk into account. For example, our approach tends to evict a Moves chunk because of its shorter reproduction time and larger size. On top of that, if we use 0.5MB of the cache space to cache query results, we see another 2x performance improvement. Overall, our approach achieves up to a 4.7x performance improvement over Jena TDB using only about 1/10th the storage. Such a performance improvement is important for small data services to be provided effectively and affordably.
VII. CONCLUSION AND FUTURE WORK
In this work, we introduce the notion of small data apps, and the increasing opportunity for these apps to produce deeper and more comprehensive insights across the union of a user’s available data, and across a wide range of ubiquitous computing applications. By virtue of the fact that these apps leverage the user as the common denominator and beneficiary, there is both the potential for deeper, more personal insights and the need for a robust infrastructure for accessing such intimate data. We present an architecture to support these small data apps that decouples the data sources from the processing and visualization layers, and accounts for the unique challenges presented by contending with sensitive streaming spatio-temporal data from multiple providers. We describe our implementation of a critical component of this architecture, Lifestreams DB, and several candidate applications built on top of it.
Lifestreams DB includes several improvements over existing RDF datastores in terms of storage requirements and query latency, which are largely attributable to the constraints of our domain (i.e., streaming spatio-temporal data that can be reproduced, at a cost in latency, from an external source). The application of chunking to the datastore, together with a cache eviction policy that takes both the cost of reproduction/compression and the size of the data into account, is demonstrated to improve query latency both for a few candidate queries and in a simulated experiment modeling a user’s long-term interaction with Ora, a small data application.
While this work proposes a soft-state architecture to ameliorate the impact of a breach, there is still much work to be done in secure data storage and distribution so that breaches are diminished or, preferably, eliminated in the first place. On a related note, there are many improvements that can be made to ensure that the processed data does not compromise the raw data source, and to selectively control who can consume processed data in the case that it is sensitive.
Small data apps address the converse of the big data problem: rather than drawing insights about populations across broad swaths of data for purposes of similar scale (e.g., corporate, governmental, etc.), they draw insights about the individual across their own small data for personal growth and understanding. This work aspires to foster the growth of the small data ecosystem and the role of small data in fueling ubiquitous computing applications.
Scade 6: A Formal Language for Embedded Critical Software Development
(Invited Paper)
Jean-Louis Colaço
ANSYS/Esterel-Technologies, Jean-Louis.Colaco@ansys.com
Bruno Pagano
ANSYS/Esterel-Technologies, Bruno.Pagano@ansys.com
Marc Pouzet
UPMC/ENS/INRIA Paris, Marc.Pouzet@ens.fr
Abstract—SCADE is a high-level language and environment for developing safety critical embedded control software. It has been used for more than twenty years in various application domains such as avionics, nuclear plants, transportation, and automotive. SCADE is founded on the synchronous data-flow language Lustre invented by Caspi and Halbwachs. In the early years, it was mainly seen as a graphical notation for Lustre, but with the unique and key addition of a code generator qualified against the highest standards for safety critical applications.
In 2008, a major revision based on the new language 'Scade 6' was released. This language combines the Lustre data-flow style with control structures borrowed from Esterel and SyncCharts, and with compilation techniques and static analyses from Lucid Synchrone to ensure safety properties. This increase in expressiveness, together with a qualified code generator, has dramatically widened the scope of applications developed with SCADE.
While previous publications have described some of its language constructs and compiler algorithms, no reference publication on 'Scade 6' existed until now. In this paper, we come back to the decisions made for its design, and illustrate the main language features, the static analyses, and the compiler organization in the context of a qualification process.
I. INTRODUCTION
Synchronous languages [1] were introduced about thirty years ago by the concomitant work on three academic languages: SIGNAL [2], ESTEREL [3] and LUSTRE [4]. These domain-specific languages were targeted at real-time control software, allowing designers to write a modular and mathematically precise system specification, to simulate, test and verify it, and to automatically translate it into embedded executable code.
They were founded on the synchronous approach [5], where a system is modeled ideally, with communications and computations assumed to be instantaneous. Important safety properties, such as determinism, deadlock freedom and the ability to generate an implementation that runs in bounded time and space, are checked formally on the model, and one verifies a posteriori that this implementation (software or hardware) is fast enough.
These foundations immediately raised the interest of industries having to deal with safety critical applications implemented in software or hardware, in particular those assessed by independent authorities and following certification standards [6]. This is the context in which SCADE 1 was initiated in the mid-nineties, with the support of two companies, Airbus and Merlin Gerin, through a collaboration between the research laboratory VERIMAG in Grenoble and the software editor VERILOG [7]. Since 2000, SCADE has been developed by ANSYS/ESTEREL-TECHNOLOGIES. 2
In the early years, the underlying language of SCADE was essentially LUSTRE V3 [8], augmented with a few specific features requested by users but minor in terms of expressiveness, to which a graphical editor was added. This situation held up to version 5 of SCADE. To support the development of critical applications without having to verify the consistency between the SCADE model and the generated code, a 'qualified code generator' known as KCG was developed, with the first version released in 1999. KCG has been used (and is still used) in software projects up to the most demanding safety levels complying with the standards DO-178C, IEC 61508, EN 50128, IEC 60880 and ISO 26262, where a high confidence in automation is expected. This code generator demonstrated the interest of a semantically well defined language for the qualification process. It is unique in the field of embedded software and contributed to the industrial success of SCADE.
The objective in designing SCADE 6 was to provide novel language features to widen the scope of applications developed with SCADE, carefully selected to preserve the qualities that made SCADE accepted for safety critical development. One was the mix of models, from purely data-flow ones already well covered, to control-flow ones better covered by languages like ESTEREL and the SyncChart [9], and complex interactions between the two. Another limitation of SCADE was the absence of arrays: LUSTRE V4 provided powerful recursive array definitions very well suited for hardware, but the static expansion they imposed was inadequate for software. Finally, there were also requests for other language extensions (such as modules), more expressive types (in particular around numerics), and compiler optimizations.
To meet these objectives, we were guided by several works:
• ESTEREL and SyncChart for control-dominated system expressed by hierarchical state machines;
• functional arrays and iterators [10];
1 SCADE stands for: Safety Critical Development Environment
2 http://www.ansys.com/products/embedded-software/ansys-scade-suite
Several other works were instrumental. For example, *mode automata* [14] gave a first answer for writing mixed models between a subset of LUSTRE and the hierarchical automata of ARGOS [15]. Yet, several questions remained, in particular their integration into a complete language. The language ESTEREL V7 did integrate data-flow and control-flow, but it was tuned for generating very efficient hardware; how to adapt it for software and integrate it into a qualified compiler was unknown at that time.
The main design decision was to build the language and compiler on the following idea: (1) define a minimal kernel language together with a static and dynamic semantics, used as a kind of 'typed assembly language' from which sequential code is produced; (2) express richer programming constructs in terms of the basic language by a source-to-source translation, and give a static and dynamic semantics for all language constructs that preserves this translation semantics. For the kernel language, we defined a *clocked data-flow language*, close to LUSTRE but with some modifications that we motivate below.
This design decision was put into practice in ReLuc, a prototype language and compiler written in OCAML that was used to experiment with new programming constructs and compilation techniques. This prototype evolved continuously between 2000 and 2006. In 2006, SCADE 6 was launched from it, with a first release in 2008.
In this paper, we present the way this design decision has been followed. We illustrate the main language features, the compile-time static analyses and the compiler architecture. The paper focuses on the language; information about the graphical support and the modeling tool was published in [17].
Section II recalls the LUSTRE kernel behind SCADE up to version 5. Section III presents the new core language on which SCADE 6 is built. Section IV illustrates the static semantics of SCADE 6. Section V presents the mix of data-flow and control-flow. Section VI explains the treatment of arrays. Section VII discusses the code generator design and qualification. Section VIII gives a few concluding remarks.
In the paper, we use LUSTRE for the underlying language of SCADE until version 5 and SCADE 6 for the new versions.
II. FROM LUSTRE CORE TO SCADE 6 CORE
LUSTRE is a synchronous interpretation of the block diagrams used for decades by control engineers. In this interpretation, time is discrete and can be identified by an integer. Hence, a discrete-time signal is a sequence or stream of values and a system is a stream function.
A. The core LUSTRE language
Since sequences are the basic elements of LUSTRE, operations are lifted to apply pointwise. This is what is done in mathematics when writing the pointwise sum of two sequences:
```
(x_n)_{n \in \mathbb{N}} + (y_n)_{n \in \mathbb{N}} = (x_n + y_n)_{n \in \mathbb{N}}
```
Constants and literals are also lifted to streams by infinitely repeating them. The evolution can be represented by the table:
| x | x0 | x1 | x2 | x3 | ... |
|----|----|----|----|----|-----|
| y | y0 | y1 | y2 | y3 | ... |
| x + y | x0 + y0 | x1 + y1 | x2 + y2 | x3 + y3 | ... |
| 2 | 2 | 2 | 2 | 2 | ... |
| 2 * x | 2 * x0 | 2 * x1 | 2 * x2 | 2 * x3 | ... |
An important primitive is the unit delay `pre` (for 'previous'). If `x = (x_n)_{n \in \mathbb{N}}`, `pre x` is the sequence `(p_n)_{n \in \mathbb{N}}` defined by:
```
p_0 = nil
\forall n \in \mathbb{N}, p_{n+1} = x_n
```
where `nil` is an undefined value of the right type.
The first value of a stream can be specified with the initialization operator `->`:
| x | x0 | x1 | x2 | x3 | ... |
|----|----|----|----|----|-----|
| y | y0 | y1 | y2 | y3 | ... |
| x -> y | x0 | y1 | y2 | y3 | ... |
More formally:
```
(x -> y)_0 = x_0
\forall n \geq 1, (x -> y)_n = y_n
```
Its combination with `pre` defines the initialized delay:
| x | x0 | x1 | x2 | x3 | ... |
|----|----|----|----|----|-----|
| y | y0 | y1 | y2 | y3 | ... |
| x -> pre y | x0 | y0 | y1 | y2 | ... |
The following LUSTRE equation illustrates them:
```
nat = 0 -> 1 + pre nat;
```
which means that, for all \( n \in \mathbb{N} \):
```
nat_n = (0 -> (1 + pre nat))_n
      = 0                                  if n = 0
      = (1 + pre nat)_n = 1 + nat_{n-1}    otherwise
```
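To make these stream equations concrete, the following OCaml fragment (our own illustration, not SCADE or KCG code; all names are ours) models streams as infinite sequences and encodes the common `x -> pre y` pattern directly, so the undefined `nil` never appears.

```ocaml
(* A minimal OCaml model of the stream semantics above, for illustration
   only. Streams are infinite sequences built with the standard Seq module. *)

(* A lifted literal: the constant stream c, c, c, ... *)
let rec const (c : 'a) : 'a Seq.t = fun () -> Seq.Cons (c, const c)

(* Pointwise lifting of a binary operation: (f x y)_n = f x_n y_n. *)
let rec lift2 (f : 'a -> 'b -> 'c) (xs : 'a Seq.t) (ys : 'b Seq.t) : 'c Seq.t =
  fun () ->
    match xs (), ys () with
    | Seq.Cons (x, xs'), Seq.Cons (y, ys') -> Seq.Cons (f x y, lift2 f xs' ys')
    | _ -> Seq.Nil

(* The pattern 'x -> pre y': x_0 followed by y_0, y_1, y_2, ...
   Fusing the delay with its initialization avoids modelling nil. *)
let arrow_pre (xs : 'a Seq.t) (ys : 'a Seq.t) : 'a Seq.t = fun () ->
  match xs () with
  | Seq.Cons (x0, _) -> Seq.Cons (x0, ys)
  | Seq.Nil -> Seq.Nil

(* nat = 0 -> 1 + pre nat, i.e. the stream 0, 1, 2, 3, ... *)
let rec nat : int Seq.t = fun () ->
  arrow_pre (const 0) (lift2 ( + ) (const 1) nat) ()
```

With OCaml 4.14 or later, `List.of_seq (Seq.take 4 nat)` evaluates to `[0; 1; 2; 3]`, matching the derivation above.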
The last important notion is that of a clock. The clock of a stream tells when its current value is present (or ready). Clocks are modified by two operators, *when* and *current*. A stream can be filtered according to a boolean condition:
| h | true | false | true | true | false | ... |
|----|------|-------|------|------|-------|-----|
| x | x0 | x1 | x2 | x3 | x4 | ... |
| x when h | x0 | _ | x2 | x3 | _ | ... |
The `_` is not a special value but indicates the absence of a value. Thus, the stream `x when h` is the sub-sequence `x0, x2, x3, ...`. We say that its clock is `h`, that is, it is present when `h` is present and true. By filtering a stream, it is possible to model a slow process. E.g., if `f` is a stream function such that its input and output are on the same clock, then `f (x when h)` has clock `h` and contains the application of `f` to the sub-sequence of `x` filtered by `h`. Note that `h` can be any boolean expression and thus, it can encode a periodic clock.
A stream can be completed by keeping its value between two samples. This corresponds to a zero-order hold.
| h | true | false | false | true | false | ... |
|----|------|-------|-------|------|-------|-----|
| a | a0 | _ | _ | a1 | _ | ... |
| current a | a0 | a0 | a0 | a1 | a1 | ... |
If \( a \) has some clock \( h \), the clock of \( \text{current } a \) is the clock of the clock of \( a \). Hence, \( a \) cannot be on the fastest clock (termed the 'base' clock) of the system. \( \text{current} \) is the way to go from a slow process to a fast one. E.g., \( \text{current}(f(x \text{ when } h)) \) returns a stream whose clock is that of \( x \) and \( h \).
A program can be synchronously executed when the execution can proceed as a global sequence of steps where streams expected to be present are indeed present and those expected absent are indeed absent. In particular, a combinatorial operator that expects its arguments to be simultaneously present or absent, e.g., the operation \(+\), must have its two arguments present or absent together. All computations of the corresponding Kahn Process Network are clocked according to a global time scale, removing the need for buffer synchronisations [18].
A dedicated static analysis, named the clock calculus, statically rejects a program that actually uses a stream at a clock different from what is expected. E.g., writing \( x + (x \text{ when } h) \) is rejected because the sum operator expects its two arguments to be on the same clock.
In LUSTRE, a user defined operator (or stream function) is introduced by the keyword `node`. Below is the example of a smoothing function that computes the average of its input `x` with its previous value `pre x`. Figure 1 shows the corresponding block diagram.
```
node sliding_average (x : real) returns (average : real);
let
  average = x -> (x + pre x) / 2.0;
tel
```
The body is an unordered set of equations, which allows one to freely introduce or remove auxiliary equations. E.g., the following node computes the very same sequence with a local variable `s`:
```
node sliding_average (x : real) returns (average : real);
var s : real;
let
  average = x -> s / 2.0;
  s = x + pre x;
tel
```
An equation of the form \( x = e \), where \( x \) is a variable and \( e \) an expression holds at every instant, that is, \( \forall n \in \mathbb{N}, x_n = e_n. \)
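For intuition about what such a node ultimately compiles to, here is a hand-written OCaml sketch of a sequential step function with one memory cell for `pre x` and one Boolean for the first instant; it only illustrates the shape of bounded-memory sequential code and is in no way actual KCG output.

```ocaml
(* A hand-written sketch of the kind of sequential code a compiler may
   produce for sliding_average; not actual KCG output. *)
type state = { mutable first : bool; mutable pre_x : float }

let init () : state = { first = true; pre_x = 0.0 }

(* One synchronous step: read the current input, return the current
   output, then update the memory for the next step. *)
let step (s : state) (x : float) : float =
  let average = if s.first then x else (x +. s.pre_x) /. 2.0 in
  s.first <- false;
  s.pre_x <- x;
  average
```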
B. The question of determinism
The semantics of LUSTRE formally defines the current value of a stream. The compiler checks that this value exists, is unique, and can be computed sequentially from current inputs and possibly past computed values, in bounded time and space. Parallelism is not the only source of non-determinism: operators may introduce non-determinism too. An example is the operator `pre`, whose initial value `nil` is undetermined. It is thus important that an observed output does not depend on it. `current` also introduces `nil`. E.g.:
| h | false | false | true | true | false | ... |
|----|-------|-------|------|------|-------|-----|
| a | _ | _ | a0 | a1 | _ | ... |
| current a | nil | nil | a0 | a1 | a1 | ... |
where \( h \) is the clock of \( a \). The prefix of `nil` values is arbitrarily long, unless \( h \) is initially true. `current (pre x)` is another example of a stream defined only after the second value of \( x \).
The decision problem — does a given output depend on the actual value of `nil`? — is undecidable in the general case. It can be safely approximated by a SAT problem; yet the time complexity is high and good diagnostics are difficult to give. Moreover, its conclusion — the system is safe — would have to be justified in the context of a qualified compiler. For SCADE 6, we took a more modest approach, designing a dedicated initialization analysis which deals with the particular case of the un-initialized delay and refuses to compile a program where `nil` may appear anywhere but in the first position of a sequence.
III. SCADE 6: A NEW DATA-FLOW CORE
Instead of `current`, we chose an alternative operator `merge`, borrowed from LUCID SYNCHRONE, which merges two complementary streams.
| h | true | false | true | true | false | ... |
|----|------|-------|------|------|-------|-----|
| a | a0 | _ | a1 | a2 | _ | ... |
| b | _ | b0 | _ | _ | b1 | ... |
| merge (h; a; b) | a0 | b0 | a1 | a2 | b1 | ... |
| hold (i, h, a) | a0 | a0 | a1 | a2 | a2 | ... |
A zero-order hold `hold (i, h, a)`, which holds the value of `a` (itself on clock `h`) and starts with `i` as long as `a` has not been present, is programmed: 5
```
node hold (i : int; h : bool; a : int when h)
returns (o : int)
  o = merge (h; a; ((i -> pre o) when not h));
```
Contrary to `current`, `merge` does not introduce a `nil`. Moreover, its implementation does not use any memory, only local variables, and it is easier to compile efficiently. Finally, there are common situations in LUSTRE of an equation of the form `o = if h then current a else current b;` with `a` on clock `h` and `b` on clock `not h`, which is difficult to compile efficiently: it uses two memories (one for each `current`) that are difficult to remove, and three conditionals on `h` (one for every `current` plus the one of the condition) which need to be fused. This equation is simply equivalent to `o = merge (h; a; b);`.
Finally, the \text{merge} is generalized to an n-ary form for merging several complementary sequences [19].
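The chronograms above can be replayed on finite traces. In the OCaml sketch below (our own illustration, with invented names `when_` and `merge`), absence — the `_` of the tables — is modelled by `None`, and `merge` indeed needs no memory.

```ocaml
(* Illustrative rendering of clocked streams on finite traces:
   a value on clock h is Some v when h is true and None otherwise. *)

(* x when h: present exactly at the instants where h is true. *)
let when_ (xs : 'a list) (hs : bool list) : 'a option list =
  List.map2 (fun x h -> if h then Some x else None) xs hs

(* merge h a b: a is present exactly when h is true, b exactly when h is
   false; the result is present at every instant and needs no memory. *)
let merge (hs : bool list) (aa : 'a option list) (bb : 'a option list)
    : 'a list =
  List.map2
    (fun h (a, b) ->
      match h, a, b with
      | true, Some v, None -> v
      | false, None, Some v -> v
      | _ -> invalid_arg "bad clocks")   (* rejected by the clock calculus *)
    hs
    (List.combine aa bb)
```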
We now see the second change made on the data-flow core. LUSTRE does not provide a means to modularly reset a system on a Boolean condition, that is, to re-initialize all the state it contains; SCADE 6 adds a modular reset construct for this purpose. The reset primitive was first introduced in [20].

5 The let/tel braces are optional when the body contains a single equation.
Figure 2 summarizes the differences between the two cores. This data-flow core is described in more detail in [21] and constitutes the basic language of SCADE 6.
IV. STATIC SEMANTICS
The dynamic semantics of SCADE 6 is that of LUSTRE with extensions to take the merge and modular reset into account [22]. The static semantics gathers all the invariants that a program must satisfy before considering its execution. For SCADE 6 we express them as typing problems so that, quoting Robin Milner, “well-typed programs cannot go wrong” [23]. This approach enjoys two properties:
- A type system is modular in the sense that a function type gathers all the information needed to check the correct use of this function.
- It allows for giving good error diagnostics, as long as the type language is simple enough.
Four dedicated type systems exist in the SCADE 6 compiler; they are summarized below. They are applied in sequence: when one fails, the compilation stops. The type systems are presented following the order in which they are applied in KCG.
A. Types
The first (and pretty standard) static verification step is the type checking. Its main features are:
- all types must be declared; a type can be an enumerated set of values, a record, an array parameterized by a size, or an abstract type;
- type equivalence is based on structural equality;
- the language provides a number of built-in type classes, like numeric and integer; e.g., int8, int16, int32, etc. are elements of the class integer;
- types can be polymorphic and possibly constrained by the type classes numeric, float, integer, signed, unsigned;
- functions may be parameterized by a size; such a parameter can be used in an array type.
Figure 3 illustrates several of these features. It defines a few operators working on matrices and vectors whose sizes are given as parameters and whose coefficients are of a numeric type. The function root makes use of the generic matrix product for a particular type and particular sizes.
The type system is formalized in the KCG project documentation. It is a simplified form of the type classes used in Haskell [24]. In particular, type classes are built-in and cannot be defined by the user. Moreover, the language is first-order (it is not possible to write a function which takes or returns a function). A type expression may also contain a size expression (e.g., float64[3,7] defines the type of matrices of size 7 × 3 of doubles). A size must be a compile-time static expression. To avoid having to incorporate a decision procedure — is the size expression \( x + y \) equal to \( y + x \)? — type checking is performed in two steps: the first step does regular type checking but generates a set of equality constraints between size expressions; in a second step, once static expansion has been performed, it checks that these equality constraints are trivial.
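This two-step scheme can be illustrated with a small OCaml sketch (entirely our own, not KCG code): typing collects equality constraints between size expressions, which become trivially checkable once sizes are evaluated to integers.

```ocaml
(* Size expressions and the constraints collected during type checking. *)
type size = Const of int | Var of string | Add of size * size

(* Sizes are compile-time static: after expansion, every size variable
   is bound to an integer and size expressions can simply be evaluated. *)
let rec eval (env : (string * int) list) = function
  | Const n -> n
  | Var v -> List.assoc v env
  | Add (a, b) -> eval env a + eval env b

(* Check that all collected equalities, e.g. "x + y = y + x", hold. *)
let check env (constraints : (size * size) list) : bool =
  List.for_all (fun (a, b) -> eval env a = eval env b) constraints

let () =
  let cs = [ Add (Var "x", Var "y"), Add (Var "y", Var "x") ] in
  assert (check [ "x", 3; "y", 7 ] cs)
```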
Finally, types must incorporate a piece of information that is specific to SCADE 6. Combinatorial functions (whose current outputs only depend on current inputs) are given the kind function, whereas stateful functions (whose outputs may also depend on the past) are given the kind node. Kinds are checked during typing simply: a function declared with the combinatorial kind function may only call combinatorial functions.
The compiler imposes a strong typing discipline, that is, programs which do not type check are rejected. This allows stating the following property.
Property 1 (Well typed program execution): A well-typed program is such that:
- arguments of functions have the expected type;
- array accesses are within array bounds.
B. Clock checking
The purpose of this analysis is to ensure that programs can be executed synchronously. Once done, every expression is clocked with respect to the global time scale, named the base clock. Precisely, the clock of a stream is an expression of the following language:
```
ck ::= ck on e | α
```
where \( e \) is a Boolean expression of the core language and \( \alpha \) a clock variable. For example, an expression with clock \((\alpha\ \text{on}\ e_1)\ \text{on}\ e_2\) is present if and only if \( e_2 \) is present with clock \( \alpha\ \text{on}\ e_1 \) and is true. An expression whose clock is the variable \( \alpha \) is present at every tick of \( \alpha \). Clock checking has existed since the early days of LUSTRE [25]. In [18], it was shown to be a typing problem, precisely a typing problem with dependent types [26]. SCADE 6 adopts this point of view but takes a simpler formulation where equivalence between Boolean expressions is replaced by name equivalence [27]. It adds an extra simplification to this proposal by imposing that the clocks of variables are declared and that a function uses a single clock variable \((\alpha)\), whereas the original proposal, implemented in LUCID SYNCHRONE V3, did not impose these two restrictions.
Given a function definition, the compiler checks clocks and computes a clock signature. The signature for the function \( \text{hold} \) (Section III) is: \( \forall \alpha.\ \alpha \times (h : \alpha) \times (\alpha\ \text{on}\ h) \rightarrow \alpha \). It states
that, for any clock $\alpha$, the first input of $\text{hold}$ must have clock $\alpha$, the second, named $h$, clock $\alpha$, the third, clock $\alpha$ on $h$. Then, the output has clock $\alpha$.
Property 2 (Synchronous execution): A well clocked SCADE 6 model can execute synchronously.
A corollary is that a SCADE 6 model can be implemented in bounded memory, provided that imported functions do.
C. Causality analysis
The purpose of this analysis is to ensure that a set of processes running synchronously produces one and at most one output at every reaction. LUSTRE follows a simple approach, reducing it to the analysis of instantaneous loops in the data-dependence relation between variables. A more expressive constructive causality was proposed for ESTEREL [28].
Following a preliminary work [29], the causality analysis for SCADE 6 has been specified as a type system. The intuition is to associate a time stamp to every variable and to check that the relation between those time stamps is a partial order (thus, with no cycle). We illustrate it on the following two integration functions:
```plaintext
node fwd_Euler <<K, T>> (IC : 't ; u : 't)
returns (y : 't) where 't numeric
y = IC -> pre (y + K * T * u);
node bwd_Euler <<K, T>> (IC : 't ; u : 't)
returns (y : 't) where 't numeric
y = IC -> pre y + K * T * u;
```
The causality type of $\text{fwd}_\text{Euler}$ is $\forall \gamma_1, \gamma_2.\ \gamma_1 \times \gamma_2 \rightarrow \gamma_1$, which indicates that the output only depends instantaneously on its first input. From this signature, one can see that this operator is able to break a dependency cycle on its second input. $\text{bwd}_\text{Euler}$ has type $\forall \gamma.\ \gamma \times \gamma \rightarrow \gamma$, which expresses the dependency of the output on both inputs. This is enough information to deduce that this integration function cannot be used to break a cycle.
Property 3 (Schedulability): A causal SCADE 6 model can be compiled into statically scheduled sequential code.
D. Initialization analysis
The purpose of this analysis is to ensure that the behaviour of a system does not depend on the unspecified value $\text{nil}$. A simple type-based analysis with sub-typing is described in [30]. For every expression, it computes its type with the following intuition:
- the type 1 is that of a stream which may have the uninitialized value $\text{nil}$ at the very first instant;
- the type 0 is that of a stream which is always initialized.
It induces the natural sub-typing relation $0 \leq 1$, meaning that an expression which is always initialized can be used wherever an expression of type 1 is expected. E.g., the uninitialized rising edge operator:
```plaintext
node rising_edge (a : bool) returns (o : bool)
o = a and not pre a;
```
gets the initialization type signature: $0 \rightarrow 1$ and the following function:
```plaintext
node min_max(x, y : int32) returns (mi, ma : int32)
mi, ma = if x < y then (x, y) else (y, x);
```
gets the signature: $\forall \delta. \delta \times \delta \rightarrow \delta \times \delta$.
The initialization analysis does not force all functions to return well initialized streams. Hence, the following function (with signature $0 \times 0 \rightarrow 1$) which is accepted as a node declaration is not accepted if this node is the main node.
```plaintext
node root_bad (a, b : bool) returns (o : bool)
o = rising_edge (a) or rising_edge (b);
```
whereas the following (with signature $0 \times 0 \rightarrow 0$) is accepted.
```plaintext
node root_good (a, b : bool) returns (o : bool)
o = false -> (rising_edge (a) or rising_edge (b));
```
The main node defines what is finally executed on the target platform. Its outputs, in particular, must always be of type 0.
Property 4 (Determinism): A well initialized SCADE 6 model is deterministic in the sense that it never produces an output that depends on an undefined value ($\text{nil}$).
This analysis is defined for a synchronous data-flow language in [30]. It is applied to the full SCADE 6 language.
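A toy version of this analysis fits in a few lines. The OCaml sketch below (our own simplification, which ignores sub-typing, clocks and function signatures) infers the 0/1 type of a small expression language and recovers the type 1 of the body of rising_edge.

```ocaml
(* The 0/1 initialization lattice: I0 = always initialized,
   I1 = possibly nil at the first instant. *)
type init = I0 | I1
let max_init a b = if a = I1 || b = I1 then I1 else I0

type expr =
  | Lit                      (* literal stream: always initialized *)
  | Var of string
  | Op of expr * expr        (* any pointwise operator *)
  | Pre of expr              (* un-initialized delay: always type 1 *)
  | Arrow of expr * expr     (* e1 -> e2: initialized whenever e1 is *)

let rec infer (env : (string * init) list) = function
  | Lit -> I0
  | Var v -> List.assoc v env
  | Op (a, b) -> max_init (infer env a) (infer env b)
  | Pre _ -> I1
  | Arrow (a, _) -> infer env a   (* only instant 0 matters for nil *)

(* 'a and not pre a' (the body of rising_edge) has type 1, as stated. *)
let () = assert (infer [ "a", I0 ] (Op (Var "a", Pre (Var "a"))) = I1)
```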
V. CONTROL-STRUCTURES
In LUSTRE, clocks are the only way for controlling the execution of a computation: an expression is computed only when its clock is true. Unfortunately, their use in LUSTRE is not easy, partly because of a lack of expressiveness of the clock language and automation (particularly clock polymorphism and inference).
Clocks exist in SCADE 6, but the language proposes an alternative by means of dedicated control-structures. These are essentially syntactic sugar in the sense that they are translated into well-clocked equations of the data-flow core. This approach appeared extremely useful to ensure that all language extensions were consistent with each other. We were also convinced that the data-flow core was expressive enough to support this translation. The PhD work of Hamon [13] was pioneering in this direction.
In [14], Maraninchi and Rémond introduce the language of *mode-automata*, which mixes a subset of LUSTRE with ARGOS-like hierarchical automata. A compilation into guarded equations was proposed, but with a source language less expressive than SCADE 6, and it was not done via a source-to-source transformation.
The following sections present and illustrate the new constructs. Their formalization and compilation are given in [19].
A. Activation blocks
The activation block is the simplest way of expressing that some equations are only active according to a Boolean condition. The example below is a function that computes the (possibly complex) solutions of a second degree polynomial. This is a typical example where case analysis is needed: depending on the sign of the discriminant, one of three solution forms is selected.
```plaintext
function imported sqrt (x : float64) returns (y : float64);

function second_degree (a, b, c : float64)
returns (xr, xi, yr, yi : float64)
var delta : float64;
...
let
  delta = b * b - 4 * a * c;
  activate
    if delta > 0 then
      var d : float64;
      let
        d = sqrt (delta);
        xr, xi = ((-b + d) / (2 * a), 0);
        yr, yi = ((-b - d) / (2 * a), 0);
      tel
    else if delta = 0 then
      let
        xr, xi = (-b / (2 * a), 0);
        yr, yi = (xr, xi);
      tel
    else -- delta < 0
      let
        xr, xi = (-b / (2 * a), sqrt (-delta) / (2 * a));
        yr, yi = (xr, -xi);
      tel
  returns xr, yr, xi, yi;
tel
```
The square root function is declared as imported (it is not a built-in primitive of SCADE 6). A local variable d is introduced to name its result. d only exists when delta > 0, as if d were 'clocked' by writing an equation d = sqrt (delta when (delta > 0)). Indeed, the translation of the function second_degree precisely does that: it introduces such a clocked equation for every defined variable.
B. Scope and shared variables
The previous example illustrates the situation where a shared variable is defined by different equations, a single one of which is active at a time. Only the active equations are executed. In particular, an expression pre(e) activated when a condition c is true denotes the previous 'observed' value of e, that is, the value that e had the last time c was true. This is illustrated by the function move1, with an execution trace given below:
```plaintext
node move1 (c : bool) returns (o : int32)
  activate
    if c then o = (0 -> pre o) + 1;
    else o = (0 -> pre o) - 1;
  returns o;

node move2 (c : bool) returns (o : int32 last = 0)
  activate
    if c then o = last 'o + 1;
    else o = last 'o - 1;
  returns o;
```
| c | true | true | false | false | true | false | ... |
|----|------|------|-------|-------|------|-------|-----|
| move1(c) | 1 | 2 | -1 | -2 | 3 | -3 | ... |
| move2(c) | 1 | 2 | 1 | 0 | 1 | 0 | ... |
But how can two exclusive branches communicate, e.g., to define a signal that is both incremented and decremented? One solution is to add the equation last_o = 0 -> pre o in parallel and use last_o in the two branches. SCADE 6 provides a simpler and more intuitive way of communicating the value of a shared variable. It is illustrated in the function move2, with the corresponding chronogram. The variable o is initialized with 0. The construct last 'o applies to a name, not an expression, and denotes the previous 'computed' value of o. This construct is not primitive in SCADE 6, in the sense that it is translated into the basic data-flow core. It is a convenient construct to express, in a data-flow manner, equations of the form x = last 'x + 1 which, by the way, have an imperative flavor.
In the proposal for *mode automata* [14], the operator pre applied to a shared variable x behaves like last 'x, which does not correspond to the pre of LUSTRE.
C. Hierarchical Automata
State machines are a convenient way to specify sequential behaviour with the two classical forms:
- **Moore machines**, when the current output is a function of the current state only;
- **Mealy machines**, when the current output is a function of both the current state and current input.
In [31], Harel introduced Statecharts, an extension of state machines to express complex systems in a modular and hierarchical way. ARGOS [15], SyncChart [9] and ESTEREL integrate this expressiveness within a synchronous framework with static conditions to ensure the existence and uniqueness of a reaction in every state. SyncChart [9] was the graphical notation used in the industrial tool-set based on ESTEREL.
SCADE 6 incorporates hierarchy à la SyncChart, where states may themselves contain other state machines and/or data-flow equations. A difference with the approach of ESTEREL/SyncChart is the existence of a textual support for automata. In general, a graphical representation of state machines is preferred, but proposing a textual support maintains the language and the graphical notation in a simple one-to-one correspondence, and all the transformation work is concentrated at the compiler level. The main features of SCADE 6 hierarchical state machines are borrowed from the SyncChart:
- An automaton must have one initial state;
- some states can be marked to be final;
- a transition can be weak or strong;
- a transition may either reset or resume its target state;
- a synchronization mechanism allows for firing a transition when all the automata inside the state are in a final state.
1) Intuitive Semantics:
The semantics has been formalized in [32] and through a translation into the data-flow core [19]. SCADE 6 imposes an extra constraint: **at most one transition is fired per cycle**.
A cycle consists in deciding, from the currently selected state, which state is active; executing the corresponding set of equations; and then determining the selected state for the next cycle (an executable sketch of this cycle is given after the list). Precisely:
- At the first cycle, the selected state is the state marked initial.
- Evaluate the guards of the strong transitions of the selected state. The active state is the target of the first (taken sequentially) firable strong transition, if any; otherwise it is the selected state.
- Execute the equations of the active state.
- Evaluate the guards of the weak transitions of the active state. The next selected state is the target of the first (taken sequentially) firable weak transition, if any; otherwise it is the current active state.
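The following OCaml sketch renders this cycle executable for a flat automaton; hierarchy, restart/resume and final states are omitted, and all names are ours. The SCADE 6 constraint that at most one transition fires per cycle is built in: when a strong transition fires, the weak guards are not evaluated.

```ocaml
(* A flat automaton: for each state, the first firable strong (resp.
   weak) transition, if any, and the equations of that state's body. *)
type ('st, 'i) automaton = {
  strong : 'st -> 'i -> 'st option;
  weak   : 'st -> 'i -> 'st option;
  body   : 'st -> 'i -> unit;
}

(* One reaction: strong transitions select the active state before the
   body runs; weak transitions select the state for the next cycle. *)
let step (a : ('st, 'i) automaton) (selected : 'st) (input : 'i) : 'st =
  match a.strong selected input with
  | Some target ->
      a.body target input;     (* the strong transition's target is active *)
      target
  | None ->
      a.body selected input;
      (match a.weak selected input with
       | Some target -> target (* selected state for the next cycle *)
       | None -> selected)
```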
2) Two simple examples:
The example below shows a node that returns an integer output o whose last value is initialized to 0. It is defined by a two-state automaton; Up is the initial state.
In this mode, \( o \) is incremented by 2 until \( o \geq 12 \). Then, the next state is \( \text{Down} \). In this state, \( o \) is decremented until it reaches value 0 and the next state is \( \text{Up} \), etc.
```plaintext
node up_down () returns (o : int32 last = 0)
automaton
  initial state Up
    o = last 'o + 2;
  until if o >= 12 resume Down;
  state Down
    o = last 'o - 1;
  until if o = 0 resume Up;
returns o;
```
Because the transitions are weak, the guards can involve the current value of \( o \). Hence, replacing the weak transition \( (\text{until}) \) by a strong transition \( (\text{unless}) \) would lead to a causality error.
The second example is a node with two inputs, \( \text{tic} \) and \( \text{toc} \).
```plaintext
node tic_toc_tic (tic, toc : bool)
returns (o : int32 last = 0)
automaton
  initial state WaitTic
    unless if tic restart CountTocs;
  state CountTocs
    unless if tic resume WaitTic;
    o = 0 -> if toc then (last 'o + 1) else last 'o;
returns o;
```
The initial state \( \text{WaitTic} \) waits for an occurrence of \( \text{tic} \) and then immediately goes to the state \( \text{CountTocs} \). This state is entered by \( \text{restart} \), which reinitializes all of its state variables (in particular the initialization \( \rightarrow \)) and thus \( o \). Because \( \text{WaitTic} \) does not provide a definition for \( o \), its last value must be declared. The value of \( o \) stays unchanged in the initial state.
3) A complete example: The last example is a simple version of the digital watch written in \( \text{ESTEREL} \) [33] limited to watch and stopwatch mode. It has four input buttons:
- \( \text{stst} \) : start/stop button
- \( \text{rst} \) : reset button
- \( \text{set} \) : set time button
- \( \text{md} \) : mode selection button
and it displays the following information:
- \( \text{HH}, \text{MM}, \text{SS} \) : time information
- \( \text{L} \) : lap time indicator
- \( \text{S} \) : setting time mode active indicator
- \( \text{Sh} \) : setting hour mode (minutes otherwise)
Basically, three automata run in parallel. Two are simple counters, one for the time (automaton \( \text{Watch} \)) and the other for the stop watch (automaton \( \text{Stopwatch} \)). There is also a process that manages the display and the lap time (automaton \( \text{Display} \)). The watch has two modes: one where it counts time, the other where the current time is set. This program is supposed to be executed periodically with a base clock of 10 ms. When a variable is declared, it can be given a last value and/or a default value (e.g., \( \text{var isStart} : \text{bool default = false} \)). The default value is the definition of the variable in the sub-scopes that omit its definition. If no default value is specified, the implicit definition for a variable \( x \) is \( x = \text{last } 'x; \). The declared last value (e.g. \( d : \text{int8 last} = 0; \)) defines the initial value of its last (here \( \text{last } 'd \), for instance). These two features allow for writing shorter programs. The implicit equation \( x = \text{last } 'x; \) contributes to the imperative flavor of these constructs.
```
function prod_sum (acc_in, ui, vi : 'T)
returns (acc_out : 'T) where 'T numeric
  acc_out = acc_in + ui * vi;

-- scalar product of two vectors: u . v
function ScalProd <<n>> (u, v : 'T^n)
returns (s : 'T) where 'T numeric
  s = (fold prod_sum <<n>>)(0, u, v);

-- product of a matrix by a vector: A(m,n) * u(n)
function MatVectProd <<m, n>> (A : 'T^m^n; u : 'T^n)
returns (w : 'T^m) where 'T numeric
  w = (map (ScalProd <<n>>) <<m>>)(transpose (A), u^m);

-- matrix product: A(m,n) * B(n,p)
function MatProd <<m, n, p>> (A : 'T^m^n; B : 'T^n^p)
returns (C : 'T^m^p) where 'T numeric
  C = (map (MatVectProd <<m, n>>) <<p>>)(A^p, B);

function root (A : float64^3^7; B : float64^7^5)
returns (C : float64^3^5)
  C = MatProd <<3, 7, 5>>(A, B);
```
Fig. 3. Example: matrix operations
VI. ARRAYS

As a first example, consider the function exists which, given a static parameter n and a Boolean array b of length n, returns true if and only if at least one of its elements is true.
```
function exists <<n>> (b : bool^n) returns (o : bool)
  o = (fold $or$ <<n>>)(false, b);
```
$or$ is the operation or used in prefix notation. The function exists is combinatorial; hence, it can be declared with the keyword function. The semantics is that of the full unfolding; yet, the compiler generates a for loop.
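The correspondence between the unfolded semantics and the generated loop is easy to picture; the OCaml sketch below (ours, not generated code) shows the loop form of exists.

```ocaml
(* exists: semantically the full unfolding
   ((false or b.(0)) or b.(1)) or ... , written as the for loop a
   compiler can generate for it. *)
let exists (b : bool array) : bool =
  let acc = ref false in
  for i = 0 to Array.length b - 1 do
    acc := !acc || b.(i)
  done;
  !acc

(* For instance, exists [| false; true; false |] = true. *)
```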
A. A combinatorial example
Figure 3 gives a more complete example, with the scalar product of two vectors, the matrix-vector product and the matrix product. All these functions are polymorphic and apply to any numeric type, with vectors and matrices whose sizes are specified as static inputs. Function root shows an instance of the matrix product for specific sizes and with type float64.
The function MatVectProd uses a special primitive transpose that permutes two dimensions of an array of arrays.
B. A stateful example
The following example is inspired by an interface present in a fighter plane where, because of acceleration, the pilot may not be precise in selecting the right push button. To overcome this risk, command selection is done in two steps: a pre-selection that works as radio buttons (selecting one button un-selects the others), and a confirmation done with a single button (no choice, thus no possible selection error). The logic to manage this interface is quite regular and independent of the number of buttons. We give an implementation in SCADE 6 where a state machine specifies the behaviour of one button, and a parameterized number n of them are composed in parallel.
Buttons have a background and a foreground color depending on their state. When a button is pre-selected, its background is yellow. When locked, the background of the pre-selected button becomes green. The node Button defines all these behaviours; its inputs are the position of the considered button, the lock command, the unlock command and a Boolean indicating whether another button is pushed, to implement the radio button behaviour.
Fig. 4. SCADE 6 Compiler Organization.
The interest of this example lies in the iteration of an operator that encapsulates a state (the state of the corresponding button, through the state machine). The semantics is that of the unfolded version; yet it is compiled as a for loop.
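The idea can be sketched in OCaml as follows (all names are invented and the Button logic is radically simplified): the iterated node becomes an array of per-instance states plus a step function mapped over it, exactly as if the state machine had been unfolded n times.

```ocaml
(* Each button instance owns one copy of the node's state. *)
type button = { mutable preselected : bool }

(* One reaction of a single button (radio behaviour, much simplified). *)
let button_step (b : button) ~(pushed : bool) ~(locked : bool) : bool =
  if not locked then b.preselected <- pushed;
  b.preselected

(* One reaction of the whole panel: the iterated, stateful 'map'. *)
let panel_step (buttons : button array) ~(pushed : int option)
    ~(locked : bool) : bool array =
  Array.mapi
    (fun i b -> button_step b ~pushed:(pushed = Some i) ~locked)
    buttons
```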
VII. CODE GENERATION
A. Compiler Organization
The organization of the compiler (KCG) is rather classical. Static analyses are applied in sequence right after parsing. If they all succeed, code generation starts with a sequence of source-to-source transformations that rewrite all the constructs into the data-flow core, extended with array iterators. Then, the data-flow core is translated into an intermediate sequential language. At last, target imperative code (mainly C and Ada) is emitted. Figure 4 summarizes these steps at a coarse grain; the corresponding bibliographic references are given on the arrows.
Among these transformations, many optimisations are done on the data-flow form (dead-code elimination, constant propagation, common sub-expression elimination, iterator composition, etc.). The scheduling in the data-flow compilation implements heuristics to limit memory size. Control structures are merged in the sequential representation.
B. Qualified Development
Qualification is based on traceability between a specification and the implementation. The specification details the principles presented in this paper. The source and intermediate languages have been formally specified together with the static semantics (defined by inference rules) and source-to-source transformations (defined by rewrite rules). Those specifications are used by the development team to implement the compiler and by an independent verification team to test it.
For the implementation, we chose OCAML [34], which was quite a challenge for a qualified tool in 2005. Indeed, certification standards often push companies to use well established technologies. We thus had to argue that OCAML was well adapted to writing a compiler. The argumentation was built on the small distance between the formal specifications and their implementation in OCAML. This industrial use of OCAML in a certified context is detailed in [35] and [36].
The current version of SCADE KCG is approximately fifty thousand lines of code (50 KLOC), and it uses a simplified OCAML runtime to satisfy the objectives of the standards. The formalized static semantics for the whole input language is about one hundred pages long and has been updated for more than ten years to integrate new language features. The detailed design is more than one thousand pages long.
C. Towards a Computer Aided Formal Qualification
The formalization made for SCADE 6 was an important step to get a qualified code generator. Yet, this formalization was done by hand and some important parts were not considered. The draft [22] was a first proof of correctness for the data-flow core down to sequential code. Extending it for the full language and with high confidence in the proof correctness without the help of a computer appeared out of reach.
Proof assistants like Coq [37] allow for writing both programs, properties and computer checked proofs. The CompCert C compiler [38], [39] is the first compiler that is developed this way. Its industrial application and qualification is now considered seriously but making a formal process match industrial certification standards is a new challenge that does not reduce to a scientific question.
The next step for SCADE 6 and KCG is now to go further by using computer aided tools to get a proof of correctness of the compiler. Then, connecting this new object with the CompCert C compiler would lead to a mathematically proven translation from a high-level synchronous language down to assembly code. A first step has been achieved recently for the data-flow core without the reset [40]; the prototype compiler is called Velus. When compilation succeeds, the generated assembly is proved to be semantically equivalent to the data-flow program.
VIII. CONCLUSION
This paper has shown principal language features of SCADE 6 together with the main design choices for its compilation. It relates a long and fruitful collaboration between industry and academia and is a concrete example of transfer of state-of-the-art research work on computer language design and implementation.
While the core of the language remains about the same size as that of Lustre, the rich new features it proposes are a big improvement for SCADE designers. The proposed mix of fine-grain data-flow and hierarchical automata is quite unique. The language is now as convenient for developing the logic of a cockpit display as it is for developing a discrete control law.
SCADE and KCG have been used in about one hundred DO-178B/C level A avionic systems all over the world, a quite significant result in this market.
And the story continues. The language offers a good coverage of discrete-time system programming; adding continuous-time modeling capabilities could be an axis of development for the near future. Following the same collaboration framework, the work on Zélus [41], [42] is opening the way.
ACKNOWLEDGMENT
This work owes a lot to Paul Caspi, and to Gérard Berry for our lively discussions. We also want to thank all the colleagues of the Core team for their hard work on SCADE 6 KCG, and our CTO, Bernard Dion, for his confidence and support.
REFERENCES
[6] G. Berry, "Formally unifying modeling and design for embedded systems."
[18] P. Caspi and M. Pouzet, "Synchronous Kahn Networks," in ACM SIGPLAN International Conference on Functional Programming (ICFP), 1996.
[20] G. Hamon and M. Pouzet, "Modular Resetting of Synchronous Data-flow Programs," in Principles and Practice of Declarative Programming (PPDP), 2000.
[24] P. Wadler and S. Blott, "How to make ad-hoc polymorphism less ad hoc," in ACM Symposium on Principles of Programming Languages (POPL), 1989.
[26] S. Boulmé and G. Hamon, "Certifying Synchrony for Free," in International Conference on Logic for Programming, Artificial Intelligence and Reasoning (LPAR), 2001.
Towards Merging PlatΩ and PGIP
David Aspinall
LFCS, School of Informatics, University of Edinburgh
Edinburgh, U.K. (homepages.inf.ed.ac.uk/da)
Serge Autexier
DFKI GmbH, 28359 Bremen, Germany (www.dfki.de/~serge)
Christoph Lüth
DFKI GmbH & FB Informatik, Universität Bremen
28359 Bremen, Germany (www.informatik.uni-bremen.de/~cxl)
Marc Wagner
DFKI GmbH, 28359 Bremen, Germany & FR Informatik, Universität des Saarlandes, 66123 Saarbrücken, Germany (www.ags.uni-sb.de/~marc)
Abstract
The PGIP protocol is a standard, abstract interface protocol to connect theorem provers with user interfaces. Interaction in PGIP is based on ASCII-text input and a single focus point-of-control, which indicates a linear position in the input that has been checked thus far. This fits many interactive theorem provers whose interaction model stems from command-line interpreters. PlatΩ, on the other hand, is a system with a new protocol tailored to transparently integrate theorem provers into text editors like TEXMACS that support semi-structured XML input files and multiple foci of attention. In this paper we extend the PGIP protocol and middleware broker to support the functionalities provided by PlatΩ and beyond. More specifically, we extend PGIP (i) to support multiple foci in provers; (ii) to display semi-structured documents; (iii) to combine prover updates with user edits; (iv) to support context-sensitive service menus, and (v) to allow multiple displays. As well as supporting TEXMACS, the extended PGIP protocol in principle can support other editors such as OpenOffice, Word 2007 and graph viewers; we hope it will also provide guidance for extending provers to handle multiple foci.
Keywords: PlatΩ, Proof General, Mediator, Protocol, PGIP
1 Introduction
Proof General [2,3] is widely used by theorem proving experts for several interactive proof systems. In some cases, there is no alternative interface; in others, the alternatives are little different. Yet the limitations of Proof General are readily apparent and reveal its evolution from simple command line systems. For one thing, the input format is lines of ASCII-text, with the minor refinement of supporting Unicode or TeX-like markup. The presentation format during interaction is the same. For another thing, the proof-checking process has an overly simple linear progression with a single point-of-focus; this means that the user must explicitly undo and redo to manage changes in different positions in the document, which is quite tedious.
Meanwhile, theorem provers have increased in power, and the ability of workstations to handle multi-threaded applications with ease suggests that it is high time to liberate the user interface from a single-threaded viewpoint synchronised in lock-step with an underlying proof-checking process. Some provers now provide multiple foci of attention, or several prover instances might be run in concert. Text editors, too, have evolved beyond linear ASCII-based layout. The scientific WYSIWYG text editor TEXMACS, for example, allows editing with TeX- and LaTeX-quality layout, linked to an underlying interactive mathematical system.
Significant experiments with theorem proving using richer interfaces such as TEXMACS have already been undertaken. In particular, the PlatΩ system [9,4] mediates between TEXMACS and the theorem prover ΩMEGA. While experiments with individual systems bring advances to those specific systems, we believe that many parts of the required technology are generic, and we can benefit from building standard protocols and tools to support provers and interfaces. The aim of this paper, then, is to integrate lessons learned from the PlatΩ system prototype with the mainstream tool Proof General and its underlying protocol PGIP, putting forward ideas for a new standard for theorem prover interfaces, dubbed here PGIP 2. Specifically, our contributions are to combine ideas of state-tracking from PGIP with semi-structured document models and menus as in PlatΩ, and to add support for possibly distributed multiple views.
1.1 PG Kit system architecture
The Proof General Kit (PG Kit) is a software framework for conducting interactive proof. The framework connects together different kinds of components, exchanging messages using a common protocol called PGIP. The main components are interactive provers, displays, and a broker middleware component which manages proof-in-progress and mediates between the components. Fig. 1 shows the system architecture; for details of the framework, we refer to [3].
The PG Kit architecture makes some assumptions and design decisions about the components. Generalising from existing interactive provers (such as Isabelle, Coq, or Lego), it assumes that provers implement a single-threaded state machine model, with states toplevel, file open, theory open and proof open. Displays, on the other hand, are assumed to be nearly stateless. Through the display, the user edits the proof text and triggers prover actions, e.g., by requesting that a part of the proof script be processed. Abstractly, the broker mediates between the nearly stateless display protocol PGIPD and the stateful prover protocol PGIPP; it keeps track of the prover states, and translates display state change requests into sequences of concrete prover commands, which change the state of the prover as required.
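A minimal OCaml sketch of this assumed prover model (with invented command names, not the actual PGIP message set) shows the kind of bookkeeping the broker performs: requests that have no transition from the current state are simply rejected.

```ocaml
(* The four prover states assumed by the PG Kit architecture. *)
type prover_state = Toplevel | FileOpen | TheoryOpen | ProofOpen

(* Hypothetical abstract commands the broker sequences. *)
type command =
  | OpenFile | OpenTheory | OpenProof
  | CloseProof | CloseTheory | CloseFile

(* Single-threaded state machine: None means the request is illegal
   in the current state and must be refused (or preceded by other
   commands computed by the broker). *)
let transition (s : prover_state) (c : command) : prover_state option =
  match s, c with
  | Toplevel,   OpenFile    -> Some FileOpen
  | FileOpen,   OpenTheory  -> Some TheoryOpen
  | TheoryOpen, OpenProof   -> Some ProofOpen
  | ProofOpen,  CloseProof  -> Some TheoryOpen
  | TheoryOpen, CloseTheory -> Some FileOpen
  | FileOpen,   CloseFile   -> Some Toplevel
  | _ -> None
```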
1.2 PlatΩ system architecture
The aim of the PlatΩ system is to support the transparent integration of theorem provers into standard scientific text editors. The intention is that the author can write and freely edit a document with high-quality typesetting without fully imposing a restricted, formal language; proof support is provided in the same environment and in the same format. The PlatΩ system is the middleware that mediates between the text editor and the prover and currently connects the text editor TEXMACS and the theorem prover Ωmega. For the architecture of the system, see Fig. 2.
1.3 Outline
The rest of the paper is structured as follows. In Section 2 we give a scenario for conducting a simple proof, and describe the interaction processes in PlatΩ and in Proof General. Section 3 begins the discussion of our proposal to merge the two architectures, explaining how to extend PGIP to support structured documents. Section 4 describes how to extend PGIP with a menu facility like that provided in PlatΩ, and Section 5 describes how to handle multiple displays, extending what is presently possible in PlatΩ. To complete our proposal, Section 6 explains how we can reconcile semi-structured documents with PGIP flat-structured documents, to connect theorem provers based on classical flat-structured procedural proofs with our enhanced middleware for a richer document format. Section 7 discusses related work and future plans.
2 Interaction in PlatΩ and Proof General
We illustrate the overall functionality and workflow of PlatΩ and PG Kit with the following example, in which student Eva wants to prove the commutativity of addition in the standard Peano axiomatisation. Eva is typing this proof in a text editor, TeXmacs or Emacs, and receives assistance from a theorem prover, ΩMEGA or Isabelle, for PlatΩ and PG Kit respectively (cf. Fig. 3). Eva’s authoring process splits into the following five phases:
Phase 1. After having specified the theory and the conjecture
\[ \forall x, y. x + y = y + x \]
in the text editor, the document is passed to the theorem prover.
Phase 2. Eva begins to prove the conjecture. She does an induction on \( x \) and gets stuck with the subgoals: (1a) \( 0 + y = y + 0 \) and (1b) \( z + y = y + z \) \( \Rightarrow \) \( (s(z) + y = y + s(z)) \).
Phase 3. She quickly realises that two lemmas are needed. Hence, she adds the following two lemmas somewhere in the document:
\[
\forall x. 0 + x = x + 0 \\
\forall x, y. (x + y = y + x) \Rightarrow (s(x) + y = y + s(x))
\]
Phase 4. Eva then tackles these lemmas one by one: for each, doing an induction on \(x\) and simplifying the cases proves the lemmas.
Phase 5. Eva then continues the proof of (1) by applying both lemmas to (1a) and (1b) respectively, which completes the proof.
2.1 PlatΩ
PlatΩ uses a custom XML document format called PL to connect to the text editor. The PL document contains markup for theories, theory items and linear, text-style proofs, as well as notation definitions for defined concepts. Formulas in axioms, lemmas and proofs are written in the standard, non-annotated LaTeX-like syntax of TEXmacs. To connect to the theorem prover, PlatΩ uses OMDoc for the hierarchical, axiomatic theories and another custom XML format (TL) for the proofs.¹ PlatΩ holds the representations simultaneously, with a mapping that relates parts of the PL document to parts of the OMDoc(TL) document; a major task of the system is to propagate changes between the documents and maintain the mapping.
The text editor interface protocol (PLATO_D, see Fig. 2) uses XML-RPC, with methods for complete document upload, service requests for specific parts of the PL document, and the execution of specific prover commands. On receiving a new document version, PlatΩ parses the live formulas using the document notations, producing OpenMath formulas. If a parse error occurs, an error description is returned to the editor. Otherwise PlatΩ performs an XML-based difference analysis² against the old PL document, resulting in a list of XUpdate modifications, which are transformed into XUpdate modifications for the OMDoc(TL) document.
The interface to the theorem prover (PLATO_P) also uses XML-RPC, with methods for applying XUpdate modifications, service requests for parts of the OMDoc(TL) document, and executing specific prover commands. Applying an XUpdate modification may result in an error (e.g. a type error) or is simply acknowledged; either response is then relayed by PlatΩ to the display as an answer to the corresponding document upload method call. The result of a service request is a menu description in a custom XML format. That menu is relayed to the display as a reply to the corresponding service request, rendering OpenMath formulas in the menu into TEXmacs syntax using the notation information already used for parsing.

---

¹ The next version of PlatΩ will use the OMDoc format for proofs, though still with ΩMEGA-specific justifications for proof steps.

² See xmldb-org.sourceforge.net/xupdate/
The result of executing a menu action is a list of XUpdates, which can either patch the menu (for lazy computation of sub-menus), or patch the document (for instance, inserting a subproof). PlatΩ transforms these OMDoc(TL) patches into PL patches and renders occurring OpenMath formulas into TEXmacs markup before sending the patch to the text editor.
*Semantic Citation.* A characteristic of PlatΩ is that everything that can be used comes from a document. Hence, there is a specific mechanism to “semantically” cite other TEXmacs documents (see Fig. 2); these appear as normal citations in the editor but, behind the scenes, are uploaded into PlatΩ, which then passes them to ΩMEGA. As a consequence, PlatΩ does not allow reuse of theories that are predefined in the theorem prover.
We now illustrate PlatΩ by describing the phases of the example scenario.
**Phase 1.** First, the whole document is passed from TEXmacs to PlatΩ, which extracts the formal content of the document, including notational information to parse formulas. From the document, PlatΩ builds up the corresponding OMDoc theories and passes them as an XUpdate to ΩMEGA, which builds up the internal representation of the theory and initialises a proof for the open conjecture.
**Phase 2.** To start the proof of the theorem, Eva requests a menu from ΩMEGA, which returns a menu that lists the available strategies. Eva selects the strategy InductThenSimplify, which applies an induction on $x$ to the open conjecture, simplifies the resulting subgoals, and terminates with the two open subgoals. This partial proof for Theorem (1) inside ΩMEGA is compiled into a patch description and then passed to PlatΩ. PlatΩ transforms it into a patch for TEXmacs by rearranging the obtained tree-like subproof representation into a linear, text-style proof representation using pseudo-natural language, and rendering the formulas using the memorised notational information.
**Phase 3.** After the two lemmas are written in the document, the whole document is uploaded and, after parsing, the difference analysis computes the patch to add the two lemmas. This is transformed into a patch description that adds their formal counterparts as open conjectures to the theory and sent to ΩMEGA. ΩMEGA, in turn, triggers the initialisation of two new active proofs.
**Phase 4.** Eva uses for both lemmas the strategy InductThenSimplify (again suggested by ΩMEGA in a menu), which succeeds in proving them. The resulting proof descriptions are again transformed by PlatΩ into proof patches for the document, and both lemmas are immediately available in the ongoing proof of Theorem (1).
**Phase 5.** ΩMEGA proposes in a menu to apply lemma (2) to subgoal (1a) and lemma (3) to subgoal (1b). Eva selects these suggestions one by one, which then completes the proof inside ΩMEGA. Subsequently, only the proof patch descriptions are transformed into patches for the TEXmacs document, as before.
### 2.2 Proof General
Unlike OMDoc, PGIP is not a proof format, nor does the PG Kit prescribe one. Instead, PGIP uses proofs written in the prover’s native syntax, which are lightly marked up to exhibit existing implicit structure. The markup divides the text into text spans, corresponding to prover commands which can be executed one by one in sequence. Different commands have different markup, characterising e.g. the start of a proof, a proof step, or (un)successful completion of a proof, as in:
```xml
<opengoal>theorem add_commute: x + y = y + x;</opengoal>
```
Elements like `<opengoal>` do not carry an inherent semantics (and they cannot be sent to the prover on their own); they merely make it clear that, e.g., the command `theorem add_commute: ...` starts the proof. Each of these text spans has a state; the main ones are *parsed*, *processed* and *outdated*. Proving a given theorem means turning the whole proof into the *processed* state, meaning that the prover has successfully proved it. Returning to the scenario, we discuss the flow of events between the Emacs display, the PG Kit broker and the Isabelle prover.
**Phase 1.** Eva starts with an initial plain text Isabelle file, giving the definitions for the natural numbers, addition and the conjecture. She requests the file to be loaded, causing the broker to read it and send the contents to Isabelle for parsing. While this happens, the display shows the unparsed text to give immediate feedback. Isabelle returns the parsed file, which is then inserted into the Emacs buffer.
**Phase 2.** Eva now wants to prove the conjecture. She requests the conjecture to become processed so she can work on the proof (a command `<setcmdstatus>` is sent to the broker). This triggers sending a series of commands to Isabelle, ending with the conjecture statement. Isabelle answers with the open subgoal, which is then shown on the display.
Eva attempts a proof by induction. She writes the appropriate Isabelle command (`proof (induct x rule: N.induct)`). The new text is sent to the broker and then on to Isabelle for parsing. Once parsed, the broker breaks the text into separately processable spans (here, only one), which is sent back to the display. Now Eva asks for the proof step to be processed, which sends the actual proof text to Isabelle, which answers with two open subgoals.
**Phase 3.** Realising she needs additional lemmas, and knowing Isabelle’s linear visibility, Eva knows she has to insert two lemmas before the main theorem she is trying to prove. Since she cannot edit text which is in state *processed*, she first requests the text to change state to *outdated*. This causes a few undo messages to be sent to the prover to undo the last proof commands, resetting Isabelle’s state back to the point where it has not yet processed the start of the main proof. Eva then inserts the needed lemmas in the document, and has them parsed as before.
**Phase 4.** Eva processes the first lemma, and sees a message indicating that the proof worked. She finishes the other lemma similarly.
**Phase 5.** Eva returns to the main proof, editing the induction proof by inserting the induction base and induction step. Fig. 3 (right) shows the Emacs display at this point: the window is split in two parts, with the proof script in the upper part and the prover responses displayed below. The top portion of the proof script is blue, showing it has been processed and indicating the linear point of focus. After the induction step succeeds, Eva closes the proof with the command `qed`, which registers the theorem with the authorities. By turning the state of this closing command to *processed*, the proof is successfully finished.
3 Semi-Structured Documents
We have now seen how PlatΩ and the PG Kit handle documents. The architecture is similar: a central component handles the actual document, managing communication with the prover on one side and a user-interface component on the other side. The main differences are technical, summarised in the first two columns of Tables 1 and 2. Given the similarity, the question naturally arises: can we overcome these differences and provide a unified framework? This section will tentatively answer in the positive by extending PGIP on the prover side with the necessary new concepts (Section 3.1) and multiple foci (Section 3.2), and by using XUpdate pervasively on the display side (Section 3.3). The right-most columns of Tables 1 and 2 show the technical unification for the proposed PGIP 2.
### 3.1 Document Formats
The two different document formats can both be treated as arbitrary XML, with the difference that for PlatΩ and OMDoc there is deep structure inside the proof script (i.e., inside goals, proof steps, etc.), whereas in the case of PG Kit there is only a shallow XML structure where the proof script is mainly plain text. To overcome this difference, we allow PGIP 2 proof scripts to contain arbitrary marked-up XML instead of marked-up plain text, turning the document into a proper XML tree. Here is the present PGIP schema,³ excerpted and slightly simplified:
```xml
opentheory = element opentheory { thyname_attr, parentnames_attr?, plaintext }
closetheory = element closetheory { plaintext }
theoryitem = element theoryitem { objtype_attr, plaintext }
openblock = element openblock { objtype_attr, plaintext }
closeblock = element closeblock { }
opengoal = element opengoal { thmname_attr?, plaintext }
proofstep = element proofstep { plaintext }
closegoal = element closegoal { plaintext }
```
The proposed PGIP 2 amends this as follows, again excerpted:
```xml
theory = element theory { thyname_attr, parentnames_attr?, any }
theoryitem = element theoryitem { objtype_attr, any }
block = element block { objtype_attr, xref_attr?, any }
assertion = element assertion { thmname_attr?, id_attr?, any }
proofstep = element proofstep { xref_attr?, any }
endproof = element endproof { xref_attr?, proofstatus_attr?, any }

id_attr = attribute xml:id { text }
thmname_attr = attribute thmname { text }
thyname_attr = attribute thyname { text }
xref_attr = attribute xref { text }
proofstatus_attr = attribute proofstatus { "proven" | "assert" | "unproven" }

any = ( text | anyElement )*
anyElement = element * { ( attribute * { text } | any )* }
text = element text { plaintext }
```

---

³ This XML schema is written in RELAX NG, which can be read much as a BNF grammar, with non-terminals named on the left and element and attribute introducing terminals; see http://relaxng.org/.
There are two major changes here: (i) arbitrary XML can occur where before only text was allowed; of course, the prover must understand whatever XML syntax is used here (e.g. ΩMEGA can understand OMDoc); (ii) instead of a flat list structure, we now use a proper tree; that is, a theory is not everything between an <opentheory> and a <closetheory> element, but the contents of the <theory> element; similarly, a proof is not everything between <opengoal> and <closegoal>, but the contents of the <block> element of type proofbody that belongs to an <assertion> element. The <endproof> element replaces <closegoal> and can be annotated with status information about the proof: proven, assert, or unproven. Another extension is the pair of attributes xml:id on the <assertion> element and xref on the <block> element, which allow assertions to refer to proofs which are elsewhere in the document, not directly following the assertion. These attributes are optional, and may only appear in the display protocol (i.e., between displays and the broker); we assume that provers always expect proof scripts in linear order, and it is the responsibility of the broker to rearrange them if necessary before sending them to be checked.
Furthermore, the broker must be able to divine the structure in an OMDoc proof; e.g., the ΩMEGA prover or a component acting on its behalf must answer parse requests, and return XML documents using these elements. The revised version of our example proof with the PGIP 2 markup is shown in Fig. 4.
3.2 Multiple Foci
The present PGIP prover protocol imposes an abstract state machine model which the prover is required to implement. ΩMEGA can be made to fit this model, but beyond that provides multiple foci. By this we mean that it can keep track of more than one active proof at a time and switch between them. Ignoring this would lose potential benefits (such as the ability to use a natively multi-threaded implementation of the prover) unnecessarily, and it is easy to accommodate into PGIP: we merely need to add an attribute to the prover commands to identify the focus. Some of these attributes already exist for the display protocol, where files are identified by a unique identifier (srcid). By adding unique identifiers also for theories and proofs, the prover can identify which ongoing proof a proof step belongs to, and use the appropriate thread to handle it. To allow fall-back to the simple case, we need a prover configuration setting to declare if multiple foci are available.
3.3 XUpdate
In the PGIP_D protocol, changes in the document are communicated using specialised commands <createcmd> and <editcmd> from the display to the broker, and <newcmd>, <delcmd> and <replacecmd> from the broker to the display (so the protocol is asymmetric). We can rephrase this in terms of XUpdate; the unique identifier given by the broker to each command, contained in the cmdid attribute, makes it easy to identify an object by an XPath expression such as //*[@cmdid=c]. The key advantages of XUpdate are that it is standard, symmetric, and allows several changes to be bundled up in one <xupdate:modifications> packet that is processed atomically, adding a transaction capability to the display protocol.
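To make this concrete, here is a small, purely illustrative Python sketch of applying such a bundled patch: it locates a command by its broker-assigned cmdid and replaces its text in place. The element names and the patch representation are invented for the example and are not the PGIP 2 wire format.

```python
# Hypothetical sketch: applying a bundled "replace" patch to a proof
# document, in the spirit of XUpdate (names are illustrative only).
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    '<document>'
    '<proofstep cmdid="c1">proof (induct x rule: N.induct)</proofstep>'
    '<proofstep cmdid="c2">sorry</proofstep>'
    '</document>')

# One atomic "modifications" bundle: a list of (cmdid, new text) updates.
patch = [("c2", "by simp")]

for cmdid, new_text in patch:
    # XPath-style lookup by command identifier, analogous to *[@cmdid=c].
    target = doc.find(f".//*[@cmdid='{cmdid}']")
    target.text = new_text  # apply the update in place

print(ET.tostring(doc, encoding="unicode"))
```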
Strict conformance to this protocol requires the displays to calculate or track differences, i.e., to send only the smallest update. Not all displays (editors) are that sophisticated, and it is unrealistic to expect them to be; a basic design assumption of PG Kit is that the broker should contain the intelligence needed to handle proof documents, and displays should be easy to implement. Hence, displays can send back the whole document as changed, and expect the broker to figure out the actual differences (whole-document editing), using the XML difference mechanism from [11], which can take some semantics into account and is already used by PlatΩ.
In the PGIP_P protocol, changes in the document communicated via XUpdate must be mapped to changes in the prover state. In the previous version of PGIP, this was done by the broker, because the single-focus state model does not easily accommodate arbitrary changes to the document. However, the multiple-focus extensions described in Sect. 3.2 amount to supporting XUpdate on the prover side; if the prover offers this support, it should be exploited, otherwise we fall back to plain PGIP.
### 3.4 Protocols
The underlying transport protocol of PGIP was custom designed, because communication with an interactive prover fits no simple standard single-request single-response protocol: the prover asynchronously sends information about proofs in progress, and we crucially need the ability to send out-of-band interrupts. However, on the display side these reasons do not apply; we might use XML-RPC or even plain HTTP in a REST architecture. REST (representational state transfer [6]) is an architecture style for distributed applications which, in a nutshell, is based on providing resources that are addressed using URIs and manipulated using four basic operations: creating, reading, updating and deleting ("CRUD"). The resources provided by the broker are as follows:
- The broker itself, with the list of all known provers, all loaded files, a global menu, and global configurations as attributes;
- each prover is a resource, with its status (not running or running, busy, ready, exited) as attributes, preferences for this prover, all identifiers for the prover, messages sent by the prover, its proof state, and prover-specific configurations such as types, icons, help documents, and a menu;
- and each file is a resource, containing the document as a structured text, and the status (saved or modified) as attributes.
Clients effect changes to the document by the XUpdate messages above, and trigger broker actions by changing the attributes. For example, to start a prover, the client will change the status of the prover resource from not running to running. Here, bundling changes up into one XUpdate modification becomes useful, as it allows displays to send several changes to the document resource in one transaction.
In the REST view, changes in the document produce a new version of the document; special links will always point to the latest version of the document, but may require the client to refresh them. This allows multiple displays; we will exploit this in Section 5. This REST-style interface is an alternative to the stateful protocol using PGIP or XML-RPC; in the long run, the broker will support both.
### 4 Service Menus
PGIP 2 supports context-sensitive service menus in the display for the interaction with the prover. The user can request a menu for any object in the document; through the broker, this triggers menu generation in the prover for the formal counterparts of the selected object. It remains to fix a format for menu descriptions.
Traditionally, menus are fully specified: they include all submenus, and the leaves are actions with all possible actual arguments. Executing an action triggers modifications of the document, and the menu is closed. For theorem provers, computing all submenus and action instances can be expensive and unduly delay the appearance of the menu. For example, a menu entry for applying a lemma would contain as a submenu all available lemmas, and for each lemma, all possibilities to apply it in the current proof situation. Once the user makes a choice, the other possibilities are discarded. So on-demand computation of submenus is desirable.
The PlatΩ system allows lazy menus, where actions executed in a menu can generate a submenu. The entire menu is modified by replacing the leaf action by the generated submenu. We adapt this model for PGIP 2 also. However, not all displays are able to incorporate changes to live menus; therefore we do not impose the partial menu representation. Instead, the display specifies in the service request whether it will accept a lazy menu.
The description language for these menus is:
```
menu = element menu { id, name, for_attr, ((menu|action)+ | error) }
action = element action { id, name, argument* }
argument = element argument { id, name, custom }
custom = element custom { id, alt, any }
error = element error { id, text }
```
(using the any element from above). A menu entry is rendered by its name, and an action is rendered by its name and its arguments. Arguments are rendered with the given custom object, e.g., an OpenMath formula or some standard TEXmacs markup. The alt attribute provides a fallback ASCII representation in case the custom object content cannot be displayed.
When the user chooses an action, it is executed on the specified arguments. The result of the action may be an XUpdate patch to the document. This is sent to the broker and then on to the display, which incorporates the patch and closes the menu. Alternatively it is a patch for the menu only: in this case the action is replaced in the menu by the new submenu. If a submenu is empty, i.e., there are no possibilities to refine the abstract action, then the submenu consists solely of an error that describes the cause, which should be displayed inside the menu.
**Example 4.1** We illustrate the interactions when requesting a menu for a display that is able to deal with partial menus. In **Phase 5** of the scenario, Eva requests a menu for the subgoal \((1a)\) \(0 + y = y + 0\).
**Menu Request:** The menu is requested for a specific XPath of the document, and the broker maps it to a menu request to the prover for the corresponding formal object, that is, the open goal that corresponds to (1a) \(0 + y = y + 0\). The prover generates a top-level menu with the actions “Apply Axiom or Lemma” and “Apply Tactic” and returns that to the display via the broker.
**Lazy Menu Deployment:** Selecting “Apply Axiom or Lemma” triggers computing a submenu containing all available axioms and lemmas. That submenu is sent as an XUpdate patch to the display to replace the action “Apply Axiom or Lemma”. Selecting Lemma (2) triggers the prover action that computes the possible ways to apply the lemma on the open goal. In this case the resulting submenu has a few entries for the cases where the lemma is applied from left to right and one case for the application of the lemma from right to left. The submenu is sent as an XUpdate patch to the display to replace the action “Apply Lemma (2)”.
**Menu Action Execution:** The final top-level action execution triggers applying the specific instance of the lemma in the prover, modifying the formal proof. The modification is propagated via the broker to the display, as an XUpdate patch for the document if the display is able to deal with these itself; otherwise the broker computes the new document version and forwards only the new document. Additionally, a patch description is sent for closing the menu.
### 5 Multiple Displays
The architecture of our new system inherits from the architecture of PG Kit (Fig. 1), which allows multiple displays to be connected to the broker. One use for this is to allow multiple views on a proof-in-progress, e.g., a display that shows a dependency graph, or a graphical interpretation of a proof (perhaps rendering geometric arguments diagrammatically), alongside the main proof editing display. These displays are prover-specific, but fit smoothly into the general architecture.
Another use for multiple displays is to allow more than one display to change the document. For this we need a way to synchronise input from different displays. A way to do this is for the broker to act as a simple kind of source control repository, illustrated by example in Fig. 5. This works as follows:
- The broker maintains the latest revision (the head) of a document, and for each display, a copy of the latest revision acknowledged by that display. In Fig. 5, the head is Rev. 47.
- When Display 1 sends a change (Rev. 47’), the change is committed to the new head (Rev. 48), and the new revision broadcast to all displays.
Display 1 acknowledges the new revision. However, Display 2 has been edited in the meantime, so it does not acknowledge, instead attempting to commit its changes (Rev. 47”). The broker spots a potential conflict, and (in this case) merges the disjoint changes between 48 and 47” with respect to 47 into the current head revision without trouble. The merged document becomes the new head (Rev. 49), and is broadcast to all displays. Since no further changes have been made in the displays, they both acknowledge.
If a conflict that cannot be merged occurs, the broker sends the merged document including conflict descriptions back to the display (using an extension to XUpdate to markup the conflicts, as in [11, Sect. 7.1.3]). The display (or the user) then needs to resolve the conflicts, and send in new changes.
This strategy is simple and flexible: displays could always send in changes to the whole document, and only acknowledge changes sent from the broker if the user has not edited the document at all. Alternatively, since this may create extensive conflicts without the user realising, displays might block between commit and acknowledge, or attempt to merge eagerly with new revisions sent by the broker.
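A minimal sketch of the repository behaviour, under the strong simplifying assumptions that documents are lists of lines of equal length and that concurrent edits touch disjoint lines (the real broker would merge XML trees); all names are illustrative:

```python
# Toy three-way merge, as the broker might apply it between revisions.
def merge3(base, a, b):
    out = []
    for base_l, a_l, b_l in zip(base, a, b):
        if a_l != base_l and b_l != base_l and a_l != b_l:
            raise ValueError("conflict")  # would be reported to the display
        out.append(a_l if a_l != base_l else b_l)
    return out

rev47  = ["lemma one ...", "lemma two ...", "theorem main ..."]
rev47a = ["lemma one (edited)", "lemma two ...", "theorem main ..."]  # Display 1
rev47b = ["lemma one ...", "lemma two (edited)", "theorem main ..."]  # Display 2

rev48 = rev47a                        # Display 1's commit becomes the head
rev49 = merge3(rev47, rev48, rev47b)  # Display 2's late commit is merged in
print(rev49)                          # both edits survive in Rev. 49
```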
6 Supporting Multiple Document Formats
So far, the document formats used with the display and the prover are essentially the same: for instance, in the classical PGIP with Isabelle, the document on the display is an Isabelle input file with additional markup. With the extension for arbitrary XML document formats in Section 3, we could connect a display and prover that both use OMDoc. But we cannot yet connect two different formats, say, a display based on a document format $D$ with a prover that works on a different format $P$. This is the final missing piece of the architecture for emulating PlatΩ, which connects $D$ = PL (the PLATO_D protocol to TEXmacs) to $P$ = OMDoc (the PLATO_P protocol to ΩMEGA).
To support multiple document formats at once, we propose to use a central structured document format $B$ in the PG Kit broker, annotated with PGIP markup. The broker does not need to know the semantics of the format $B$. Instead, dedicated translators are required for each target document format, translating $D \leftrightarrow B$ and $B \leftrightarrow P$. Each translator maintains a document representation mapping, and converts XUpdate patches in either direction, much as the PlatΩ system does between the PL representation and the OMDoc representation as described in Section 2.1. The advantage of using the central format $B$ is that provers do not need to be adapted to the document format of every display.
Experience with PlatΩ suggests that the main difficulty lies in translating patch descriptions between the different document formats. Suppose we connect structured TEXmacs documents with plain-text Isabelle proof scripts, and choose OMDoc as the broker’s central document format. On the display side we have a translator component that mediates between TEXmacs documents and OMDoc; on the prover side, a translator mediates between OMDoc and Isabelle ASCII text. We encode ASCII documents in XML as <document><text>...</text>...<text>...</text></document>, where text nodes are whitespace-preserving.
Consider now the interactions when uploading and patching a document. Menu interactions are basically passed unchanged, but document patches must be translated. Since PlatΩ can already mediate between the $\TeXMACS$ and OMDoc formats, we need only one new translator for OMDoc and Isabelle, implementing:
**XUpdate flattening:** going from OMDoc to ASCII, the structured XML representation must be transformed into a linearised text representation. A mapping must be set up between XML ranges and text ranges: the start XPath maps to the start text position (relative to the last range) and the end XPath maps to the end text position (relative to the last range); start and end XPaths have the same parent XPath by definition. To flatten patches, the affected XML ranges must be recomputed and the mapping adapted; additions in the patch are flattened similarly.
**XUpdate lifting:** going from ASCII to OMDoc, text spans must be lifted to the XML representation. Generally, this is done by mapping text spans to the corresponding sequence of adjacent XML ranges. As an invariant, it must be checked whether the resulting sequence can be expressed by start and end XPaths with the same parent XPath. Similarly to flattening, the mapping between text ranges and XML ranges has to be adapted.
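The range mapping can be pictured with a toy flattener: it linearises the <text> leaves of a document into one ASCII string while recording, for each leaf, the character span it occupies. The document shape and key naming are assumptions made for the example.

```python
# Sketch of the XML-range-to-text-range mapping used when flattening.
import xml.etree.ElementTree as ET

def flatten(doc):
    """Linearise <text> leaves, recording the text range of each leaf."""
    parts, ranges, pos = [], {}, 0
    for i, node in enumerate(doc.iter("text")):
        s = node.text or ""
        ranges[f"text[{i}]"] = (pos, pos + len(s))  # XML node -> text span
        parts.append(s)
        pos += len(s)
    return "".join(parts), ranges

doc = ET.fromstring("<document><text>lemma l1\n</text>"
                    "<text>theorem t\n</text></document>")
ascii_text, mapping = flatten(doc)
print(mapping)  # {'text[0]': (0, 9), 'text[1]': (9, 19)}
```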
Of course, the devil lies in the detail: OMDoc allows some embedding of legacy formats, but to usefully translate to and from Isabelle, we must accurately interpret a subset of syntax that reflects theory structure, and have some confidence about the correctness of the interpretation.
On the other side, we can now provide translators for further displays with advanced layout possibilities, such as Word 2007. The translator component must abstract the display document format to simplify it for the broker: e.g. in Word 2007, the document body is extracted and information about fonts, colours and spacing is stripped. On the way back, annotations are extracted from the patches coming from the broker, which guide heuristics for layout of new or modified text.
7 Related Work, Conclusion and Next Steps
Many user interfaces to theorem provers are similar to the Proof General style of line-by-line, single-focus interaction using ASCII input files in the native theorem prover format. Often, a custom interaction protocol is used. The main novelties proposed here for PGIP 2 are: (i) to handle semi-structured XML documents as input formats; (ii) to allow the user to work on different parts of a document in parallel by using multiple foci; (iii) to allow the theorem prover to change parts of the input document, possibly using menus; and (iv) to have multiple views and editing of the same document in different displays.
With respect to (i), the MathsTiles system [5] also allows semi-structured documents to be mapped to several special-purpose reasoning systems. However, the mapping is only unidirectional, from the display to the reasoners, and it supports neither multiple displays nor concurrent editing. With respect to (ii), as far as we know, the ΩMEGA system is the only prover that currently supports semi-structured document input and multiple foci. State information describing which parts of the document have been checked by ΩMEGA is managed in an ad hoc style; making this explicit in the multi-threaded state machine model in PGIP 2 markup improves this and suggests ways to migrate a single-threaded theorem prover to a multi-threaded mode.
The IAPP infrastructure introduced in [7] is a new architecture designed to support asynchronous processing, using a communication protocol that transfers the ownership of proof commands between the interface and the prover. Thus IAPP locks parts of the document to prevent conflicts, whereas we use a versioning-based approach where conflicts are resolved in the broker (using undo operations); the obvious advantage is that the interface does not have to wait for the prover to release parts of the document before editing. Additionally, IAPP tracks changes using a tight integration with the interface, based on the assumption that the interface implements the OBSERVER pattern [8]. We do not impose such strong requirements on the interface because we use a difference analysis mechanism to compute the changes. This allows us to support multiple views equally on the interface side.
Multiple views have been used in various forms in different systems, but not in a clearly distributed way that also allows editing, as in PGIP 2. In LΩUI [12] the display was split into a graph view of the proof and a display of the actual proof goals; those were based on pretty-printing and graph-visualisation tools built into the same display component. MATITA’s user interface [1] has one proof script buffer and a display for the actual proof goal; the latter uses GtkMathview, based on a MathML representation of formulas generated from MATITA’s internal representation. GEOProof [10] allows one to generate Coq proofs from its internal, geometric representation, which can be viewed in CoqIDE [13]; this comes close to what we propose with multiple displays, except that currently there is no way back from Coq into GEOProof. The infrastructure of PGIP 2 and a (partial) mapping⁴ from Coq into GEOProof would allow simultaneous working in GEOProof and CoqIDE. Away from proof assistant systems, multiple views are familiar in IDEs for programming languages such as Eclipse and NetBeans: there the same file may be presented in different ways in different windows (e.g., code and model), and either updated dynamically in step, or at clearly defined points in the interaction (e.g., window activation).
The ability to extend the input document by incorporating information from the prover has also been supported in various ways before. An example, besides the general change mechanism of PlatΩ/ΩMEGA, is MATITA, which can generate a tinycals proof script from the GUI interactions on goals and include it in the overall document. We hope that a generic infrastructure would allow functionality like this to be reused between systems. The facility to include information from the prover, together with the multiple foci, provides a good basis for using PG Kit with provers like Mizar, PVS and Agda that have different, non-linear interaction styles. The details of adapting to further prover interaction styles are left to future work.

---

⁴ This could, of course, only be a partial mapping, since not all Coq proofs are geometric proofs.
The main next step is to implement our planned PGIP 2 and to rebuild PlatΩ’s functionality on that basis. Future work will also be devoted to using Word 2007 and OpenOffice as displays, and especially to building bi-directional transformers between prover-specific textual input files and corresponding OMDoc representations. We hope this will lead to a rich family of improved prover user interfaces.
DSMS – Overview
• Introduction:
– What are DSMS? (terms)
– DSMS vs. DBMS
– Why do we need DSMS? (applications)
• Concepts and issues:
– Architecture(s)
– Data modeling
– Query processing and optimization
– Data Reduction & Stream Mining
• Examples
• Summary: Open issues & conclusions
Handle Data Streams in DBS?

[Diagram contrasting the two architectures. Traditional DBS: one-time SQL queries are processed over data held in main memory and on disk. DSMS: continuous queries (CQs) are registered once and evaluated over incoming data stream(s), using main memory, a scratch store (main memory or disk), and an archive of stored relations; results are streamed or stored.]
Data Management
• Traditional DBS:
– stored sets of relatively static records with no pre-defined notion of time
– good for applications that require persistent data storage and complex querying
• DSMS:
– support on-line analysis of rapidly changing data streams
– data stream: real-time, continuous, ordered (implicitly by arrival time or explicitly by timestamp) sequence of items, too large to store entirely, not ending
– continuous queries
# Data Management: Comparison - DBS versus DSMS

| Database Systems (DBS) | DSMS |
| --- | --- |
| Persistent relations (relatively static, stored) | Transient streams (on-line analysis) |
| One-time queries | Continuous queries (CQs) |
| Random access | Sequential access |
| “Unbounded” disk store | Bounded main memory |
| Only current state matters | Historical data is important |
| No real-time services | Real-time requirements |
| Relatively low update rate | Possibly multi-GB arrival rate |
| Data at any granularity | Data at fine granularity |
| Assume precise data | Data stale/imprecise |
| Access plan determined by query processor, physical DB design | Unpredictable/variable data arrival and characteristics |
Related DBS Technologies
• Continuous queries
• Active DBS (triggers)
• Real-time DBS
• Adaptive, on-line, partial results
• View management (materialized views)
• Sequence/temporal/timeseries DBS
• Main memory DBS
• Distributed DBS
• Parallel DBS
• Pub/sub systems
• Filtering systems
• …
=> Must be adapted for DSMS!
DSMS Applications
- **Sensor Networks:**
- Monitoring of sensor data from many sources, complex filtering, activation of alarms, aggregation and joins over single or multiple streams
- **Network Traffic Analysis:**
- Analyzing Internet traffic in near real-time to compute traffic statistics and detect critical conditions
- **Financial Tickers:**
- On-line analysis of stock prices, discover correlations, identify trends
- **On-line auctions**
- **Transaction Log Analysis**, e.g., Web, telephone calls, …
Data Streams - Terms
- A **data stream** is a (potentially unbounded) sequence of tuples
- **Transactional data streams**: log interactions between entities
- Credit card: purchases by consumers from merchants
- Telecommunications: phone calls by callers to dialed parties
- Web: accesses by clients of resources at servers
- **Measurement data streams**: monitor evolution of entity states
- Sensor networks: physical phenomena, road traffic
- IP network: traffic at router interfaces
- Earth climate: temperature, moisture at weather stations
Motivation for DSMS
• Large amounts of interesting data:
– deploy transactional data observation points, e.g.,
• AT&T long-distance: ~300M call tuples/day
• AT&T IP backbone: ~10B IP flows/day
– generate automated, highly detailed measurements
• NOAA: satellite-based measurement of earth geodetics
• Sensor networks: huge number of measurement points
Motivation for DSMS (cont.)
• Near real-time queries/analyses
– ISPs: controlling the service level
– NOAA: tornado detection using weather radar data
• Traditional data feeds
– Simple queries (e.g., value lookup) needed in real-time
– Complex queries (e.g., trend analyses) performed off-line
Motivation for DSMS (cont.)
- Performance of disks:
| | 1987 | 2004 | Increase |
| --- | --- | --- | --- |
| CPU Performance | 1 MIPS | 2,000,000 MIPS | 2,000,000 x |
| Memory Size | 16 Kbytes | 32 Gbytes | 2,000,000 x |
| Memory Performance | 100 µs | 2 ns | 50,000 x |
| Disc Drive Capacity | 20 Mbytes | 300 Gbytes | 15,000 x |
| Disc Drive Performance | 60 ms | 5.3 ms | 11 x |
Source: Seagate Technology Paper: "Economies of Capacity and Speed: Choosing the most cost-effective disc drive size and RPM to meet IT requirements"
Motivation for DSMS (cont.)
• The PingER project:
– Believed to be the most extensive Internet end-to-end performance monitoring tool in the world
Motivation for DSMS (cont.)
[Chart: disk throughput development over time]
Motivation for DSMS (cont.)
• Take-away points:
– Large amounts of raw data
– Analysis needed as fast as possible
– Data feed problem
Application Requirements
• **Data model and query semantics:** order- and time-based operations
– Selection
– Nested aggregation
– Multiplexing and demultiplexing
– Frequent item queries
– Joins
– Windowed queries
• **Query processing:**
– Streaming query plans must use non-blocking operators
– Only single-pass algorithms over data streams
• **Data reduction:** approximate summary structures
– Synopses, digests => no exact answers
• **Real-time reactions** for monitoring applications => active mechanisms
• **Long-running queries:** variable system conditions
• **Scalability:** shared execution of many continuous queries, monitoring multiple streams
• **Stream Mining**
Generic DSMS Architecture
[Diagram showing the architecture with components such as Input Monitor, Working Storage, Summary Storage, Static Storage, Query Processor, and Output Buffer, with arrows indicating streaming inputs and outputs, and updates to static data and user queries.]
DSMS: 3-Level Architecture
**DBS**
- Data feeds to database can also be treated as data streams
- Resource (memory, disk, per-tuple computation) rich
- Useful to audit query results of DSMS
- Supports sophisticated query processing, analyses
**DSMS**
- DSMS at multiple observation points, (voluminous) streams-in, (data reduced) streams-out
- Resource (memory, per tuple computation) limited, esp. at low-level
- Reasonably complex, near real-time, query processing
- Identify what data to populate in DB
---
*VLDB 2003 Tutorial [Koudas & Srivastava 2003]*
Data Models
- **Real-time data stream**: sequence of data items that arrive in some order and may be seen only once.
- **Stream items**: like relational tuples (relation-based models, e.g., STREAM, TelegraphCQ) or instantiations of objects (object-based models, e.g., COUGAR, Tribeca)
- **Window models:**
- Direction of movement of the endpoints: fixed window, sliding window, landmark window
- Physical / time-based windows versus logical / count-based windows
- Update interval: eager (update for each newly arriving tuple) versus lazy (batch processing -> jumping window); a jumping window with non-overlapping jumps is a tumbling window
Relation: Tuple Set or Sequence?
• Traditional relation = set/bag of tuples
• Tuple sequences:
– Temporal databases: multiple time orderings
– Sequence databases: integer “position” -> tuple
• DSMS:
– Ordering domains: Gigascope, Hancock
– Position ordering: Aurora, STREAM
Timestamps
• Explicit
– Injected by data source
– Models real-world event represented by tuple
– Tuples may be out-of-order, but if near-ordered they can be reordered with small buffers (see the sketch at the end of this slide)
• Implicit
– Introduced as special field by DSMS
– Arrival time in system
– Enables order-based querying and sliding windows
• Issues
– Distributed streams?
– Composite tuples created by DSMS?
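A minimal sketch of such a reorder buffer, assuming no element arrives more than `slack` positions out of order (all names are illustrative):

```python
# Reorder a near-ordered stream of (timestamp, payload) pairs using a
# bounded min-heap; elements are released once the buffer exceeds the
# assumed disorder bound.
import heapq

def reorder(stream, slack):
    buf = []
    for item in stream:
        heapq.heappush(buf, item)
        if len(buf) > slack:
            yield heapq.heappop(buf)  # safe: nothing earlier can still arrive
    while buf:
        yield heapq.heappop(buf)

print(list(reorder([(1, "a"), (3, "c"), (2, "b"), (5, "e"), (4, "d")], 2)))
# [(1, 'a'), (2, 'b'), (3, 'c'), (4, 'd'), (5, 'e')]
```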
Time
• Easiest: global system clock
– Stream elements and relation updates timestamped on entry to system
• Application-defined time
– Streams and relation updates contain application timestamps, may be out of order
– Application generates “heartbeat”
• Or deduce heartbeat from parameters: stream skew, scrambling, latency, and clock progress
– Query results in application time
Update: Modifications or Appends?
• Traditional relational updates: arbitrary data modifications
• Append-only relations have been studied:
– Tapestry: emails and news articles
– Chronicle data model: transactional data
• DSMS:
– Streams-in, stream-out: Aurora, Gigascope, STREAM
– Stream-in, relation-out: Hancock
Queries - I
• DBS: one-time (transient) queries
• DSMS: continuous (persistent) queries
– Support persistent and transient queries
– Predefined and ad hoc queries (CQs)
– Examples (persistent CQs):
• Tapestry: content-based email, news filtering
• OpenCQ, NiagaraCQ: monitor web sites
• Chronicle: incremental view maintenance
• Unbounded memory requirements
• Blocking operators: window techniques
• Queries referencing past data
Queries - II
• DBS: (mostly) exact query answer
• DSMS: (mostly) approximate query answer
– Approximate query answers have been studied:
• Synopsis construction: histograms, sampling, sketches
• Approximating query answers: using synopsis structures
• Approximate joins: using windows to limit scope
• Approximate aggregates: using synopsis structures
• Batch processing
• Data reduction: sampling, synopses, sketches, wavelets, histograms, …
One-pass Query Evaluation
• **DBS:**
- Arbitrary data access
- One/few pass algorithms have been studied:
• Limited memory selection/sorting: \( n \)-pass quantiles
• Tertiary memory databases: reordering execution
• Complex aggregates: bounding number of passes
• **DSMS:**
- Per-element processing: single pass to reduce drops
- Block processing: multiple passes to optimize I/O cost
Query Plan
- **DBS**: fixed query plans optimized at beginning
- **DSMS**: adaptive query operators
- Adaptive plans
- Adaptive query plans have been studied:
- Query scrambling: wide-area data access
- Eddies: volatile, unpredictable environments
Query Languages & Processing
• Stream query language issues (compositionality, windows)
• SQL-like proposals suitably extended for a stream environment:
– Composable SQL operators
– Queries reference relations or streams
– Queries produce relations or streams
• Query operators (selection/projection, join, aggregation)
• Examples:
– GSQL (Gigascope)
– CQL (STREAM)
• Optimization objectives
• Multi-query execution
Query Languages
3 querying paradigms for streaming data:
1. **Relation-based**: SQL-like syntax and enhanced support for windows and ordering, e.g., CQL (STREAM), StreaQuel (TelegraphCQ), AQuery, GigaScope.
2. **Object-based**: object-oriented stream modeling, classify stream elements according to type hierarchy, e.g., Tribeca, or model the sources as ADTs, e.g., COUGAR.
3. **Procedural**: users specify the data flow, e.g., Aurora, users construct query plans via a graphical interface.
(1) and (2) are declarative query languages; currently, the relation-based paradigm is the most widely used.
Windows
- Mechanism for extracting a finite relation from an infinite stream
- Various window proposals for restricting operator scope
- Windows based on ordering attributes (e.g., time)
- Windows based on tuple counts
- Windows based on explicit markers (e.g., punctuations)
- Variants (e.g., partitioning tuples in a window)
Ordering Attribute Based Windows
- Assumes the existence of an attribute that defines the order of stream elements/tuples (e.g., time)
- Let T be the window length (size) expressed in units of the ordering attribute (e.g., T may be a time window)
- Various possibilities exist (e.g., sliding, shifting, tumbling); a time-based sliding window is sketched below
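For illustration, a minimal time-based sliding window of length T, evicting by timestamp on every arrival (a toy sketch, not any particular system's operator):

```python
# Time-based sliding window over a stream of (timestamp, payload) pairs.
from collections import deque

def sliding_window(stream, T):
    win = deque()
    for ts, payload in stream:
        win.append((ts, payload))
        while win[0][0] <= ts - T:   # evict tuples older than T
            win.popleft()
        yield list(win)              # window contents after each arrival

for contents in sliding_window([(1, "a"), (2, "b"), (4, "c"), (7, "d")], T=3):
    print(contents)
```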
Tuple Count Based Windows
- Window of size N tuples (sliding, shifting) over the stream
- Problematic with non-unique time stamps associated with tuples
- Ties broken arbitrarily may lead to non-deterministic output
Punctuation Based Windows
• Application inserted “end-of-processing” markers
– Each data item identifies “beginning-of-processing”
• Enables data item-dependent variable length windows
– e.g., a stream of auctions
• Similar utility in query processing
– Limit the scope of query operators relative to the stream
Sample Stream
```sql
Traffic (
  sourceIP   -- source IP address
  sourcePort -- port number on source
  destIP     -- destination IP address
  destPort   -- port number on destination
  length     -- length in bytes
  time       -- time stamp
);
```
Selections, Projections
- Selections, (duplicate preserving) projections are straightforward
- Local, per-element operators
- Duplicate eliminating projection is like grouping
- Projection needs to include ordering attribute
- No restriction for position ordered streams
```sql
SELECT sourceIP, time
FROM Traffic
WHERE length > 512
```
Join Operators
- General case of join operators problematic on streams
- May need to join arbitrarily far apart stream tuples
- Equijoin on stream ordering attributes is tractable
- Majority of work focuses on joins between streams with windows specified on each stream
```sql
SELECT A.sourceIP, B.sourceIP
FROM Traffic1 A [window T1], Traffic2 B [window T2]
WHERE A.destIP = B.destIP
```
Aggregation
• General form:
– `select G, F1 from S where P group by G having F2 op θ`
– G: grouping attributes, F1,F2: aggregate expressions
• Aggregate expressions:
– distributive: sum, count, min, max
– algebraic: avg
– holistic: count-distinct, median (the operational difference is sketched below)
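This classification matters for one-pass processing: distributive and algebraic aggregates can be maintained with constant state per group, while exact holistic aggregates need all values seen so far. A minimal sketch in plain Python (illustrative, not any system's operator):

```python
# avg is algebraic: it is derived from the distributive pair (count, sum),
# so each group needs only O(1) state; an exact median would instead have
# to retain every value of the group.
state = {}  # group -> (count, sum)

def update(group, value):
    c, s = state.get(group, (0, 0))
    state[group] = (c + 1, s + value)

def avg(group):
    c, s = state[group]
    return s / c

for g, v in [("a", 1), ("a", 3), ("b", 10)]:
    update(g, v)
print(avg("a"), avg("b"))  # 2.0 10.0
```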
Aggregation in Theory
• An aggregate query result can be streamed if group by attributes include the ordering attribute.
• A single stream aggregate query “select G,F from S where P group by G” can be executed in bounded memory if:
– every attribute in G is bounded
– no aggregate expression in F, evaluated on an unbounded attribute, is holistic
• Conditions for bounded memory execution of aggregate queries on multiple streams
Aggregation & Approximation
• When aggregates cannot be computed exactly in limited storage, approximation may be possible and acceptable
• Examples:
– select G, median(A) from S group by G
– select G, count(distinct A) from S group by G
– select G, count(*) from S group by G having count(*) > f|S|
• Data reduction: use summary structures
– samples, histograms, sketches …
• Focus of a different tutorial
Sampling
- A small random sample $S$ of the data often well-represents all the data
- Example: select $\text{agg}$ from $R$ where $R.e$ is odd ($n=12$)
- Data stream: $[9 \ 3 \ 5 \ 2 \ 7 \ 1 \ 6 \ 5 \ 8 \ 4 \ 9 \ 1]$
- Sample $S$: $[9 \ 5 \ 1 \ 8]$
- If $\text{agg}$ is $\text{avg}$, return average of odd elements in $S$
- answer: $5$
- If $\text{agg}$ is $\text{count}$, return average over all elements $e$ in $S$ of
- $n$ if $e$ is odd
- $0$ if $e$ is even
- answer: $12 \times 3 / 4 = 9$, unbiased! (checked in the sketch below)
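The slide's arithmetic can be checked directly; the snippet below is plain Python over the slide's data, not DSMS machinery:

```python
stream = [9, 3, 5, 2, 7, 1, 6, 5, 8, 4, 9, 1]
sample = [9, 5, 1, 8]
n = len(stream)

# avg: average of the odd elements of the sample
odd = [e for e in sample if e % 2 == 1]
print(sum(odd) / len(odd))  # 5.0

# count: average over the sample of (n if e is odd else 0); unbiased
print(sum(n * (e % 2) for e in sample) / len(sample))  # 9.0
```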
Histograms
- Histograms approximate the frequency distribution of element values in a stream
- A histogram (typically) consists of
- A partitioning of element domain values into buckets
- A count $C_B$ per bucket $B$ (of the number of elements in $B$)
- Long history of use for selectivity estimation within a query optimizer (a toy example follows)
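A toy equi-width histogram over the sample stream used earlier (bucket boundaries are fixed up front; streaming histograms that adapt their buckets are considerably subtler):

```python
# Count stream elements into equi-width buckets [0,2), [2,4), ...
counts = {}
for v in [9, 3, 5, 2, 7, 1, 6, 5, 8, 4, 9, 1]:
    b = v // 2               # bucket index for width-2 buckets from 0
    counts[b] = counts.get(b, 0) + 1
print(counts)  # {4: 3, 1: 2, 2: 3, 3: 2, 0: 2}
```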
Wavelets
- For hierarchical decomposition of functions/signals
- Haar wavelets
- Simplest wavelet basis => Recursive pairwise averaging and differencing at different resolutions
| Resolution | Averages | Detail Coefficients |
| --- | --- | --- |
| 3 | [2, 2, 0, 2, 3, 5, 4, 4] | (none) |
| 2 | [2, 1, 4, 4] | [0, -1, -1, 0] |
| 1 | [1.5, 4] | [0.5, 0] |
| 0 | [2.75] | [-1.25] |
Haar wavelet decomposition: [2.75, -1.25, 0.5, 0, 0, -1, -1, 0]
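The decomposition is just recursive pairwise averaging and differencing; the following sketch reproduces the coefficients above exactly:

```python
def haar(signal):
    """Haar decomposition: overall average followed by detail coefficients."""
    coeffs = []
    while len(signal) > 1:
        avgs  = [(a + b) / 2 for a, b in zip(signal[::2], signal[1::2])]
        diffs = [(a - b) / 2 for a, b in zip(signal[::2], signal[1::2])]
        coeffs = diffs + coeffs  # finer-resolution detail goes further right
        signal = avgs
    return signal + coeffs

print(haar([2, 2, 0, 2, 3, 5, 4, 4]))
# [2.75, -1.25, 0.5, 0.0, 0.0, -1.0, -1.0, 0.0]
```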
Query Optimization
- DBS: table based cardinalities used in query optimization => Problematic in a streaming environment
- Cost metrics and statistics: accuracy and reporting delay vs. memory usage, output rate, power usage
- Query optimization: query rewriting to minimize cost metric, adaptive query plans, due to changing processing time of operators, selectivity of predicates, and stream arrival rates
- Query optimization techniques
- stream rate based
- resource based
- QoS based
- Continuously adaptive optimization
- Possibility that objectives cannot be met:
- resource constraints
- bursty arrivals under limited processing capability
Disorder in Data Streams
• Many queries over data streams rely on some kind of order on the input data items
– Can often use more efficient operator implementations if the input is sorted on “interesting attributes” (e.g. aggregates)
• What causes disorder in streams?
– Items from the same source may take different routes
– Many sources with varying delays
– May have been sorted on different attribute
• Sorting a stream may be undesirable
• May be more than one possible interesting order over a stream
– For example, data items may have creation time and arrival time
– Sorted on arrival time, but creation time also interesting
Punctuations
- Punctuations embedded in stream denote end of subset of data
- Unblocks blocking operators
- Reduces state required by stateful operators
- New operator: Punctuate
- Has special knowledge regarding the input stream
- timer-based, k-constraints, communication with stream source
- Emits punctuations in source schema based on special knowledge
- Punctuations can help in two ways:
- Maintain order – Punctuations unblock sort
- Similar to approach in Gigascope
- Order-preserving operators include sort behavior for punctuations
- Allow disorder – Punctuations define the end of subsets
- Operators use punctuations, not order, to output results
- Reduces tuple latency (see the sketch below)
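A minimal sketch of a punctuation unblocking a grouped count: when the punctuation "no more tuples for group g" arrives, the operator emits g's result immediately and discards its state (the event encoding is invented for the example):

```python
def punctuated_count(stream):
    counts = {}
    for kind, g in stream:
        if kind == "tuple":
            counts[g] = counts.get(g, 0) + 1
        else:  # kind == "punct": no further tuples for group g will arrive
            yield g, counts.pop(g, 0)

events = [("tuple", "a"), ("tuple", "b"), ("tuple", "a"),
          ("punct", "a"), ("tuple", "b"), ("punct", "b")]
print(list(punctuated_count(events)))  # [('a', 2), ('b', 2)]
```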
IP Network Application: P2P Traffic Detection
- AT&T IP customer wanted to accurately monitor P2P traffic evolution within its network
- Netflow can be used to determine P2P traffic volumes using TCP port number found in Netflow data
- P2P traffic might not use known P2P port numbers
- Using Gigascope SQL-based packet monitor
- Search for P2P related keywords within each TCP datagram
- Identified 3 times more traffic as P2P than Netflow
- **Lessons:**
- Essential to query massive volume data streams
- Layer independence
- Correlation of different sources (different app.)
Example - I: Queries for Network Traffic Management
- Large network, e.g., backbone network of ISP
- Monitor a variety of continuous data streams that may be unpredictable and have high data rates
- Provide a "general-purpose" system for monitoring
- Traditional DBS do not support on-line continuous query processing
- Example: network packet traces from multiple network links; here only two specific links: customer link C, backbone link B. We consider only five packet header fields: src, dest, id, len, time
Example - II: Queries for Network Traffic Management
- Compute load on link \( B \) averaged over one-minute intervals, notifying the network operator when the load crosses a specified threshold \( t \).
Two special functions: getminute, notifyoperator
```
SELECT notifyoperator(sum(len))
FROM B
GROUP BY getminute(time)
HAVING sum(len) > t
```
Example - III: Queries for Network Traffic Management
- Isolate flows in the backbone link and determine amount of traffic generated by each flow. Flow: sequence of packets grouped in time, and sent from a specific source to a specific destination.
```sql
SELECT flowid, src, dest, sum(len) AS flowlen
FROM (SELECT src, dest, len, time
FROM B
ORDER BY time)
GROUP BY src, dest, getflowid(src, dest, time)
AS flowid
```
Example - IV: Queries for Network Traffic Management
• Ad hoc continuous queries, posed when the network is congested, to determine whether the customer network is the cause.
```
SELECT count(*) / (SELECT count(*) FROM B)
FROM C, B
WHERE C.src = B.src and C.dest = B.dest
  and C.id = B.id
```
Example - V: Queries for Network Traffic Management
• Continuous query for monitoring the source-destination pairs in the top 5% in terms of backbone traffic.
```
WITH Load AS
  (SELECT src, dest, sum(len) AS traffic
   FROM B
   GROUP BY src, dest)
SELECT src, dest, traffic
FROM Load AS L1
WHERE (SELECT count(*)
       FROM Load AS L2
       WHERE L2.traffic < L1.traffic) >
      (SELECT 0.95 * count(*) FROM Load)
ORDER BY traffic
```
Query Processing - I
• Continuous query plans:
– push-based approaches - data is pushed to the DSMS by the source(s)
– traditional DBS approaches are pull-based, leading to queue problems (overflows)
– open problems: redesign disk-based data structures and indices
• Processing multiple continuous queries:
– sharing query plans
– indexing query predicates
• Distributed query processing:
– multiple data streams arriving from remote sources
=> distributed optimization strategies
Query Processing - II
(1) Non-blocking operators - 3 techniques for unblocking stream operators:
• windowing
• incremental evaluation
• exploiting stream constraints (punctuations)
(2) Approximate algorithms – if (1) does not work, compact stream summaries may be stored and approximate queries may be run over the summaries
-> Trade-off: accuracy vs. amount of memory
Methods of generating synopses: counting methods, hashing methods, sampling methods, sketches, wavelet transformations
(3) Sliding window algorithms:
• windowed sampling
• symmetric hash join (see the sketch after this list)
(4) On-line data stream mining (single pass): computing stream signatures, decision trees, forecasting, k-medians clustering, nearest neighbour queries, regression analysis, similarity detection, pattern matching
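The symmetric hash join listed under (3) keeps one hash table per input and probes the opposite table on every arrival, so join results are produced without blocking. A compact sketch of the idea (window eviction omitted for brevity):

```python
from collections import defaultdict

class SymmetricHashJoin:
    """Pipelined equi-join: each arriving tuple is inserted into its
    side's hash table and immediately probed against the other side."""
    def __init__(self):
        self.tables = {'L': defaultdict(list), 'R': defaultdict(list)}

    def insert(self, side, key, tup):
        other = 'R' if side == 'L' else 'L'
        self.tables[side][key].append(tup)
        matches = self.tables[other][key]
        return [(tup, m) if side == 'L' else (m, tup) for m in matches]

j = SymmetricHashJoin()
print(j.insert('L', 1, 'l1'))  # [] -- no right-side match yet
print(j.insert('R', 1, 'r1'))  # [('l1', 'r1')]
print(j.insert('L', 1, 'l2'))  # [('l2', 'r1')]
```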
Approximate Query Answering Methods
- **Sliding windows**
- Only over sliding windows of *recent stream data*
- Approximation but often more desirable in applications
- **Batched processing, sampling and synopses**
- **Batched** if update is fast but computing is slow
- Compute periodically, not very timely
- **Sampling** if update is slow but computing is fast
- Compute using sample data, but not good for joins, etc.
- **Synopsis** data structures
- Maintain a small *synopsis* or *sketch* of data
- Good for querying historical data
- **Blocking operators, e.g., sorting, avg, min, etc.**
- **Blocking** if unable to produce the first output until seeing the entire input
Query Optimization
- DBS: table based cardinalities used in query optimization => Problematic in a streaming environment
- Cost metrics and statistics: accuracy and reporting delay vs. memory usage, output rate, power usage
- Query optimization: query rewriting to minimize cost metric, adaptive query plans, due to changing processing time of operators, selectivity of predicates, and stream arrival rates
- Query optimization techniques
- stream rate based
- resource based
- QoS based
- Continuously adaptive optimization
- Possibility that objectives cannot be met:
- resource constraints
- bursty arrivals under limited processing capability
Traditional Query Optimization
Statistics Manager:
Periodically collects statistics, e.g., table sizes, histograms
Estimated statistics
Optimizer:
Finds “best” query plan to process this query
Which statistics are required
Executor:
Runs chosen plan to completion
Query
Chosen query plan
[Babu 2004]
STREAM - Optimizing CQs
- Continuous queries are long-running
- Stream characteristics can change over time
- Data properties: Selectivities, correlations
- Arrival properties: Bursts, delays
- System conditions can change over time
➔ Performance of a fixed plan can change significantly over time
➔ Adaptive processing: find best plan for current conditions
[Babu 2004]
STREAM - Traditional Optimization → StreaMon
Profiler:
Monitors current stream and system characteristics
Estimated statistics
Reoptimizer:
Ensures that plan is efficient for current characteristics
Executor:
Executes current plan
Which statistics are required
Decisions to adapt
Combined in part for efficiency
[Babu 2004]
STREAM - Pipelined Filters
- Order commutative filters over a stream
- Example: Track HTTP packets with destination address matching a prefix in given table and content matching "*.ida"
- Simple to complex filters
- Boolean predicates
- Table lookups
- Pattern matching
- User-defined functions
- Joins as we will see later
[Babu 2004]
STREAM - Metrics for an Adaptive Algorithm
- Speed of adaptivity
- Detecting changes and finding new plan
- Run-time overhead
- Collecting statistics, reoptimization, plan migration
- Convergence properties
- Plan properties under stable statistics
[Babu 2004]
Optimization Objectives
• Rate-based optimization:
– Take into account the rates of the streams in the query evaluation tree during optimization
– Rates can be known and/or estimated
• Maximize tuple output rate for a query
– Instead of seeking the least cost plan, seek the plan with the highest tuple output rate
Rate Based Optimization – I
- Output rate of a plan: number of tuples produced per unit time
- Derive expressions for the rate of each operator
- Combine expressions to derive expression $r(t)$ for the plan output rate as a function of time:
- Optimize for a specific point in time in the execution
- Optimize for the output production size
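As a simple illustration (ours, not from the slides): a selection with selectivity $\sigma_i$ over input arriving at rate $r_{in}(t)$ produces output at rate $\sigma_i \, r_{in}(t)$, so for a pipeline of $k$ selections the plan output rate is

$$r(t) = \Big(\prod_{i=1}^{k} \sigma_i\Big)\, r_{in}(t)$$

and candidate plans are compared by $r(t)$ at a chosen point in time rather than by estimated total cost.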
Rate Based Optimization – II
• Optimize for resource (memory) consumption
• A query plan consists of interacting operators, with each tuple passing through a sequence of operators
• When streams are bursty, tuple backlog between operators may increase, affecting memory requirements
• Goal: scheduling policies that minimize resource consumption
Operator Scheduling
- When tuple arrival rate is uniform:
- a simple FIFO scheduling policy suffices
- let each tuple flow through the relevant operators
<table>
<thead>
<tr>
<th>Time</th>
<th>Greedy</th>
<th>FIFO</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>1</td>
<td>1.2</td>
<td>1.2</td>
</tr>
<tr>
<td>2</td>
<td>1.4</td>
<td>2.0</td>
</tr>
<tr>
<td>3</td>
<td>1.6</td>
<td>2.2</td>
</tr>
<tr>
<td>4</td>
<td>1.8</td>
<td>3.0</td>
</tr>
</tbody>
</table>
Average arrival rate: 0.5 tuples/sec
FIFO: tuples processed in arrival order
Greedy: if a tuple is queued before operator s1, schedule it; otherwise process the tuples queued before s2
Progress Chart: Chain Scheduling
- assign priorities to operators equal to the slope of the lower envelope segment to which the operator belongs
- Schedule the operator with the highest priority
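A simplified sketch of the priority assignment (our own; it assumes each operator is characterized by a per-tuple processing time and a selectivity, and builds the progress chart from those):

```python
def chain_priorities(ops):
    """ops: [(time_i, selectivity_i), ...] in plan order.
    Build the progress chart (cumulative time vs. remaining tuple
    size), walk its lower envelope, and give each operator the
    steepness of the envelope segment it lies on."""
    pts = [(0.0, 1.0)]
    for t, s in ops:
        x, y = pts[-1]
        pts.append((x + t, y * s))
    prios, i = [], 0
    while i < len(pts) - 1:
        # lower envelope: steepest (most negative) slope reachable from i
        best_j = min(range(i + 1, len(pts)),
                     key=lambda j: (pts[j][1] - pts[i][1]) / (pts[j][0] - pts[i][0]))
        slope = (pts[best_j][1] - pts[i][1]) / (pts[best_j][0] - pts[i][0])
        prios += [-slope] * (best_j - i)  # all operators on this segment
        i = best_j
    return prios

# two fast, selective operators followed by a slow one
print(chain_priorities([(1, 0.5), (1, 0.5), (4, 0.1)]))  # [0.5, 0.25, 0.05625]
```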
QoS Based Optimization
• Query and operator scheduling based on QoS requirements
• Two-level scheduling policy:
– Operator batching: superbox selection, superbox traversal based on avg throughput, avg latency, minimizing memory
– Tuple batching
Optimization Objectives
• Multi-way join techniques proposed:
– start with a fixed plan
– moderately adjust it as tuples arrive
• Eddies framework for adaptive query optimization:
– Continuously adapt the evaluation order as tuples arrive
Load Shedding
• When input stream rate exceeds system capacity a stream manager can shed load (tuples)
• Load shedding affects queries and their answers
• Introducing load shedding in a data stream manager is a challenging problem
• Random and semantic load shedding
Load Shedding in Aurora
• QoS for each application as a function relating output to its utility
– Delay based, drop based, value based
• Techniques for introducing load shedding operators in a plan such that QoS is disrupted the least
– Determining when, where and how much load to shed
Load Shedding in STREAM
• Formulate load shedding as an optimization problem for multiple sliding window aggregate queries
– Minimize inaccuracy in answers subject to output rate matching or exceeding arrival rate
• Consider placement of load shedding operators in query plan
– Each operator sheds load uniformly with probability $p_i$
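A minimal sketch of uniform random shedding (hypothetical, not the STREAM placement algorithm itself); dropping with probability $p_i$ lets downstream counts and sums be scaled by $1/(1-p_i)$ to remain approximately unbiased:

```python
import random

def shed(stream, p_drop, rng=random.Random(42)):
    """Drop each tuple independently with probability p_drop."""
    for tup in stream:
        if rng.random() >= p_drop:
            yield tup

kept = list(shed(range(10_000), p_drop=0.75))
print(len(kept))               # roughly 2500 tuples survive
print(len(kept) / (1 - 0.75))  # scaled count: roughly 10000
```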
Multi-query Processing
• In traditional multi-query optimization:
– sharing (of expressions, results, etc.) among queries can lead to improved performance
• Similar issues arise when processing queries on streams:
– sharing between select/project expressions
– sharing between sliding window join expressions
Grouped Filters
<table>
<thead>
<tr>
<th>Select Predicates for Stream S.A</th>
</tr>
</thead>
<tbody>
<tr>
<td>S.A > 1</td>
</tr>
<tr>
<td>S.A > 7</td>
</tr>
<tr>
<td>S.A > 11</td>
</tr>
<tr>
<td>S.A < 3</td>
</tr>
<tr>
<td>S.A < 5</td>
</tr>
<tr>
<td>S.A = 6</td>
</tr>
<tr>
<td>S.A = 8</td>
</tr>
</tbody>
</table>
Tuple S.A = 8
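A sketch of how such a grouped filter can index the predicates above (our illustration): range predicates go into sorted arrays probed by binary search, equality predicates into a hash set, so an arriving tuple is matched against all queries at once:

```python
import bisect

class GroupedFilter:
    def __init__(self, gt, lt, eq):
        self.gt = sorted(gt)  # constants c of "S.A > c" predicates
        self.lt = sorted(lt)  # constants c of "S.A < c" predicates
        self.eq = set(eq)     # constants c of "S.A = c" predicates

    def match(self, a):
        hits = [f'S.A > {c}' for c in self.gt[:bisect.bisect_left(self.gt, a)]]
        hits += [f'S.A < {c}' for c in self.lt[bisect.bisect_right(self.lt, a):]]
        if a in self.eq:
            hits.append(f'S.A = {a}')
        return hits

f = GroupedFilter(gt=[1, 7, 11], lt=[3, 5], eq=[6, 8])
print(f.match(8))  # ['S.A > 1', 'S.A > 7', 'S.A = 8']
```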
Shared Window Joins
Consider the two queries:
```
select sum(A.length)
from Traffic1 A [window 1 hour],
Traffic2 B [window 1 hour]
where A.destIP = B.destIP
```
```
select count (distinct A.sourceIP)
from Traffic1 A [window 1 min],
Traffic2 B [window 1 min]
where A.destIP = B.destIP
```
- Great opportunity for optimization as windows are highly shared
- Strategies for scheduling the evaluation of shared joins:
- Largest window only
- Smallest window first
- Process at any instant the tuple that is likely to benefit the largest number of joins (maximize throughput)
Stream Data Mining
• Stream mining
– It shares most of the difficulties with stream querying
– Patterns are hidden and more general than querying
– It may require exploratory analysis
• Not necessarily continuous queries
• Stream data mining tasks
– Multi-dimensional on-line analysis of streams
– Mining outliers and unusual patterns in stream data
– Clustering data streams
– Classification of stream data
Stream Mining - Challenges
• Most stream data are at a pretty low level or multi-dimensional in nature: needs multi-level/multi-dimensional (ML/MD) processing
• Analysis requirements
– Multi-dimensional trends and unusual patterns
– Capturing important changes at multi-dimensions/levels
– Fast, real-time detection and response
– Comparing with data cube: Similarity and differences
• Stream (data) cube or stream OLAP: Is this feasible?
– Can we implement it efficiently?
[Han 2004]
Examples: Multi-Dimensional Stream Analysis
• Analysis of Web click streams
– Raw data at low levels: seconds, web page addresses, user IP addresses, …
– Analysts want: changes, trends, unusual patterns, at reasonable levels of details
– E.g., *Average clicking traffic in North America on sports in the last 15 minutes is 40% higher than that in the last 24 hours.*
• Analysis of power consumption streams
– Raw data: power consumption flow for every household, every minute
– Patterns one may find: *average hourly power consumption surges up 30% for manufacturing companies in Chicago in the last 2 hours today than that of the same day a week ago*
Stream Data Reduction
Challenges of OLAPing stream data
- Raw data cannot be stored
- Simple aggregates are not powerful enough
- History shape and patterns at different levels are desirable: multi-dimensional regression analysis
Proposal
- A scalable multi-dimensional stream “data cube” that can aggregate regression model of stream data efficiently without accessing the raw data
Stream data compression
- Compress the stream data to support memory- and time-efficient multi-dimensional regression analysis
[Han 2004]
Data Warehouse: Stream Cube Architecture
- A tilt time frame
- Different time granularities (second, minute, quarter, hour, day, week, ...)
- Critical layers
- Minimum interest layer (m-layer)
- Observation layer (o-layer)
- User: watches at the o-layer and occasionally needs to drill down to the m-layer
- Partial materialization of stream cubes
- Full materialization: too space and time consuming
- No materialization: slow response at query time
- Partial materialization: what do we mean “partial”?
- On-line materialization
- Materialization takes precious resources and time
- Only incremental materialization (with slide window)
- Only materialize “cuboids” of the critical layers?
- Some intermediate cells that should be materialized
- Popular path approach vs. exception cell approach
- Materialize intermediate cells along the popular paths
- Exception cells: how to set up exception thresholds?
- Notice exceptions do not have monotonic behaviour
- Computation problem
- How to compute and store stream cubes efficiently?
- How to discover unusual cells between the critical layer?
Data Warehouse: Stream Cube Computation
• Cube structure from m-layer to o-layer
• Three approaches
– All cuboids approach
• Materializing all cells (too much in both space and time)
– Exceptional cells approach
• Materializing only exceptional cells (saves space but not time to compute and definition of exception is not flexible)
– Popular path approach
• Computing and materializing cells only along a popular path
• Using H-tree structure to store computed cells (which form the stream cube—a selectively materialized cube)
[Han 2004]
Other Approaches for Mining Unusual Patterns in Stream Data
• Beyond multi-dimensional regression analysis
– Other approaches can be effective for mining unusual patterns
• Multi-dimensional gradient analysis of multiple streams
– Gradient analysis: finding substantial changes (notable gradients) in relevance to other dimensions
– E.g., those stocks that increase over 10% in the last hour
• Clustering and outlier analysis for stream mining
– Clustering data streams
– History-sensitive, high-quality incremental clustering
• Decision tree analysis of stream data
– Evolution of decision trees
– Incremental integration of new streams in decision-tree induction
[Han 2004]
Research Problems: Stream Classification
- What to do when the decision tree needs dramatic restructuring?
- Especially when new data is rather different from the existing model
- Efficient detection of outliers (far away from majority) using constructed models
- Weighted by history of the data: pay more attention to new data?
- Mining evolutions and changes of models?
- Multi-dimensional decision tree analysis?
- Stream classification with other classification approaches?
- Constraint-based classification with data streams?
[Han 2004]
Research Problems: Stream Data Mining
• Stream data mining: should it be a general approach or application-specific ones?
– Do stream mining applications share common core requirements and features?
• Killer applications in stream data mining
• General architectures and mining language
• Multi-dimensional, multi-level stream data mining
– Algorithms and applications
• How will stream mining make good use of user-specified constraints?
• Stream association and correlation analysis
– Measures: approximation? Without seeing the global picture?
– How to mine changes of associations?
Outline
• Introduction:
– What are DSMS? (terms)
– Why do we need DSMS? (applications)
• Example 1:
– Network monitoring with TelegraphCQ
• Concepts and issues:
– Architecture(s)
– Data modeling
– Query processing and optimization
– Data reduction
– Stream Mining
• Overview of existing systems
• Example 2:
– DSMS for sensor networks
• Summary:
– Open issues
– Conclusions
Systems
• **Aurora** (Brandeis, Brown, MIT, [http://www.cs.brown.edu/research/aurora](http://www.cs.brown.edu/research/aurora)): workflow-oriented system, sensor monitoring, dataflow
• **COUGAR** (Cornell, [http://www.cs.cornell.edu/database/cougar](http://www.cs.cornell.edu/database/cougar)): sensor database, time series
• **GigaScope** (AT&T): distributed network monitoring architecture, proprietary system
• **Hancock** (AT&T): telecom streams
• **NiagaraCQ** (OGI/Wisconsin, [http://www.cs.wisc.edu/niagara](http://www.cs.wisc.edu/niagara)): continuous XML query system for dynamic web content
• **OpenCQ** (Georgia Tech, [http://disl.cc.gatech.edu/CQ](http://disl.cc.gatech.edu/CQ)): continuous query system for monitoring streaming web content, triggers, incr. view maintenance
• **StatStream** ([http://cs.nyu.edu/cs/faculty/shasha/papers/statstream.html](http://cs.nyu.edu/cs/faculty/shasha/papers/statstream.html)): multi-stream monitoring system for on-line statistics
• **STREAM** (Stanford, [http://www-db.stanford.edu/stream](http://www-db.stanford.edu/stream)): general-purpose relation-based system
• **Streaminer** (UIUC): stream data mining project
• **Tapestry** (Xerox): pub/sub content-based filtering
• **TelegraphCQ** (UC Berkeley, [http://telegraph.cs.berkeley.edu](http://telegraph.cs.berkeley.edu)): adaptive engine for sensors, continuous query processing system
• **Tradebot** ([www.tradebot.com](http://www.tradebot.com)): stock tickers & streams
• **Tribeca** (Bellcore): network monitoring, early on-line Internet traffic monitoring tool
Aurora
• Data processing system targeted towards monitoring applications:
– Streams: for each monitoring task DBA adds 1-n triggers into trigger network
– Large network of triggers
– Imprecise data
– Real-time requirements
• Specified set of operators, connected in a data flow graph
• Each trigger is data flow graph (each node is one of seven built-in operators)
• Optimization of:
– Data flow graph
– Compile-time and run-time optimization of trigger network
• Three query modes (continuous, ad-hoc, view)
• Detects resource overload: accepts QoS specifications and attempts to optimize QoS for outputs produced
• Real-time scheduling, introspection and load shedding
GigaScope
- Specialized stream database for network applications
- GSQL for declarative query specifications: pure stream query language (stream input/output)
- Uses ordering attributes in IP streams (timestamps and their properties) to turn blocking operators into non-blocking ones
- The GSQL processor is a code generator
- Query optimization uses a two level hierarchy
Hancock
- A C-based domain specific language which facilitates transactor signature extraction from transactional data streams
- Support for efficient and tunable representation of signature collections
- Support for custom scalable persistent data structures
- Elaborate statistics collection from streams
NiagaraCQ
- CQs for monitoring persistent data sets distributed over WAN
- Scalability (# queries) by grouping CQs for efficient evaluation
- Problem of blocking operators in query plans for streams
OpenCQ
• CQs for monitoring persistent data sets distributed over WAN
• QP based on incremental view maintenance
STREAM
• General purpose stream data manager
– Data streams and stored relations
• CQL (continuous query language) for declarative query specification
• Timestamps in streams
• Flexible query plan generation
• Query processing architecture
• Resource management:
– Operator scheduling
– Graceful approximation: can handle high data rates
• Static and dynamic approximations
Tapestry
• CQs for content-based filtering
– Over append-only database containing email and bulletin board messages
• Restricted subset of SQL
– To guarantee efficient evaluation and append-only results
Telegraph
- CQ processing system
- Uses adaptive query engine
- Query execution strategies over data streams generated by sensors
- Processing techniques for multiple CQs
- Support for stream oriented operators
- Support for adaptivity in query processing
- Optimization
- Various aspects of optimized multi-query stream processing
Tribeca
- Restricted querying capability over network packet streams
System Comparison
<table>
<thead>
<tr>
<th>System</th>
<th>Data Stream Architecture</th>
<th>Data Model</th>
<th>Query Language</th>
<th>Query Answers</th>
<th>Query Plan</th>
</tr>
</thead>
<tbody>
<tr>
<td>Aurora</td>
<td>low-level</td>
<td>RS-in RS-out</td>
<td>Operators</td>
<td>approximate</td>
<td>QoS-based, load shedding</td>
</tr>
<tr>
<td>Gigascope</td>
<td>two level (low, high)</td>
<td>S-in S-out</td>
<td>GSQL</td>
<td>exact</td>
<td>decomposition, avoid drops</td>
</tr>
<tr>
<td>Hancock</td>
<td>High-level</td>
<td>RS-in R-out</td>
<td>Procedural</td>
<td>exact, signatures</td>
<td>optimize for I/O, process blocks</td>
</tr>
<tr>
<td>STREAM</td>
<td>low-level</td>
<td>RS-in RS-out</td>
<td>CQL</td>
<td>approximate</td>
<td>optimize space, static analysis</td>
</tr>
<tr>
<td>Telegraph</td>
<td>high-level</td>
<td>RS-in RS-out</td>
<td>SQL-based</td>
<td>exact</td>
<td>adaptive plans, multi-query</td>
</tr>
</tbody>
</table>
Example 1: Traffic Analysis
- The need to analyze Internet traffic is increasing ....
- .... and so is the number of tools for this
- Examples:
- ISPs monitor service levels, look for bottlenecks, etc.
- development of new protocols, like P2P
- Basic structure of tools:
Traffic Analysis (cont.)
- Performing traffic analysis to gain new knowledge is an iterative process: packets are captured from a network link and analyzed; the analysis yields new insights, which in turn drive the development of new analyses
Expectations
• Be helpful for typical traffic analysis tasks:
– the load of a system
• how often are certain ports of a server, like FTP or HTTP, contacted
• which share of bandwidth is used by different applications
• which departments use how much bandwidth on the university backbone
– characteristics of flows
• distribution of life time and size of flows
• relation between number of lost packets and life time of flows
• what are the reasons for throughput limitations, or
– characteristics of sessions:
• how long do clients interact with a web server
• which response time do clients accept from servers
• how long are P2P clients on-line after they have successfully downloaded a file
Expectations (cont.)
- Allow online and offline analysis
- Manage data and analyze data with the same tool
- Facilitate development and reuse of analysis components
Expectations (cont.)
• Provide sufficient performance:
– idealized gigabit/s link
• all packets 1500 byte, TCP/IP header 64 byte
• 42 megabit/s of header information
– more realistic: compression of 9:1 or less
• approx. 880 megabit/s on gigabit/s link
• approx. 11 megabit/s for 100 megabit/s network
Approach
• Public domain DSMS (fall 2003):
– TelegraphCQ
– Aurora ... only source tree, complete??
• Student project by A. Bergamini & G. Tulo:
– install TelegraphCQ
– connect it to wrappers, i.e., sources
– model TCP traces/streams
– develop queries for simple but typical tasks
– try to re-implement an existing complex tool
– identify performance bounds
TelegraphCQ
• Characterization of it’s developers:
– “a system for continuous dataflow processing”
– “aims at handling large streams of continuous queries over high-volume highly variable data streams”
• Extends PostgreSQL
– adaptive query processing operators
– shared continuous queries
– data ingress operations
TelegraphCQ Architecture
- Phase 1, data acquisition: sources (e.g., TCPdump) feed TCQ wrappers
- Phase 2, continuous query execution: a shared memory infrastructure connects the TCQ clearing house, back end, and front end
- Phase 3, presentation of results to the client
Continuous Queries in TCQ
• Data streams are defined in DDL with CREATE STREAM (like tables)
```
SELECT <select_list>
FROM <relation_and_pstream_list>
WHERE <predicate>
GROUP BY <group_by_expressions>
WINDOW stream[interval], ...
ORDER BY <order_by_expressions>
```
Continuous Queries in TCQ (cont.)
• Restrictions in TelegraphCQ 0.2 alpha release [9]:
– windows can only be defined over streams (not for PostgreSQL tables)
– **WHERE** clause qualifications that join two streams may only involve attributes, not attribute expressions or functions
– **WHERE** clause qualifications that filter tuples must be of the form attribute operand constant
– **WHERE** clause may only contain **AND** (not **OR**); sub queries are not allowed
– **GROUP BY** and **ORDER BY** clauses are only allowed in window queries
Stream Definition
```
CREATE STREAM p6trace.tcp (
  ip_src cidr, ip_dst cidr, hlen bigint, tos int, length bigint,
  id bigint, frag_off bigint, ttl bigint, prot int, ip_hcsum bigint,
  port_src bigint, port_dst bigint, sqn bigint, ack bigint,
  tcp_hlen bigint, flags varchar(10), window bigint, tcp_csum bigint,
  tcqtime timestamp TIMESTAMP COLUMN
) type ARCHIVED;
```
Task 1
• How many packets have been sent during the last five minutes to certain ports?
• Store all ports of interests in a table and join with the stream
```
CREATE TABLE services (port bigint, counter bigint);

SELECT services.port, count(*)
FROM p6trace.tcp, services
WHERE p6trace.tcp.port_dst = services.port
GROUP BY services.port
WINDOW p6trace.tcp ['5 min'];
```
Task 2
- How many bytes have been exchanged on each connection during the last minute?
- Simple heuristic to identify connections:
- during a one minute window all packets with the same sender and receiver IP addresses and port numbers belong to the same connection
```
SELECT ip_src, port_src, ip_dst, port_dst,
       sum(length - hlen - tcp_hlen)
FROM p6trace.tcp
GROUP BY ip_src, port_src, ip_dst, port_dst
WINDOW p6trace.tcp ['1 min'];
```
Task 3
• How many bytes are exchanged over the different connections during each week?
• Two problems to handle this in a CQ:
– GROUP BY clause can only be used together with a WINDOW clause
• window smaller than one week
• payload of each packet would contribute several times to intermediate results
• how to remove this redundancy?
• tumbling or jumping windows are needed
– identification of connections
• simple heuristic from task 2 does not work
• boils down to the generic problem of association identification
Identification of Associations
• Use address fields and rules
• Example: TCP connections
– GROUP BY addresses only
– rule: if $t_n - t_1 < T$ then same connection, else new connection
Identification of Associations (cont.)
A priori no address values are known
Check for each new packet:
- is address combination known?
NO: insert new entry
YES: is it a new or old connection?
OLD: update statistics
NEW: insert new connection
<table>
<thead>
<tr>
<th>IP d.</th>
<th>IP s.</th>
<th>Port d.</th>
<th>Port s.</th>
<th>Statistics</th>
<th>Time</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>2</td>
<td>8</td>
<td>9</td>
<td>1</td>
<td>( t_1 )</td>
</tr>
<tr>
<td>3</td>
<td>4</td>
<td>5</td>
<td>6</td>
<td>1</td>
<td>( t_2 )</td>
</tr>
<tr>
<td>1</td>
<td>2</td>
<td>8</td>
<td>9</td>
<td>1</td>
<td>( t_n )</td>
</tr>
</tbody>
</table>
Identification of Associations (cont.)
With a single pass over the data this is only possible with sub-queries in SQL
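A sketch of this single-pass table maintenance in ordinary code (ours; the field names follow the p6trace.tcp schema loosely):

```python
def identify_connections(packets, T=60.0):
    """packets: (ip_src, port_src, ip_dst, port_dst, length, time),
    in time order. Same address 4-tuple within T seconds -> same
    connection; a larger gap starts a new one (the rule above)."""
    table = {}  # 4-tuple -> [bytes, last_time, connection_count]
    for src, psrc, dst, pdst, length, t in packets:
        key = (src, psrc, dst, pdst)
        if key not in table:
            table[key] = [length, t, 1]       # unknown addresses
        elif t - table[key][1] < T:
            table[key][0] += length           # old connection: update
            table[key][1] = t
        else:
            table[key] = [length, t, table[key][2] + 1]  # new connection
    return table

pkts = [('1.1.1.1', 80, '2.2.2.2', 5000, 100, 0.0),
        ('1.1.1.1', 80, '2.2.2.2', 5000, 200, 10.0),
        ('1.1.1.1', 80, '2.2.2.2', 5000, 150, 500.0)]  # gap > T
print(identify_connections(pkts))
```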
Task 4
- Which department has used how much bandwidth on the university backbone in the last five minutes?
- Store address ranges of all departments in a table
- Check with “>>” which address range contains the IP address of the packet in the data stream
```
CREATE TABLE departments (name varchar(30), prefix cidr, traffic bigint);

SELECT departments.name, sum(length - hlen - tcp_hlen)
FROM p6trace.tcp, departments
WHERE departments.prefix >> p6trace.tcp.ip_src
GROUP BY departments.name
WINDOW p6trace.tcp ['5 min'];
```
- TelegraphCQ prototype produces incorrect results if “>>” is used in a join, but works correctly with “=”
Task 4 (cont.)
• "Solution": use "=" and enumerate all addresses in a stored table
```
CREATE TABLE departments (name varchar(30), ip_addr cidr, traffic bigint);

SELECT departments.name, sum(length - hlen - tcp_hlen)
FROM p6trace.tcp, departments
WHERE departments.ip_addr = p6trace.tcp.ip_src
GROUP BY departments.name
WINDOW p6trace.tcp ['5 min'];
```
Runtime Verification for the Web
A Tutorial Introduction to Interface Contracts in Web Applications
Sylvain Hallé¹ and Roger Villemaire²
¹ Université du Québec à Chicoutimi, Canada
shalle@acm.org
² Université du Québec à Montréal, Canada
villemaire.roger@uqam.ca
Abstract. This tutorial presents an introduction to the monitoring of web applications. These applications run in a user’s web browser and exchange requests and responses with a server in the background to update their display. A demo application, called the Beep Store, illustrates why complex properties on this exchange must be verified at runtime. These properties can be formalized using an extension of Linear Temporal Logic called LTL-FO+. The tutorial concludes with the presentation of BeepBeep, a lightweight runtime monitor for web applications.
1 Introduction
In the past decade, numerous applications, such as Facebook and Google Mail, have become part of popular culture. These so-called “web” applications come into the scope of a programming paradigm called cloud computing, where the user’s web browser is responsible for loading from a server and displaying the various elements of the application’s page. The user can interact with some of these elements, which in turn trigger the browser to send further requests to the server, and update the display.
To be properly understood by their respective recipients, each request and each response is expected to follow a specific structure, where the possible operations, parameters and values are precisely defined. In many cases, the browser-server exchange also moves forward according to a protocol, where the validity of a request depends on past events.
The technologies over which web applications are built were not designed with complex interactions in mind. Consequently, they do not provide facilities to define or enforce such an “interface contract”. Ensuring a correct match between the browser’s and the server’s behaviour is an open problem, currently left as the developer’s sole responsibility. Recording the sequence of requests and responses, and providing a means of preventing contract violations from occurring is an appealing prospect in this regard.
The present tutorial summarizes our experiments in the enforcement of interface contracts in web applications. Its interest lies primarily in providing a
self-contained introduction to a domain that meets many favourable conditions for the application of runtime verification techniques. To this end, Section 2 presents a running web application typical of many real-world web services we studied in the past; Section 3 discusses the interface contract for this application. Section 4 introduces a formal language, LTL-FO+, expressive enough for the constraints encountered, and describes how BeepBeep, a lightweight LTL-FO+ runtime monitor, can be integrated into the initial application to effectively enforce the contract.
2 Anatomy of a Web Application: The Beep Store
For the purpose of this tutorial, we designed a simple web application called the Beep Store that will be used as a running example to illustrate web-based runtime verification concepts.
2.1 End-User Perspective
The Beep Store allows registered users to browse a fictional collection of books and music, and to manage a virtual shopping cart made of these elements. It runs out-of-the-box in any modern web browser pointed at the store’s URL.
Figure 1 shows a typical application screen. At any time, users can use the search box at the top right of the screen to type any keyword. Similarly, they can click on the “Search an item” menu element at the left to summon a more complete search pane, where they can restrict the search to a specific artist, a specific title, and split the result into pages of a fixed number of entries.
Pressing the “Go” button retrieves from the server the list of all relevant entries. Users then have the option of adding an item from that list into a personal inventory called a “shopping cart”. To do so, they must first log into the application (using the “Sign in” link at the top of the page) and provide their username and password. A shopping cart is automatically created when users add their first item into it.
1 http://beepbeep.sourceforge.net/examples/beepstore
Fig. 1. The Beep Store’s web interface
Once a cart is created, a “Your Cart” button (not shown) appears at the right of the search box. Clicking this button opens the cart pane, which displays the list of all items currently in the user’s cart, their quantity and total price. Buttons allow the user to edit the quantity for an item, or remove it altogether. Each action updates the cart’s list on the fly.
Such a scenario is a purposefully condensed version of popular commercial web sites, such as Amazon or eBay. Indeed, although the Beep Store is a demo application, all its functionalities —and constraints on its use, as we shall see— have been found in at least one of the real-world web services we studied in the past [14]. This includes in particular the User-Controlled Lightpath Service [7], the Amazon e-Commerce Service [1], and the PayPal Express Checkout Service [2].
2.2 Internal Workings
Asynchronous JavaScript and XML (Ajax) refers to the collection of technologies used to develop such rich and interactive web applications. The execution of an Ajax application in a web browser is a straightforward process. First, the client’s browser loads the application’s page, beepstore.html. It uses it to render the page’s content by interpreting its markup elements: text boxes, buttons, menu elements, headings, images. The header of this HTML file contains a link to a JavaScript document hosted in the same directory, called beepstore.js.
The JavaScript functions it contains are used for three purposes. First, it associates snippets of code to some page elements. For example, a button in the HTML file can be linked to a JavaScript function through the onClick event; any click on this button triggers the execution of the associated JavaScript function. Second, the web browser provides a JavaScript object, called document, whose methods can be used to access the HTML page’s elements and modify their content and appearance dynamically. Hence, the button’s onClick event can toggle the visibility of some page section that was previously hidden, producing an effect similar to a pop-up window. With proper coding, JavaScript can reproduce in the browser most of the look-and-feel of a traditional desktop application.
The last use of JavaScript is for the handling of requests and responses over the network. This is done through a standard object called XMLHttpRequest, also provided by the local browser.
2.3 Interaction through XML
The second part of an Ajax application is a script running on the server side, answering requests initiated by the local browser’s XMLHttpRequest object. In the case of the Beep Store, a PHP script called `beepstore.php` acts as the application’s front door on the server. Data is exchanged using a standard markup called XML. Each XML document sent and received is called a *message*, and the communication between the browser and the server hence generates a message sequence.
---
2 An exception is Internet Explorer, which exposes the same functionalities under a different object called MSXML. Their differences are superficial.
Figure 2 shows the structure of two typical request-response pairs of messages sent by the Beep Store’s application to its server. For instance, Figure 2(a) shows the message sent by the browser when a user clicks on the Login button: it includes an element called *Action* whose value indicates the name of the action to be executed by the server, and two additional parameters providing a *Username* and *Password*. The actual values inserted inside these two elements are dynamically fetched by the JavaScript function responsible for sending the Login message on the browser.
```
<Message>
<Action>Login</Action>
<Username>Sylvain</Username>
<Password>banana</Password>
</Message>
```
```
<Message>
<Action>LoginResponse</Action>
<SessionKey>123456</SessionKey>
</Message>
```
(a) Login (request) (b) Login (response)
```
<Message>
<Action>CartCreate</Action>
<SessionKey>123456</SessionKey>
<Items>
<Item>
<ItemId>123</ItemId>
<Quantity>1</Quantity>
</Item>
...
</Items>
</Message>
```
```
<Message>
<Action>CartCreateResponse</Action>
<SessionKey>123456</SessionKey>
<CartId>789123</CartId>
<Items>
<Item>
<ItemId>123</ItemId>
<Quantity>1</Quantity>
<Price>12.00</Price>
<Author>The Beatniks</Author>
<Title>Yelp!</Title>
</Item>
...
</Items>
</Message>
```
(c) Create a cart (request) (d) Create a cart (response)
*Fig. 2.* Examples of XML messages for the Beep Store
The server’s PHP script processes this request by checking that the name-password pair is contained in its user database. In such a case, it creates and records a new unique session key, and produces the response message shown in Figure 2(b). The JavaScript code on the client side parses it and keeps the session key in local memory for future requests.
Request and response messages for cart creation, shown in Figure 2(c)-(d), are more complex. In addition to the Action and SessionKey, the creation request includes a compound element, Items, itself made of one or more Item elements. Each item specifies an item ID (taken from the store’s catalogue) and the quantity of this item to be included in the cart. The response returned by the server repeats that information, provides a unique ID to the newly created cart, and adds pricing, title and author information for each item, as obtained from the store’s database.
2.4 The Beep Store as a Web Service
One can see how the exchange of XML messages outsources the application’s core functionalities to the server over the network, leaving the client with only the lighter, GUI-related processing. For example, database search and cart manipulations are handled by the server, which only sends the results of these operations to the browser for proper display. This architecture is appealing, if only for practical reasons: a browser-side search for an item would involve downloading the whole store’s catalogue on the client.
As a matter of fact, the server’s functionality is not limited to this particular web client: it is made publicly available as an instance of a web service. Any third-party developer can produce a working pair of HTML/JavaScript files and send requests to the Beep Store’s PHP script; provided that the requests are properly formed and sent in a reasonable sequence, the store’s script will serve them.
Similarly, a different server, accepting the same messages as the Beep Store, could be used indifferently by the web client. A web service can even send requests to another service. Ultimately, the vision of web services is to separate functionalities into simple, stand-alone units, communicating over the network through standardized mechanisms such as XML messaging. A web application is a particular case of this scenario consisting of a single browser-server pair.
3 Interface Contracts in Web Applications
The appealing modularity of web services is the source of one major issue: how can one ensure the interaction between each application and each service proceeds as was intended by their respective providers? Without any clear and mutual understanding of the acceptable requests and responses, an Ajax client might try to send a message that the server does not recognize, and vice versa. A correct interoperation between a client and a service is only guaranteed if both partners follow a well defined and enforceable interface contract.
3.1 The Beep Store Interface Contract
The source for such an interface contract invariably comes from the service’s documentation, intended for developers. The online documentation for the Beep Store³ is modelled after that of real-world web services, in particular the Amazon E-Commerce Service.
3 http://beepbeep.sourceforge.net/examples/beepstore/documentation
The first observable part of an interface contract that this documentation provides consists of the description of all the XML request and response messages for each operation, in a way similar to Figure 2. Any client and service must produce messages following the structure mentioned there.
In addition, accompanying text explains the semantics of each operation, and lists a number of conditions that must be fulfilled for each operation to be properly processed and return a response. Some of these constraints have been purposefully integrated into the Beep Store to faithfully reproduce behaviour found in some real-world web service we studied. Our prior work led us to divide these constraints into three categories:
**Data Constraints.** The first class of properties expresses constraints over the structure and values inside a single message at a time. For example, in the ItemSearch message:
P1. The element Page must be an integer between 1 and 20.
P2. The element Page is mandatory only if Results is present; otherwise it is forbidden.
These requirements go beyond the specification of a rigid XML structure: they also provide ranges for possible values, and even state that the presence of some element be dependent on the presence of another. Further data constraints could, for example, impose possible values for some element as a function of the value in another element —an example of such a constraint can be found in the Amazon E-Commerce Service [12].
**Control-Flow Constraints.** Other restrictions are related to the sequence in which operations are invoked. Any application introducing the concept of session, or manipulating persistent objects such as a shopping cart, includes control-flow constraints of that kind. For example:
P3. The Login request cannot be resent if its response is successful.
P4. All cart operations, such as CartCreate, must follow a successful LoginResponse.
These constraints introduce the notion of state into the application: the possible future messages allowed depend on what has happened in the past. Indeed, it does not make sense for a user to try to log in again after a successful login. Similarly, since shopping carts must be associated with a logged-in user, it is impossible to create such a cart without first logging in. An attempt at such operations hints at some programming flaw on the client side, and should be answered with an error message from the server.
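A deliberately simplified sketch of a runtime monitor for P3 and P4 (ours, not BeepBeep; it abstracts each message to its Action name and treats every LoginResponse as successful):

```python
def monitor_control_flow(actions):
    logged_in = False
    for a in actions:
        if a == 'Login' and logged_in:
            return 'P3 violated: Login resent after a successful login'
        if a == 'LoginResponse':
            logged_in = True
        if a.startswith('Cart') and not logged_in:
            return f'P4 violated: {a} before a successful LoginResponse'
    return 'OK'

print(monitor_control_flow(['Login', 'LoginResponse', 'CartCreate']))  # OK
print(monitor_control_flow(['CartCreate']))  # P4 violated
```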
**Data-Aware Constraints.** Furthermore, the Beep Store includes properties referencing data elements inside exchanged messages, such that these data elements are taken at two different moments in the execution and need to be compared. Properties having this characteristic have been dubbed “data-aware” temporal properties [15]. For example:
P5. There can be at most one active cart ID per session key.
P6. You cannot add the same item twice to the shopping cart.
Property 5 obviously forbids a client from invoking a `CartCreate` operation twice. However, it also requires that at any time, the `CartId` value found in a message be the same for all subsequent messages. This must be respected both by the client (which cannot try to sneak information about another cart by simply providing a different ID) and the server (which cannot change a cart’s ID after it has been communicated to the client).
Property 6, although seemingly counter-intuitive, has actually been found in the Amazon E-Commerce Service, as reported in [14]. The service requires that, to add one more of an existing item into a cart, the `CartEdit` operation be invoked on that item instead of repeating a `CartAdd` message. Therefore, this property entails that any `ItemId` appearing in a `CartAdd` message no longer appears in a future `CartAdd` (unless the item is found in a `CartRemove` message in between).
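In the same spirit, a sketch of a data-aware check for P6 (ours; it abstracts each message to an (action, item ID) pair, which is the kind of quantification over message contents that LTL-FO+ makes formal):

```python
def monitor_p6(trace):
    added = set()
    for action, item in trace:
        if action == 'CartAdd':
            if item in added:
                return f'P6 violated: item {item} added twice'
            added.add(item)
        elif action == 'CartRemove':
            added.discard(item)
    return 'OK'

print(monitor_p6([('CartAdd', 123), ('CartRemove', 123), ('CartAdd', 123)]))  # OK
print(monitor_p6([('CartAdd', 123), ('CartAdd', 123)]))  # P6 violated
```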
The reader is referred to the Beep Store documentation for a list of all constraints in the interface contract; further examples of constraints in other scenarios can be found in our earlier papers [13,15,14].
3.2 Issues with Current Technologies
The examples shown above represent a small portion of all the constraints imposed by the Beep Store. The interface contract for a typical web service is made of dozens of such properties. However, as numerous and well-documented as these properties are, the technologies over which web applications are built bring a number of issues when it comes to handling them.
**Free-Form Messages.** As such, there is no “web service protocol”. The closest one gets to such a concept is with the Simple Object Access Protocol (SOAP) \cite{20}, itself built as a special case of the HTTP protocol that web browsers have been using for decades. A SOAP request is little more than a collection of HTTP headers, followed by an XML payload formed of two mandatory sections: \texttt{Head} and \texttt{Body} (the XML documents in Figure 2 are sent inside the \texttt{Body}). Apart from these conditions, SOAP regards the payload as a free-form document. This entails that the message structure—the web equivalent of types in a classical programming language—is not even checked.
**Stateful Behaviour, Stateless Protocol.** HTTP is also a \textit{stateless} protocol, where each new request processed by the server is detached from previous ones, and unrelated to those that follow. At the time HTTP was designed, this characteristic was appealing for its simplicity of implementation and the limited resources it requires for processing a request. Yet, we have seen how the Beep Store, typical of many web applications, requires long-running interactions spanning multiple requests and responses, and where past requests determine current valid ones.
Since session logic is not carried transparently through the protocol, it must be explicitly handled by the application itself. This is why the Beep Store must simulate sessions through a sequence of individual request-response pairs, where a unique identifier created at the start of a session (the `SessionKey`) is repeated in each subsequent message. The session's state (shopping cart contents, user name) is written to persistent storage between requests and can be retrieved using this identifier.
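For illustration, a request later in a session could look like the following. The element paths match the queries used later in this tutorial (`Message/Action`, `Message/SessionKey`, `Message/CartId`, `Message/Items/Item`); the values are hypothetical, and the actual layout is fixed by the store's WSDL.

```
<Message>
  <Action>CartAdd</Action>
  <SessionKey>a1b2c3</SessionKey>
  <CartId>42</CartId>
  <Items>
    <Item>I-1001</Item>
  </Items>
</Message>
```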
**No Standardized Contract Notation.** It follows from these observations that most properties of an interface contract lie at a higher conceptual level than current web protocols. Their expression and enforcement should therefore be handled in an extra control layer on top of HTTP and SOAP.
The only part of interface contracts that made it to some form of standardization is the Web Service Description Language (WSDL) [9]. WSDL allows the creation of an auxiliary document that specifies the XML structure of each request and response accepted by a service. Existing software frameworks, such as Apache Axis [3], can generate template functions called stubs for each message. By communicating only through these auto-generated stubs, a client or server can be guaranteed to send only WSDL-compliant messages. The same stubs can also verify at runtime that any incoming message follows the WSDL specification.
While the generation of WSDL-based stubs and the runtime verification of message structures are now considered routine, the Beep Store shows that there is much more to interface contracts than checking XML message structures: WSDL runtime verification only traps violations of Property 1. No standardized language exists to express Properties 2-6; no framework helps build an application that complies with them, or traps their violations at runtime. A developer needs to peruse the service's natural-language documentation and check each constraint manually with a copious amount of tests.
To illustrate this fact, the Beep Store browser client can be turned into a deliberately faulty application. Its user interface contains a "Fault Parameters" pane, shown in Figure 3, which provides the complete list of constraints specified by the store's documentation. Normally, the client is robust and performs thorough checks of all these constraints before sending any message to the server. For example, once a shopping cart is created, it hides the "Create cart" button to avoid users creating a second one (see P5). Similarly, it hides the Login button once a user has successfully logged in (see P3). With the Fault pane, the user can tick the checkbox for any of these constraints, causing the application to bypass these measures and allow actions at inappropriate moments.
### 3.3 Particularities of Web Service Interface Contracts
Web service interface contracts bear many resemblances with temporal properties or contracts found in other domains. In object-oriented languages, some classes, such as Java's `File` or `Iterator`, also impose constraints on the sequence of method calls; these class contracts can be checked at runtime using existing monitoring tools.
Fig. 3. The Beep Store’s “Fault Parameters” pane allows the application to deliberately ignore some elements of its interface contract, causing the server to reply with an error message on purpose.
Similarly, research on trace validation applied to spacecraft test sequences unveiled constraints that correlate both data values and ordering of events [6]. This hints that existing solutions developed for other scenarios could be ported to the web service realm. However, web services exhibit a combination of characteristics that makes them unique.
**Data-Aware Dependencies.** Simplified versions of the contract properties could be verified using classical Petri nets, finite state automata or propositional linear temporal logic. However, many constraints can only be faithfully checked by taking into account dependencies between data parameters. Obviously, the data elements cannot be enumerated statically: Property 6 would have to be repeated for every item in the Beep Store's catalogue, which would then need to be known in advance.
Data-aware dependencies do not merely require the access to parameters inside a message; they also need such values to be kept, and compared at a later time with values inside another message. Moreover, the time separating these two messages is unknown in advance, and potentially unbounded; hence it does not suffice to keep a fixed-size window of past messages.
**Complex Message Structure.** Not only do most messages contain an action name and a set of data parameters, these parameters themselves are subject to a potentially complex XML structure. In the Beep Store, one cannot simply refer to “the” item ID in a shopping cart, as there can be multiple instances of the ItemId element in a message. A property can require that all, or only one of these item IDs fulfils a constraint, hence a form of quantification over message contents is required.
This is probably the single most distinguishing point with respect to other verification applications. Most verification solutions that take data dependencies
into account work in a context where there is at most one instance of a parameter in a message (removing the need for quantification).
## 4 Runtime Verification of Interface Contracts
The previous sections described how the architecture of web applications, coupled with the state of current technologies, calls for a runtime verification solution of interface contracts. This section describes the authors' attempts at developing and running a possible solution. It first shows how the properties in Section 3 can be expressed in a formal language, called LTL-FO⁺. It then presents BeepBeep, a Java-based runtime monitor for LTL-FO⁺. BeepBeep can be integrated into the Beep Store described in Section 2 and enforce its interface contract at runtime.
### 4.1 Formalizing Contracts with LTL-FO⁺
LTL-FO⁺ is an extension of Linear Temporal Logic (LTL) developed to address the characteristics of web application interface contracts. Relating the expressiveness of this logic to other solutions has been done extensively in previous papers [15,19].
Let $Q$ be a set of queries, $M$ a set of messages, and $V$ a set of atomic values. A query function $\pi$ is defined as $\pi : Q \times M \rightarrow 2^V$. Intuitively, $\pi(q, m)$ retrieves a set of values from a message $m$, given some "filtering criterion" $q$. We typically use as $\pi$ the function that takes as query a path in an XML document (a slash-separated list of element names) and returns all the values at the end of such a path in the current message. For example, in the following message $m$, we have $\pi(\text{"Message/Item"}, m) = \{A, B\}$.
```
<Message>
<Item>A</Item>
<Item>B</Item>
<Client>10</Client>
</Message>
```
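To make the query function concrete, here is a minimal JavaScript sketch of how $\pi$ could be evaluated on an XML message, assuming a browser-style `DOMParser`. It is an illustration of the definition above, not BeepBeep's actual implementation.

```
// Sketch of the query function pi(q, m): given a slash-separated path q
// and an XML message (as a string), return the values found at the end
// of that path in the message.
function pi(q, message) {
  var doc = new DOMParser().parseFromString(message, "text/xml");
  var path = q.split("/");
  // The first path element must match the document's root tag
  if (doc.documentElement.tagName !== path[0]) return [];
  var nodes = [doc.documentElement];
  for (var i = 1; i < path.length; i++) {
    var next = [];
    for (var j = 0; j < nodes.length; j++) {
      var children = nodes[j].children;
      for (var k = 0; k < children.length; k++) {
        if (children[k].tagName === path[i]) next.push(children[k]);
      }
    }
    nodes = next;
  }
  // Collect the text content of every node reached by the path
  return nodes.map(function (n) { return n.textContent; });
}

// pi("Message/Item", "<Message><Item>A</Item><Item>B</Item></Message>")
// returns ["A", "B"]
```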
A message trace is a sequence $\overline{m} = m_1 m_2 \ldots$ such that $m_i \in M$ for $i \geq 1$; $\overline{m}^i$ denotes the suffix $m_i m_{i+1} \ldots$
**Definition 1 (Syntax).** The language LTL-FO⁺ (Linear Temporal Logic with Full First-order Quantification) is obtained by closing LTL under the following construction rules:

1. If $x$ and $y$ are variables or constants, then $x = y$ is an LTL-FO⁺ formula;
2. If $\varphi$ and $\psi$ are LTL-FO⁺ formulae, then $\neg \varphi$, $\varphi \land \psi$, $\varphi \lor \psi$, $\varphi \rightarrow \psi$, $G\,\varphi$, $F\,\varphi$, $X\,\varphi$, $\varphi\,U\,\psi$, $\varphi\,V\,\psi$ are LTL-FO⁺ formulae;
3. If $\varphi$ is an LTL-FO⁺ formula, $x_i$ is a free variable in $\varphi$, and $q \in Q$ is a query expression, then $\exists_q x_i : \varphi$ and $\forall_q x_i : \varphi$ are LTL-FO⁺ formulae.
**Definition 2 (Semantics).** We say a message trace $\overline{m}$ satisfies the LTL-FO⁺ formula $\varphi$, and write $\overline{m} \models \varphi$, if and only if it respects the following rules: if $\varphi$ is of the form $\neg \psi$, $\psi \lor \psi'$, $F\,\psi$, $X\,\psi$ or $\psi\,U\,\psi'$, the semantics is identical to LTL's. Let $q \in Q$ be some query expression. The remaining cases are defined as:

\[
\begin{align*}
\overline{m} \models c_1 = c_2 & \Leftrightarrow c_1 \text{ is equal to } c_2 \\
\overline{m} \models \exists_q x_i : \varphi & \Leftrightarrow \overline{m} \models \varphi[b/x_i] \text{ for some } b \in \pi(q, m_1)
\end{align*}
\]
We define the semantics of the other connectives with the usual identities: $\varphi \land \psi \equiv \neg(\neg\varphi \lor \neg\psi)$, $\varphi \rightarrow \psi \equiv \neg\varphi \lor \psi$, $G\,\varphi \equiv \neg(F\,\neg\varphi)$, $\varphi\,V\,\psi \equiv \neg(\neg\varphi\,U\,\neg\psi)$, $\forall_q x : \varphi \equiv \neg(\exists_q x : \neg\varphi)$.
Equipped with this language, it is possible to revisit the interface contract described earlier and formalize it with LTL-FO+ formulæ. Properties 1 and 2 are data constraints; they only involve the temporal operator \( G \) to specify that the data constraint applies to all messages. If we define \( q_1 = \text{Message/Action} \), \( q_2 = \text{Message/Page} \) and \( q_3 = \text{Message/Results} \), then Properties 1 and 2 become respectively equations 1 and 2 below:
\[
\begin{align*}
G\,(\forall_{q_1} a : a = \text{ItemSearch} \rightarrow (\forall_{q_2} p : p \geq 1 \land p \leq 20)) & \quad (1) \\
G\,(\forall_{q_1} a : a = \text{ItemSearch} \rightarrow (\exists_{q_3} r : \top \leftrightarrow \exists_{q_2} p : \top)) & \quad (2)
\end{align*}
\]
The first property states that globally, if the message's action is \( \text{ItemSearch} \), then every \( \text{Page} \) value \( p \) inside that message is in the range \([1, 20]\). Similarly, the second property states that in any \( \text{ItemSearch} \) message, a \( \text{Results} \) element is present if and only if a \( \text{Page} \) element is present (\( \pi \) returns the empty set if no element with the specified path can be found in a message). The symbol \( \top \) stands for "true"; \( \exists_q x : \top \) is true whenever the path \( q \) exists.
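Using the sketch of $\pi$ given earlier, the data constraint of equation 1 could be checked on a single message as follows. This is again an illustration, not the actual monitoring algorithm; the function name is hypothetical.

```
// Check Property 1 on one message: every Page value of an ItemSearch
// message must lie in [1, 20].
function checkProperty1(message) {
  var actions = pi("Message/Action", message);
  if (actions.indexOf("ItemSearch") < 0) return true;  // vacuously true
  return pi("Message/Page", message).every(function (p) {
    var n = parseInt(p, 10);
    return n >= 1 && n <= 20;
  });
}
```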
In a similar way, control-flow properties P3 and P4 become formulæ 3 and 4 below:
\[
\begin{align*}
G\,(\forall_{q_1} a : a = \text{LoginResponse} \rightarrow X\,G\,(\forall_{q_1} a' : a' \neq \text{LoginResponse})) & \quad (3) \\
(\forall_{q_1} a : a \neq \text{CartCreate}) \; W \; (\exists_{q_1} a' : a' = \text{LoginResponse}) & \quad (4)
\end{align*}
\]

(Here $\varphi\,W\,\psi \equiv (\varphi\,U\,\psi) \lor G\,\varphi$ is the usual "weak until" operator.)
Finally, by defining \( q_4 = \text{Message/CartId} \), \( q_5 = \text{Message/SessionKey} \) and \( q_6 = \text{Message/Items/Item} \), data-aware properties 5 and 6 can be formalized into the following:
\[
\begin{align*}
G\,(\forall_{q_4} c : \forall_{q_5} k : G\,(\forall_{q_4} c' : \forall_{q_5} k' : (k = k' \rightarrow c = c'))) & \quad (5) \\
G\,(\forall_{q_1} a : a = \text{CartAdd} \rightarrow (\forall_{q_6} i : X\,G\,(\forall_{q_1} a' : a' = \text{CartAdd} \rightarrow \forall_{q_6} i' : i' \neq i))) & \quad (6)
\end{align*}
\]
Equation 5 states that in every message, the presence of a \( \text{CartId} \) \( c \) and \( \text{SessionKey} \) \( k \) entails that, from that point on, any other occurrences of a \( \text{CartId} \) \( c' \) and \( \text{SessionKey} \) \( k' \) are such that the same key imposes the same ID. This is equivalent to P5. The “data-awareness” of this constraint can be observed in
the fact that two variables that have been quantified across temporal operators (such as $c$ and $c'$) are compared at a later point in the expression.
A particularity of LTL-FO⁺ lies in its quantification mechanism: note in the definition how the values over which quantification applies are only those found in the current message, $m_1$. For example, in equation 6, variables $i$ and $i'$ both quantify over catalogue item IDs. If quantification did not depend on the current message, the previous formula would always be false, as any value bound to $i$ would also be admissible for $i'$, making the assertion $i' \neq i$ false at least once.
The previous formula rather states that at any time in the execution of the application, for any item ID $i$ appearing in a CartAdd message, any item ID $i'$ in any future CartAdd message is different from $i$. Hence, it will be true exactly when no item ID appears in more than one CartAdd message, which is consistent with Property 6.
LTL-FO⁺ allows the Beep Store to publicize a formal version of its interface contract. To this end, an auxiliary file, `contract.txt`, is hosted along with the Beep Store's other files on the server. It contains the list of all LTL-FO⁺ formulæ forming that contract, including equations (1)-(6) described above. Figure 4 shows a snippet of the contract file containing a text rendition of Property 1.
### 4.2 The BeepBeep Runtime Monitor
Since LTL-FO⁺ draws heavily on classical LTL, a runtime verification procedure can be obtained from an algorithm presented in [10], which creates the Büchi automaton for a given LTL formula. This algorithm works on the fly and generates the automaton as the sequence of states unwinds. The LTL-FO⁺ monitoring procedure, detailed in [15], is an extension of this algorithm, adapted for first-order quantification on message elements.
LTL-FO⁺ monitoring can then be implemented as a lightweight tool for web applications. It suffices that incoming and outgoing messages be intercepted as "events" and fed to the monitor. The algorithm updates its internal state according to the processed event, and blocks the actual transmission or reception whenever a violation is discovered.
Since a web application is inherently distributed, the location of this monitor leads to multiple architectural choices, shown in Figure 5. In client-side verification, shown in Figure 5(a), contract compliance is checked in the user's web browser before any message is allowed to be transmitted over the network: an outgoing message $m$ is sent to a function $\delta$ monitoring a specification $\varphi$. Incoming messages are filtered in the same way before reaching the application's code. Server-side verification (Figure 5(b)) works the other way around. A third solution is to use a
third-party protocol coordinator (not shown), as suggested by [5]. The coordinator ideally resides neither in the client's browser nor in the web server, and acts as a monitoring proxy for both ends of the communication. To illustrate monitoring on the client side, we developed BeepBeep[^4], a lightweight, Java-based runtime monitor for Ajax web applications.

**Fig. 5.** Design choices for runtime verification of web applications
In classical (e.g. Java) programs, intercepting events generally requires instrumenting the code or resorting to mechanisms such as pointcuts [8]. In the present case, network operations converge to a single input-output point, the standard XMLHttpRequest object provided by the local browser. It becomes easy to interpose an extra layer of processing over that object, without resorting to any other form of instrumentation.
Including BeepBeep into an existing Ajax application is straightforward. It suffices to host BeepBeep's two files (`beepbeep.jar`, the Java applet, and `beepbeep.js`, an auxiliary JavaScript file) in the same directory as the Ajax application. BeepBeep is bootstrapped by adding a single line in the `<head>` portion of the application's HTML page.
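Concretely, this bootstrap line might look as follows; the file name comes from the description above, but the exact attributes are an assumption and should be checked against BeepBeep's documentation.

```
<script type="text/javascript" src="beepbeep.js"></script>
```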
When such a BeepBeep-enabled application is started, the procedure described in Section 2.2 is followed. BeepBeep’s additional JavaScript file dynamically appends the snippet of HTML code instructing the browser to load the Java applet implementing the LTL-FO⁺ monitoring algorithm, which appears as a small rectangle at the bottom of the application’s page. The specification passed to the applet is automatically retrieved from the `contract.txt` file hosted on the server.
The JavaScript code also overloads the methods of the standard XMLHttpRequest object. When the original application's JavaScript invokes the `send` method of XMLHttpRequest, it actually calls the method implemented by BeepBeep first. This way, incoming and outgoing messages can be diverted to the applet for verification before being actually sent (or returned).
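The following sketch illustrates the general idea of this overloading. The names `originalSend` and `monitorCheck` are hypothetical, and BeepBeep's real code is more involved (it also intercepts responses).

```
// Keep a reference to the browser's original send method, then replace
// it with a version that shows the outgoing message to the monitor
// first. monitorCheck is a hypothetical function that asks the applet
// whether the message complies with the contract.
var originalSend = XMLHttpRequest.prototype.send;
XMLHttpRequest.prototype.send = function (body) {
  if (monitorCheck(body)) {
    // The message does not violate the contract: let it through
    originalSend.call(this, body);
  } else {
    // Contract violation: block transmission and warn the user
    alert("Message blocked: interface contract violation");
  }
};
```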
### 4.3 Wrapping Up
We can now return to the Beep Store application and perform runtime monitoring of its interface contract on the client side. Assuming that the store provides
---
[^4]: BeepBeep and its source code are available for download under a free software license: [http://beepbeep.sourceforge.net](http://beepbeep.sourceforge.net)
a contract file and hosts the two BeepBeep files, we can then modify its HTML code to include the additional JavaScript file, as described above.
The monitor-enabled Beep Store application can be started as usual in a standard browser. As previously, one can open the store's "Fault Parameters" pane and disable, for example, the internal enforcement of Property 3 ("don't login twice"). This time, however, the rectangle at the bottom of the page tells us that BeepBeep successfully fetched a contract and is awaiting incoming or outgoing XML messages.
The first login attempt can be executed as expected. BeepBeep's display updates, indicating that it indeed witnessed the corresponding messages, but let them through as they did not violate any constraint. After successfully logging in, as expected, the faulty client fails to hide the Login link. Clicking on it a second time summons the Login pane, where one can enter the same credentials and press the Login button. Like before, the client attempts to send a Login XML message; however, this time, BeepBeep intercepts the message, correctly discovers that it violates Property 3, and skips the piece of code that would normally send it. It also pops up a window alerting the user, showing the caption associated with the violated property in the contract file.
This scenario has also been experimented with on a real-world web application for the Amazon E-Commerce Service [16]. Our findings indicate that on a low-end computer, monitoring LTL-FO⁺ contract properties produces an average overhead of around 3%, or 10 ms per message in absolute terms. As a rule, the state of the network accounts for wider variations than the additional processing required by the monitor.
It should be noted that BeepBeep is independent of any browser-server pair of applications. Its Java applet is self-contained, and the auxiliary JavaScript file can be included in any web page, which then loads the applet at startup. It can correctly intercept and process any request as long as it is XML-based. Similarly, the contract to be monitored is hosted in a separate text file that is read each time the applet is loaded; hence the contract can be changed without changing the monitor. This way, BeepBeep is a runtime monitoring solution that can be applied to scenarios other than the Beep Store: it suffices to write an appropriate contract for the application under study.
## 5 Conclusion
This tutorial has highlighted the potential for the application of runtime verification techniques to the field of web services; yet several interesting questions have been left out of this presentation. For example, since events in web applications are sequences of XML messages, it is possible to treat a sequence of such events as one large XML "document" and leverage commercial XML query processors to perform an equivalent validation of message traces [18]. However, the monitoring of quantified formulæ presents a potential for unbounded resource consumption. The forward-only fragment of LTL-FO⁺ is an ongoing attempt at providing a bounded subset of the logic suitable for limited environments [17].
Finally, if the goal of client-side monitoring is to relieve the server from the burden of dealing with faulty clients, how can one be certain that a client indeed monitors the contract? The concept of cooperative runtime monitoring [11] has recently been put forward to resolve such an issue.
It could also very well be that application developers refrain from integrating more complex behaviours into their web applications precisely for lack of tools to deal with them in a systematic way. Hence even a modest contribution from runtime verification to the practitioner's toolbox could enhance the quality and ease of development of web applications. In this regard, we hope this tutorial will encourage researchers in the monitoring and validation community to consider web applications as a potential field of application for their work.
References
1. Amazon e-commerce service, http://solutions.amazonwebservices.com
2. Paypal web service API documentation, http://www.paypal.com
3. Apache Axis (2010), http://ws.apache.org/axis2
---
Design of a demonstrating system for use in PRX equipment courses
Akhbeis, T.
Award date:
1979
Design of a demonstrating system
for use in PRX Equipment Courses

Graduation project by: T. Akhbeis
carried out in the EB group
in the period Aug. '78 - Aug. '79
commissioned by Prof. Ir. A. Heetman
and Ir. J.A. Samwel (Director, Philips'
International Telecommunication
Training Centre - Hilversum)
Graduation professor: Prof. Ir. Heetman
Coaching: Ir. H. Kemper
CONTENTS

- Subject
- Introduction
- I.1. Processor controlled telephony
- I.2. PRX
- II.1. Hardware Configuration
- II.2. Input/output
- II.3. Programming the PIO
- II.4. The Z-80 CTC
- II.5. Structure of the Channel Logic
- II.6. CTC Operating Modes
- II.7. CTC Programming
- III. Demoprog
- III.1. Definition of The DEMOPROG User's Language
- IV.2. Scan Source Module
- Appendix A.
- Conclusion
- References
Subject: Design of a demonstration system for use in PRX Equipment Courses.
The PRX is a computer-controlled telephone switching system. There is a need to simulate several equipment functions for teaching purposes. I chose to design a system to demonstrate the salient features of the central processor and input/output device. Furthermore, the system design is flexible, so that it can easily be used for other types of equipment.
The demonstration equipment comprises a Z80 microprocessor which controls the demonstration process and an interface to select and send data to a course oriented panel. A VDU device enables the lecturer to manipulate the process and select depth levels.
To give the system optimum flexibility, I have defined and implemented an application-oriented language called DEMOPROG, which is compiled using a compiler written in the high-level programming language PLZ. The user prepares lessons in the form of Demoprog statements. Each statement defines a certain action at the panel. Sequences of actions can, for instance, demonstrate PRX addressing techniques, arithmetic operations, and suchlike.
Introduction
Understanding.
One way of beginning an inquiry into the nature of understanding is to look at the ways in which the words "understand" and "understanding" are used in everyday speech, the object being to see if it is possible to make general comments about the notion which will serve to provide hints and clues for educators in their role as "teachers of understanding".
Suppose someone says, "Piet doesn't understand how to get the computer to work", or "He is the only one who understands how to solve quadratic equations". In these examples "understand" has the sense of "know what to do" - "Piet doesn't know what to do to get the computer to work" - and, further, seems to be significantly different from another sense of the word in which the notions of explanation and theoretical rationale are well to the fore. For example, "He doesn't understand the general theory of relativity". Failure to understand, in the sense of not knowing what to do, can be remedied by giving a simple instruction or set of instructions - "You just stick this wire in there", or, in the case of a child who doesn't understand how to divide fractions, "You just change the divide sign to multiply, etc." But the giving of the instruction or instructions, while it may now enable the child to do the division - he now understands what to do - in no way enables him to understand why he does what he does, just as sticking the wire "in there" in no way enables me to understand why the computer is now working.
This brings us to the main reason for designing the demonstrating system, which shall be used to explain and refine the functioning principles of the PRX ("Processor-controlled Reed eXchange"). Experience proves that one who has spent a certain period working on systems based on analog signals will have difficulties digesting and understanding systems founded on digital principles. Lecturers often face this problem in the Philips Training Centre "P.I.T.T.C.". One needs a didactic system to pass this barrier towards accumulating the necessary technical knowledge.
Accumulation of Technical Knowledge.
Scientific progress affects the entire social, economic, and technological framework, to which the problem of educational orientation is related. It furthermore directly affects those problems which face individuals during the periods when they are acquiring knowledge or bringing their knowledge up to date, and it also directly influences the educational institutions in which individuals are taught.
Scientific progress therefore has specific effects on orientation, in particular those relating to continuous learning or training, (permanent or recurrent education).
Knowledge can be roughly split into two types. The first type is technical knowledge. In this field, knowledge relating to a given sector is generally and necessarily ordered and hierarchical, and it can be accumulated. A particular problem can only be studied after first learning a certain amount of information, laws, and techniques, each of which in turn demands still more preliminary elementary knowledge.
The second type of knowledge is found particularly in the literary fields, and is not necessarily hierarchical, or at least not to the same extent or in the same way: we can talk about a literary or philosophical work in a way which is more or less apt, more or less original, more or less profound, but none of these expressions implies the necessary preliminary acquisition of an ordered and accumulated body of knowledge. These two types of knowledge differ not only in the way in which they are acquired; they also differ in the way in which they become out of date. No doubt our understanding of a philosophical or literary work evolves over a period of time. But such an evolution can hardly be compared to the kind of knowledge involved in scientific progress.
Difficulties which may be experienced in taking up literary or philosophical studies after a certain lapse of time are not comparable to those experienced by someone doing the same thing in scientific fields.
Finally, the criteria used to assess attainment are also not the same for the two types of knowledge. In the scientific field, predictions can be confirmed or invalidated by facts which can be clearly observed, as can the success or failure of applications of scientific knowledge. This is not the case with literary or philosophical knowledge.
Individual acquisition of some kind of knowledge appears to be a necessity. The computer capable of supplying all the information we need without our having to "know" anything ourselves is, and will remain, a myth. In fact, at the documentation level alone, to feed a computer with questions presupposes the ability to manipulate a set of descriptive characteristics in the field of knowledge in question, and to manipulate such a set of characteristics, always complex in structure, requires preliminary training and apprenticeship. Furthermore, and above all, the myth of the perfect computer is based on a confusion between documentation and knowledge. The use of computers in documentation cannot eliminate the fundamental distinction between these two ideas. A Frenchman may have a French-English dictionary and an English grammar and still be unable to read an English text. Undeniably, the rapid expansion of knowledge makes it indispensable to teach "know-how"; education as a process of teaching people how to learn. But is "know-how" possible without basic knowledge? Is it possible to teach how to learn without first accumulating knowledge?
We must be aware that continuous training, already necessary in many fields, will be a more and more urgent requirement in the future.
Continuous education can have the aim of enabling a worker to change his occupation during his working life. To what extent can this be achieved?
It would be a mistake to think that the speed of scientific progress creates a greater equality of individual opportunity to acquire new knowledge in a given branch. In fact an individual educated in a given field today has the best chance of mastering new techniques which may be developed in the future in his particular field. Despite facilities for continuous learning, it will probably remain difficult to become highly qualified in a field outside the one with which one is familiar, and the higher the qualification required, the greater the difficulty.
We come now to the essence of our case: another function of continuous training should be to enable an individual to change the form of his function to a modern one. One aspect of an orientation system would be to provide guidance and assistance in such cases. The speed with which knowledge and techniques become out of date is indeed one of the main difficulties in designing systems of continuous training and guidance. A qualification obtained in the past is not like a railway ticket which enables you to rejoin the train at the station where you left it. The ticket may only be valid for a short period. Of course, readaptation courses could be provided, enabling people to bring themselves up to date in their various fields. The main problem is to decide on the duration of such refresher courses, and to estimate the investments required in terms of individual effort and financing. This difficulty will grow as educational systems themselves become less and less inert and traditionally minded, with rapid changes in method and context.
Building up a demonstrating system to explain the principles of the PRX has an embedded aim: it is meant for trainees who have built up their experience in conventional central telephone stations and suddenly have to maintain a completely different telephone station working on digital principles, controlled by a computer. Being accustomed to testing analog signals demands certain apparatus, while testing a digital signal requires a different technical science and philosophy. It is hard for human beings to change drastically from one way of thinking and imagining to another. The essence of designing this system is to provide a tool for the teacher during his lessons. The system is flexible; it can be adapted to explain the working methods of the computer.
Section I.
I.1. Processor controlled telephony.
The history of telephony recognizes revolutionary principles based on the newly developed technical science of digital communication techniques. For a long time, switching systems used to be conceived around a few ingenious electro-mechanical devices, such as relays, selectors and crossbar switches.
The study of switching systems was based on the evaluation of block diagrams and on the operation of the individual units. Nowadays one uses concepts such as switching networks, directly and indirectly controlled systems, systems having common network control, and storage requirements of systems, without bothering whether certain functions are realized by hardware and wiring or by software and electrically changeable memory contents. Hardware no longer needs to be designed for carrying out certain specific functions per call, but can be generalized to the point where it can be used, in combination with the coded information stored in the memories, in the decision-making process of each call; in other words, most functions can be realized as software modules.
The increasing traffic volume of the telephone network has led to new traffic measurement and network management techniques, which enable better use and control of the network as a whole and, in this way, guarantee an overall high grade of service. The exchange has to be approached as an integral part of the network. Remote control and integration of all operational and administrative network functions are dominating factors in the architecture of the new generation of telephone exchanges, such as the PRX family.
I.2. PRX.
The Philips semi-electronic stored-program controlled telephone system - the characters stand for Processor-controlled Reed eXchange - is the result of long experience gained in telephone switching and in the application of electronics and computers in communication technology.
The PRX is designed for use in public telephone networks, its functions ranging from satellite to transit operation. It may also be used in multi-exchange networks or as a combined local/trunk exchange.
The principle of "stored-program control" is the most important milestone in telephone switching development. According to this principle, all logical control functions and data are concentrated in the central processor, allowing a flexibility and variety in switching facilities which could be achieved in electromechanical systems, if ever, only at great expenditure. The software nature of the logic permits easy accessibility and changeability of functions and data via internal and also external data channels. Functions such as blocking and unblocking subscribers, changing routing patterns, reading out metering data and other more complicated functions can be initiated in a rather simple way, by commands given via data links.
I.3. Parts of The PRX.
As shown in the general block diagram of Fig. I.1, the system comprises three principal sections:
- Switching Network
- Central Control Unit
- Interface Equipment.
I.3.1. Switching Network.
The heart of the switching network is a trunklink network. It contains all units and switches which are required to set up local and trunk connections. The switches are in the form of reed relays.
I.3.2. Central Control Unit.
All functions to be performed in the switching network for the establishment, supervision and release of connections are fully controlled by the Central Control Unit (CCU, see Fig. I.1). In other words, it gives commands, receives information, stores data in the memory and makes calculations with its arithmetic unit. It also updates all connections and initiates the operation of the various units; in short, it controls the operation of the PRX.
The CCU consists roughly of the following parts,
a. Processor Core memory,
b. Central Processing Unit, (CPU)
c. Input/Output unit, (IOU)
d. Alarm and Switching Unit, (ASU)
and of course many other parts which I have neglected for the sake of the subject I am handling.
I.3.2a. The memory contains different kinds of information:
- The program section containing instructions for the processor,
- Data store for subscribers' numbers, subscribers' meters, trunkline numbers, etc.
- Working area for temporary storage of data.
I.3.2b. The Central Processor is the TCP 18 unit.
It is a binary single-address machine with a word and instruction length of 16 bits: it has six program-accessible registers. Address modification and indirect-addressing is specified per instruction, and full addressability of the memory is obtained by relocating the 8-bit address field of the instruction, using the contents of an 18-bit relocation register, which is loaded per program module.
Interrupts, autonomous transports and scan results received via the I/O unit have hardware priority over instructions. A program hesitation technique is used, in which the processor control decides after each completed instruction which operation will be executed next.
The data channels (see Fig. I.1) have memory access priority over the central processor, and operate using the cycle-stealing principle.
A crystal-controlled processor clock provides 16 time slots with an interval of 110 nanoseconds. Moreover, each central processor unit is equipped with a real-time clock which is used to schedule the autonomous scan procedure in accordance with the real-time requirements of the telephone equipment, to measure the recognition time of signalling systems, to guard the operation time of slow subsystems such as markers and drivers, etc. The basic clock period is 12.5 milliseconds, which can be halved or doubled by means of strapping.
I.3.2c. The input/output unit permits asynchronous co-operation between the central processor unit and the interface equipment, and is to some extent independent of its associated central processor unit. It contains six registers, three of which are used to store data required for the autonomous scan procedure which determines the current status of the exchange. The test points in subscriber line circuits and junctors are arranged in groups of 16 which are scanned in a parallel mode. The scanning is not performed by program, to prevent the processor loading from becoming very high while yielding only a small number of status changes. The procedure is a program-initiated hardware subroutine which autonomously increments the current memory address and seizes the appropriate registers of the central processor to perform compare, load and store functions at the moment the new status of the test points in the switching network is received.
At the end of each autonomous scan cycle, the collected logical differences are processed by program.
Approximately the same procedure is used in the case of autonomous data transports via the control channel.
A test-access connection between the dual processors is provided, through which a faulty processor can be investigated by means of diagnostic programs in the on-line machine.
I.3.2d. The two processors operate instruction-synchronously in a dual mode, and are continuously compared for proper performance. The configuration is governed automatically by the program-controlled Alarm and Switch-over Unit.
Section II.
II.1 Hardware Configuration
The design of any digital system can be broken down into a succession of manageable steps. This is an essential characteristic of the design process. And, to the extent that the designer consciously subdivides the design task into these steps, he is able to bring a variety of potent design tools to bear on the problem. Some of the steps are:
1. Assessing how the system inputs and outputs constrain the system design
2. Deciding how much time is available to carry out the required data manipulations. This determines whether specific algorithms will be implemented in parallel (for speed) or serially (for economy). It also determines whether a specific logic line is fast enough to do the job.
3. Studying the operations involved in order to develop an algorithm which is particularly well suited to the problem.
4. Specifying a system structure.

Considering the above-mentioned points, we shall describe in this section the parts of the system (see Fig. II.1):
1. Z80 microcomputer system, the MCZ 1/20
2. Interface between the microcomputer and the output
3. A lamp panel simulating the necessary course; in this case the panel supports the Basic Computer Technique (BCT) course.
Let us explain those parts point by point:
1. The Z80 microcomputer is a software development system supplied with 32K bytes of memory. The system is controlled by the RIO operating system. The user can develop his software with the assembler or the high-level programming language PLZ.
2. The interface has the task of latching the data sent by the microprocessor until the right unit has been selected (see Fig. II.2). This is achieved with Intel 8212 latches, attached to the lamp drivers.
3. The panel consists of 256 lamps, to simulate the necessary functions needed for the BCT course (see Fig. II.3).
II.2. Input/Output
In this section we need to apply the PLZ/SYS parameter passing used in the interrupt routine. To be more accurate, the interrupt routine needs to call the panel output procedure, called PANIO, in order to send two actual parameters to the output. By means of this call, two actual parameters are passed to the panel output: the pattern and the group.
The procedure PANIO had to be written in PLZ/ASM, due to the hardware facilities it uses. In the PANIO procedure we select one PIO (programmable input/output) of the four which are mounted on the IOB (input/output board). Further detail is given in the next section.
Let us now discuss how these parameter-passing procedures are applied. Each PLZ procedure has access both to static data (declared Global or Internal) and to local data, allocated in what is called the Activation Record (AREC) for each invocation of a procedure. Static data is simply accessed at absolute memory addresses. Each AREC is allocated on the stack, where the current top-of-stack word is pointed to by the register SP. Consequently, new ARECs are allocated at memory addresses which decrease as the stack grows. In addition, the register IX always points to a fixed position within the AREC and is used as a base to access local variables and parameters, which are all known to be at fixed displacements from the IX register.
The format of the AREC consists of the following sections:
1. An "In Parameters" passed from the caller of the procedure.
2. A Mark-Stack Record (MREC) described below.
The MREC consists of two fields: the first contains the return address of the calling procedure. The second field is the value of the calling procedure's IX register, which is restored upon return to re-establish the correct environment for the calling procedure. IX points to the low memory address of the field which contains the calling procedure's IX value during the active procedure's lifetime.
The format of an AREC, and the position of the in parameters, can be seen in the PANIO listing below: the two in parameters sit at displacements IX+6 (pattern) and IX+4 (group).
The procedure PANIO can be found at the end of this section.
II.3 Programming the PIO (Programmable Input Output)
The PIO which we are using is one of the four PIO's mounted on the IOB Board. The IOB interfaces directly to the MCS system data bus, with buffer logic.
The PIO is programmed by writing data to the PIO control ports as a series of commands.
The command formats are standardised as follows:
The operating mode of each I/O port can be selected by a control byte with the following format; in our case we have selected mode 0.
\[
\begin{array}{cccccccc}
D_7 & D_6 & D_5 & D_4 & D_3 & D_2 & D_1 & D_0 \\
0 & 0 & x & x & 1 & 1 & 1 & 1
\end{array}
\equiv \text{0FH}
\]

Bits $D_3$-$D_0$ set to 1111 form the mode select code; $D_7 D_6 = 00$ selects mode 0 (output).
To select one of the PIOs mounted on the IOB board, we need to send a byte with the following format:

\[
\begin{array}{cccccccc}
D_7 & D_6 & D_5 & D_4 & D_3 & D_2 & D_1 & D_0 \\
1 & 1 & . & . & . & . & x & x
\end{array}
\]

Bits $D_1 D_0$ select a register within the PIO:

- 0 0 selects data for Port A
- 0 1 selects data for Port B
- 1 0 selects control for Port A
- 1 1 selects control for Port B.

The remaining bits select the PIO inside the IOB and the IOB board itself.
This leads to the following codes:

- FC H ; selects the data register of port A (the pattern)
- FD H ; selects the data register of port B (the group)
- FE H ; selects the control register of port A
- FF H ; selects the control register of port B
PANIO Procedure (written in PLZ/ASM)
patout module
Global
panio procedure
entry
PUSH IX          ; save caller's IX and establish our own AREC base
LD IX, 0
ADD IX, SP       ; IX now points into the current AREC
LD C, $FEH       ; select control register of port A
LD A, $0FH       ; mode 0 (output); mode select code 0FH (see above)
OUT (C), A
LD C, $FFH       ; select control register of port B
LD A, $0FH       ; mode 0 (output) for port B as well
OUT (C), A
LD C, $FCH       ; select data register of port A (pattern)
LD A, (IX+6)     ; get 1st in parameter: the pattern
OUT (C), A       ; send pattern out
LD C, $FDH       ; select data register of port B (group)
LD A, (IX+4)     ; get 2nd in parameter: the group
OUT (C), A       ; send group out
POP IX           ; restore caller's IX
POP HL           ; get return address
POP DE           ; deallocate in parameters
JP (HL)
end panio
End patout
Fig. II.1. System parts.
Fig. II.2a. Unit 0.
Fig. II.2. Interface.
II.4 The Z-80 CTC
Most microcomputer applications require some kind of counting or timing, whether it is for the provision of time delays in simulating a large system, 'time-outs' in a communication controller, or counting the number of cans which pass a photo-electric cell. We shall use the CTC as a timer to generate a periodic interrupt to the main interpreter program while it is running, in order to achieve the 3 blinking modes, i.e. Mode Fast (8 Hz), Mode Medium (2 Hz), Mode Slow ($\frac{1}{2}$ Hz).
The CTC provides the following features:
- Operation of each channel in either timer or counter mode;
- Triggering by either system clock or external asynchronous clock;
- A Zero Count/Timeout (ZC/TO) output on channels 0, 1 and 2;
- A Readable down counter indicating the number of counts to zero;
- Programmable, nested interrupts (mode 2) on all four channels in both counting and timing modes.
The CTC is programmed by writing to the channel control register, but before we start discussing how to program the CTC, we ought to get to know the IC (chip) itself.
Fig. II.4 shows the construction of this chip: it contains a bus interface to the Z-80 CPU, Internal Control Logic, four sets of Counter/Timer Channel Logic, and Interrupt Control Logic. The CTC has the capability of generating a unique interrupt vector for each separate channel (for automatic vectoring to an interrupt service routine). The four channels can be connected into four contiguous slots in the standard Z80 priority chain, with channel number 0 having the highest priority. The CPU bus interface logic allows the CTC device to interface directly to the CPU with no other external logic.
Now that we know the parts of the Z-80 CTC, let us direct our attention to the construction of a channel.
II.5 Structure of the Channel Logic.
Fig. II.5. Structure of the channel logic.
Fig. II.5 shows the channel logic structure which is composed of 2 registers, 2 counters and control logic.
The registers are an 8-bit Time Constant Register and an 8-bit Channel Control Register. The counters are an 8-bit CPU-readable Down Counter and an 8-bit Prescaler.
II.6 C T C. Operating Modes.
Conceptually, the CTC may be viewed as four identical channels, each of which functions as in the diagram below. By loading appropriate information into the Control Register, the channel's operating mode is determined. There are two modes: Counter mode and Timer mode.
For our application we shall use the Timer Mode, where the system clock is used to trigger the down counter. The clock is fed through a prescaler, which can be either 16 or 256, and hence the period of the timer is:
\[ t_c \times P \times T_c \]
- \( t_c \) is the system clock period
- \( P \) is the prescale (16 or 256)
- \( T_c \) is the initial value of the contents of the down counter.
The timing sequence can be triggered off either by a pulse on the appropriate CLK/TRG line or under software control via the control register.

When zero is reached, ZC/TO is output high and an interrupt is available. When zero (timeout) is reached, the down counter will automatically be reloaded and, if timing is under software control, the time period will again be counted down. If the ZC/TO output from the timer is used as a trigger for a down counter, the actual length of the timer can be increased to
\[ 256 \times t_c \times P \times Tc \]
which is about 6.7 secs. for a Z-80, and 4.2 secs. for a Z-80A.
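As a quick numeric check, these figures can be reproduced in a few lines of JavaScript (a sketch; the clock periods of 0.4 µs for the Z-80 and 0.25 µs for the Z-80A are assumptions consistent with the quoted results):

```
// Maximum timer length: 256 x tc x P x Tc, with P = 256 and Tc = 256
var maxZ80  = 256 * 0.4e-6  * 256 * 256;   // ~6.7 s for a 2.5 MHz Z-80
var maxZ80A = 256 * 0.25e-6 * 256 * 256;   // ~4.2 s for a 4 MHz Z-80A
```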
II.7 CTC programming
II.7.1 Channel Control Word.
Programming the CTC as a timer can be achieved by loading the Channel Control Word as follows:
\[
\begin{array}{cccccccc}
D_7 & D_6 & D_5 & D_4 & D_3 & D_2 & D_1 & D_0 \\
1 & 0 & 1 & 1 & 0 & 1 & 0 & 1
\end{array}
\equiv \text{B5H}
\]
This byte is explained as follows. Setting bit zero to unity indicates a channel control command, which is to be stored in the channel control register.
The first three bits of the control word are:

\[
\begin{array}{ccc}
D_2 & D_1 & D_0 \\
1 & 0 & 1
\end{array}
\]

$D_0 = 1$ marks a channel control word; $D_1 = 0$ lets the channel continue; $D_2 = 1$ indicates that a Time Constant follows, so the channel stops counting until $T_c$ is loaded.
Setting bit 2 forces us to provide a Time Constant as the next byte written to the selected channel. The Time Constant register stores the value to be counted down; all values should be programmed in 1's complement form, i.e. FFH is equivalent to a Time Constant of 256 (see the Time Constant calculation later).
The next three bits are used in timer mode only. Bit 3 selects the cycle on which the timer operation starts, and its meaning depends upon the setting of bit 2, as shown in the table below:
| bit 3 | bit 2 | interpretation |
|-------|-------|----------------|
| 0 | 0 | start (or restart) operation at the next chosen machine cycle |
| 0 | 1 | start at the next machine cycle, following loading of the down counter |
| 1 | 0 | start operation after the trigger makes the specified transition |
| 1 | 1 | start operation after the down counter is loaded and the trigger has made the specified transition |
Bit 4 specifies the trigger operation: unity for a positive-sloping trigger, zero for a negative slope. The prescaler is specified by bit 5: unity selects a prescale of 256, zero a prescale of 16. Lastly, bit 6 selects the operating mode (zero for timer mode) and bit 7 enables interrupts.
II.7.2 Time Constant Calculation

To achieve the highest blinking mode, 8 Hz, we choose an interrupt period of 1/16 sec. One period of the 8 Hz blink consists of two state changes, so an interrupt is needed every half period:

\[ T_{int} = \frac{T_i}{2} = \frac{1}{16}\ \text{s} = 62500\ \mu\text{s} \]

Since the interrupt period is

\[ T_{int} = t_c \times P \times T_c \]

the Time Constant is

\[ T_c = \frac{T_{int}}{t_c \times P} \]

With the system clock period \( t_c = 1/(2.25 \times 10^6) \approx 0.44\ \mu\text{s} \) and prescale \( P = 256 \), we get

\[ T_c = \frac{62500}{0.44 \times 256} \approx 555 \]

which is an unreachable value for the 8-bit down counter. We have to solve this problem by software, by acknowledging one interrupt after three fictive ones; this means that

\[ T_c\ (\text{active}) = \frac{555}{3} = 185 \]

and the 1's complement of 185D is 46H:

Time constant byte = 46H
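As a cross-check, the calculation above can be reproduced in a few lines of JavaScript (a sketch; the 0.44 µs clock period, rounded as in the text, and the prescale of 256 are taken from the preceding paragraphs):

```
var tc = 0.44e-6;                   // system clock period, rounded
var P = 256;                        // prescale factor
var tInt = (1 / 8) / 2;             // half an 8 Hz blink period: 62500 us
var Tc = tInt / (tc * P);           // ~555: too large for the 8-bit counter
var TcActive = Math.round(Tc / 3);  // acknowledge 1 interrupt in 3: 185
var byteVal = (~TcActive) & 0xFF;   // 1's complement of 185 = 0x46
```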
As we have seen when choosing the channel control word, we set bit 7 to 1 to enable interrupts. This means that an Interrupt Vector must be written to the appropriate register in the CTC; due to automatic features in the Interrupt Control Logic, one pre-programmed Interrupt Vector suffices for all four channels.
II.7.3 What is an interrupt?
The purpose of an interrupt is to allow peripheral devices to suspend CPU operation in an orderly manner and force the CPU to start a peripheral service routine. Usually this service routine is involved with the exchange of data, or status and control information, between the CPU and the peripheral. Once the service routine is completed, the CPU returns to the operation from which it was interrupted.
**Fig. II.6.** Communication between the interpreter program and the interrupt routine.
Fig. II.6 shows figuratively the communication principle between the interpreter program and the interrupt routine. When an interrupt is raised, the address of the last executed instruction is saved and a start command is delivered to the interrupt routine; while the interrupt routine is running, the main program waits at its saved address to continue running. Through this dialogue of request and acknowledgement between the CTC and the CPU, the running flow survives. For that purpose we need to use Interrupt Mode 2.
With this mode we (the programmers) must maintain a table of 16-bit starting addresses, one for every interrupt service routine. This table may be located anywhere in memory. When an interrupt is accepted, a 16-bit pointer is formed to obtain the desired service routine starting address from the table. The upper 8 bits of this pointer are formed from the contents of the I register. The I register must have been loaded previously with the desired value by us, i.e.
LD I, A
A CPU reset clears the I register so that it is initialized to zero. The lower eight bits of the pointer must be supplied by the interrupting device; actually only seven bits, since the least significant bit is zero, to guarantee an even location as starting address. Note that the pointer is used to fetch two adjacent bytes, which together form the complete 16-bit service routine starting address.
---
![Diagram: the I register and the Interrupt Vector (loaded by the programmer) form a pointer into the jump table in memory that holds the starting addresses of the interrupt service routines; PC = Program Counter, SP = Stack Pointer, Z-80 CPU = Central Processing Unit.]

*Fig. III.7.*
Once the CTC generates the lower portion of the pointer and places the vector on the data bus in response to an interrupt acknowledge (1), the CPU automatically pushes the program counter onto the stack (2), obtains the starting address from the table (3), and jumps to this address (4). Fig. III.7 shows the sequence of events for vector processing.
II.7.4 Programming the Interrupt Vector Register.
The high order 5 bits of the Interrupt Vector must be written to the CTC in advance as part of the initial programming sequence. To do so, the CPU must write to the I/O port address corresponding to CTC channel 0. Bits 1 and 2 are not used when loading this vector. At the time the interrupting channel must place the Interrupt Vector on the Z80 Data Bus, the Interrupt Control Logic of the CTC automatically supplies a binary code in bits 1 and 2 identifying which of the four CTC channels is to be serviced.
The "interrupt vector register" is then
$$\begin{array}{cccccccc}
1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 \\
\text{RANDOMLY SELECTED CHANNEL} & \text{INITIAL VALUE}
\end{array}$$
F8 H
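A small sketch (added for illustration, not from the thesis) of how the values described above combine: the vector placed on the bus is the base vector F8 H with the interrupting channel number inserted into bits 1 and 2, and the CPU prepends the I register to form the 16-bit pointer into the jump table. The I register value 0x20 below is an assumed example.

```python
# Illustrative sketch: forming the mode-2 interrupt pointer from the I register
# and the CTC vector (base vector F8H, channel number in bits 1 and 2, bit 0 = 0).
BASE_VECTOR = 0xF8

def vector_on_bus(channel: int) -> int:
    return (BASE_VECTOR & 0xF8) | ((channel & 0x3) << 1)

def service_table_pointer(i_register: int, channel: int) -> int:
    return (i_register << 8) | vector_on_bus(channel)

for ch in range(4):
    print(f"channel {ch}: vector 0x{vector_on_bus(ch):02X}, "
          f"pointer 0x{service_table_pointer(0x20, ch):04X}")  # 0x20 is an assumed I value
```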
Section III.
**Demoprog.**
After having discussed the hardware parts needed for the demonstration of lessons aimed at explaining the PRX apparatus, we would like to present the user's language "Demoprog".
The "Demoprog" has been designed to reflect the salient features of real machines, of several computers. It shows the micro steps of data transfer between one register to another, from memory to register or the other way around. Specific cycles performed by the machine and the whole machine instructions, can be programmed and demonstrated. All addressing techniques, and all arithmetical operations executed by the machine, can also be didactically demonstrated. The unique possibility is of course to demonstrate a whole program, in our case the PRX program, and in the future the microcomputer programs, which are going to be used as an interface of the digital telephone central station, controlled by its own computer TCP 36.
We need to define the "Demoprog" due to our specific applications. The rules are written in BNF notation. The "Demoprog" consists of two parts:
1. Program part
2. Control part
An elegant way to define the user's language is to use the BNF notation (Backus-Naur Form).
III.1. Definition of The "DEMOPROG" User's Language
<DEMOPROG>::= <PROGRAM>
<CONTROL PROGRAM>
<PROGRAM>::= <PROGRAM HEADING> <BLOCK>
<PROGRAM HEADING>::= BEGIN <PROGRAM NAME>
{(<FORMAL PARAMETER LIST>) [<DECLARATION>]}
<PROGRAM NAME>::= <IDENTIFIER>
<IDENTIFIER>::= <LETTER>{<LETTER> | <DIGIT>} [7]
<LETTER>::= A|B|C|D|E|F|G|H|I|J|K|L|M|N|O|P|Q|R|S|T|U|V|W|X|Y|Z
<DIGIT>::= 0|1|2|3|4|5|6|7|8|9
<FORMAL PARAMETER LIST>::= <IDENTIFIER>[, <IDENTIFIER>]*
<DECLARATION>::= NAME <VARIABLE>= <IDENTIFIER>
<BLOCK>::= {<FLOW PART><LABEL PART><STATEMENT PART>} END
<FLOW PART>::= <EMPTY> | @<FLOW MODE>
<EMPTY>::=
<FLOW MODE>::= F1 | F2 | F3 | F4 | F5
<LABEL PART>::= <EMPTY> | /<IDENTIFIER>
<STATEMENT PART>::= <MODE CHANGE STATEMENT>|
<ASSIGNMENT STATEMENT>|
<CODING STATEMENT>|
<COMMAND STATEMENT>
<MODE CHANGE STATEMENT>::= <VARIABLE>: <MODE>
<ASSIGNMENT STATEMENT>::= <VARIABLE>= <EXPRESSION>: <MODE>
<VARIABLE>::= <IDENTIFIER> | <IDENTIFIER> <SELECTOR>
<EXPRESSION>::= <SIMPLE EXPRESSION>|
<SIMPLE EXPRESSION><OPERATOR><SIMPLE EXPRESSION>
<SIMPLE EXPRESSION>::= <PATTERN> | <VARIABLE>
<PATTERN>::= #<HEX NUMBER>|
#<OCT NUMBER>|
#<BIN NUMBER>|
#<DEC NUMBER>
<HEX NUMBER> ::= H {<HEX DIGIT>} +4
<HEX DIGIT> ::= 0|1|2|3|4|5|6|7|8|9|A|B|C|D|E|F
<OCT NUMBER> ::= O{<OCT DIGIT>} +6
<OCT DIGIT> ::= 0|1|2|3|4|5|6|7
<BIN NUMBER> ::= B{<BIN DIGIT>} +16
<BIN DIGIT> ::= 0|1
<DEC NUMBER> ::= D{<DIGIT>} +5
<SELECTOR> ::= [<ELEMENT>{,< ELEMENT>}]
<ELEMENT> ::= <NUMBER>{ .. <NUMBER>} *1
<NUMBER> ::= <DIGIT> | <DIGIT><NUMBER>
<OPERATOR> ::= <ADDING OPERATOR> |
<LOGICAL OPERATOR>
<ADDING OPERATOR> ::= + | - | <LOGICAL OPERATOR>
<LOGICAL OPERATOR> ::= <AND> | <OR> | <XOR>
<AND> ::= &
<OR> ::= *
<XOR> ::= $
<MODE> ::= MZ | MC | MF | MM | MS | MT | MM | MCPY
<CODING STATEMENT> ::= <IDENTIFIER>= COD<PATTERN>:<MODE>
<IDENTIFIER>= DEC<PATTERN>:<MODE>
<COMMAND STATEMENT> ::= CLEAR( <IDENTIFIER>{,< IDENTIFIER>} )
CLEAR ALL|
SET ALL
<CONTROL PROGRAM> ::= RECALL <LABEL PART> | BYE |
f1 | f2 | f3 | f4 | f5
Some examples of the statements used in the Demoprog
1. <MODE CHANGE STATEMENT> ::= <VARIABLE>: <MODE>
<VARIABLE> ::= <IDENTIFIER> |
<IDENTIFIER> <SELECTOR>
<SELECTOR> ::= [<ELEMENT>{,<ELEMENT>}]
<ELEMENT> ::= <NUMBER>{ .. <NUMBER>}*1
<NUMBER> ::= <DIGIT> | <DIGIT><NUMBER>
Examples
1.a. IR : MM
IR takes the new mode MM (mode medium)
1.b. IR[1,4,10..16] : MF
The bits 1, 4 and 10 to 16 keep their value and change their mode to the MF mode (mode fast)
2. <ASSIGNMENT STATEMENT> ::= <VARIABLE>= <EXPRESSION>: <MODE>
<EXPRESSION> ::= <SIMPLE EXPRESSION> |
<SIMPLE EXPRESSION><OPERATOR><SIMPLE EXPRESSION>
<SIMPLE EXPRESSION> ::= <PATTERN> |
<VARIABLE>
Examples
2.a. IR= #HA3D1 : MM
Load IR with pattern A3D1 hexadecimal and change its mode to MM
2.b. IR=AR : MM
Load IR with the content of AR and set its mode to MM
2.c. IR=UR+VR : MF
Load IR with the sum of UR and VR and set its mode to MF
2.d. IR=UR+3 : MM
Load IR with the content of UR added to 3 and set its mode to MM
2.e. SC=VR[1..4] : MM
Load SC with the bits 1 to 4 of the vr register and set its mode to MM
2.f. IR[1..4]=VR[8..11] : MF
Load the bits 1 to 4 of IR with the content of the bits 8 to 11 of VR, and set its mode to MF
3. < DECLARATION > ::= NAME <VARIABLE>=<IDENTIFIER>
Example
3.a. NAME IR[1..8]= PC
the bits 1 to 8 of register IR take the new name PC
4. <CODING STATEMENT> ::= <IDENTIFIER>= COD<PATTERN>: <MODE>
DEC<PATTERN>: <MODE>
Examples
4.a. If the following statement already exists,
SC= #B101 : MF
which means that the register SC is loaded with 101 binary and its mode is MF, then by writing this statement:
UR=DEC SC : MM
the register UR will be loaded with the decoded value of SC, namely 00100000, and its mode will be the MM mode.
4.b. If the following statement already exists,
UR= #B10000000 : MF
which means the UR register is loaded with the mentioned binary value and is blinking with the MF mode, the following statement:
SC=COD UR : MF
will load the register SC with the coded value of UR, namely 111, and set its mode to MF.
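Viewed in conventional terms, DEC behaves like a one-of-n decoder and COD like an encoder that returns the position of the set bit. The following Python sketch (illustrative only; the helper names are not part of the "Demoprog" implementation) reproduces examples 4.a and 4.b:

```python
# Illustrative sketch of the DEC and COD statements described above.
def dec(value: int, width: int = 8) -> str:
    """DEC: bit position -> one-hot pattern, e.g. 0b101 (= 5) -> '00100000'."""
    return format(1 << value, f"0{width}b")

def cod(pattern: int) -> str:
    """COD: one-hot pattern -> bit position, e.g. 0b10000000 -> '111' (= 7)."""
    return format(pattern.bit_length() - 1, "b")

print(dec(0b101))         # 00100000, as in example 4.a
print(cod(0b10000000))    # 111, as in example 4.b
```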
5. <COMMAND STATEMENT> ::= RECALL <LABEL PART> |
CLEAR(<IDENTIFIER>{,<IDENTIFIER>})
CLEAR ALL
SET ALL
BYE
Examples
5.a. RECALL LDA1
the label part of the inputfile is recalled
5.b. CLEAR (UR,WR)
the registers UR,WR are set to zero
5.c. CLEAR ALL
initialize the whole panel
5.d. SET ALL
set all lamps of the panel on
5.e. BYE
close and save inputfile and write to console
'PRINT DEMO <COURSE NAME>' and go to RIO '%' command.
III.2. Language Structures and Compilers.
We aim to develop the compiler of the Demoprog using systematic programming techniques in the high-level programming language PLZ. In this respect, it constitutes a welcome application of the program and data structuring disciplines exposed and elaborated in this section.
We shall start by describing language composition and will then concentrate exclusively on simple structures that lead to the modular translator.
Every language is based on a vocabulary. Its elements are ordinarily called words; in the realm of formal languages, however, they are called (basic) symbols. It is characteristic of languages that some sequences of words are recognized as correct, well-formed sentences of the language and that others are said to be incorrect or ill-formed. What is it that determines whether a sequence of words is a correct sentence or not? It is the grammar, syntax, or structure of the language. In fact, we define the syntax as the set of rules or formulas which defines the set of (formally correct) sentences. More importantly, however, such a set of rules not only allows us to decide whether or not a given sequence of words is a sentence, but it also provides the sentences with a structure which is instrumental in the recognition of a sentence’s meaning. Hence, it is clear that syntax and semantics (= meaning) are intimately connected. The structural definitions are therefore always to be considered as auxiliary to a higher purpose. This, however, must not prevent us from initially studying structural aspects exclusively, ignoring the issues of meaning and interpretation.
Take for example, the sentence:
Girls sleep
The word "Girls" is the subject and "sleep" is the predicate.
This sentence belongs to the language that may, for instance, be defined by the following syntax.
\[
\begin{align*}
\langle \text{sentence} \rangle &::= \langle \text{subject} \rangle \ \langle \text{predicate} \rangle \\
\langle \text{subject} \rangle &::= \text{girls} | \text{boys} \\
\langle \text{predicate} \rangle &::= \text{sleep} | \text{eat}
\end{align*}
\]
The meaning of these three lines is:
1. A sentence is formed by a subject followed by a predicate.
2. A subject consists of either the single word "girls" or the word "boys".
3. A predicate consists of either the word "sleep" or the word "eat".
The idea then is that a sentence may be derived from the start symbol \( \langle \text{sentence} \rangle \) by repeated application of replacement rules.
The formalism or notation in which these rules are written is called Backus-Naur-Form (BNF). The sentential constructs
\[
\langle \text{sentence} \rangle, \ \langle \text{subject} \rangle, \ \langle \text{predicate} \rangle
\]
are called non-terminal symbols, the words
\[
\text{girls}, \ \text{boys}, \ \text{sleep}, \ \text{eat}
\]
are called terminal symbols, and the rules are called productions. The symbols:
\[
::=, \ |
\]
are called meta-symbols of the BNF notation.
The language defined by this syntax consists of the four sentences:
\[
\text{girls sleep} \\
\text{boys sleep} \\
\text{girls eat} \\
\text{boys eat}
\]
To build up one of the above mentioned sentences, we can follow the following sequence:
<sentence> → <subject> <predicate> → girls <predicate> → girls sleep;
hence <sentence> →* girls sleep, and since girls sleep ∈ T*, it follows that girls sleep ∈ L, where
T* denotes the set of all sequences of symbols from T,
T denotes the vocabulary of terminal symbols, and
L denotes the language.
Note that <subject> and <predicate> occur in non-terminating steps only, whereas the terminating step must lead to a sequence consisting of terminal symbols only, i.e. to one of the above mentioned sentences.
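As a small illustration of how such a syntax can be turned directly into a program (the technique applied to the "Demoprog" parser in section IV), a Python recognizer for this toy grammar might look as follows; it is a sketch added here, not part of the thesis software:

```python
# Illustrative recognizer for the toy grammar: one function per non-terminal.
def subject(word):
    return word in ("girls", "boys")

def predicate(word):
    return word in ("sleep", "eat")

def sentence(words):
    return len(words) == 2 and subject(words[0]) and predicate(words[1])

for s in ("girls sleep", "boys eat", "girls girls"):
    print(s, "->", sentence(s.split()))
```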
The grammatical rules are called productions, because they determine how new forms may be generated or produced.
For the sake of brevity, we use the capital letters for non-terminal symbols, the Greek letters to denote sequences of symbols.
We can now present the following mathematical definitions:
1. Let a language \( L = L(T,N,P,S) \) be specified by
a. A vocabulary \( T \) of terminal symbols.
b. A vocabulary \( N \) of non-terminal symbols (grammatical categories).
c. A set \( P \) of productions (syntactical rules).
d. A symbol \( S \) (from \( N \)), called the start symbol.
2. The language \( L(T,N,P,S) \) is the set of sequences of terminal
symbols \( \xi \) that can be generated from \( S \) according to rule
3 below.
3. A sequence \( \sigma_n \) can be generated from a sequence \( \sigma_0 \) if and
only if there exist sequences \( \sigma_1, \sigma_2, \ldots, \sigma_{n-1} \) such that
every \( \sigma_i \) can be directly generated from \( \sigma_{i-1} \) according
to rule 4 below:
\[
( \sigma_0 \rightarrow^* \sigma_n ) \quad \Leftrightarrow \quad ( \sigma_{i-1} \rightarrow \sigma_i \ \text{ for } i = 1 \ldots n )
\]
4. A sequence $\eta$ can be directly generated from a sequence $\xi$ if and only if there exist sequences $\alpha, \beta, \xi', \eta'$ such that
a. $\xi = \alpha \xi' \beta$
b. $\eta = \alpha \eta' \beta$
c. $P$ contains the production $\xi' ::= \eta'$
A language is said to be context free if and only if it can be defined in terms of a context free production set. A set of productions is context free if and only if all its members have the form,
$$<\text{subject}> ::= \xi \quad (\text{subject} \in N, \quad \xi \in (N \cup T)^*)$$
i.e., if the left side consists of a single non-terminal symbol and can be replaced by $\xi$ regardless of the context in which $<\text{subject}>$ occurs. If a production has the form
$$\alpha <\text{subject}> \beta ::= \alpha \xi \beta$$
then it is said to be context sensitive, because the replacement of $<\text{subject}>$ by $\xi$ may take place only in the context of $\alpha$ and $\beta$.
Section IV.
IV.1. Software Design.
As computer programs get larger, and are made to carry out more complex tasks, they become more complex and prone to errors in logic. To produce a design free of logical errors, it is necessary to remove all ambiguities and inconsistencies from the specification. However, in order that the specification should convey sufficient information for the product to satisfy the predetermined requirements, it must be couched in terms that are natural to the application it describes. We shall follow the steps of structured programming method, because the design of software becomes a rigorously ordered and staged transition from the high-level specification to the low-level implementation. The initial specification is divided into a number of functionally related subsystems. Each subsystem has its associated specification which itself is a subset of the total system specification. The reasons for a specific subdivision being adopted should be completely documented as part of the design process. Each subsystem may be considered to contain functionally related tasks such that the degree of interaction between these tasks is greater than that between tasks of different subsystems. Proceeding further, each task is divided into a number of steps which specify the processing to be carried out. It is important to specify the processing involved at each step in a language appropriate to the level of detail being considered. No attempt to use a specific computer language should be made until the refinement of a particular step is sufficiently detailed to allow an almost one-to-one correspondence between the description of a substep and a statement in the language.
The top-down refinement process structures the design according to well defined hierarchy. At the heart of the method is the need to divide the design into easily comprehensible units.
This not only helps to ensure that the program does what it was supposed to do at the outset, but it also demonstrates this fact to other programmers. A program that can be understood stands a far greater chance of being modified correctly. Stepwise refinement, in fact, forces the design into a rigorous, modifiable structure. Where a modification or extension of the specification is necessary, the subsequent decomposition steps are simply replaced by those pertinent to the new requirements. Indeed, this is one of the most important requirements in the design of the 'demoprog': the demonstrating language should be able to accommodate the changing courses of the future, which is the typical character of the training centre of P.I.T.T.C.: 'Philips' International Telecommunications Training Centre.'
After we have completed the definition of the 'demoprog' (see section III) we can try designing the software structure of the interpreter:
**Main program.**
After initialisation of the important variables used in the whole program, and as long as a certain condition is true, the program has to check the syntax used by the user (in computer terms, "the source") and of course to interpret it. The interpretation is executed only if the syntax check accepts the content of the source. After interpretation the action is ended.
**Syntax analyzer (parser)**
The syntax analyzer has an input and an output.
The input: is a row of abstract symbols supplied by the lexical scanner, in the variable called symbol.
The output: error if there is any, or an intermediate form of the demogram, with some variables needed as an input to the interpreter.
Lexical scanner.
The lexical scanner is charged with two tasks: getting the next symbol and getting the next character.
Next symbols
The next symbols (abbr.: nextsym) consists of an input and an output.
The input: receiving an "interesting" character handed by nextcharacter with variable ch.
The output: the abstract symbols, in the variable sym.
Next character
The next character (abbr.: nextchr) consists of an input and an output.
The input: ASCII characters as they are supplied by getseq procedure.
The output: an interesting character in the variable ch.
Besides the main program, we need an interrupt routine to realize the several blinking modes (see also section IV); in other words, the interrupt routine shall inspect an array of instructions produced from the user's lesson (written in the "Demoprog" user's language) and send the right signals to the input/output ports (see section III); more precisely, port A will serve as the gate for the lamps, and port B will select the groups of lamps sequentially.
We have to define the way of preparing the demoprogram (the user's language program) before executing it.
We shall make use of the existing file-editing program supplied by the RIO operating system to prepare a lesson written in the "Demoprog" user's language, so that the main program can act on an existing file.
We base ourselves on the assumption of an existing file which is called by the main program.
How to create a file?
After initializing the RIO operating system, the user will find a brief displayed explanation, how to edit and execute the demoprogram.
We can create the file by editing it, as follows:
after system initialization the % symbol will appear, which means that the system is at RIO command level; this entitles us to request the EDIT program as follows:
```
% EDIT <Lesson name>
EDIT
NEW FILE
INPUT
{here follows the lesson written in Demoprog User's Language}
Begin
.
.
end
```
{ by pressing return twice, the system goes to the editing command level, which has the prompt > }
> changing and correcting the lesson can be done with the editing commands (see the text editor of the Z-80 system)
> Quit { returns us to the RIO command level }
%
Testing the lesson:
After creating the lesson, the user has the ability to test and demonstrate the lesson by using the following command.
% DEMO < LESSON NAME >
N.B. In the future the user will have to specify the course name after the lesson name,
eg.
% DEMO <LESSON NAME> <COURSE NAME>
Let us now try the stepwise refinement method to explain the algorithms of the "Demoprog" program; for this aim we shall use the high-level programming language PLZ. Before we start, let us explain the word algorithm.
What is an Algorithm?
According to Webster's Dictionary, an "algorithm" is any special method for solving a certain kind of problem. Actually we use an established algorithm for carrying out almost any familiar task, though we rarely stop to think of the various component steps of algorithms. To be useful in the computing context, we'll have to narrow down the definition of an algorithm a bit further.
For our use we can define "Algorithm" as "a list of instructions for carrying out some process step by step". Although this definition avoids specifying the level of detail to be used in the list, we will find it useful to start with a list of
very rough and informal instructions, and then to start refining these instructions in designing a final detailed version of the algorithm in the form of a program.
The nature of the digital computer leads to the specification of solutions to problems using algorithm, i.e. using "algorithmic" notation. Since the computer can perform only one small step at a time, people have generally been led to concentrate more on the order in which the steps are taken, than on understanding how each step relates to the problem as a whole. Today we know that it is often better to reverse this process, leaving the order of processing to be determined after the various major steps of the task have been defined.
The problem is then to develop a program which interprets the lesson. The interpreter program can be formulated as follows:
```
demo module
external ...
internal ...
main procedure
entry
"openfile"
"read line after line of file and store in memory"
"scan source"
"perform syncheck, context free & sensitive"
if
"no errors found"
then "interpret program"
"close file"
fi
end main
```
We now proceed to specify the various statements in greater detail. Refining "openfile", "read line after line of file and store in memory", "scan source" and "perform syncheck, context free & sensitive", we note that those instructions are a preparing phase to
the instruction "interpret program."
The "openfile", "close file" and "read line after line of file and store in memory" will be executed automatically by the operating system, when we make use of the PLZ stream I/O facility, after declaring the following external procedure:
```
external
    open procedure (unit byte, filenameptr byte, flag byte) returns (rcode byte)
    close procedure (unit byte) returns (rcode byte)
```
Opening and closing the file will then be done as follows, respectively:
```
code := open (infile, filenameptr, input)
code := close (infile)
```
IV.2 Scan Source Module
This procedure has the task of reading the characters with the procedure nextchr and handing them over to another procedure called nextsym. In the scan procedure we need the following declarations:
```
type    alfa array [8 byte]
global  idname alfa
        intval integer
        sym byte
        repr array [23 alfa] := [['B','E','G','I','N',' ',' ',' '],
                                 ['B','Y','E',' ',' ',' ',' ',' '],
                                 ... ]        { all reserved words used in "Demoprog" }
```
The variable "idname" has originally the array type which is used as a dynamic mechanism. It will be filled by the characters fetched by nextchr procedure, afterwords compared with the content of repr array, if the result is true then the variable "sym" will take the corresponding value which declared as "constant", the value of "sym" is necessary to the parser.
```
nextchr procedure
entry
    ch := getch; putch(ch)
    if rcode = "endoffile" then ch := EOF fi
    if ch = CR orif ch = TAB then ch := ' ' fi
end nextchr
```
n.b. ch:= getch and putch(ch) are procedures which can be found in the main lexical scanner program.
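For readers more familiar with a high-level notation, a rough Python counterpart of nextchr is sketched below; the stream handling and the EOF sentinel are assumptions of the sketch, only the CR/TAB-to-blank mapping and the echo follow the pseudocode above.

```python
# Illustrative Python counterpart of nextchr: read one character, echo it,
# map end-of-file to a sentinel and CR or TAB to a blank.
import sys

EOF = "\x00"  # assumed sentinel standing in for the EOF value of the pseudocode

def nextchr(stream=sys.stdin) -> str:
    ch = stream.read(1)
    if ch == "":               # end of file reached
        return EOF
    sys.stdout.write(ch)       # putch(ch): echo the character
    if ch in ("\r", "\t"):     # CR or TAB become a blank
        ch = " "
    return ch
```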
```
nextsym procedure
entry
    "after skipping blanks and reading comment"
    if ch
    case 'A'..'Z'
        then i := 0
        do  idname[i] := ch
            i := i + 1; nextchr
            if (i = 8 orif ch < 'A' orif 'Z' < ch)
               andif (ch < '0' orif '9' < ch)
            then exit
            fi
        od
        "fill blanks in the rest of the idname array if the read symbol is shorter than eight characters"
        "now we give sym the value identifier"
        "and we compare the content of the idname variable with our data structure named repr;"
        "if we find a match, we assign the corresponding reserved-word value to the variable sym"
    case '#'
        then nextchr
        if ch
        case 'B' then "scanbinarynumber"
        case 'Q' then "scanoctalnumber"
        case 'X' then "scanhexnumber"
        case 'D' then "scandecnumber"
        fi
    else nextdel
    fi
end nextsym
```
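The identifier branch can be summarised in a few lines of Python (an illustration only; the keyword table and symbol names below are stand-ins for the repr array and the sym constants of the PLZ program):

```python
# Illustrative sketch of the identifier branch of nextsym: collect at most
# eight letters or digits into idname and look the word up in the keyword table.
KEYWORDS = {"BEGIN": "beginsym", "BYE": "byesym", "END": "endsym"}  # stand-in for repr

def scan_identifier(text: str, pos: int):
    """Return (sym, idname, new position)."""
    start = pos
    while pos < len(text) and text[pos].isalnum() and pos - start < 8:
        pos += 1
    idname = text[start:pos].upper()
    return KEYWORDS.get(idname, "identifier"), idname, pos

print(scan_identifier("BEGIN DEMO1", 0))   # ('beginsym', 'BEGIN', 5)
print(scan_identifier("DEMO1 ...", 0))     # ('identifier', 'DEMO1', 5)
```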
For the sake of brevity, we shall explain only the "scanbinarynumber"; indeed, the four number-scanning algorithms are similar.
```plaintext
case 'B'
    then intval := 0; sym := binnum
    count := 1; nextchr
    do
        if ch ∉ ['0','1'] orif count > 16
        then exit
        fi
        intval := (2*intval) + (integer ch - 48)
        count := count + 1; nextchr
    od
    if count > 16 andif ch ∈ ['0','1'] then error (illegal binnumber) fi
```
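The same idea, generalised to the four number forms of <PATTERN>, can be sketched in Python as follows (the digit limits are taken from the grammar in section III, the marker letters from the case labels above; this is an illustration, not thesis code):

```python
# Illustrative sketch of the four number scanners: base and maximum digit
# count per marker ('X' hex +4, 'Q' octal +6, 'B' binary +16, 'D' decimal +5).
BASES = {"X": (16, 4), "Q": (8, 6), "B": (2, 16), "D": (10, 5)}

def scan_number(marker: str, digits: str) -> int:
    base, max_digits = BASES[marker]
    if len(digits) > max_digits:
        raise ValueError("illegal number: too many digits")
    intval = 0
    for ch in digits:
        intval = base * intval + int(ch, base)
    return intval

print(scan_number("B", "101"))     # 5
print(scan_number("X", "A3D1"))    # 41937
```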
If the selecting statement "case statement" couldn't find the given character within the nextsym procedure, we invoke "nextdel procedure" expecting that the fetched character will be perhaps a delimiter.
The "nextdel" procedure has the task to select all used delimiters and present the "sym" variable its corresponding value:
nextdel procedure
entry
if ch
case '!' then sym:= exlmsym
case '#' then sym:= numsym
etc.
else "most probably the fetched character
doesn't exist in the chosen character set"
fi
end nextdel
end demo
IV.3 Parse (syntax analyser) Module
The task of language translators or processors is primarily not the generation but the recognition of sentences and sentence structure. This implies that the generating steps which lead to a sentence must be reconstructed upon reading the sentence, and that its generation steps must be retraced. This is generally a very complicated and sometimes even an impossible task. Its complexity intimately depends on the kind of production rules used to define the language. It is the task of the theory of syntax analysis to develop recognizing algorithms for languages with rather complicated structural rules. Here, however, the syntax of the "Demoprog" is not of that complexity, but neither does it prevent us from using the top-down parsing method.
A first consequence of the basic efficiency requirement is that the choice of every analysis step must depend only on the present state of computation and on a single next symbol being read. Another most important requirement is that no step will have to be revoked later on. These two requirements are commonly known under the technical term one-symbol-lookahead without backtracking.
There are two essentially different techniques that can be applied. One is to design a general top-down parsing program, where particular grammars are to be supplied in the form of some data structure, on the basis of which the program operates. This parser is in some sense controlled by the data structure; the program is then called table driven. The other one, the one we are going to apply, is to develop a top-down parsing program that is constructed systematically and maps a given syntax into a sequence of statements, i.e. into a program.
It is advantageous to represent the given syntax by a so-called recognition or syntax graph. This graph reflects the flow of control during the process of parsing a sentence; a general view of the syntax graphs can be seen in the appendix.
It must be clear for us by applying this method, that the goal of the parsing process will be known from the start. The goal is to recognize a sentence, i.e., a sequence of symbols generatable from the start symbol.
Let us start from the top, we have seen from the definition of the "Demoprog", that it consists of:
1. Program
2. Control program
Before we start discussing the parser procedures, it is worthwhile to give a view of the data structure. We need some important specifications to define the panel, first of all the name of the registers, starting number of every register and of course the length of the register. For that sake we define a record with two fields;
```
type
    lampinfo record [name array [8 byte]
                     lampnovar, lengthvar byte]
internal
    regtable array [48 lampinfo] := [
        [['1st name reg'], 0, 16],
        [['2nd name reg'], 16, 16],
        ... ]
    lampreg, lengthreg byte
```
All parsing procedures assume that the first symbol of a recognizing construction is represented in the variable "sym" which has been discussed in the lexical scanner.
1. The program procedure actually consists of invoking the two procedures "programheading" and "block".
Programheading
[Syntax graph: BEGIN prog.name ... — see Appendix A]
programheading procedure
```
entry
terminal (beginsym) ; terminal (identifier)
if sym = opensym then param_list fi
if sym = namesym
then nextsym; lampreg, lengthreg := variable
terminal (eqsym); terminal (identifier)
fi
end programheading
```
block
[Syntax graph: flow mode, label-part, statement, END — see Appendix A]
block procedure
```
entry
do
if sym=flowsym
then nextsym ; newflow:=flowmode
fi
if sym=lablesym
then nextsym ; label-part
fi
statement
if sym=endsym
then exit
else error(14) ; exit
fi
od
end block
```
We have invoked the procedure statement in the block procedure; what does this procedure look like? Well, actually this is the heart of the "Demoprog": a statement terminates with a semicolon or a carriage-return symbol.
```plaintext
statement procedure
entry
    if sym = identifier
    then lampreg, lengthreg := variable
        if sym = colonsym
        then nextsym; newmode := mode
        else
            if sym = eqsym
            then nextsym
                if sym = codsym orif sym = decsym
                then nextsym; pattern
                    terminal(colonsym)
                    newmode := mode
                else value := expression
                    terminal(colonsym)
                    newmode := mode
                fi
            else error(9)
            fi
        fi
    else
        if sym = clearsym
        then nextsym; param_list
        else
            if sym = clearallsym
            then newmode := mode
            else
                if sym = setallsym
                then newmode := mode
                else error(13)
                fi
            fi
        fi
    fi
end statement
```
The procedure variable, which was needed in the statement procedure, delivers the first lamp number of the register and its length:
variable procedure
returns (lampnovar, lengthvar byte)
local i, j byte
entry
j:=0
do
if j=48 then exit fi
i:=0
do
if regtable[j].name[i]<>idname[i]
then exit
fi
if i=7 then
lampreg:=regtable[j].lampnovar
lengthreg:=regtable[j].lengthvar
exit
fi
i:=i+1
od
j:=j+1
od
end variable
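The same lookup can be expressed compactly in Python (an illustration; the register names and lamp numbers below are invented examples, the real regtable holds 48 entries):

```python
# Illustrative sketch of the variable procedure: map a register name to its
# first lamp number and its length, as stored in regtable.
REGTABLE = {"IR": (0, 16), "AR": (16, 16), "SC": (32, 3)}   # invented example entries

def lookup_register(idname: str):
    """Return (first lamp number, length) for a register name, or None if unknown."""
    return REGTABLE.get(idname)

print(lookup_register("IR"))   # (0, 16)
print(lookup_register("XX"))   # None
```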
We have tried to give an impression on the way of programming, during the parsing procedures, a further detail can be found in the attached programs.
2. Control program.
The control program has the task of acting independently of the main program. A kind of GOTO statement has been provided to enable the user, during a demonstration, to repeat the demonstration of one or another part of the lesson. This command statement is called the RECALL statement. The user can quit demonstrating by using the command word BYE.
The control program provides five flow modes. By means of these flow modes the teacher can define demonstration steps: the step-by-step flow mode (f1) is the default value, while the rest can be used at arbitrary places in the program. (Note that the lesson itself uses the ASCII characters F1, F2, F3, F4, F5, while during the demonstration the corresponding function push buttons f1, f2, f3, f4, f5 are pressed.) The function buttons have been chosen for the sake of simplicity and speed during the demonstration. The syntax graph is as follows:
Appendix A.
Demoprog Syntax Diagrams
[Syntax diagrams for: Program (Programheading, Block); Programheading (BEGIN, Programname, Param_list, NAME Variable ident); Block (Flow Mode, Label_Part, statement, END); Statement (variable, expression, COD / DEC Pattern, CLEAR, CLEAR ALL, SET ALL); Param_list.]
Conclusion:
At the early stages of developing this subject, I took special care to achieve an applicable system; after all, the project has been financed by the PITTC.
I believe the simplicity of the DEMOPROG rules is important to encourage the lecturer to use it frequently. Although it is somewhat extravagant to use a Z-80 software development system for the sake of teaching only, one can advise that the next step to be taken is to develop a kind of time-sharing procedure, to make the most of the microprocessor.
I would like to thank prof. A. Heetman and Ir. J.A. Samwel, who gave me the opportunity to carry out this subject. I am very grateful to Ir. Engbers, Ir. Kemper and Ir. v/d Berg for their advice on the general layout. I appreciate the time prof. Kruseman Aretz and Ir. Hemerik could find to help me on software problems. From the side of the PITTC I would like to thank mr. P. Dane for his practical advice.
Problem 1. Faulty Turbines
You live in Windyland where the winds are always blowing at 50 miles an hour. To harness the energy of this windy bonanza, Windyland has laid out \( N^2 \) windturbines on an \( N \times N \) grid – that is, for each \( 1 \leq i \leq N, 1 \leq j \leq N \), a turbine is placed at location \((i, j)\). Unfortunately, some \( m \) (very few compared to \( N \)) of these turbines are faulty. When the wind passes through at 50 miles per hour, a good turbine generates 1 mega-joule per second, while a faulty turbine generates only a \( \frac{1}{2} \) mega-joule per second. Windyland is trying to determine the location of the faulty turbines. To help this quest, they have a built-in test mechanism \( \text{TEST}(i, j) \) that tells them the total amount of energy generated by the turbines in the set \([1, \ldots, i] \times [1, \ldots, j]\), where they get to specify \( i, j \in [1, \ldots, N] \). However every \( \text{TEST} \) requires them to shut down the entire grid for an hour and is thus extremely expensive to run.
Give an algorithm that Windyland could use to find all the faulty turbines using as few \( \text{TESTs} \) as possible. Specify the running time of your algorithm as a function of \( m \) and \( N \). Your algorithm should be efficient when \( m \ll N \).
Solution: Executive Summary. Given an \( N \times N \) matrix of 0/1’s with \( m \) ones, we try to find the ones efficiently. To solve the problem, we use binary search (more abstractly, divide-and-conquer). We obtain an algorithm that does \( O(m \log \frac{n^2}{m}) \) tests and has the same running time.
Detailed solution. First we design a procedure \( \text{TEST-SUBRECT}(i_1, j_1, i_2, j_2) \), \( i_1 \leq i_2 \) and \( j_1 \leq j_2 \), that returns the amount of power generated by the turbines in the rectangle \( (i_1, j_1, i_2, j_2) \). This is done as follows:
\[
\text{TEST-SUBRECT}(i_1, j_1, i_2, j_2) = \text{TEST}(i_2, j_2) - \text{TEST}(i_1 - 1, j_2) - \text{TEST}(i_2, j_1 - 1) + \text{TEST}(i_1 - 1, j_1 - 1).
\]
(This is easily verified by a picture.)
Next imagine an abstract tree on the grid, which is in fact a quad-tree. The quad-tree is constructed as follows. Suppose, wlog, \( n \) is a power of 2. Divide the grid into 4 squares (of side-length \( n/2 \)). Then subdivide each square into another 4 squares (of side-length \( n/4 \)), and so forth. Now the quad-tree has as its root the entire grid, and it has 4 children, corresponding to the squares of side-length \( n/2 \). These squares also have 4 children each, each of side-length \( n/4 \), and so forth. At the leaf level, we have squares of side-length 1, i.e., they correspond to turbines.
The algorithm is now simple: design a procedure \( \text{SEARCH}(\text{node}) \) that returns all faulty turbines in the square corresponding to \( \text{node} \). For this, test each of the 4 subtrees of \( \text{node} \) if they have faulty turbines (using \( \text{TEST-SUBRECT} \)) and recurse into the subtrees that have.
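The following Python sketch illustrates the recursion (it is not part of the official solution text); TEST is simulated here by counting faulty turbines in a prefix rectangle of a toy grid, a quantity a real TEST call would give as \(2(i \cdot j - \text{TEST}(i,j))\).

```python
# Illustrative sketch of the quad-tree search. test(i, j) returns the number of
# faulty turbines in [1..i] x [1..j]; here it is simulated from a toy grid.
def make_test(faulty):
    def test(i, j):
        return sum(faulty[r][c] for r in range(i) for c in range(j))
    return test

def find_faulty(test, n):
    def count(i1, j1, i2, j2):   # faults in a sub-rectangle, by inclusion-exclusion
        return (test(i2, j2) - test(i1 - 1, j2)
                - test(i2, j1 - 1) + test(i1 - 1, j1 - 1))
    def search(i1, j1, i2, j2, out):
        if count(i1, j1, i2, j2) == 0:
            return                              # no faults inside: prune this node
        if i1 == i2 and j1 == j2:
            out.append((i1, j1))                # leaf: a single faulty turbine
            return
        im, jm = (i1 + i2) // 2, (j1 + j2) // 2
        for a, b, c, d in ((i1, j1, im, jm), (im + 1, j1, i2, jm),
                           (i1, jm + 1, im, j2), (im + 1, jm + 1, i2, j2)):
            if a <= c and b <= d:               # skip degenerate quadrants
                search(a, b, c, d, out)
    out = []
    search(1, 1, n, n, out)
    return out

grid = [[0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 1, 0], [0, 0, 0, 0]]
print(find_faulty(make_test(grid), 4))   # [(1, 2), (3, 3)]
```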
To see the correctness and running time of this algorithm consider the following argument. Paint black all nodes that have faulty turbines inside. Then, the black nodes are exactly the union of the
paths from all faulty turbines to the root. There are thus at most $m \log n$ black nodes (the depth of the tree is $\log n$). We perform tests on the black nodes and their children only.
A more attentive counting of the black nodes actually gives a bound of $O(m \log \frac{n}{\sqrt{m}})$. The reason is that previously we overcounted the black nodes, especially near the root. The number of black nodes is maximized when all of the first $\log_4 m$ levels of the quad tree are black and, in the rest, we have $m$ disjoint black paths. In this case the number of black nodes is $m + m(\log n - \log_4 m) = O(m \log \frac{n^2}{m})$.
Thus the running time is $O(m \log (n^2/m))$. Consequently, we perform at most the same number of tests.
One can prove that the above number of tests is optimal. An easy bound of $\Omega(\frac{m \log(n^2/m)}{\log m})$ can be obtained as follows. One can use the same lower bound argument for this problem as for sorting: we draw a tree of queries, with each node having $m+1$ children (depending on the answer of the query, which is a count of faulty turbines in $\{0, \ldots, m\}$). Then, there are $\binom{n^2}{m} = 2^{\Theta(m \log (n^2/m))}$ total leaves, and the lower bound is the log of that, divided by the log of the degree, $m + 1$. To obtain the optimal lower bound, partition the array into $n^2/m$ parts, each with 1 faulty turbine, and prove that to uncover the turbine in each part takes $\Omega(\log (n^2/m))$ tests, and that in total one needs $m$ times that (note that this is not an immediate corollary).
**Grading (out of 30 points).** Most students gave the right idea of divide-and-conquer by partitioning in subrectangles and recursing into non-empty ones. Another option was to find the columns with faulty turbines and to find faulty turbines inside the columns (both with binary search). We gave full credit for $O(m \log n)$ solutions.
Most students lost 7 points for not giving a complete analysis of the running time. The following argument was insufficient: "to find one fault, we need $O(\log n)$ time (because of binary search); and so, for $m$ faults, we need $m$ times that". This is not an immediate implication, and one needed to justify this step (e.g., see above). Also, many students lost 1-4 points for either not specifying the computational running time (besides the number of tests performed) or having a computationally inefficient algorithm. Note that this was the requirement for all problems (as stated in the preamble). Otherwise, points were deducted for imprecise description of the algorithm (e.g., for not showing how to compute the number of faults in a general subrectangle). Solutions with $\Omega(N)$ tests received at most 15 points.
**Problem 2. Tax Status**
You live in a community of $n$ people who have approached you for help with a tax question. They would like to find out how many of them will have to pay taxes, how many would receive tax credits (in this hypothetical example we assume that the IRS does give money to people with sufficiently low income), and how many will neither owe taxes nor receive any credit.
The amount that someone pays in taxes is a monotone non-decreasing function $f(x)$ of their income $x$ (so $f(x) \geq f(y)$ if $x > y$). The IRS has provided you with software to compute $f(x)$, but in
their characteristic style (showing lack of proper 6.046 training), this software is extremely slow and takes \(\sqrt{n}\) time to compute the \(f(x)\) on any single input \(x\).
You have available to you the incomes of all people in your community in the form of an unsorted array \(X[1..n]\), where \(X[i]\) is the income of the \(i\)th person. Give an efficient algorithm to compute an array \(Y[1..n]\) where \(Y[i]\) indicates the tax status of person \(i\), i.e., \(Y[i] = +\) if the \(i\)th person owes taxes (i.e., \(f(x) > 0\)), \(Y[i] = -\) if the \(i\)th person is owed money by the IRS, and \(Y[i] = 0\) if the \(i\)th person neither owes taxes, nor is owed money by the IRS. Analyze the running time of your algorithm as a function of \(n\).
You may assume all incomes are distinct.
**Solution: Executive Summary.** Search for the two “boundary” values of \(X\) in the community \(k, j\), where \(k\) is the index of the person who makes the most money but is still owed money by the IRS, and \(j\) is the index of the person who makes the most money of those that neither owe taxes nor are owed money by the IRS. Then a linear scan through \(X\) allows you to determine the values of \(Y\). In order to find the boundaries, find a query to \(X\), using the linear time median algorithm, which allows you to successively rule out half of the remaining elements from consideration as possible candidates for the boundary.
**Detailed Solution.**
In order to find the value of \(k\) (finding the value of \(j\) is analogous), the plan is to find a single query that allows you to rule out half of the indices from consideration. Find the median element \(X[m]\) of the \(X[i]\)'s using the linear time median algorithm. Query \(f\) on \(X[m]\). If \(f(X[m]) \geq 0\), then construct a new list which contains only those people for which \(X[i] \leq X[m]\) (by the monotonicity of \(f\), the indices that have been thrown out have \(X\) values \(\geq X[m]\), hence \(f\) values \(\geq f(X[m]) \geq 0\), and therefore cannot be the boundary); otherwise, remember \(X[m]\) as the best candidate found so far and construct a new list which contains only those indices for which \(X[i] > X[m]\) (again, by the monotonicity of \(f\), the indices that have been thrown out have \(X\) values \(\leq X[m]\), hence \(f\) values \(\leq f(X[m]) < 0\), so \(X[m]\) is at least as good a candidate). Repeating this \(O(\log n)\) times brings you down to a list of constant size.
Once \(k, j\) have been found, scan through the indices of \(X\) and for each \(i\), set the value of \(Y[i]\) by comparing \(X[i]\) to \(X[k]\) and \(X[j]\). That is, set \(Y[i]\) to \(-\) if \(X[i]\) is smaller than or equal to \(X[k]\), set \(Y[i]\) to \(0\) if \(X[k] < X[i] \leq X[j]\), and set \(Y[i]\) to \(+\) otherwise.
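A compact Python sketch of the boundary search (illustrative only, not part of the official solution): statistics.median_low stands in for the deterministic linear-time median, and the current median is remembered as the best candidate so far, as noted above.

```python
# Illustrative sketch of the boundary search: probe f only O(log n) times.
from statistics import median_low

def find_boundary(incomes, f, wanted):
    """Largest income x with wanted(f(x)) true, or None if there is no such x."""
    candidates, best = list(incomes), None
    while candidates:
        m = median_low(candidates)
        if wanted(f(m)):
            best = m if best is None else max(best, m)
            candidates = [x for x in candidates if x > m]   # boundary can only be richer
        else:
            candidates = [x for x in candidates if x < m]   # m and everyone richer are ruled out
    return best

incomes = [12, 55, 7, 90, 33, 61, 4]
f = lambda x: x - 40                                  # toy monotone tax function
k = find_boundary(incomes, f, lambda t: t < 0)        # richest person owed money
j = find_boundary(incomes, f, lambda t: t <= 0)       # richest person who owes nothing
print(k, j)                                           # 33 33
```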
**Time analysis:** Since half of the indices are ruled out from consideration after each query, \(O(\log n)\) queries to \(f\) will be made, taking \(O(\sqrt{n} \log n)\) time. The extra time to implement the search for each boundary value is given by \(T_b(n) = T_b(n/2) + c \cdot n\), which gives \(T_b(n) = O(n)\). Once the two boundary values are found, the scan to set the values of \(Y\) takes linear time. The total time is \(T(n) = O(\sqrt{n} \log n + 2T_b(n) + n) = O(n)\).
**Grading (out of 30 points).** Many sorted \(X\) and then did two binary searches for the boundaries. This requires \(\Theta(n \log n)\) time, which is slower. Done correctly, this solution received 20 points.
Some used randomized selection. Depending on how well the analysis was done (which is harder than the deterministic case), up to 27 points were given for such a solution.
One common mistake was to give a recurrence for $T(n)$ in which the cost of the queries to $f$ was incorporated. The problem is that the cost of a query is $\sqrt{n_0}$ for $n_0$ the original number of people in the community, and does not decrease with the level. One point was taken off for this type of mistake.
Up to three points were taken off for mishandling the case in which several people satisfy $f(X[i]) = 0$.
**Problem 3. Repair Work**
A hurricane just hit Cambridge and wiped out all of the roads. You need to repair the roads to connect up all the public buildings as quickly as possible. Every road connects two buildings, and the time to repair the road between buildings $i$ and $j$ is $t_{ij}$, where $t_{ij}$ is an integer between 1 and 10. Let $T_{\text{upandrunning}}$ be the minimum time you need to get enough roads built to connect all the public buildings (you can only work on one road at a time – no parallelism here!).
Give an efficient algorithm to compute $T_{\text{upandrunning}}$. Assume there are $n$ buildings and $m$ roads, and that your input comes in the form of an array of $n$ adjacency lists, where the $i$th list specifies all the roads incident to building $i$, the other endpoint for each road, and the time it takes to repair the road. Express the running time of your algorithm as a function of $n$ and $m$. (The more efficient your algorithm, the better your score.)
**Solution: Executive Summary:** The problem can be restated as finding the weight of a minimum spanning tree in the underlying graph. Using that all the edge weights are small integers, we modify Prim’s algorithm to run in time $O(m)$. This is achieved by using a different implementation of the Priority Queue inside that allows Extract-min and Decrease-Key operations in $O(1)$ time.
**Detailed Solution:** Let $G = (V, E)$ be the graph obtained by letting $V$ be the set of public buildings in Cambridge and $E$ be the set of roads. Assign to every road $ij \in E$ the weight $w(ij) = t_{ij}$. Then $T_{\text{upandrunning}}$ is simply the weight of a minimum spanning tree (MST) of $G$. We can find an MST of $G$ using either Prim or Kruskal’s algorithm. In the implementation seen in lecture, the running time of Prim is $O(nT_{\text{Extract-min}} + mT_{\text{Decrease-Key}})$, where $n = |V|$, $m = |E|$, and $T_{\text{Extract-min}}$ and $T_{\text{Decrease-Key}}$ are the time needed by the respective operation in the Priority Queue, which is used to keep track of the vertices not yet covered by the tree so far. Recall that the priority of a vertex in the Priority Queue is equal to the weight of the smallest edge connecting the vertex to any vertex in the tree constructed so far (or “$\infty$” if none exists).
In this problem, we are given an additional constraint that the edge weights are small integers. Using this, we can construct a Priority Queue where both extract-min and decrease-key operations take $O(1)$. There are many ways to do this. One way is to keep 2 arrays $A$ and $B$. $A$ is an array of length 11, where, for $1 \leq i \leq 10$, $A[i]$ contains a doubly linked list containing all the vertices with priority $i$, and $A[11]$ contains a doubly linked list with the vertices with infinite priority. $B$, on the other hand, is an array of size $n = |V|$ such that for every $v \in V$, $B[v]$ has a pointer to the node corresponding to the vertex $v$ in the linked list of $A$ (or a null pointer if $v$ is not in the priority queue anymore).
To extract the minimum element of the queue, we can do the following: Find the smallest \( i \) for which \( A[i] \) is not empty. Remove the first element in the linked list in \( A[i] \) (call this element \( v \)). Then, set \( B[v] \) to be a null pointer and output \( v \). Since \( A \) is of size 11, this takes \( O(1) \) time.
To change the priority of some vertex \( v \) from \( k' \) to \( k \) in the priority queue, we simply use \( B[v] \) to locate \( v \) in its corresponding doubly linked list. Then, we remove \( v \) from the list of \( A[k'] \) (this can be done in constant time by changing the pointer of the predecessor and successor of the node in the list). Then, we insert \( v \) to the beginning of the linked list in \( A[k] \) and modify \( B[v] \) to point the corresponding node associated to \( v \). This also takes \( O(1) \) time.
Thus, both \( T_{\text{Extract-min}} \) and \( T_{\text{DecreaseKey}} \) run in constant time, and therefore, Prim’s algorithm takes time \( O(nT_{\text{Extract-min}} + mT_{\text{DecreaseKey}}) = O(n + m) = O(m) \).
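For concreteness, here is a minimal Python sketch of Prim's algorithm with such a bucket priority queue. It is an illustration rather than part of the official solution: the vertex labels $0, \dots, n-1$, the `adj` adjacency-list format, the function name, and the use of Python sets in place of the doubly linked lists are assumptions (a set gives the same $O(1)$ insertion and deletion of a known element that the linked lists provide).

```python
import math

def mst_weight(n, adj):
    """Total repair time T_upandrunning: the weight of a minimum spanning tree,
    computed with Prim's algorithm and a bucket priority queue.

    adj[u] is a list of (v, w) pairs with integer weights 1 <= w <= 10;
    the graph is assumed connected.  Runs in O(n + m) time.
    """
    INF = 11                                 # bucket index for "no edge to the tree yet"
    # A[p]: vertices currently queued with key p (sets stand in for the
    # doubly linked lists of the write-up).  B[v]: bucket currently holding v.
    A = [set() for _ in range(INF + 1)]
    B = [INF] * n
    key = [math.inf] * n
    in_tree = [False] * n

    for v in range(1, n):                    # vertex 0 is the root of the tree
        A[INF].add(v)
    in_tree[0] = True
    total = 0

    def decrease_key(v, new_key):            # O(1): unlink from old bucket, relink
        A[B[v]].discard(v)
        A[new_key].add(v)
        B[v] = new_key
        key[v] = new_key

    def relax_edges(u):
        for v, w in adj[u]:
            if not in_tree[v] and w < key[v]:
                decrease_key(v, w)

    relax_edges(0)
    for _ in range(n - 1):
        p = 1
        while not A[p]:                      # extract-min: scan at most 11 buckets, O(1)
            p += 1
        v = A[p].pop()
        B[v] = None
        in_tree[v] = True
        total += p
        relax_edges(v)
    return total


# Example (hypothetical 4-building instance): the MST uses roads of time 1, 2 and 3.
adj = {0: [(1, 4), (2, 1)], 1: [(0, 4), (2, 2), (3, 7)],
       2: [(0, 1), (1, 2), (3, 3)], 3: [(1, 7), (2, 3)]}
print(mst_weight(4, adj))                    # -> 6
```

Scanning at most 11 buckets per extraction is what keeps extract-min at $O(1)$; with arbitrary (non-integer or large) weights this trick no longer applies and one falls back to the heap-based bounds mentioned in the grading comments below.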
**Grading comments:** Any solution achieving this running time received 30 points. Many solutions were of this flavor with some mistakes. The most common mistake was to have only array \( A \) without \( B \) to help search in the priority queue. This kind of solution did not take into account the time needed to find an element in the priority queue \( A \) in order to decrease its key. This mistake was penalized with 8 points.
There were some alternative correct solutions. The most common one was an implementation of the previous priority queue \( A \) with arrays instead of linked lists, together with a priority queue that has no decrease-key method: elements are simply reinserted into the queue with the new (smaller) priority, and the extract-min method is modified to check whether an element has already been extracted. A careful analysis shows that the running time of Prim in this case is still \( O(m) \). Also, there were some solutions that used a modification of Prim in which the priority queue holds edges instead of vertices.
Any solution that applied a direct implementation of Prim using binary heaps or a direct application of Kruskal (giving a running time of \( O(m \log n) \)) received 15 points. Better implementations that used Fibonacci Heaps (with running time \( O(m + n \log n) \)) or used Counting-Sort to sort the edges as the first step in Kruskal (giving a running time \( O(m \alpha(n)) \), where \( \alpha(\cdot) \) is the inverse of the Ackermann function) received up to 20 points.
**Problem 4. Brady Bunch Marriage**
In a small farming village there lives an old man with \( n \) sons, and on the farm next to his there lives an old woman with \( n \) daughters. In fact, they are the only two families remaining on the planet after the Bubonic plague wiped out everyone else in the world. Both the man and woman would like very much to have grandchildren in order to re-populate the Earth, but none of their children are yet married. Clearly, the only way for them to have grandchildren is to intermarry their children between their two families. But before they start making matches, the old man and woman agree on the following rule:
*If son \( A \) marries daughter \( B \), then no son younger than son \( A \) may marry a daughter older than daughter \( B \), and no son older than son \( A \) may marry a daughter younger than daughter \( B \).*
This rule prevents age-crossings in marriages between the two families, which the old man and woman are afraid may lead to infidelity, tearing the two families apart (and thus endangering the future of our entire species).
In addition, village records over the last 14 generations show that the number of children a couple has is affected by the couple’s height difference—the closer in height the husband and wife are, the more kids they can expect to have! If they are more than 12 inches apart, they will not have any children. Before he died of Bubonic plague, the village statistician found the exact formula for the expected number of children that a couple with a height difference of \( d \) inches (in absolute value), would have. This quantity, denoted by \( C(d) \), is given by the following formula:
\[
C(d) = \begin{cases}
\frac{12-d}{2} & \text{if } d < 12 \\
0 & \text{otherwise}
\end{cases}
\]
(1)
The old man and woman would like to maximize the expected number of grandchildren their children will give them. Your task is to give an efficient algorithm (in the number of children in each family, \( n \)), to help them find the best way to intermarry their children (where best is defined as yielding the highest expected number of grandchildren). Assume that the algorithm is given as input four arrays:
- \( A[1..n] \) where \( A(i) \) denotes the age of the \( i^{th} \) son.
- \( B[1..n] \) where \( B(i) \) denotes the age of the \( i^{th} \) daughter.
- \( G[1..n] \) where \( G(i) \) denotes the height of the \( i^{th} \) son.
- \( H[1..n] \) where \( H(i) \) denotes the height of the \( i^{th} \) daughter.
Note that it may be preferable to leave some children unmarried for the social good. Be sure to prove the correctness and running time of your algorithm—the fate of humanity rests in your hands!
**Solution: Executive Summary:** We will solve this using Dynamic Programming. To find the recursive substructure, consider the oldest son. Either he can marry the oldest daughter, or he can marry one of the \( n-1 \) younger daughters, or he can not marry anyone. Each of these cases reduces to a smaller subproblem, in a matrix of \( n \times n \) subproblems. Since each problem only depends on (at most) 3 subproblems, DP will find the optimal matching in \( \Theta(n^2) \) time.
**Detailed Solution:** First, sort the sons and the daughters in increasing order of age. This takes $\Theta(n \log n)$ time, and from now on we can assume $A$ and $B$ are sorted. Let $T$ be an $n \times n$ matrix, where the element $T(i, j)$ represents the highest expected number of grandchildren obtainable by considering only the youngest $i$ sons and the youngest $j$ daughters. (Our goal is to compute $T(n, n)$.) To find the recursive formula for $T(i, j)$, take the $i^{th}$ youngest son and consider the 3 possible options he has when the youngest $i$ sons and the youngest $j$ daughters are intermarried. Either he marries the $j^{th}$ daughter, or he marries one of the $j-1$ younger daughters, or he does not marry anyone. The best of these three cases can be expressed as the following recursion:
\[
T(i, j) = \max \left\{ \frac{12 - |G(i) - H(j)|}{2} + T(i - 1, j - 1), \quad T(i, j - 1), \quad T(i - 1, j) \right\}.
\]
Using this recursion, the table \(T\) can be filled up starting from the base case \(T(i, 0) = T(0, i) = 0\), and \(T(n, n)\) is the highest expected number of grandchildren achievable. To find the explicit marriage strategy that gives \(T(n, n)\), we maintain another \(n \times n\) matrix \(S\). By assuming \(S(i, 0) = S(0, i) = \emptyset\), \(S(i, j)\) can be computed together with \(T(i, j)\) as follows:
\[
S(i, j) = \begin{cases}
(i, j) \cup S(i - 1, j - 1), & \text{if } T(i, j) = \frac{12 - |G(i) - H(j)|}{2} + T(i - 1, j - 1) \\
S(i, j - 1), & \text{if } T(i, j) = T(i, j - 1) \\
S(i - 1, j), & \text{if } T(i, j) = T(i - 1, j)
\end{cases}.
\]
\(S(n, n)\) is the best way we are looking for, and it takes \(\Theta(n^2)\) time to fill up both \(S\) and \(T\) completely. Note that to fill up \(S\) in \(\Theta(n^2)\) time we must either build the lists up using pointers to smaller lists, rather than copying the sub-list into the bigger cell, or we must store directions in \(S\): “EAST”, “SOUTHEAST”, and “SOUTH” which tell us which path to take through \(S\) after we fill in the whole matrix—following these directions will allow us to read off the path (and thus the optimal marriages) in linear time.
The total running time is \(\Theta(n^2)\) because the sorting time \(\Theta(n \log n)\) at the initial stage is dominated by \(\Theta(n^2)\).
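As an illustration (not part of the graded solution), the following Python sketch fills the table and stores direction pointers, as described above, so that the optimal set of marriages can be read off in linear time. The function name, the 1-based (son, daughter) output pairs, and the convention that unmatched children are simply omitted are assumptions made for the example.

```python
def best_marriages(A, B, G, H):
    """Maximum expected number of grandchildren and one optimal set of marriages.

    A, B: ages of the sons / daughters; G, H: their heights (same indexing).
    Returns (best_value, marriages) where marriages is a list of (son, daughter)
    pairs, 1-based and indexed by age rank after sorting.
    """
    n = len(A)
    # Sort sons and daughters by age, carrying the heights along.
    G = [g for _, g in sorted(zip(A, G))]
    H = [h for _, h in sorted(zip(B, H))]

    def C(d):                                  # expected children, height difference d
        return (12 - d) / 2 if d < 12 else 0.0

    # T[i][j]: best value using the youngest i sons and youngest j daughters.
    T = [[0.0] * (n + 1) for _ in range(n + 1)]
    # S[i][j]: choice taken -- "SE" marry (i, j), "E" skip daughter j, "S" skip son i.
    S = [[None] * (n + 1) for _ in range(n + 1)]

    for i in range(1, n + 1):
        for j in range(1, n + 1):
            marry = C(abs(G[i - 1] - H[j - 1])) + T[i - 1][j - 1]
            skip_daughter = T[i][j - 1]
            skip_son = T[i - 1][j]
            T[i][j] = max(marry, skip_daughter, skip_son)
            S[i][j] = "SE" if T[i][j] == marry else ("E" if T[i][j] == skip_daughter else "S")

    # Follow the direction pointers back from (n, n) to recover the marriages.
    marriages, i, j = [], n, n
    while i > 0 and j > 0:
        if S[i][j] == "SE":
            marriages.append((i, j))
            i, j = i - 1, j - 1
        elif S[i][j] == "E":
            j -= 1
        else:
            i -= 1
    return T[n][n], marriages[::-1]
```

Using $C(d)$ (which is 0 once the height difference reaches 12 inches) instead of the raw $(12 - d)/2$ does not change the optimum, since a negative term can never beat the other two cases of the recursion.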
**Grading comments:** Many students designed Dynamic Programming with the \(n \times n\) table, but used a slow recursion which took \(\Theta(n)\) time to compute each entry. This led to a \(\Theta(n^3)\) running time, which got at most 23 points. Several solutions were naive or brute-force solutions, which got at most 7 points if their analysis was right. Any fundamentally incorrect algorithm got at most 10 points. (Some students used a greedy scheme, but it does not work.) Not writing down the recursion or making a small mistake in the recursion was a 3 to 5 point penalty. Incorrect analysis or no analysis at all took around 5 to 7 points off. Most implementations of the right \(\Theta(n^2)\) algorithm got 28 to 30 points depending on the clarity of the writeup.
**Problem 5. Dynamic Navigation**
Recall (from Lecture 9) the nightmare that Professor Rubinfeld faces when driving to work every morning. She can either take the path from her home \(X_0 \rightarrow X_1 \rightarrow X_2 \rightarrow \cdots \rightarrow X_n\) to work \(X_{n+1}\), or she can take the path from \(X_0 \rightarrow Y_1 \rightarrow Y_2 \rightarrow \cdots \rightarrow Y_n \rightarrow X_{n+1}\), or switch, as many times as she wants, from \(X_i \rightarrow Y_{i+1}\) or \(Y_i \rightarrow X_{i+1}\). The delay in getting from \(X_{i-1} \rightarrow X_i\) is \(a_i\), while the delay in getting from \(Y_{i-1} \rightarrow Y_i\) is \(b_i\). The switching delay from \(X_i \rightarrow Y_{i+1}\) is \(\ell_i\) and the delay in switching from \(Y_i \rightarrow X_{i+1}\) is \(u_i\). For an illustration, see figure 1.
Her goal was to get to work as quickly as possible and this is still the case. But now she has a new feature to help her cope with the nightmare. A generous non-profit institution is monitoring all the streets and can provide live updates on the current delays. A typical update is of the form $\text{UPDATE}(U, W, i, v)$, where $U, W \in \{X, Y\}$ and the implication is that the associated street from $U_{i-1} \rightarrow W_i$ now has a delay of $v$. These updates are beamed directly to her laptop, possibly even as she drives. Professor Rubinfeld would like to be able to execute two kinds of queries: $\text{CURRENT-TRAVEL-TIME}$, which returns the current total travel time from $X_0$ to $X_{n+1}$; and $\text{NEXT}(U, i)$, where $U \in \{X, Y\}$, which should return the next destination she should drive to in order to reach $X_{n+1}$ by the shortest path, if she were currently at $U_i$.

Figure 1: The routes from Professor Rubinfeld’s home, $X_0$, to her work, $X_{n+1}$.
Design a data structure, and describe how to initialize it and how to handle the updates and queries efficiently. (Your score will depend on the time complexity of the initialization, the updates, and the queries, as well as the space complexity of the data structure. So specify all of these in your solution.)
The following example may be illustrative. Assume that initially all horizontal delays equal 1 (i.e., $a_i = b_i = 1$) and all switches cost 2 units of delay except for $l_1 = u_{n+1} = 1$, then the following table gives an example of a sequence of updates/queries and the desired responses. An illustration of the example appears in figure 2.
| Query/Update | Desired Response |
|---|---|
| INITIALIZE($a_1, \ldots, a_{n+1}, b_2, \ldots, b_n, l_1, \ldots, l_n, u_2, \ldots, u_{n+1}$) | ACK |
| CURRENT-TRAVEL-TIME | |
| NEXT($X, 2$) | $X_3$ |
| UPDATE($X, X, 3, 5$) | ACK |
| CURRENT-TRAVEL-TIME | |
| NEXT($X, 2$) | $X_3$ |
| NEXT($X, 3$) | |
| UPDATE($Y, Y, 3, 5$) | ACK |
| CURRENT-TRAVEL-TIME | |
| NEXT($X, 2$) | $Y_3$ |
| NEXT($Y, 2$) | $X_3$ |
**Solution: Executive Summary:** The problem asks to maintain shortest paths in a dynamic setting. The solution involves augmenting a data structure, a simple balanced binary tree, which maintains shortest path lengths of some, but not all, intervals from $X_i/Y_i$ to $X_j/Y_j$, so that changing any one edge length changes the path length of at most $\log n$ of the intervals that we maintain. The resulting strategy is implemented below using $O(n)$ space, so that INITIALIZE takes $O(n)$ time, UPDATE and NEXT take $O(\log n)$ time, while CURRENT-TRAVEL-TIME takes $O(1)$ time.

**Figure 2:** An example under the update shown in the table. The first picture shows the initial configuration, the second picture shows the configuration after the operation $\text{UPDATE}(X, X, 3, 5)$, and the third picture shows the configuration after the operation $\text{UPDATE}(Y, Y, 3, 5)$. For example, after we perform $\text{UPDATE}(X, X, 3, 5)$ (second picture), it is now faster to switch from $X_2$ to $Y_3$.
**Detailed Solution:** We maintain a nearly full balanced static binary search tree with $n + 1$ leaves. The $i$th leaf represents the $i$th transition point from $U_{i-1}$ to $W_i$ for $U, W \in \{X, Y\}$. The tree is keyed on the index $i$, and so every internal node represents a contiguous interval (or path) from $i$ to $j$, such that its children represent the intervals $i$ to $k$ and $k$ to $j$, where $k = (i + j)/2$. A node representing the interval $i$ to $j$ maintains the shortest path length from $U_{i-1}$ to $W_j$ for $U, W \in \{X, Y\}$. Note that this is information that can be computed locally given the information for a node's children. This will allow us to modify the tree in $O(\log n)$ time during an update.
**INITIALIZE:** We build the tree bottom up. Every node $v$, representing say the interval $[i, j]$ has four fields, $v.XX$, $v.XY$, $v.YX$ and $v.YY$ representing the shortest paths from $X_i$ to $X_j$ etc. If the node $v$ has two children $v_1$ covering the interval $[i, k]$ and $v_2$ covering the interval $[k, j]$ then
the recurrence giving the four field values for $v$ is as follows, for $U, V \in \{X, Y\}$:
$$v.UV = \min\{v_1.UX + v_2.XV, v_1.UY + v_2.YV\}.$$
It is clear that this algorithm runs in $O(n)$ time.
**UPDATE($U, W, i, v$):** We modify the cost of the edge $U_{i-1} \rightarrow W_i$ to $v$, and then walk up the tree from the leaf $i - 1 \rightarrow i$ to the root, updating the information at all nodes using the recurrence above. Clearly this takes $O(\log n)$ time.
**CURRENT-TRAVEL-TIME:** Simply returns the value $r.XX$ where $r$ is the root node. This takes $O(1)$ time.
**NEXT($U, i$):** As a helper routine we first design an algorithm TIME-REMAINING($U, i$) which computes the length of the shortest path from $U_i$ to $X_{n+1}$. This algorithm walks up from the leaf for $i \rightarrow i + 1$ to the root. At a node $v_1$ representing the interval $[j, k]$ with $j \leq i < k$ it maintains the information for the shortest path length from $U_i$ to $X_k$ and $U_i$ to $Y_k$. It then uses the information from sibling $v_2$ of $v_1$ (if $v_1$ is the left child) to compute this information at the parent $v$ of $v_1$. When this algorithm reaches the root, it now has the shortest path length from $U_i$ to $X_{n+1}$.
Now using TIME-REMAINING it is easy to compute the next step from $U_i$. One simply has to go to the node $W_{i+1}$ for which the edge length from $U_i$ to $W_{i+1}$ equals TIME-REMAINING($U, i$) – TIME-REMAINING($W, i + 1$). Determining this requires computing TIME-REMAINING thrice, where each invocation takes $O(\log n)$ time. Thus NEXT takes $O(\log n)$ time.
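The following Python sketch is one way this data structure could be realized (it is not taken from the solution handout). Each transition step is encoded as a $2 \times 2$ delay matrix over $\{X, Y\}$, the tree is stored as a flat array, and TIME-REMAINING is answered by an ordered $(\min, +)$ product over $O(\log n)$ canonical nodes. The class and method names, the 0/1 encoding of $X$/$Y$, and the representation of non-existent edges by $\infty$ are all illustrative assumptions.

```python
import math

INF = math.inf
X, Y = 0, 1
IDENT = [[0, INF], [INF, 0]]                         # (min,+) identity matrix

def combine(M1, M2):
    """(min,+) product of two 2x2 delay matrices: the recurrence
    v.UV = min(v1.UX + v2.XV, v1.UY + v2.YV) used at internal tree nodes."""
    return [[min(M1[u][X] + M2[X][w], M1[u][Y] + M2[Y][w]) for w in range(2)]
            for u in range(2)]

class RouteTree:
    """Static balanced tree over the n+1 transition steps.  Leaf k (0-based)
    holds the 2x2 matrix of delays from column k to column k+1; every internal
    node holds the (min,+) product of its children."""

    def __init__(self, steps):
        self.m = len(steps)                          # n + 1 steps
        self.size = 1
        while self.size < self.m:
            self.size *= 2
        self.node = [IDENT] * (2 * self.size)
        for k, M in enumerate(steps):
            self.node[self.size + k] = [row[:] for row in M]
        for v in range(self.size - 1, 0, -1):
            self.node[v] = combine(self.node[2 * v], self.node[2 * v + 1])

    def update(self, U, W, i, delay):                # UPDATE(U, W, i, v): edge U_{i-1} -> W_i
        leaf = self.size + (i - 1)
        self.node[leaf][U][W] = delay
        v = leaf // 2
        while v >= 1:                                # O(log n) recomputations up the tree
            self.node[v] = combine(self.node[2 * v], self.node[2 * v + 1])
            v //= 2

    def _range(self, lo, hi):
        """Ordered (min,+) product of leaves lo..hi-1 in O(log n)."""
        left, right = IDENT, IDENT
        lo, hi = lo + self.size, hi + self.size
        while lo < hi:
            if lo & 1:
                left = combine(left, self.node[lo]); lo += 1
            if hi & 1:
                hi -= 1; right = combine(self.node[hi], right)
            lo //= 2; hi //= 2
        return combine(left, right)

    def current_travel_time(self):                   # shortest X_0 -> X_{n+1} delay
        return self.node[1][X][X]

    def time_remaining(self, U, i):                  # shortest U_i -> X_{n+1} delay
        return self._range(i, self.m)[U][X]

    def next_step(self, U, i):                       # NEXT(U, i): 0 for X_{i+1}, 1 for Y_{i+1}
        here = self.time_remaining(U, i)
        for W in (X, Y):
            edge = self.node[self.size + i][U][W]
            if edge + self.time_remaining(W, i + 1) == here:
                return W
```

Representing each step as a matrix over the $(\min, +)$ semiring makes UPDATE a point update and every query a partial matrix product, which is what bounds all operations by $O(\log n)$ as claimed above.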
**Grading comments:** Unfortunately, a large number of solutions simply recomputed all path lengths, either during UPDATE, or during the queries. Such solutions are roughly analogous to the use of arrays (sorted or unsorted) to maintain dynamic sets: either the update, or the query, takes linear time. Such solutions got somewhere between 5 and 10 points. Some solutions additionally pointed out the possibility of doing better using balanced trees, without being able to give the algorithm. For noticing the possibility that one can do better, the writeups got up to 5 more points. Most implementations of the above algorithm got 25-30 points depending on the clarity of the writeup.
This page summarizes all the changes you may find when you upgrade from Tiki12 'Long Term Support' (LTS) to Tiki15 LTS. You may also read the partial changes in each version Tiki13, Tiki14 & Tiki15.
- Tiki15.0 was released in April 2016.
- As it is a Long Term Support (LTS) version, it will be supported for 5 years, and requires PHP 5.5. See version lifecycle.
- It uses Bootstrap CSS framework.
- More information: at the development page https://dev.tiki.org/Tiki15
1.1. Action log extended
1.2. "Admin home": renamed to "Control Panels"
1.3. Addons
1.4. Advanced Rating
1.5. Articles improved
1.6. AutoTOC mini-revamp
1.7. Banning multiple registration IPs from user management
1.8. Bootstrap
1.9. Console
1.10. Content templates can be categorised and locked
1.11. Custom LESS preference
1.12. Customise mail templates using a simple 'preference' setting
1.13. Federated Search
1.14. File Gallery Batch Upload Improvements
1.15. File Gallery Upload Improvements
1.16. Frontpage can be created in html
1.17. Goals
1.18. Installation faster
1.19. Jitsi
1.20. Layouts
1.21. Look and Feel: custom LESS section
1.22. Mail-in improved
1.23. Mobile-ready display
1.24. Modules
- 1.24.1. Module zone
- 1.24.2. Module minichat
1.25. Multilike
1.26. Must Reads
1.27. Namespaces improved
1.28. Newsletters
1.29. Notifications
1.30. Payments improved
1.31. Profiles
- 1.31.1. Profile exporter improved
- 1.31.2. Hide Fixed Top Nav Bar on Scroll
- 1.31.3. Profiles Wizard
- 1.32. Remote Tiki Autologin
- 1.33. Removal of "Synchronize categories of user tracker item to user groups" feature
- 1.34. Score
- 1.35. SEFURL improved
- 1.36. Server requirements
- 1.37. SLUG
- 1.38. Social Networks with Linkedin
- 1.39. Stored Search Queries
- 1.40. Structures can be locked
- 1.41. Structure tools added to the top page and permissions 'tidy-up'
- 1.42. Surveys improved
- 1.43. Tablesorter
- 1.44. Temporary User Accounts
- 1.45. Terms and Conditions improved
- 1.46. Themes
- 1.47. Themes
- 1.47.1. Theme refactoring
- 1.47.2. Icon sets
- 1.48. Time Ago Date Formatting
- 1.49. Trackers
- 1.49.1. Service_inline to display info from linked trackers
- 1.49.2. Tracker Tabular
- 1.49.3. Tracker Field 'Computed' extended
- 1.49.4. Tracker Field 'Mathematical Calculation' extended
- 1.50. Unified index improvements
- 1.51. User Encryption
- 1.52. Validation syntax
- 1.53. ViewerJS
- 1.54. Wiki Argument Variables
- 1.55. wikiLingo
- 1.56. Wiki Plugins
- 1.56.1. Plugin AjaxLoad
- 1.56.2. Plugin CatOrphans
- 1.56.3. Plugin Data Channel
- 1.56.4. Plugin FullWidthTitle
- 1.56.5. Plugin GDgraph
- 1.56.6. Plugin Like
- 1.56.7. Plugin List
- 1.56.8. PluginListExecute
- 1.56.9. Plugin Tour
- 1.56.10. Plugin Tracker
- 1.56.11. Plugin UserInGroup
- 1.56.12. Plugin XMLUpdate
- 1.57. Wiki Structures
- 1.58. Upload and download translations
• Known Installation Issues
◦ Timeout when installing with InnoDB option
◦ Problem with installing in subdirectory with a tilda
• Upgrade
◦ Theme upgrade notes
◦ Top and topbar modules
◦ jQuery-UI Theme may need to be set to something
◦ Bootstrap style menus
◦ Convert PluginSplit to PluginDiv
◦ Behavior change of Tracker field "Dropdown with other"
• Behavior change of Tracker field "Item Link"
◦ Floated box class styles are now redundant
◦ Plugins need approval
◦ Themes
◦ Multilevel menus in modules
◦ Regular Expressions in fields requiring validation
◦ 'SSL connection error' or error displaying HomePage after upgrade of SSL enabled websites
◦ JQuery selectors for forms
◦ Potential Issue: Composer error
◦ General upgrade notes
• Removed features
1.1. Action log extended
Creating, modifying or deleting calendar events can also be recorded and displayed through the Action log, since version 15.1.
1.2. "Admin home": renamed to "Control Panels"
The page where you can access the main settings for all features was traditionally called "Admin home". In order to prevent confusions for the new admins with the page to administer a single feature, "Admin home" was renamed to "Control Panels".
This way, and as example, the page to manage the settings for all forums in Tiki is called "Forums control Panel" (tiki-admin.php?page=forums), while the page to modify a specific forum, keeps the name "Admin Forums" (tiki-admin_forums.php)
1.3. Addons
Addons is a way of packaging Profiles, Smarty Templates and other building blocks of application functionality that can be used to better manage different Tiki configurations by developers and also for developers to offer these as independently maintained "apps" to others. See: Addons
1.4. Advanced Rating
Some new functions have been added since Tiki 14.0: Not, IsEmpty, date, less-than, more-than and contains.
See Advanced Rating
1.5. Articles improved
- Added the ability to grab the content of an article from URL, and store it in articles.
- Articles can be attached to a tracker item [http://sourceforge.net/p/tikiwiki/code/50708/](http://sourceforge.net/p/tikiwiki/code/50708/)
- Adding basic option to fetch the content from source when using article generator instead of relying on the excerpt from the feed [http://sourceforge.net/p/tikiwiki/code/49059/](http://sourceforge.net/p/tikiwiki/code/49059/)
1.6. AutoTOC mini-revamp
The AutoTOC feature has been updated to use Bootstrap ScrollSpy - normal server-side maketoc one to come (hopefully also for 15.0)
(Several commits leading up to r57161)
1.7. Banning multiple registration IPs from user management
Admins can easily ban multiple IPs from spam registrations directly with just a few clicks. They can also optionally remove the user accounts and their user tracker items, as well as their user pages.
See Users
1.8. Bootstrap
- Bootstrap
See also #Themes below.
1.9. Console
Usual management tasks done through a console are now handled by a common console.php script with many parameters.
- [DB Redactor added](http://sourceforge.net/p/tikiwiki/code/47589/)
- Mail queue (through zend stmp) fixed.
See Console
1.10. Content templates can be categorised and locked
From the Tiki15 release Content templates can be categorised so that access to individual templates can be restricted to designated Groups.
Content templates can also be 'locked' from Tiki 15 onwards, in a similar way to individual wiki pages, so that even when Categorisation is used to restrict access to a Group, editing of a Content template can be restricted further to the individual user that locks it. A new preference needs to be set to allow this locking option to be available and a new permission has been created to allow individual users the privilege to do it. As with all the various permissions 'sets', if a user has been granted tiki_p_admin_content_templates they can do all the various content template activities including the unlocking of a template locked by another user.
Content templates are a good way to ensure new content is created with a consistent layout and style especially when a Tiki system has multiple editors, and it also provides a fast way to set up complex new content with initial 'starting' data - see Content templates for more details. The categorisation and locking functions then provide a range of options to allow groups/individuals to control how they are updated.
1.11. Custom LESS preference
New preference textarea on control-panels/look & feel/customisation where you can add LESS definitions that then recompiles a new version of the currently selected theme and option using definitions declared there.
(Mostly added in r56867 but with subsequent fixes to r57047 so far)
Some minimal documentation and one example (so far) can be found here: Custom Less
1.12. Customise mail templates using a simple 'preference' setting
The format/style of the various automated email notifications are set using smarty templates held in the templates/mail/ folder and it has always been possible to make customised copies of these and store them in the template area of your theme/style. In this way they are only invoked when your theme is used and they are not overwritten when the site is upgraded. But this can cause a significant maintenance overhead since new templates are added from time to time and existing ones are improved or updated to fix bugs, so any customised copies have to be continually updated.
From Tiki15 however a simple text preference setting has been introduced where a short text string can be stored - which is set in the "General Settings" tab of the "Editing and Plugins" admin screen - where the text would typically be a description of your site. A reference to this stored text has been added to all the mail templates in appropriate places so that a very simple customisation of the existing 'vanilla' templates is then automatically produced and mail notifications can be made to be much easier to understand, especially if you are involved with multiple Tiki sites. It should be noted that the default value of the customising text string is an empty string, so that if it is not set it has no effect on the output of the standard template text - but a consequence of this is that if a customised text is set then it should have a blank character at the end of the text so that its insertion scans properly.
1.13. Federated Search
Federated Search
1.14. File Gallery Batch Upload Improvements
Some general improvements and new options to organise files according to subdirectories and create missing ones, also to file by gallery ID.
Including new console commands `files:batchupload` and `files:deleteold` to enable scheduled tasks to be set up for automatic gallery maintenance.
More info here Batch Upload
1.15. File Gallery Upload Improvements
New preference to enable a new jQuery File Upload interface allowing drag and drop, multiple file uploads, progress bars etc.
1.16. Frontpage can be created in html
The Tiki front page can be created in HTML format, if wysiwyg HTML mode is selected. This enables full inline editing of front page. http://sourceforge.net/p/tikiwiki/code/47565/
1.17. Goals
See Goals and Tutorial: Goals
1.18. Installation faster
The installation is faster than before. There is a batch of insert `tiki_secdb` during installation, if a `mysql_data` file exists and the user has a mysql file permission. http://sourceforge.net/p/tikiwiki/code/50986/
1.19. Jitsi
Added configuration for jitsi provisioning http://sourceforge.net/p/tikiwiki/code/49699/
See Jitsi
1.20. Layouts
- Layout template for 3-6-3 column width configuration (25%, 50%, 25% widths) http://sourceforge.net/p/tikiwiki/code/49517/
- New layout for social sites with classic top navigation bar http://sourceforge.net/p/tikiwiki/code/49734/
1.21. Look and Feel: custom LESS section
There is a new section under the Customization tab of the Look & Feel Control Panel which allows to set some base parameters for colors and other css properties, which will be applied to generate all properties for the Theme style of your choice.
Example:
See: Customization
1.22. Mail-in improved
Allow mail-in post comment on notification reply, fixed encoding issues, better reply-stripping for gmail
http://sourceforge.net/p/tikiwiki/code/50660/
1.23. Mobile-ready display
There is no need to apply the Mobile profile anymore since Tiki13.0, because Tiki automatically adapts its display to mobile devices when needed.
1.24. Modules
1.24.1. Module zone
New module meant to provide a module "navbar" for the website. You add a "zone", and then you can drop a module menu in it. The menu module creates the toggle button and internal navbars.
See Module zone
1.24.2. Module minichat
The date is shown for the messages from previous days to avoid confusing users.
See Module minichat
1.25. Multilike
This feature expands on the like functionality to allow users to provide more qualitative feedback to the system.
See Multilike
1.26. Must Reads
See Must Reads
1.27. Namespaces improved
Ability to force all non-namespace page links to same namespace of the page being edited
http://sourceforge.net/p/tikiwiki/code/50755/
1.28. Newsletters
Counters have now been added when sending Newsletters, which is useful when sending newsletters with a large subscriber list. It is also particularly useful when the option to 'throttle' the newsletter send rate is being used which is an option that may have to be used to avoid the sending system being 'designated' as a spammer and being blocked by receiving mail servers. When using the 'throttle' option, additional remarks are now provided to remind the user that this option is in use, and the send process has also been fixed so that the 'completion' notices and information are correctly displayed.
These additional Newsletter features have also been backported for the Tiki12.5 and Tiki14.2 releases.
1.29. Notifications
Notifications have been revamped. See: Notifications and Tutorial: User Notifications
1.30. Payments improved
- Added payment gateway for Israel Post.
- Added method to be able to reference PayPal by invoice ID, and password and signature from a PayPal Pro account http://sourceforge.net/p/tikiwiki/code/49504/
See Payments
1.31. Profiles
1.31.1. Profile exporter improved
- Profile handlers (and export) for activity stream custom rules http://sourceforge.net/p/tikiwiki/code/47430/
- Group profile exporter http://sourceforge.net/p/tikiwiki/code/49606/
1.31.2. Hide Fixed Top Nav Bar on Scroll
If you choose in 'Look & Feel' Control Panel > Theme (tab) > Site layout: "Fixed_top_modules", then you can apply this "scroll" (search for it) profile through the Profiles Control Panel and you will get your top zone hidden temporarily when you scroll down your site.
See profile: Hide Fixed Top Nav Bar on Scroll
1.31.3. Profiles Wizard
New profile Revision Approval (ISO9001) added to the Profiles Wizard, preconfigured to use also the new Wiki Argument Variables introduced in Tiki14 (see below):
1.32. Remote Tiki Autologin
Users from another Tiki are allowed to login to this Tiki using their credentials there. This provides a quick way to create a sub-site or sister site.
See Remote Tiki Autologin
1.33. Removal of "Synchronize categories of user tracker item to user groups" feature
This feature has been removed in Tiki 15. The preferred way to achieve similar functionality is to use the User Groups field instead. For more information, see Synchronize categories of user tracker item to user groups.
1.34. Score
The points system was reworked in Tiki15. The main scoring events are still pre-configured for beginner users to be able to use, but the ability to add new scoring events was implemented as well for added flexibility.
See Score
1.35. SEFURL improved
Search Engine Friendly URL (SEFURL) for Calendar events added to route and upcoming_events module http://sourceforge.net/p/tikiwiki/code/49247/
1.36. Server requirements
A few more packages are required to provide the required php extensions that this version of Tiki needs:
- **php5-curl**
- CURL module for php5
- **php5-intl**
- internationalisation module for php5
- **php5.5-xml** - DOM, SimpleXML, WDDX, XML, and XSL module for PHP
In Debian-based distributions, you can install them in a terminal on the server with this type of command:
```
sudo apt-get install php5-curl php5-intl php5.5-xml
```
Adapt to your case in other GNU/Linux distributions or operating systems at the server.
1.37. SLUG
SLUGs are alternate URLs designed for brevity, search engine friendliness, not changing over time, etc. Tiki 14 allows changing the URL scheme for wiki pages, currently to replace spaces with underscores.
For more information:
- [https://sourceforge.net/p/tikiwiki/code/51860](https://sourceforge.net/p/tikiwiki/code/51860)
- [https://tiki.org/tiki-view_forum_thread.php?forumId=26&comments_parentId=52799](https://tiki.org/tiki-view_forum_thread.php?forumId=26&comments_parentId=52799)
1.38. Social Networks with LinkedIn
You can also set up your site to allow users to log in with LinkedIn as they can with Facebook.
See Social Networks
1.39. Stored Search Queries
- **Stored Search** using Elasticsearch's Percolator feature:
1.40. Structures can be locked
From Tiki 15 a wiki structure can be locked, meaning that the pages that are in a structure and the order/hierarchy of these pages can only be changed by the locking user.
A new preference needs to be set to allow this structure locking option to be available and a new permission has been created to allow individual users the privilege to do it. As with all the various permissions 'sets', if a user has been granted tiki_p_admin_structures (now added as a new permission) they can do all the various structure activities including the unlocking of a structure locked by another user.
To make structure permission management clearer and easier to find, a new ‘wiki structure' section has been created in the permission tables.
For details of all these additions see the Structure documentation page.
1.41. Structure tools added to the top page and permissions 'tidy-up'
From Tiki15 tools for the top page have been added so that the range of structure admin functions can be carried out on the top page of the Structure as well the individual pages. See the Structure editing documentation for more details.
In addition what permissions are needed to carry out the various Actions on a Structure have been clarified and made more consistent. See the Structure permissions documentation for more details.
1.42. Surveys improved
There have been a few improvements to surveys:
- Use sefurl for take_survey links
- Make questions (drag & drop) sortable on admin questions
- Allow control of the textarea toolbar per question, and add a "c" option to use the "comments" minimal toolbar
- Added options for header type - newpage triggers pagination mode and inserts a page "break", and tag allows setting of heading type (default is h3)
- Object permissions used for take survey and view stats
- Allow showing of user's voted option in survey states
- Added a drop down selector for user to show icon next to responses in stats, and other minor fixes
See Surveys
1.43. Tablesorter
The tablesorter usage has been extended to new features like:
- List of wiki pages (tiki-listpages.php),
- List of forums (tiki-forums.php),
- Topic list for a forum (tiki-view_forum.php),
- Forums Administration (tiki-admin_forums.php)
See Tablesorter
1.44. Temporary User Accounts
Temporary users cannot login the usual way but instead do so via an autologin URL that is associated with a **Token**. These temporary users will be deleted (but can be set to be preserved in Admin Tokens) once the validity period is over. Normally, these users should have read-only access. Nevertheless, if you are allowing these users to submit information, e.g. fill in a tracker form, make sure to ask for their information again in those forms. You can use this feature through:
**Admin Users > Temporary Users (tab)**
(Related commit: r56888)
1.45. Terms and Conditions improved
- Added terms and conditions feature to manage terms page and make sure users approved the latest approved terms before accessing the site http://sourceforge.net/p/tikiwiki/code/49418/
- Added age validation prior to user conditions http://sourceforge.net/p/tikiwiki/code/49483/
1.46. Themes
1.47. Themes
File design.css has been renamed to tiki.css.
Some style themes have been converted to **Bootstrap** (maybe at different degrees of completion at the time of this writing):
- Fivealive
- Jgui
- TheNews
And some new Themes have been added:
- Bootswatch themes. See bootswatch.com for more information (r50408, r50790)
- Amelia
- Cerulean
Some Bootswatch themes incorporated
To use them (for instance, "cerulean"), you can select them at the panel Admin home > Look and Feel:
- Theme selection: Bootstrap themes in the "styles" directory
- Theme: bootswatch_themes
- Theme options: cerulean
1.47.1. Theme refactoring
In addition, there has been some refactoring in the way to handle theme styles. See:
- https://themes.tiki.org/Concept+and+Design
- https://themes.tiki.org/How+To+Add+a+New+Theme
In short, you can define colors in variables, and apply those variables to other css selectors. See: http://themes.tiki.org/Using+the+Less+CSS+pre-processor+with+Tiki
In addition, if you use the Newsletters feature, please note that placement of the newsletter.css has changed with the new bootstrap theme architecture. Since Tiki version 14, the customised newsletter.css file should be placed in either one of the next folders:
- /themes/yourtheme/css/
- /themes/yourtheme/options/youroption/css/
1.47.2. Icon sets
The reason for having different iconsets is the same as for having different themes: users are given the freedom to choose whichever they like and to customize/create new ones if they don't like what is shipped with Tiki.
There are different sets of icons available in the Look&Feel control panel.
Options:
- **Default (Font-awesome)**: The default system icon set using Font-awesome, see [http://fortawesome.github.io/Font-Awesome/icons/](http://fortawesome.github.io/Font-Awesome/icons/)
- **Glyphicons**: Glyphicon focused iconset, see [http://getbootstrap.com/components/](http://getbootstrap.com/components/)
- **Legacy (pre Tiki14) Icons**: Legacy (pre Tiki14) icons, mainly using famfamfam images, see [http://www.famfamfam.com/lab/icons/silk/](http://www.famfamfam.com/lab/icons/silk/)
- **Icons of the displayed theme**: This option is for advanced administrators. Icon sets are applied for all themes, except when the setting "Icons of the displayed theme" is applied. In this case the theme is displayed always using the icon set defined for that theme (eg: defined in /themes/mytheme/icons/mytheme.php)
More information:
- [https://themes.tiki.org/Icons](https://themes.tiki.org/Icons)
- [http://dev.tiki.org/Icons](http://dev.tiki.org/Icons)
- [http://dev.tiki.org/Tiki14#Icon_Sets](http://dev.tiki.org/Tiki14#Icon_Sets)
### 1.48. Time Ago Date Formatting
Optional *fuzzy* date formatting throughout Tiki, such as "Last modified 5 minutes ago" or "Created 2 months ago" etc.
Added in r57081
### 1.49. Trackers
Trackers improved in a few places:
- Added an articles field type, allowing to attach articles to a tracker item. [http://sourceforge.net/p/tikiwiki/code/50708](http://sourceforge.net/p/tikiwiki/code/50708)
- Ability to link wiki pages to tracker items (not just through page field but through text fields) and associated functionality [http://sourceforge.net/p/tikiwiki/code/50769/](http://sourceforge.net/p/tikiwiki/code/50769/)
In addition:
#### 1.49.1. Service_inline to display info from linked trackers
When using linked trackers with fields "item link/items list", if you use custom smarty templates, you might be able to include some view of the other tracker using `{service_inline}` and likely a custom tracker item
template file. See more information.
1.49.2. Tracker Tabular
A new system to import / export tracker data, called "Tracker Tabular", has been implemented.
1.49.3. Tracker Field 'Computed' extended
Three new options have been added to the display of the Computed tracker field, in a similar fashion to the existing options in the numeric field:
- **Decimal Places**: Amount of decimals to preserve before rounding.
- **Decimal separator when displaying data**: Single character. Use c for comma, d for dot or s for space. The valid decimal separator when inserting numbers may depend on site language and web browser. See documentation for more details.
- **Thousand separator when displaying data**: Single character, Use c for comma, d for dot or s for space. When inserting data no thousands separator is needed.
See: Computed Tracker Field
1.49.4. Tracker Field 'Mathematical Calculation' extended
Some new operators and functions are added to the advanced rating language in Tiki 14.x, which can be used in trackers through the Mathematical Calculation Tracker Field
See #Advanced_Rating
1.50. Unified index improvements
Some extra information has been exposed in the Unified Index for the features that take advantage of it, like Stored Search and similar:
- hits
- lastpost_title
- lastpost_modification_date
- lastpost_contributors
- lastpost_post_content
- lastpost_post_snippet
- lastpost_hits
- lastpost_thread_id
1.51. User Encryption
See User Encryption
1.52. Validation syntax
Validation (in tracker fields and other areas of Tiki where some validation rules can be defined) no longer requires escaping of backslashes. Thus, this former regular expression worked in Tiki 12 LTS:
```
Regular expression for positive numbers in 12.x
^[0]{1}$|^(?!0*[.,]0*$|[.,]0*$|0*$)\d+|[.,]?d{0,2}$
```
...but it needs to be converted for 14.x into:
```
Regular expression for positive numbers in 14.x
^[0]{1}$|^(?!0*[.,]0*$|[.,]0*$|0*$)\d+|[.,]?d{0,2}$
```
See Regular Expressions
1.53. ViewerJS
Support has been added for ViewerJS, which will allow your site to easily display embedded PDF documents, as well as presentations, spreadsheets and other documents without any external dependencies. This external library needs separate installation in Tiki (it cannot be included by default due to licensing issues, but it is fairly easy to add). See PDF#ViewerJS.
1.54. Wiki Argument Variables
New Wiki Argument Variables added, useful in cases when feature Flagged Revisions is enabled (such as environments following ISO9001 quality certification, where revision approval is needed for document versions):
```
{{currentVersion}} (current version being displayed of the wiki page when revision approval is on; added in Tiki 14.0)
{{currentVersionApprover}} (approver of the current version being displayed when revision approval is on; added in Tiki 14.0)
{{currentVersionApproval}} (approval date, in short format, of the current version being displayed when revision approval is on; added in Tiki 14.0)
{{currentVersionApproved}} (indicate whether current version being displayed of the wiki page is approved or not when revision approval is on; added in Tiki 14.0)
```
1.55. wikiLingo
wikiLingo has been added as experimental feature. (r49859, among other commits)
See wikiLingo
1.56. Wiki Plugins
New or improved plugins:
1.56.1. Plugin AjaxLoad
Available from 14.1, this plugin can be used to load HTML into a wiki page, from another page on the same site or an external site.
See PluginAjaxLoad
1.56.2. Plugin CatOrphans
This plugin was extended for Tiki14 and the update backported to Tiki12 to allow more Tiki objects to be checked whether they were categorised - prior to this update only wiki pages could be checked.
1.56.3. Plugin Data Channel
Improved. This plugin allows using a manual template in profile requests. It is possible to use a custom Smarty (i.e. HTML) template to create the form that is used to collect user input for the datachannel. This provides complete flexibility.
See: PluginDataChannel
1.56.4. Plugin FullWidthTitle
Creates a full page width title. You can use your own tpl file for the styling, and you can also indicate the source of the Icon that you want to use for that Title.
See PluginFullWidthTitle
1.56.5. Plugin GDgraph
Available from 14.0 (and backported to the 12.x branch in March '15) this plugin displays a graph/chart as an image using x,y pairs of data placed in the plugin body. The x,y pairs, or indeed the whole plugin format, could be generated by using another plugin eg TRACKERLIST or LIST.
This plugin is a simple alternative to using the more feature rich R plugin for web sites where it is not possible or easy to install all the necessary libraries, etc., on the server to enable the R plugin.
Only a bar chart option is currently available - other display options could be developed
1.56.6. Plugin Like
PluginLike allows users to assign a like button to particular objects, as seen on many social networking sites.
See PluginLike
1.56.7. Plugin List
PluginList can be used with tablesorter. Use a \{tablesorter\} tag to add the tablesorter parameters.
See PluginList
1.56.8. PluginListExecute
Since version 15.3, you can select columns and customize output of the table with items to be processed, using the same syntax as the LIST - OUTPUT command, which also allows for context filtering and inline editing of records before mass execution of actions on groups of items.
Added Advanced Rating calculations as ListExecute actions
See PluginListExecute
1.56.9. Plugin Tour
Quick and easy way to build your product tours with Bootstrap Popovers. This plugin makes page tours using bootstrap-tour.
See PluginTour
1.56.10. Plugin Tracker
Plugin tracker's transaction feature enables a sequence of trackers to be chained into a single transaction, which is submitted only after the user submits the last tracker form. Otherwise the transaction is cancelled.
In addition, wiki page templates can be used for email notifications using the "wiki:page name tpl" format instead of template_name.tpl files in the templates/mail dir. Tiki will use a page "page name subject tpl" if found.
See PluginTracker
1.56.11. Plugin UserInGroup
Use this wiki plugin to check whether a specific user is in a designated Group and to display defined text for either case.
See Plugin UserInGroup
1.56.12. Plugin XMLUpdate
Allows multiple elements of an XML file stored in a File Gallery to be updated - the File Gallery (at present) must store all files in a Directory instead of using the database storage option.
See Plugin XMLUpdate
1.57. Wiki Structures
A wiki structure can be locked since Tiki15, meaning that the pages that are in a structure and the order/hierarchy of these pages can only be changed by the locking user. A new preference and new permissions to lock/unlock or to admin the whole structure are introduced associated with this improvement, as well as a new section in the permissions management UI.
See Structure User
1.58. Upload and download translations
It is possible to upload translation files from the user interface.
Allowed file types:
- Tiki custom.php: a custom.php file with your custom translations
- Tiki language.php: a regular Tiki translation file
- Transifex php: a translation file from Transifex Tiki translation project (https://www.transifex.com/tiki/)
It is also possible to download custom.php and language.php files for a selected language from the frontend.
Known Installation Issues
Timeout when installing with InnoDB option
We are aware that a minority of users have timeout issues when choosing InnoDB instead of MyISAM as the database engine in the web installer. This is not a new issue and affected Tiki 12.2 also. This affects only certain server environments. The workaround for this problem is to install using MyISAM and then running db/tiki_convert_myisam_to_innodb.sql in order to convert the databases to InnoDB.
Problem with installing in subdirectory with a tilda
Users have reported problems with installing in subdirectories with a tilda (~) in the URL. This problem affects Tiki 12 too.
Upgrade
You need intl, curl and dom extensions in PHP5 (ext-intl & ext-curl & ext-xml). See above the section #Server_requirements for more instructions.
Theme upgrade notes
When upgrading an old site, you will experience disruptions in visual look and feel, but at the same time you will see much improvement to a modern and responsive look.
Nevertheless, you will need to upgrade your theme to Bootstrap, or choose a theme which is bootstrap-ready from somewhere else. Also, some reconfiguration will be required. Please see below.
For more information, see:
- [http://themes.tiki.org/Bootstrap](http://themes.tiki.org/Bootstrap)
- [http://themes.tiki.org/Bootstrap+-+CSS+Development](http://themes.tiki.org/Bootstrap+-+CSS+Development)
- [http://themes.tiki.org/Bootstrap+-+Smarty+Templates](http://themes.tiki.org/Bootstrap+-+Smarty+Templates)
Since file design.css has been renamed to tiki.css, any customization to that design.css file on disk must be ported to the new tiki.css
Top and topbar modules
Note that top and topbar modules will now go full width by default. You will find that things like search and menu bars might be full width which is not what you want. To reduce their width, go to Admin..Modules and set under "Appearance" a "Containing Class".
For example, if you set the containing class to col-xs-9 (e.g. the top menu on Tiki.org sites), the module will take up 9 out of the 12 columns of the bootstrap grid. If you set the containing class to col-xs-3 (e.g. the top search bar module on Tiki.org sites), the module will only take up 3 out of 12 columns of the bootstrap grid.
jQuery-UI Theme may need to be set to something
In Tiki 12 you might have set the Jquery-UI theme to None in order to avoid other issues, but now in Tiki 13 you might find that jQuery-UI popups might have a transparent background. To fix this, simply set the jQuery-UI theme to something or reset it to the default.
Bootstrap style menus
If you have menus using the following syntax, e.g. in custom modules:
```
{menu id=42 css=y type=vert}
```
You will need to add bootstrap=y in order to get the bootstrap version
```
{menu id=42 bootstrap=y css=y type=vert}
```
Bootstrap has a mobile focused approach and so prefers to have one level deep menus as these are considered best for mobile devices. So for bootstrap style menus you should have section level 0 and options at Tiki menu administration.
If you want a deeper menu structure, then set the "Use Bootstrap menus" option at module administration to "n" in order to escape Bootstrap styling and preserve multi-level menus (section level 1, section level 2, section level 3).
Use bootstrap=n in case you include a menu in a custom module or tpl.
```
{menu id=42 bootstrap=n css=y type=vert}
```
**Convert PluginSplit to PluginDiv**
For fluid responsive flow of page elements in Tiki15 compared to Tiki12, you need to replace PluginSplit calls with the appropriate PluginDiv equivalent calls.
For instance, this plugin Split
```
{BOX()}
{SPLIT(colsize="25%|75%")}
Foo
---
Bar
{SPLIT}
{BOX}
```
which produces this output:
Foo Bar
Should be replaced with these other Plugin Divs:
```
{BOX()}
```
Behavior change of Tracker field "Dropdown with other"
The default behavior of the tracker field "Dropdown with other" changed in Tiki 15, due to some validation issues with previous behavior.
You need to select the option "other" in the dropdown (in lowercase) for the extra input text field to be shown for the user.
Behavior change of Tracker field "Item Link"
In some cases you will need to change the "One item per value" property of the field to "Display only one item for each label" (from "Displays all the items for a same label with a notation value (itemId)") to maintain the previous behaviour as the multi option now works correctly.
Floated box class styles are now redundant
Instances of div classes in wiki text etc. (such as divs thirdfloat, to have display in 3 cols), should be replaced by Bootstrap grid divs.
More information:
http://themes.tiki.org/Floated+box+demo
Plugins need approval
After upgrading, if you have plugins throughout the site that need approval, you can go to tiki-plugins.php (as an admin) to approve them all. There is a button at the bottom of that page to approve all plugins.
Themes
Since the structure of themes changed significantly for Tiki14, the main task is expected to be to upgrade custom themes. And you may have to re-select the theme style you want to use, through the Look & Feel control panel.
Please note:
- your old css files will probably need to be reviewed and modified, as many selectors have been adjusted to bootstrap
- an upgrade patch is provided that modifies your database. The change is focusing on removing ".css"
If you want to provide some border to the icons in action bars, you can do so with extra css rules in the Custom CSS section of the Look And Feel Control panel:
```css
.btn-link, .btn-link:hover, .btn-link:focus, .btn-link:active {
border-color: lightgrey;
}
```
If you want icons bigger, in general, you can increase their size 20%, for instance, by adding 1.2em size to their class:
```css
.icon {
font-size: 1.2em;
}
```
And if you use Less, you can have a border-color adapted to your current choice of theme if you use the variable @button-hover-border-color, which is defined by each theme's variables.less file.
For more information on these theme-related topics, see the section about "Themes" above.
**Multilevel menus in modules**
Menus in modules are now by default "bootstrap" menus, meaning that following bootstrap's idea they are suggested to be only one level deep.
If you have a deeper sublevel structure you need to set your menu module to be non-bootstrap to show all sublevels.
Regular Expressions in fields requiring validation
See above #Validation_syntax
'SSL connection error' or error displaying HomePage after upgrade of SSL enabled websites
If your SSL-enabled website shows a port number in the address bar of the browser (http://www.example.com:80 for example), which results in an SSL connection error, then one way to fix the issue is to edit the .htaccess file in the Tiki root directory, as follows:
- Add the comment character "#" to the start of the following line in .htaccess:
DirectoryIndex index.php
- Remove the comment character "#" from the start of the following line in .htaccess:
DirectoryIndex tiki-index.php index.php
The ControlPanels | Login page | General Preferences has settings which should be checked post-upgrade to ensure that they have transferred correctly for your installation. These include, amongst others:
- Protect all sessions with HTTPS
- Use HTTPS login
- HTTP Basic Authentication
- Users can choose to stay in SSL mode after an HTTPS login
- Users can switch between secured or standard mode at login
- HTTP port
- HTTPS port
Additionally, if the HomePage does not display correctly or shows a permission related error then the settings in ControlPanels | General | Navigation should be checked:
- Use Tiki feature as homepage
- Use custom homepage
- Wiki Home page
- Domain prefix handling
JQuery selectors for forms
Since most HTML tables (mainly produced by Trackers, but also in other cases) have been replaced with HTML divs in the migration to Bootstrap over the last couple of Tiki releases, the former jQuery way of selecting a row in a form needs to be updated, as in the sketch below.
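For example, a selector that used to target table rows might change along these lines (a hypothetical sketch; the actual ids and class names depend on the markup your templates generate):
```js
// Before (table-based markup): tracker fields rendered as table rows
// $('#trackerForm tr:eq(2)').hide();

// After (Bootstrap markup): fields render as divs, e.g. form groups
$('#trackerForm .form-group:eq(2)').hide();
```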
Or if using your custom templates in a Pretty Trackers setup, then you will need to surround your tracker fields with custom calls to plugin div, and refer in your PluginJQ call to those div id's, etc.
Ask in the developers list if you need more help on this topic (or consider hiring a Consultant to do the upgrade work for you, if that suits better your needs and possibilities).
Potential Issue: Composer error
For people who install through svn up and then sh setup.sh, at upgrade time the "c" (composer) command can, under certain conditions, blow up with a:
[RuntimeException]
Could not scan for classes inside "vendor/smarty/smarty/libs/sysplugins/smarty_security.php" which does not appear to be a file nor a folder.
This does not happen reliably, but it is easy to step around with a simple:
```
rm -rf vendor/smarty/
```
Then sh setup.sh works fine.
General upgrade notes
Also, see the standard information about Upgrades in Tiki.
Removed features
- Ability for users to toggle left or right columns on/off by themselves (a feature request to put this back exists)
- Ability to override or let the user control module area visibility in Admin... Look and Feel is not possible anymore. If there is a module to show in the area, it will show automatically.
- Nested wiki-plugins are no longer allowed in a Custom Module; they will cause a Smarty error.
- Metrics Dashboard
Database Tuning and Self-Tuning
Dr.-Ing. Eike Schallehn
OvG Universität Magdeburg
Fakultät für Informatik
Institut für Technische und Betriebliche Informationssysteme
2012
Overview
- **Database Tuning:**
What are issues, basic principles, and common techniques for optimizing the performance of a database system?
- **Database Self-Tuning:**
What are current approaches to automate the database tuning process and what techniques can be used for a possible full automation in the future?
Literature: Database Tuning
- **Dennis Shasha**: *Database Tuning - A Principled Approach*, Prentice-Hall 1992
- Related tutorial at VLDB 2002
- Related tutorial at SIGMOD 2002
- **Sitansu S. Mittra**: *Database Performance Tuning and Optimization*, Springer Verlag 2002
- **Michael J. Corey, Michael Abbey, Daniel J. Dechichio**: *Tuning Oracle*, Oracle Press 1994
- **Gunter Saake, Andreas Heuer**: *Datenbanken: Implementierungstechniken*, MITP-Verlag Bonn, 1999
- **Klemens Böhm**: Vorlesung *Datenbankimplementierung und -Tuning*, Kapitel 11: Tuning relationaler Datenbanken
Literature: Database Self-Tuning
- **Surajit Chaudhuri, Vivek Narasayya**: *Self-Tuning Database Systems: A Decade of Progress*, VLDB 2007, Ten Year Best Paper Award
- **Kai-Uwe Sattler**: *Self-Tuning in DBMS: Techniken und aktuelle Systemunterstützung, DB-Stammtisch an der HTW Dresden*, November, 2006
- **Surajit Chaudhuri, Benoit Dageville, Guy Lohman**: *Self-Managing Technology in Database Management Systems*, VLDB 2004 Tutorial
Part I
Database Tuning
Database Tuning: Overview
- What is Database Tuning? Which aspects does the term include?
- What are basic principles of database tuning?
- Why is database tuning such a hard task?
- An overview of common techniques for database tuning
- One example technique in some detail: Index Tuning
Database Tuning Definition
- According to [Shasha 1992]:
*Database tuning is the activity of making a database application run more quickly. „More quickly“ usually means higher throughput, though it may mean lower response time for some applications. To make a system run more quickly, the database tuner may have to change the way applications are constructed, the data structures and parameters of a database system, the configuration of the operating system, or the hardware.*
- A rather general definition used for this lecture
**Definition (Database Tuning)**
*Database Tuning* comprises all activities carried out to meet requirements regarding the performance of a database system.
What is the goal of Database Tuning?
- **Improve performance**! But performance may mean several things for a computer system: quality of processing (and results), availability, usability, etc.
- Database tuning mostly refers to runtime performance and related aspects, e.g.
- **Throughput**: number of queries or transactions that can be processed in a fixed time
- **Response time**: time from initiation of single query until full result is available
- **Resource consumption**: temporary (e.g. CPU, memory) or constant (e.g. harddisk) use of resources
- **Watch out**: some of these goals can be contradicting!
- General approach of optimization: set some goal(s) as a constraint (e.g. maximum resource usage) and find the best possible solution for a specific goal of interest (e.g. throughput)
What can be tuned?
- **Hardware**
- Used components for CPU, main memory, harddisks, backup solutions, network communication, ...
- **Operating System (OS)**
- System parameters and configuration for IO, network communication, process management, ...
- **Database Management System (DBMS)**
- System configuration and parameters, database schema (views, tables), storage structures, ...
- **Application**
- Users, queries, transactions, interfaces, mappings, ...
Focus in this Lecture
- **Here focus on actual Database Tuning**, i.e. tuning properties of the running DBMS (database server) and the managed database (DB)
- In an ideal world, application tuning should actually not be necessary because the DBMS optimizes accesses, but
- SQL Tuning: reformulating queries (semantically equivalent or not) may result in better runtime performance
- Transaction Tuning: adjusting transactions (and possibly application logic) for better runtime performance, e.g. break long transactions to avoid lock conflicts, relax consistency requirements (MVCC), etc.
- **Operating System tuning**
- Provide sufficient processing power and data flow (IO, network) channels for the database system
- Things to be tuned: process and thread management, virtual memory management, file system, network
A Short Remark on Hardware Tuning
- **KIWI**: Kill It With Iron!
- „Why should we call in an expensive expert to tune our system, when buying cheap hardware can solve the problem?“
- Can always be considered first
- Limits of KIWI-Approach: can usually only improve performance by some linear factor and does not scale for future requirements
- General task of hardware tuning: provide suitable and fitting components that support the database system in an optimal way (multiprocessor architectures, fast and sufficient RAM, RAID-system as secondary storage, etc.)
The Database Tuning Quadrilemma
**Application**
How is the database used and what are the tuning goals?
**DBMS**
What possibilities of optimization does it provide?
**Operating System (OS)**
How does it manage hardware resources and services used by the overall system?
**Hardware**
What are suitable hardware components to support the performance requirements?
Fully efficient Database Tuning requires deep knowledge about …
Who does Database Tuning?
- **Database and application designers**
- During database development (physical database design) and initial testing and evaluation
- Database designers usually have strong knowledge about the application, fair to good knowledge about the DBMS, but maybe only fair to no knowledge about OS and hardware
- **Database administrators**
- During ongoing system maintenance
- Adjustment to changing requirements, system properties (e.g. data volume), and system environment (e.g. new hardware)
- Administrators usually have a fair knowledge about DBMS, OS, and Hardware, and their knowledge about the application depends on the given organizational structure
- **Database experts** (consultants, in-house experts)
- During system re-design, troubleshooting (solving constant or possible future problems), or fire fighting (solving urgent problems)
- Consultants usually have a very strong knowledge about DBMS, OS, and Hardware, but have little knowledge about the current application
Four Basic Principles (according to [Shasha 1992])
1. **Think globally, fix locally**
- Measure the right quantities and come to the right conclusions
- Localize problem by identifying a bottleneck (a part of the system that limits overall performance) and resolve it
2. **Partitioning breaks bottlenecks**
- When you find a bottleneck, first try to speed up that component
- If that does not work, then partition: divide the load over more resources or spread the load over time
3. **Startup and running costs**
- Most components (hardware, OS services, functionality of the DBMS) devote a substantial portion of their resources to starting up
- Try to „keep things up and running“, avoid startups
4. **Render unto server what is due unto server**
- Try to balance the computation load between application and server
- Let the server do what the server does best, and likewise the application
Basic Principles: DB Tuning as a (continuous) Process
**Overall system continuously changes**
Data volume, # of users, # of queries/TXNs, usage patterns, used software components (versions), hardware, etc.
**Requirements may change**
New company/organizational policies, new dependencies from other applications, etc.
- **Identify existing problem**: current performance requirements are not fulfilled
- **Monitor** system behavior and **identify** the cause of the problem: observe and measure relevant quantities, e.g. time spent in queries, main memory, IO, etc.
- **Apply changes to solve the problem**: adjust system parameters, remove bottlenecks, add resources, add indexes, etc.
- **Problem solved**
Basic Principles: Controlling Trade-Offs
- Database tuning very often is a process of deciding about the costs of a certain solution or activity compared to its benefits → trade-off situation
- **Costs**: monetary costs (for hardware, software, working hours) or more technical costs (resource consumption, impact on other aspects)
- **Benefits**: improved performance (monetary effect most often not easily quantifiable)
Some examples
- Adding indexes → benefit: better query response time – costs: more harddisk space used, update processing time increases
- Schema denormalization → benefit: better query response time – costs: need to control redundancy within tables
- Replace common disk by RAID-system → benefits: improved IO-performance, consistency, and availability – costs: hardware costs
- It’s not always about trade-offs! E.g. fixing a performance problem caused by a falsely set system parameter
Basic Principles: Don’t forget the 80/20 rule!
- **80/20 Rule (Pareto Principle):** by applying 20% of the efforts one can achieve 80% of the desired effect, while to achieve the remaining 20% effect takes 80% of the efforts invested.
- **Consequences for database tuning**
- 100% effect = fully optimized system
- Fully optimized system probably beyond necessary requirements
- Colloquial: "a little bit of DB tuning can help a lot"
- Solves DB tuning quadrilemma: one does not need to be an expert on all levels of the system to be able to implement a sufficient solution
- So ... don’t panic!
Basic Principles: Tuning Tools
- Special programs with support for **monitoring** (online inspection and/or statistics gathering) and **analysis** (mapping DBMS usage, i.e. queries and TXNs, to resource consumption, i.e. CPU and IO)
- Most often specific tools for certain DBMS (deeply integrated with system itself using internal interfaces)
DBMS Reference Architecture
Data System: translation and optimization of user queries, access and integrity control, access path selection, …
Access System: implementation of operations (e.g. relation and index scans, sorting, joins), concurrency control, data dictionary, …
Storage System: managing records on pages, access path management, lock management, log and recovery, …
Buffer Management: manage main memory region (buffer) to optimize IO accesses, page replacement, …
DB Hardware Tuning Overview
- Add, increase, or improve components
- Memory
- CPUs
- Disks
- Bus and network bandwidth
- ...
- Use RAID systems
DB Operating System Tuning Overview
- Threads
- Priorities
- Switching
- Multiprogramming Level (MPL) of the DB
- Adjust file system/disk layout
- Driver configuration for specific hardware components
DB Buffer/Memory Tuning Overview
- Adjust memory usage
- Adjust page replacement
- Control prefetching strategy
- Adjust logging and recovery strategy
DB Storage System Tuning Overview
- Adjust page and file properties
- Placement (allocation) of logical files (tables and indexes) and logs
- Partitioning of files (physical aspects: how to partition?)
- Index tuning (physical aspects: how to implement indexes?)
- Adjust locking strategies of TXNs
- Adjust deadlock handling of TXNs
- Distribution and replication design for Distributed DBS
DB Access System Tuning Overview
- Index tuning (logical aspects: which indexes?)
- Partitioning (logical aspects: which tables should be partitioned?)
- Materialized views
DB Data System Tuning Overview
- Optimizer hints
- Database statistics and cost models
- Control optimization goal
DB Application Tuning Overview
- **Transaction tuning**
- TXN chopping
- Adjusting isolation levels
- **Query tuning**
- Semantically equivalent re-writing
- Semantically non-equivalent re-writing
- **Schema tuning**
- Normalization
- Denormalization
- Vertical partitioning
One Tuning Technique in Detail
- **Index Tuning**
- Discussed in *some* detail (a separate lecture could be held on this topic alone)
Index Tuning
- Index tuning one of the most often applied tuning measures
- Great benefits (improved response time) with little effort for the database designer/administrator (if applied correctly)
- Cost of additional resource consumption (disk space) most often acceptable
- Strong support within all available DBMS (index structures, index usage controlled by optimizer)
Index Tuning: Basic Aspects of Indexes
- Main goal: avoid searching the full table (relation/table scan) to find few records
- Pre-computed and stored access path (tree, hash table) to provide fast access based on specific value(s) for attribute(s) → key(s) and key value(s)
- Two main kinds of indexes:
- One primary index for a table typically based on primary key → each key value corresponds to one record → most often data is stored according to that index (clustered) → most DBMS create this index automatically if a PRIMARY KEY is specified within the CREATE TABLE-statement
- Several secondary indexes to support access via other keys → one key value may correspond to several records in list of references → no influence on data organization (non-clustered)
Index Tuning: Common Index Types
- **B+-Trees**: supported by all DBMS for primary and secondary indexes (details →)
- **Hash Indexes**: hash table to control allocation (primary index) in some systems (e.g. optional in Oracle)
- **Multidimensional Indexes**: R-Trees for multimedia or spatial data defined within DBMS extensions for according data types (e.g. Oracle, IBM DB2, MySQL)
Index Tuning: Where Indexes can be used
- **Conditional selection** of tuples in **WHERE**-clause (point queries, range queries, multi-point queries, etc.)
- Mapping constant predicates, e.g. `matrnr=123456` or `age BETWEEN 42 AND 47`, to key + key-value(s) for index lookup
- **Grouping** according to the **GROUP BY**-clause
- Index on grouping attributes, e.g. `country, year` for `GROUP BY country, year`
- Tuples of one group must be adjacent in the corresponding index and can easily be scanned
- **Sorting** according to the **ORDER BY**-clause
- Index on sorting attributes, e.g. `revenue` for `ORDER BY revenue DESC`
- Index represents a pre-computed order; no further processing except for an index scan is required
**Duplicate removal** caused by `SELECT DISTINCT` (explicit) or `UNION` (implicit)
- Duplicates must be adjacent in any (!) index on the input relation
- Can be easily detected based on an index scan
**Projection** to columns in the `SELECT`-clause
- If all returned columns are included in the index, the result can be returned from the index without touching the data pages → *Covering Index*
- E.g.
```sql
SELECT name, firstname
FROM student
WHERE matrnr=123456
```
can be answered from index `idx1(matrnr, name, firstname)` without reading any record
- Some systems, e.g. IBM DB2, support `INCLUDE` columns which are not part of the actual key, but are stored with it inside the tree
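To make these cases concrete, here is an illustrative SQL sketch (table, column, and index names are assumptions; the INCLUDE syntax follows SQL Server style):
```sql
-- Conditional selection: point and range queries
CREATE INDEX idx_student_matrnr ON student (matrnr);
SELECT * FROM student WHERE matrnr = 123456;          -- point query: index lookup
SELECT * FROM student WHERE age BETWEEN 42 AND 47;    -- range query: needs an index on age

-- Grouping: tuples of one (country, year) group are adjacent in the index
CREATE INDEX idx_sales_country_year ON sales (country, year);
SELECT country, year, COUNT(*) FROM sales GROUP BY country, year;

-- Sorting: the index represents the pre-computed order
CREATE INDEX idx_sales_revenue ON sales (revenue DESC);
SELECT * FROM sales ORDER BY revenue DESC;

-- Covering index with non-key INCLUDE columns
CREATE INDEX idx_student_cover ON student (matrnr) INCLUDE (name, firstname);
SELECT name, firstname FROM student WHERE matrnr = 123456;  -- answered from the index alone
```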
Natural or equi-join
- Sort-merge join: a special join implementation which exploits the ordering of the relations on the join keys
- Allows parallel scans of the input relations (sizes $n$ and $m$) in $O(n + m)$ (better than a nested-loop join with $O(n \times m)$)
- Efficient even when the order has to be established first: $O(n \log n + m \log m)$
- Indexes on the join key in both relations represent a pre-computed order → $O(n + m)$ complexity for the join
- Especially efficient if the indexes are clustered
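A sketch of the pre-computed-order case (a hypothetical customer/orders schema):
```sql
-- Indexes on the join key in both relations provide pre-sorted input
CREATE INDEX idx_customer_custkey ON customer (custkey);
CREATE INDEX idx_orders_custkey   ON orders   (custkey);

-- A sort-merge join can now scan both indexes in parallel: O(n + m)
SELECT c.name, o.orderdate
FROM customer c
JOIN orders o ON o.custkey = c.custkey;
```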
Index Tuning: Further Important Considerations
- **Table size**
- For small tables (especially when fitting into a small fraction of the main memory) index usage may cause an unnecessary overhead.
- **Data distribution**
- Whether an index is useful for a query also depends on semantics of columns (number of values, upper and lower bound for values, etc.).
- E.g. scanning for `student.age>20` will return almost all tuples → even with index full relation will be scanned (or worse: pages are read more than once with non-clustered index).
- E.g. scanning for `student.gender='female'` will end up as a relation scan (or worse ...) for non-clustered index on `gender` because all pages contain male and female students.
- General: response time improvement heavily depends on both criteria.
- Only tune indexes for critical queries (slow response time, many IO operations)!
Index Tuning: Considering the Downsides of Indexes
- **Storage costs for indexes**
- Indexes may use considerable harddisk space (most often acceptable)
- Indexes are primary objects for storage in buffer (main memory)
- Placement of indexes on (dedicated) disk should also be considered
- **Costs for index updates**
- Updates on key attributes may trigger index update or possibly expensive reorganization
- Index usage may be prohibitive in scenario with many constant updates on database objects
- E.g. positions of moving objects, surveillance data, sensor data
- **Locking overhead and lock conflicts**
- Indexes are hot spot objects (especially root and upper levels of trees)
- Write operations and reorganizations may more easily cause lock conflicts and deadlocks
Index Tuning: the Global View
- So far only **local view**: are indexes useful for **one query**
- Now **global view**: which indexes are (most) useful for the (typical) **workload** (all queries)
- May require trade-off decisions
- Compare benefit/cost ratio for indexes and decide which one to implement
- Requires consideration of space constraints
**Index subsumption**
- Some indexes may provide same or similar benefit of other indexes
- E.g. prefix indexes → an index $\text{idx1}(a, b, c)$ can replace indexes $\text{idx2}(a, b)$ and $\text{idx3}(a)$
- Subsumed indexes need not to be implemented
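In SQL terms (a hypothetical table t), subsumption looks like this:
```sql
CREATE INDEX idx1 ON t (a, b, c);
-- idx1 makes the following prefix indexes redundant:
-- CREATE INDEX idx2 ON t (a, b);
-- CREATE INDEX idx3 ON t (a);
```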
**Index merging**
- Given certain rules, two indexes supporting two different queries can be merged into one index supporting both queries with similar behavior
Index Tuning: Index Merging
```
SELECT DISTINCT region, product
FROM sales
WHERE region = 'East'
```
```
SELECT region, year, count(*)
FROM sales
GROUP BY region, year
```
Supporting index: idx1(region, product)
Supporting index: idx2(region, year)
Index supporting both queries: idx3(region, year, product)
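The merged index from the example above would be created with illustrative DDL like:
```sql
-- serves both the selection/projection query and the grouping query
CREATE INDEX idx3 ON sales (region, year, product);
```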
Index Tuning: the Global View /2
Overall index tuning process for a database could be as follows:
1. Identify critical queries
2. Create possible indexes to tune single queries
3. From set of all indexes remove subsumed indexes
4. Merge indexes where possible
5. Evaluate benefit/cost ratio for remaining indexes (need to consider frequency of queries/index usage!)
6. Pick optimal index configuration satisfying storage constraints
Index Tuning: Concluding Remarks
- Typical aspects of query tuning mentioned
- Many more things to consider for
- Specific DBMS
- Specific application
- Specific data types
- ...
- Dependencies with other tuning measures, e.g.
- Materialized views can be indexed
- Materialized views can replace indexes
- Indexes can replace materialized views
- Indexes can be partitioned
- Schema tuning may make index re-design necessary
- TXN lock tuning for indexes
- ...
Part II
Database Self-Tuning
Database Self-Tuning: Overview
- **Introduction and Motivation**
- Why is it necessary?
- What is self-tuning?
- What are related terms?
- **Basic Principles**
- The Self-Tuning Cycle (Feedback Control Loop, IBM’s MAPE)
- Trade-off elimination
- Static vs. online optimization
- Overhead for Self-Tuning
- **Overview of Self-Tuning Approaches**
- **Some Details**
- Physical design tuning (mostly index tuning)
Database Management Systems were created to make data access easy.
- Declarative query language
- Query optimizer to provide most efficient access
- Internal storage structures hidden from the user
- ...
The implementation and maintenance of a (high-end) database application and meeting specific performance requirements is a very complex task.
- Choosing the right hardware
- Configuring the OS and DBMS settings
- Implementing a suitable physical design (indexes, MVs, partitioning)
- ...
DB Tuning: Reasons for ongoing Costs
- See DB Tuning principles → continuous tuning necessary
- DB systems constantly change
- Data is changing (schema, size of tables, data distribution, ...)
- Data usage is changing (number of users, new user groups, typical access patterns, applications, ...)
- Database environment (hardware, network, operating system, concurrent applications, ...)
- Requirements may change frequently
- Performance
- Scalability
- Availability, safety, security, ...
- DB administrators spend more than 50% of their time just to keep the system „up and running“
DB Self-Tuning: Motivation
Main goals:
▶ Decrease running costs for maintaining and administrating a database system
▶ Automate as many tasks as possible
▶ „Reduce number of tuning knobs“
▶ Meeting performance requirements with less effort
DBMS providers currently working on making their systems more easily manageable
→ support/propagate efficient usage of DBMS product
→ increased usability
→ decreased running costs
→ increased customer acceptance
→ competitive advantage
Corresponding activities
▶ IBM Autonomic Computing Initiative
▶ Microsoft AutoAdmin project for MS SQL Server
▶ Oracle Self-Tuning Architecture since Oracle 10g
Remember, ...
**Application**
How is the database used and what are the tuning goals?
**DBMS**
What possibilities of optimization does it provide?
**Operating System (OS)**
How does it manage hardware resources and services used by the overall system?
**Hardware**
What are suitable hardware components to support the performance requirements?
Fully efficient Database Tuning **requires** deep knowledge about …
DB Self-Tuning: Basic Idea
**Application**
Knowledge about application from analyzing queries, TXNs, schema, data, etc.
**DBMS**
Naturally, DBMS knows best about its functionality and possible tuning options.
**DBMS itself is the best Tuning Expert!!!**
**Operating System (OS)**
Knowledge about the OS encoded in platform-specific code + runtime information via OS interfaces.
**Hardware**
Knowledge about hardware encoded in platform-specific code + runtime information via OS interfaces.
DB Self-Tuning: Basic Idea /2
- Analyzing information about previous and current database usage allows prediction of future behavior and applying necessary changes
- **Necessary**: requirements need to be specified
- Future task of DB designers/administrators
- Lifts DB tuning from a technical to a strategical level
- **Limitations**:
- Cannot exploit information outside the overall system (e.g. possible improvements with new hardware) or about a foreseeable future (e.g. "the number of users will double next month")
- Possible lack of transparency of DBMS functionality ("there’s a new performance problem ... what has the DBMS just done??")
DB Self-Tuning: Definition
**Definition (Database Self-Tuning)**
**Database Self-Tuning** describes the capability of a DBMS to optimize its own functionality, parameters, and internal structures for a given database system, in order to improve performance and meet specified requirements.
- Shifts responsibility for tuning from users of DBMS (designers, administrators, consultants) to providers of DBMS (by developing and implementing according solutions for DBMS)
- Can be seen as a special field in the more general context of self-management of systems (self-*-techniques)
IBM Autonomic Computing Initiative (2001)
Autonomic System
Self Management :=
- Self Configuring
- Self Healing
- Self Optimizing
- Self Protecting
Controlled by general policies and rules
[Source: http://www.ibm.com/autonomic]
DB Self-Tuning: Basic Principles
1. **Trade-off elimination** describes one important goal of self-tuning: let the DBMS decide as much as it can decide.
2. **Static vs. online tuning**: temporal aspect of how and when tuning decisions can or must be made.
3. **The self-tuning cycle**: outline of automatic online (continuous) decision process.
4. **Self-tuning overhead**: considering the negative impact of self-tuning measures.
Self-Tuning Principles: Trade-off Elimination
- **Trade-off Elimination:** if possible, remove parameter/tuning knob, or otherwise, replace hard to control, low-level parameter(s) with easy to control, high-level parameter(s) (policies, strategies)
1. „Automate straight-forward decisions“
2. „Replace hard decisions with easy decisions“
- Both aspects must be based on using available information in decision (support) process
- Examples:
- Adjust memory usage (buffer size and others)
- Choosing a buffer replacement strategy
- Choosing an optimal page size
- Index tuning (improves query response time, slows down updates)
- B+-trees vs. Hash indexes
- ...
Self-Tuning Principles: Static vs. Online
- **Static self-tuning:**
- Self-tuning is performed once or at regular intervals
- Initiated manually or triggered by DBMS
- Actual processing (analysis + execution of tuning measures) can be decoupled from DBMS to a large degree
- Can be supported by external tools
- Suitable for adjustments to slowly changing or stable properties of a database system
- **Online self-tuning:**
- Self-tuning is performed continuously
- Deeply integrated with DBMS functionality
- Self-tuning algorithms (e.g. ARC, the Adaptive Replacement Cache policy)
- Suitable for adjustment to frequent or continuously changing properties of a database system
Static Self-Tuning: Example
Physical Design Tuning
- Decision about:
- Indexes
- Materialized views
- Partitioning
- ... (depending on given DBMS)
- State of the art (→)
- External tools (advisors, wizards) for creating recommendations
- Based on (automatically) gathered information about workload and database statistics
- Manually controlled, but automatic decision process
- Incorporates query optimizer to estimate benefits/costs of physical design structures for single queries („what if“-analysis)
Online Self-Tuning: Example
Statistics Management
- Two basic principles
1. Automatically monitor number of changes and trigger re-computation if critical threshold is reached
2. Use query feedback to improve quality (granularity, correction factors, etc.) of statistics
- Query optimizer selects plan and keeps information about estimated cardinalities of intermediate results
- Plan is executed and real cardinalities are derived
- In case of „significant“ differences changes to database statistics are triggered
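Self-tuning here automates what administrators would otherwise trigger by hand, for example (the commands exist in the named systems; the table name is an assumption):
```sql
ANALYZE sales;               -- PostgreSQL: re-compute statistics for table sales
-- UPDATE STATISTICS sales;  -- SQL Server equivalent
-- RUNSTATS ON TABLE sales;  -- DB2 equivalent (issued via the command line processor)
```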
Self-Tuning Principles: The Self-Tuning Cycle
- Based on manual tuning process described before
- **In general:** abstract description of typical tasks in automated decision processes
- **For specific self-tuning tasks:** fine-grained definition of single steps, their input and output parameters and according interrelations
- Based (more or less loosely) on concepts from **Control Theory**
Core principle in IBM’s Autonomic Computing Vision
MAPE = Monitor + Analyze + Plan + Execute
Self-Tuning: State of the Art
- Currently growing support for Monitoring
- DBMS automatically stores detailed information about queries, runtimes, resource usage, etc.
- E.g. Oracle’s Automatic Database Diagnostic Monitor (ADDM) and Automatic Workload Repository (AWR)
- Static support for aspects of Analysis and Plan
- External tools to suggest tuning measures
- Advisors and wizards for physical design, configuration, SQL query analysis for all big commercial DBMS
- Some support for automatic Execution of tuning measures (full self-tuning cycle)
- Self-tuning algorithms for trade-off elimination
- Current solutions incorporated in major DBMS for memory and buffer management
- IBM’s self-learning optimizer
One field in some detail: **Physical Design Tuning**
Often most successful tuning measure for database systems
Strong support by all vendors: advisors and wizards to recommend suitable set of indexes, MVs, partitioning, etc.
In the following, some details on how these advisors work + a look into a possible future
To avoid complexity: **focus on index self-tuning**
Index Self-Tuning: Problem
- Automatic decision about optimal index configuration theoretically requires considering all possible indexes in a database → combinatorial explosion
- Number of possible indexes for one table (!) with \( n \) columns:
\[
\sum_{k=1}^{n} \frac{2^k \cdot n!}{(n-k)!}
\]
This does not yet include different index types and variants, sorting order, etc.
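For a quick worked example: already for $n = 3$ columns this yields $\sum_{k=1}^{3} 2^k \cdot \frac{3!}{(3-k)!} = 6 + 24 + 48 = 78$ possible indexes for a single table.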
- Can be narrowed down by just considering „useful“ indexes → analyze queries according to specific rules (→ where indexes can be used)
- Useful indexes can be discovered by query optimizer during „what if“-analysis
Index Self-Tuning: „What if“-Analysis /1
- **Local view:** special mode of current optimizers to recommend indexes for a single query
- **WHAT IF** all possible indexes existed? Which ones would be used for this query?
- **Two approaches:**
- Create metadata for possibly useful (hypothetical) indexes, according to an analysis of the attributes used in queries, before optimization → the query optimizer works based on these metadata and proposes to use some of the indexes
- More recent approach: the query optimizer creates hypothetical indexes (and according metadata) on the fly wherever an index appears to be useful
- If hypothetical indexes are used in the found query plan, these are recommended indexes for this query
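Outside the big commercial systems, the same idea is available for PostgreSQL through the HypoPG extension; a minimal sketch, assuming the extension is installed:
```sql
CREATE EXTENSION hypopg;
-- create a hypothetical (metadata-only) index
SELECT * FROM hypopg_create_index('CREATE INDEX ON sales (region)');
-- plain EXPLAIN (without ANALYZE) may now pick the hypothetical index
EXPLAIN SELECT DISTINCT region, product FROM sales WHERE region = 'East';
```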
Index Self-Tuning: „What if“-Analysis /2
[Figure: "what if" analysis flow. Heuristics derive hypothetical objects from the query; the optimizer computes plan $P_A$ without and plan $P_B$ with the hypothetical objects; if $P_B$ is profitable, the hypothetical objects used in $P_B$ become the recommendation.]
Global view: for a given workload (including updates → consider indexing overhead), what is the best index configuration to support all queries?
Based on „what if“-analysis of all single queries: workload → set of recommended indexes
Index set can be pruned (e.g. remove subsumed indexes) and indexes can be merged (→ index tuning) → reduced index candidate set
...
The **problem definition** now is:
**Input:**
- Reduced set $\mathcal{I}_C$ of candidate indexes $I \in \mathcal{I}_C$ (with $\text{benefit}(I)$: response time improvement, $\text{size}(I)$: space consumption) for single queries
- Space constraint $S$ of available space for indexes
**Output:**
- Set of recommended indexes $\mathcal{I}_R \subset \mathcal{I}_C$ for workload with optimal overall benefit
\[
\sum_{I \in \mathcal{I}_R} \text{benefit}(I) \rightarrow \max
\]
fulfilling space constraint
\[
\sum_{I \in \mathcal{I}_R} \text{size}(I) < S
\]
Handled as a knapsack problem (though it is not quite one because of dependencies; e.g. the benefit of two indexes used in a merge join is greater than the sum of the benefits of the single indexes)
Applied solution: mix of greedy selection + genetic algorithms to improve solution
Index Self-Tuning: Index Advisors
[Figure: index advisor architecture. The workload is fed into an advisor tool that performs index candidate selection, index merging, and enumeration of candidates, using the DBMS query optimizer for "what if" analysis of single queries; the tool outputs index recommendations.]
Index Self-Tuning: MS SQL Server DTA
Following slides: screenshots of MS SQL Server Database Tuning Advisor (DTA)
[Screenshots: the Database Engine Tuning Advisor explains that it will recommend clustered and nonclustered indexes to improve performance of the workload, that newly recommended structures will be partitioned for the specified workload, and that all existing structures remain intact. It reports an estimated improvement of 72% and lists partition and index recommendations for the CUSTOMER and LINEITEM tables, each with a generated object name, an estimated size in KB, and a key-column definition.]
Index Self-Tuning: Oracle 11g SQL Access Advisor
- Following slides: screenshots of Oracle 11g SQL Access Advisor
- Sorry ... German only!
[Screenshots, translated from German: results for task SQLACCESTPCHALL (advisor mode COMPREHENSIVE, status COMPLETED). The workload I/O cost drops from 3,316,259 (original) to 237,511 (new); a chart classifies statements into those with no performance improvement and those with potential performance improvement. The recommendations require about 1.62 GB of storage (with a user-specified space limit) and cover 20 SQL statements (all SELECTs; no inserts, updates, deletes, or parse/privilege errors). The recommended actions comprise creating 13 indexes (4 kept), 13 materialized views, and 8 materialized view logs, plus partitioning 5 tables, 9 indexes, and 3 materialized views. A selection table lists the individual recommendations with their action counts, cost improvement (e.g. 399,872, i.e. 12.99%), space requirements in MB, and affected SQL statements.]
Alerter: automatically notify administrators, when a change of the physical design configuration is recommended
▶ Bruno N. and Chaudhuri S. *To Tune or not to Tune? A Lightweight Physical Design Alerter*. VLDB 2006
Online Index Tuning: fully automated index tuning
▶ Bruno, N., and Chaudhuri, S. *Automatic Physical Design Tuning: A Relaxation Based Approach*. ACM SIGMOD 2005
...
Index Self-Tuning: Next Steps /2
- ...
- **On-the-fly Index Creation**: create necessary indexes while performing a query, for use in this or in further queries
- **Dynamic Index Structures**: make index structures self-tuning (adaptable, access-balanced, etc.)
- Goetz Graefe, Harumi A. Kuno: *Self-selecting, self-tuning, incrementally optimized indexes*. EDBT 2010
- ...
“Nonetheless, the challenge in making database systems truly self-tuning is a tall task. For example, the nature of tuning a buffer pool or tuning allocation of working memory for queries is very different from that of selecting the right set of indexes or statistics. Each such tuning problem has different abstractions for workloads and different constraints on the desired solution. Therefore, it will probably be impossible to make database systems self-tuning by a single architectural or algorithmic breakthrough. As a consequence, it will be a long journey before this goal is accomplished just as it took the automobile industry a sustained effort to reduce the cost of ownership.”
S. Chaudhuri, V. Narasayya: Self-Tuning Database Systems: A Decade of Progress, Microsoft Research, 2007
Will Self-tuning replace DB administrators?
- **No**, self-*-techniques just help to make DBMS more easily usable in complex applications
- New tasks for administrator on higher level: set strategies, policies, requirements, constraints
- Fully self-managed databases for specific systems: Web-databases (CloudStorage, ZeroAdmin databases, hosted databases etc.), embedded data management, etc.
Current problem of many self-*-techniques: lack of acceptance due to immaturity
Interesting developments in other fields of DBMS research, e.g. Column-oriented DBMS → Database Cracking: optimize storage structure dynamically according to usage
Invitation to Join Self-Tuning Research ;-) ... with a master or diploma thesis
Smarties: An Input System for Wall Display Development
Olivier Chapuis, Anastasia Bezerianos, Stelios Frantzeskakis
To cite this version:
Olivier Chapuis, Anastasia Bezerianos, Stelios Frantzeskakis. Smarties: An Input System for Wall Display Development. Proceedings of the 32nd international conference on Human factors in computing systems, Apr 2014, Toronto, Canada. pp.2763-2772, 10.1145/2556288.2556956. hal-00979034v2
HAL Id: hal-00979034
https://hal.archives-ouvertes.fr/hal-00979034v2
Submitted on 6 May 2014
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
Smarties: An Input System for Wall Display Development
Olivier Chapuis1,2 Anastasia Bezerianos1,2 Stelios Frantzeskakis1,3
1Univ Paris-Sud & CNRS (LRI) 2INRIA 3University of Crete
F-91405 Orsay, France F-91405 Orsay, France GR-70013 Heraklion, Greece
ABSTRACT
Wall-sized displays can support data visualization and collaboration, but making them interactive is challenging. Smarties allows wall application developers to easily add interactive support to their collaborative applications. It consists of an interface running on touch mobile devices for input, a communication protocol between devices and the wall, and a library that implements the protocol and handles synchronization, locking and input conflicts. The library presents the input as an event loop with callback functions. Each touch mobile has multiple cursor controllers, each associated with keyboards, widgets and clipboards. These controllers can be assigned to specific tasks, are persistent in nature, and can be shared by multiple collaborating users for sharing work. They can control simple cursors on the wall application, or specific content (objects or groups of them). The types of associated widgets are decided by the wall application, making the mobile interface customizable by the wall application it connects to.
Author Keywords
input toolkit; wall display; hand-held touch devices; cscw; multi-cursors.
ACM Classification Keywords
H.5.2 [Information Interfaces and Presentation]: User Interfaces - Graphical user interfaces
INTRODUCTION
High-resolution wall-sized displays allow multiple people to see and explore large amounts of data. They are well adapted to data analysis and collaboration, due to physical navigation that affords a natural pan-and-zoom in the information space, an enlarged physical space that enables collaborative work, and millions of pixels that allow viewing large amounts of data in one shared environment [1, 8]. They are well suited for application domains such as command and control, data visualization, astronomical imagery, collaborative design, etc.
Deciding on appropriate interaction techniques for wall displays is nevertheless not a simple matter. Mice, keyboards and direct touch are limiting in environments where more than one user can move freely, come close to the display to see details or move away to acquire an overview [1]. Research on mid-air interaction for remote displays (e.g. [22, 37, 23]), and recent work on mobile devices (mainly smartphones, e.g. [20]) focuses on specific interactions such as navigation, pointing and selection. Thus it cannot be applied as-is in real wall-display applications that need support for multiple users performing complex interactions that combine navigation, pointing, selection, dragging, text editing and content sharing. Finally, interaction techniques are often application or content specific (e.g. using a brain prop to rotate virtual brain scans [10]), requiring considerable design and implementation effort, thus making quick prototype development and setup challenging.
The few existing toolkits for programming collaborative interaction on walls either require a significant effort to develop communication protocols between input devices and applications (e.g. [28]), prohibiting quick prototyping, or assume users are static (e.g. [35]), forcing them to carry multiple devices (mice and keyboards) to perform complex tasks while moving.
The design goal of Smarties is to address all of the above: support complex interactions, using mobile devices to accommodate multiple mobile users, in a way that is easy to setup, develop, and use with different wall-display applications. Our motivation is the following: although specialized interaction techniques and devices can be very well adapted to specific applications, often wall application developers need input technology they can setup and use quickly to prototype and test their interactive applications.
Concept and Contributions
The components of the Smarties system, whose concept is described next, are: (i) an input interface on touch mobile devices (mobile input interface), (ii) a communication protocol between these devices and wall applications, and (iii) libraries implementing the protocol at the wall application side.
Mobile input interface
Classic desktops include a pointing device (mouse), a keyboard, and a clipboard to store data. If a large number of mice were available, together with associated keyboards and clipboards (we’ll call them extended mice), we could use them for different tasks (e.g. one for pointing, one for selecting objects, or one for editing a shape or a text object in a drawing application). Or we could leave a mouse permanently attached to one or more specific objects (e.g. selected drawing shapes), making it synonymous with the objects it’s attached to, i.e. a shortcut to them. In a simple desktop, if we copy a shape and then a text object in the same application, the second copy would overwrite the first. With the extended mouse idea both copies remain available in their respective mouse’s clipboard, ensuring persistence of interaction at the task level.
These extended mice, which we call pucks due to their round shape, form the central component of our mobile input interface. Each puck also has specific actions available to it, in the form of gestures or widgets on the mobile device: e.g. a puck attached to a text object can have shortcut action buttons for turning text bold, italic, etc. If we move this puck to another text editing object (even in another application) the same associated actions will still be available.
Multiple such pucks are available at any given time for performing different tasks, and their states (thus the users’ work) can be stored. They can also be shared between multiple people to share tasks with colleagues: e.g. a user can hand over all her text editing work (including current mouse position, widget states, clipboard) just by handing over the puck.
This is the concept behind our interface: it is a collection of extended mice, referred to as pucks, together with their associated keyboards, widgets and clipboards. They reside in multiple touch mobile devices, ensuring that users can move freely in front of the wall, and control a wall application. They can be seen as simple mouse cursors, or shortcuts to specific tasks or content on the wall display. They are persistent in nature and can be shared among collaborating users. In our design, the associated widgets and keyboards are decided upon by the wall application, making the puck interface customizable by the wall application they connect to. See Fig. 1.
Protocol and Library
Smarties uses a client - server logic: the server is the wall application and the clients are the mobile devices. A protocol ensures the communication between mobiles and the wall application, with messages to: set up connections, maintain synchronization of pucks and their widgets/clipboards/keyboards across devices, and send high level input events (e.g. gestures or widget values) associated with the pucks.
The libraries implement the protocol on the server side and provide developers with the following functionality: a centralized way to synchronize pucks and their widgets across devices, ways of implementing ownership and locking of pucks, an event loop with callback functions to handle the events sent by pucks and their widgets, and methods for dealing with event conflicts from multiple devices.
The major advantage of the protocol and library is that the internal workings of pucks are hidden and developers can setup and use them as they would use regular mice and widgets. Thus, they provide a quick way of setting up and prototyping interaction support for wall display applications.
The main contributions of our work are:
• An open-source framework that combines (i) a mobile input interface, (ii) a communication protocol between multiple mobiles running the interface and the wall applications, and (iii) libraries encapsulating the protocol and mobile interface customization functionality, allowing for fast development of input support for collaborative interactive wall applications.
• The library completely hides the communication between the wall application and the mobile interface device(s) from the developer. It provides collaborative interaction support in a few lines of code on the wall application side, and it allows the customization of the mobile interface with simple instructions on the wall application side, without modifying the code on the mobile devices.
• The mobile input interface components support complex interaction, from touchpad, keyboard, and clipboard, to combinations of specialized widgets such as menus, buttons or sliders with programmable interaction behavior (e.g. a button for “gathering” a set of selected objects). Multiple interaction elements called pucks act as shortcuts either to user tasks or wall display content, allowing for persistent work that can be stored and shared with other users.
Contrary to systems such as Pebbles [21], jBricks [28], ZOIL [17] and iRoom/iStuff [2], Smarties focuses on the input side only, offering an integrated system with a high-level protocol coupled with libraries that hide the complexity of the protocol, for quicker input prototyping. Contrary to these previous systems, it also comes with a ready to use (but customizable) original input interface running on mobile devices, handling advanced input (e.g. widgets and multi-touch gestures) and collaborative interaction (e.g. sharing policies).
SMARTIES MOBILE INPUT INTERFACE
The interface on the mobile clients is divided into two areas, the touch area where pucks exist and the widget area (Fig. 1).
Puck Visual Design and Basic Interaction
A proxy of the entire wall is represented visually on the top of the mobile device (touch area). A puck is represented as a small colored circle. We chose a round shape to both provide a large enough target and remind users of a touch footprint\(^1\).
\(^1\) Multiple pucks together look like Smarties candies, thus the name of the system.
Users can create several pucks on their device using the “Pucks” container on the widget area. Each device has at most one active puck at a time, rendered more opaque. Pucks can be deleted by moving them back to the “Pucks” area, or stored for later use by dragging them in a corridor on the left. Stored pucks can retain their interaction behavior and any properties the wall application associated with them. These designs were informed by user studies (see Applications).
A puck can simply control a cursor of the same color that appears on the wall. Moving the puck on the touch area moves the corresponding wall cursor in different ways. When users drag the puck itself, its wall cursor is moved with a direct mapping, traversing quickly large distances on the wall display. When users start the drag outside the active puck, we use an indirect mapping with appropriate CD gain transfer functions (see [24]) that slow down at low dragging speeds. This allows precise cursor movements even when the touch area is relatively small compared to the size of the wall. To allow switching between pucks, but limit accidental switching, users can long-press on another puck to activate it.
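To make the indirect mapping concrete, here is a minimal sketch of a speed-dependent transfer function. It is not the function Smarties actually uses (see [24] for how such functions are derived); all names and constants are illustrative.

```cpp
#include <algorithm>

// Speed-dependent CD gain: slow drags get a low gain for precision,
// fast drags a high gain to traverse the wall quickly.
// All constants are illustrative, not those used by Smarties.
double cdGain(double speedMmPerS) {
    const double gMin = 1.0,  gMax  = 40.0;   // gain bounds
    const double vLow = 20.0, vHigh = 400.0;  // speed thresholds (mm/s)
    double t = std::clamp((speedMmPerS - vLow) / (vHigh - vLow), 0.0, 1.0);
    return gMin + t * (gMax - gMin);
}

// Indirect mapping: touch displacement (mm) scaled into wall displacement.
double wallDelta(double touchDeltaMm, double speedMmPerS) {
    return touchDeltaMm * cdGain(speedMmPerS);
}
```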
By default, a puck is visible in all mobile devices connected to the wall application to provide awareness during collaboration. An active puck on one device is seen as locked (faded out) on other devices and other users cannot use it. This puck becomes available to all users implicitly when it is no longer active, i.e. when the user selects another puck, or explicitly, through a “sharing” button. We have implemented alternative sharing policies described in the library section.
Widgets and Advanced Interaction
The widgets contained in the widget area are application dependent, and specified directly by the wall application without changing the mobile interface code (see library section).
A widget can control the active puck’s behavior (e.g. a state button decides dragging vs hovering behavior for the corresponding wall cursor), execute actions (e.g. a button permanently attaches a set of objects to the puck), or control parameters (e.g. a slider changes the opacity of attached objects).
We currently support text view widgets, buttons, toggle buttons, check boxes, radio buttons, sliders, and different popup menu types. For example if users want to annotate objects attached to a puck, the wall application can specify a button “annotate” that pops up a keyboard and a dialogue window with a text field. When the user finishes typing and presses ok the text is sent from the mobile device to the wall application. We give more examples in the applications section.
By default a widget is puck dependent: its actions and values are associated to the active puck, and can thus change or even disappear when the user activates a new puck. However, a widget can be specified by the wall application as puck independent, for executing global actions, e.g. loading a new scene in our Lenses application example.
The system also supports several touch and tap gestures with single or multiple fingers. For example to allow wall applications to distinguish between cursor hover and drag, the touch area can distinguish a simple drag event (hovering) and a tap-and-drag gesture to emulate the usual press-drag-release interaction seen in touchpads. As we will see in the library and Lenses application example, detected multi-touch taps and gestures (e.g. multi-finger pinch or move) are not necessarily linked to puck movement and can be freely interpreted by the wall application for other purposes (e.g. zoom the wall view).
THE SMARTIES COMMUNICATION PROTOCOL
A common software architecture for tiled wall displays consists of a master application running on one machine (the server), which may be connected to slave machines on a rendering cluster. User input is sent to the master application, which in turn instructs the slaves to modify their rendering state depending on the input events. A Smarties library sits on the master application side, managing input received from the mobile interface clients through our communication protocol (see Fig. 2).
This abstract protocol is hidden by a library (described next), and ensures that the mobile interface client implementation is independent of the wall application. This section describes the internal communication process between the mobile clients, that are application agnostic, and the wall application.
Our high-level protocol: (i) is extensible, (ii) does not require programming or restarting the mobile clients, (iii) synchronizes states among multiple mobiles, (iv) supports complex multi-finger input and (v) widgets and keyboard mapping.
Extensibility
For the mobile clients a server is an IP address and a communication port that sends or receives messages. All messages from a mobile client to the wall server start with the IP address of the client, considered as their unique identifier. A message consists of a name followed by a sequence of typed values (boolean, integer, float, double or string), whose length depends on the name of the message (e.g. <IP, menu, [list of item names]> for a popup menu).
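To illustrate the shape of such messages (not their actual on-the-wire OSC encoding), one can model a message as a name plus a list of typed arguments; the struct and helper below are hypothetical, not part of the Smarties API.

```cpp
#include <string>
#include <variant>
#include <vector>

// A Smarties-style message: a name plus typed values, with the client IP
// acting as the sender's unique identifier (illustrative model only).
using Value = std::variant<bool, int, float, double, std::string>;

struct Message {
    std::string senderIp;     // unique client identifier
    std::string name;         // e.g. "NewPuck", "menu"
    std::vector<Value> args;  // length depends on the message name
};

// Example: building the popup-menu message sketched in the text.
Message makeMenuMsg(const std::string& ip,
                    const std::vector<std::string>& items) {
    Message m{ip, "menu", {}};
    for (const auto& item : items)
        m.args.emplace_back(item);
    return m;
}
```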
Our protocol builds upon OSC\(^2\), a low-level communication protocol that is flexible in message naming and length. It can thus be extended either by adding new messages or appending new values at the end of existing messages, ensuring that new types of widgets and behaviors can be added.
\(^2\) http://opensoundcontrol.org/introduction-osc
Mobile Client Customization
To connect to a wall server, a client sends a NewConnection message (msg) at startup or when the user changes the IP or port (i.e. the wall server ID). Clients send interaction messages continuously. Whenever the wall receives a msg from an unknown client it sends a Hello msg, and whenever a client receives such a msg it resets itself to receive customization information. Thus, it is never necessary to restart a mobile client (even if a different wall application is started), and a wall server can ask a mobile client to reset itself at any time (e.g. to install a different interface on the mobile).
After communication is established, the server initializes and customizes the mobile client. This consists of: (i) a msg defining default behaviors, e.g. what touch events the client should send; (ii) a description of the widgets that will appear in the widget area, their types, relative positions, values, labels, etc.; and (iii) the description of any existing pucks, through a series of NewPuck msgs, each consisting of a unique puck id, a position, a color, an icon name and a status (free/locked/active).
Puck Synchronization
Mobile clients ask the server to create a puck with an AskNewPuck msg. The server responds with a NewPuck msg with a unique puck id to all the clients (with active status for the requesting client). After that, to ensure interactive response times, the mobile client can update its pucks’ state, and simultaneously send messages to reflect user interactions that modify the status of a puck (e.g. store, activate, move, etc.). In turn, the server forwards this information to the other clients, or can choose to ignore it and force a change of state on the requesting client. Thus, while puck creation and management is centralized on the server side to synchronize different mobile clients, requests from mobile clients are also treated locally to ensure quick responses to users’ actions.
Single- and Multi-touch events
Our protocol distinguishes one-finger drags on the touch area, used to manipulate the pucks (move, activation, etc.), from multi-finger gestures and multi-taps\(^3\) that a wall application can use for specific purposes. We provide two alternatives (chosen by the wall server at connection time): a raw protocol that simply forwards the touch events (with time stamps), and a Smarties protocol consisting of higher level events.
The raw protocol sends the usual three event types: Down and Up, each with a unique “finger” identifier and a position, and Motion, as an array of positions with a unique identifier for each down “finger”.
The Smarties protocol sends msgs consisting of single and multi-tap events that report the number of taps and the number of fingers for each tap, followed by single- or multi-finger move or multi-finger pinch gesture events. So a simple single-finger drag can be interpreted as cursor hover, while a tap and then drag as a press-drag-release interaction. Or a two finger pinch can be interpreted as global zoom, while a three finger pinch can scale a particular object. Thus, due to the nature of the protocol, either the number of taps or the number of fingers can act as a modifier for the semantics of a gesture.
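The sketch below shows how a wall application might dispatch on these events; the event fields and the action functions are hypothetical stand-ins, not the library's API.

```cpp
#include <cstdio>

// Stub actions standing in for real wall-application behavior.
void hoverCursor()            { std::puts("hover"); }
void dragCursor()             { std::puts("press-drag-release"); }
void zoomWallView()           { std::puts("global zoom"); }
void scaleObjectUnderCursor() { std::puts("scale object"); }

// Hypothetical event: tap and finger counts act as gesture modifiers.
struct GestureEvent { int taps; int fingers; bool pinch; };

void handleGesture(const GestureEvent& e) {
    if (!e.pinch && e.fingers == 1)
        e.taps == 0 ? hoverCursor() : dragCursor(); // hover vs. tap-and-drag
    else if (e.pinch && e.fingers == 2)
        zoomWallView();                             // two-finger pinch
    else if (e.pinch && e.fingers == 3)
        scaleObjectUnderCursor();                   // three-finger pinch
}
```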
Widgets & Keyboard
When users interact with a widget on a mobile client, a msg is sent describing the id of the active puck and the new state of the widget (e.g. button click, state of a toggle button, value for a slider, etc.). The server propagates the msg to the other clients, synchronizing the widgets’ state. For example, if a client changes the value of a slider, the server communicates this value to all other clients that in turn update the value of the corresponding slider immediately, if the slider is global, or when the associated puck becomes active on them.
Finally there are messages to ask a client to map or unmap a keyboard. Regarding key events (up and down) sent by the mobile clients, we have fixed a keyboard mapping so that the protocol does not depend on a specific client toolkit or OS.
SMARTIES LIBRARIES FOR WALL APPLICATIONS
We wanted Smarties mobile clients (which run on Android) to be set up and used as input by wall application developers almost as easily as desktop developers can use a mouse. A library implementation simplifies the use of the protocol by taking care of issues not directly related to the behavior of a wall application, such as connections, maintaining states, etc. We developed a multi-platform C++ library (libSmarties) and a Java library (javaSmarties) for Smarties.
The libraries hide the protocol and the communication needed to keep puck properties synchronized across mobile clients. They also simplify the initialization, widget creation and handling through callbacks. The heart of the libraries is an event queue that provides Smarties events of various types: puck create/delete/store/activate, touch events and widget use. These come with data structures and classes for the pucks, Smarties events and widgets. The class for pucks also includes an object (the “clipboard”) used solely for storing application specific data. Functions are also provided to facilitate the synchronization of widget states, and to access a large part of the protocol, allowing customization, extensions and advanced use.
Wall application developers can create new sharing policies or use one of the three already implemented: strict, where pucks are unlocked and available to others only when an explicit share action is taken; medium, where a puck is immediately unlocked when another is selected; or permissive, where a puck is unlocked if it is not actively used.
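A minimal sketch of the decision logic behind the three policies follows; the enum, the fields, and the idle threshold are our assumptions, not the library's actual API.

```cpp
#include <chrono>

enum class SharingPolicy { Strict, Medium, Permissive };

struct PuckState {
    bool explicitlyShared;                          // "share" button pressed
    bool isActiveOnOwner;                           // still the owner's active puck
    std::chrono::steady_clock::time_point lastUse;  // last interaction time
};

bool isAvailableToOthers(const PuckState& p, SharingPolicy policy) {
    using namespace std::chrono;
    switch (policy) {
    case SharingPolicy::Strict:      // only an explicit share unlocks
        return p.explicitlyShared;
    case SharingPolicy::Medium:      // also unlocked once another puck is selected
        return p.explicitlyShared || !p.isActiveOnOwner;
    case SharingPolicy::Permissive:  // also unlocked when not actively used
        return p.explicitlyShared || !p.isActiveOnOwner ||
               steady_clock::now() - p.lastUse > seconds(5); // threshold assumed
    }
    return false;
}
```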
Example walkthrough
Let us sketch the needed code for a wall application to support multi-cursors with pick-and-drop of graphical objects using Smarties. We use the C++ version of the library here, but both are (intentionally) very similar.
We first create a Smarties object with the wall geometry:
```c
Smarties *smarties = new Smarties(wallWidth, ...);
```
We can then override some defaults, e.g. the sharing policy and the type of multitouch events, using simple class methods. The final step in the initialization is to create some widgets in the widget area. Here we create a slider in the center of the widget area to change the size of the cursor associated with the active puck, set the default value of the slider, specify that it is puck dependent (the default), and attach it to a callback function.
```c
SmartiesWidget *slider = smarties->addWidget(
    &wid, SMARTIES_WIDGET_TYPE_SLIDER,
    "Cursor Size", 0.3f, 0.3f, 0.3f, 0.6f);
slider->slider_value = 50; // default range from 0 to 100
slider->dependence(SMARTIES_WIDGET_DEF_PUCK); // default
SET_CALLBACK(slider, &sliderHandler);
```
\(^3\) Sequence of finger taps separated by less than 200 ms.
After the initialization, the `smarties` instance is ready to run on a thread, `smarties->run()`. The library provides access to the events that can be handled in a classic "event loop":
```c
SmartiesEvent *evt;
while ((evt = smarties->getNextEvent()) != NULL) {
    puck *p = evt->puck;                // the puck of this event
    float x = (p->getX()) * wallWidth;  // x pos in the wall
    float y = (p->getY()) * wallHeight; // y pos in the wall
    // switch on event type ...
    switch (evt->type) {
    case SMARTIES_TYPE_CREATE:
        // a new puck: create an associated WallCursor
        p->app_data = new WallCursor(x, y);
        break;
    case SMARTIES_TYPE_DELETE:
        delete (WallCursor *)p->app_data; // remove wall cursor
        // let the library delete the puck itself
        smarties->deletePuck(p); // method name assumed; 'delete' is a C++ keyword
        break;
    // ... handle the other event types
    }
}
```
In the code above we assume that we have a `WallCursor` class that draws a cursor at a given position, and the code just (i) creates an instance of this class for each new puck and stores it in the field of the puck object reserved for the application; and (ii) removes the wall cursor if the puck is deleted. Store and restore puck events can also be handled by using methods defined in the `WallCursor` class.
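For instance, two more cases in the same switch could cover them; the event-type names below are assumed by analogy with SMARTIES_TYPE_CREATE/DELETE, and hide()/show() are hypothetical WallCursor methods.

```c
case SMARTIES_TYPE_STORE:
    // puck dragged into the storage corridor: hide its wall cursor but
    // keep the WallCursor instance (and thus the puck's work) alive
    ((WallCursor *)p->app_data)->hide();
    break;
case SMARTIES_TYPE_RESTORE:
    // puck dragged back out: show the cursor again with its saved state
    ((WallCursor *)p->app_data)->show();
    break;
```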
We assume that the application has a picker to select graphical objects rendered on the wall, and that such objects can be attached to a cursor. Here is an example of coding pick-and-drop interaction (tap to pick) using Smarties touch events.
```c
case SMARTIES_TYPE_TAP: {
    WallCursor *wc = (WallCursor *)p->app_data;
    if (wc->attached_object != NULL) {
        wc->attached_object = NULL; // drop
    } else {
        wc->attached_object = pickObject(x, y); // pick
    }
    break;
}
```
Widget callback functions are called in the same manner from the event loop, for synchronization purposes and to allow passing on data that depends on the interaction context:
```c
case SMARTIES_TYPE_WIDGET:
evt->widget->handler(evt->widget, evt, some_data);
break;
```
In our example, the slider callback just calls the `setSize` method of the `WallCursor` class that changes the cursor size:
```c
void sliderHandler(
    SmartiesWidget *w, SmartiesEvent *evt, void *user_data) {
    WallCursor *wc = (WallCursor *)w->puck->app_data;
    wc->setSize(w->slider_value);
}
```
The example illustrates how implementing mobile multi-user input for a wall application with libSmarties closely resembles the usual development of interactive applications using an event loop. Here multi-user pick-and-drop is supported with code very similar to what a developer would write for a single mouse. Thus, Smarties allows developers to quickly prototype input for mobile multi-user interaction, so as to move quickly to more interesting aspects, for instance collaborative pick-and-drop: one user can pick an object with a puck on one side of the wall and share the puck with another user, who can then drop the object on the other side.
Although the library treats commands executed simultaneously as FIFO (e.g. when two users want to activate the same free puck), it does not deal with complex operations that may cause conflicts in the state of the application, for instance if a user tries to pick a graphical object that is already picked by someone else in our example. These situations are highly application dependent and as such need to be handled by the wall application itself, e.g. by adding a picked state to graphical objects that is checked in `pickObject` for our example.
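A minimal sketch of that fix might look as follows; GraphObject and findObjectAt are hypothetical application helpers, not part of libSmarties.

```c
// Give graphical objects a "picked" flag so pickObject refuses
// objects already held by another puck.
GraphObject *pickObject(float x, float y) {
    GraphObject *obj = findObjectAt(x, y); // the application's hit test
    if (obj == NULL || obj->picked)
        return NULL;       // nothing there, or already picked: conflict
    obj->picked = true;    // mark as held; cleared again on drop
    return obj;
}
```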
**APPLICATION EXAMPLES**
Besides toy examples for testing, we developed three wall display applications to demonstrate our framework. These server applications are developed in different rendering engines, a Java one (zvtm-cluster [28]) and two C++ ones (Equalizer [7], and Qt\(^4\) with OpenMPI\(^5\)), showing how Smarties is independent of the rendering engine. The first application was used to design the Smarties concepts and client interface, informed by user studies. The other two use libSmarties and are drastically different, demonstrating the generality of the Smarties system.
**a. Object Grouping (server in ZVTM cluster, Java)**
In a workshop we conducted on potential wall display uses, a group of biologists felt wall displays could be an appropriate environment to collaborate for their task of cataloging photos of plants sent by volunteers in the field. Depending on their expertise, they sort and tag the images based on specific characteristics (origin, leaf or stem shape and color, flower family, etc.), compare them with existing images for similarities, and group them into entries of existing plants. Similar needs for wall display use have been identified in [10] where a team of neuroscientists needed to compare and classify brain images to study variations in the brain.
Motivated by such scenarios, we developed a prototype application (Smarties client and wall application) that allows users to access one or more objects on the wall display, apply properties (tagging), and perform actions on them (grouping and moving). The prototype can be seen in Fig. 3 and was preceded by two iterations used to run laboratory experiments.
**Interface:** A puck’s behavior is set through toggle buttons on the widget area: (i) a select mode adds or removes objects in a group associated with the puck by a simple tap when the corresponding cursor is on the object; (ii) a move mode where a simple gesture on the touch area moves together all the objects selected by the puck; and, in our final prototype, (iii) a cursor-inside mode that allows interaction inside an object as if it were a classical desktop application window. In this last case the cursor associated with the puck is confined inside the object and the touch area of the mobile device acts as a touchpad (in our prototype we use it to treat some objects as post-it notes where free hand drawing and annotation is possible).
\(^4\) http://qt.digia.com/ \(^5\) http://www.open-mpi.org/
The Smarties widget area also has buttons to perform actions: “Gather” groups spatially all the selected objects of a puck; “Deselect All” deselects all objects selected by a puck; and “Annotate” pops up a keyboard and adds a text tag to the objects selected by a puck. These behaviors and actions indicate how a puck goes beyond cursor control and can be associated with multiple wall objects and properties.
**User Studies:** With an initial prototype, we ran a first user study comparing Smarties to an interface where objects on the wall were represented on the client device, a simplification of a world in miniature (WiM) interface [34]. Participants, in pairs, had to group rectangles on two different locations on the wall, either based on their color or on a small text label that forced participants to see rectangles up-close (see Fig. 4). We varied the difficulty of the task by controlling the number of rectangles to be classified (10 or 30) and by optionally adding distractor rectangles (0, 10 or 30) using a third color or label.
We found that Smarties (i) leads to fewer input conflicts, i.e. manipulation of the same object by two users (a conflict happened in 1.9% of the trials for Smarties and in 8.6% of the trials for WiM); (ii) leads to fewer errors, i.e. moving an object to the wrong place (error rate of 5.3% for Smarties and of 14.3% for WiM); and (iii) showed better performance when tasks became more difficult (presence of distractors and number of objects). Moreover, participants gave it significantly higher subjective scores on speed, accuracy, comfort and cognitive demand. We also noticed that participants could use the touch area of the Smarties client while keeping their attention on the wall.
In a second experiment we explored how users shared and reused pucks, refined their selections, and tagged sets of objects. The task was motivated by our biologists scenario: participants had to select groups of objects and then progressively refine these selections depending on different roles assigned to them. Optimal strategies led to exchanging selections by sharing pucks and keeping some selections alive for reuse. Overall, pucks helped users share and exchange work. Nevertheless, their reuse comes at the cost of display clutter: users either kept potentially useful pucks by placing them out of the way, or decided to continue using them for other tasks to avoid having many pucks on the screen, risking duplicating work later. This led to the design of the puck “storage” area on the left of the widget area, which allows short term storage (persistence during a working session). Application programmers can turn this into long term storage by saving the state of the stored pucks and their widgets, and resending them to mobile clients at connection time in the next session.
**b. Multiple Lenses and DragMags (server Equalizer, C++)**
Despite their size and resolution, wall displays are relatively small compared to existing data sets and big high resolution images (e.g. galaxy surveys). This led to the study of pan-and-zoom navigation alternatives [25]. However, these techniques are not well adapted to multi-user contexts, as they affect the entire screen and prevent concurrent navigation. We developed a prototype (Fig. 5) where users can create and use several fisheyes, magnification lenses and DragMags to navigate scenes, allowing local multi-user multi-scale navigation. We used Equalizer, a powerful platform for developing cluster applications for intensive, but fast, graphics rendering.
**Interface:** When created, a puck is a simple touchpad cursor on the wall. Actions are performed on the object (lens or anchor) that is currently under the puck cursor. Users can use a button on the widget area to create a magnification lens at the cursor’s position. They can then change the type of lens with a popup menu (magnification, magnification with transparency, or fisheye), transform a magnification lens into a DragMag (a lens whose focus or “anchor” is at a remote location), or move it to a new location using a tap-and-drag gesture.
A puck can also be attached to a specific object with a toggle button, and any subsequent puck movements move the attached object. This is interpreted by the wall application as locking the object to that puck, making it inaccessible to other pucks. Thus lenses can act as territories that mobile users can lock and move with them.
A DragMag can be manipulated by two pucks, one attached to the lens itself and one to its anchor. Users can move the anchor around to see content from different areas of the wall close to their position (as in [5, 19]). This can be done collaboratively by two users, each manipulating one puck.
We grouped global widgets (i.e., independent of the active puck) at the bottom of the widget area. A drop down menu loads a new scene, and another sets the behavior of two lenses bumping each other. We use a 2D physical model where the lenses are considered as disks with their center attached to the ground with a spring. A global slider is used to change the strength of the spring from rigid, where bumping lenses don’t move at all, to flexible, where they are pushed out of the way by other lenses and spring back when possible.
**Advanced Functionality:** Thanks to libSmarties it was easy to add multi-touch features: A two finger pinch changes the magnification factor, or resizes the lens if it is preceded by a tap. A three finger pinch changes both the size and the magnification factor so that the content rendered in the lens does not change. We also use a five finger pinch for lens creation (finger expansion) and deletion (finger contraction).
We used the store area to “bookmark” positions on the scene. Dragging a puck attached to a lens into the store area will hide the lens, but the wall application remembers both the position and the properties of the lens (type, magnification factor, etc.). If a user restores the puck, the wall application restores the lens and its properties to its original position.
**Input Extensions:** Although we developed this application using Smarties, we easily extended it to other input techniques, such as implicit input by tracking user movement. We used the distance of the user to the wall to change a lens’ magnification factor (keeping the viewing area constant), and the position of the user to have the lens follow her. This required some setup work (linking the prototype to a motion tracking system and writing code for the appropriate computations). When it comes to the user interaction, i.e., allowing a user to enable and disable these features, we added 10 lines of code to the wall application: two toggle buttons on the clients and corresponding widget handlers that enable/disable the features related to the different input techniques. By following an MVC development architecture we were able to easily share the event handling code from Smarties with other inputs.
**c. Wall Native Cursors (server in Qt with OpenMPI, C++)**
Several pieces of software allow sharing a mouse, keyboard and clipboard between computers. In the presence of a rendering wall cluster, one can use the mouse and keyboard of a computer outside the cluster to control different machines of the cluster by moving the cursor from the “edge” of one screen to another. This allows interacting with the native window system of each machine for testing or admin purposes.
**Interface:** We implemented such software on top of Smarties and extended it further. An active puck moves the native cursor of the screen it is on, and the client’s touch area is used as a touchpad: a one finger tap is a left click; two and three finger taps are middle and right clicks; tap and drag is a press and drag; and two moving fingers emulate a mouse wheel (with four directions). For text input the clients contain a popup keyboard, and buttons that emulate frequently used keyboard shortcuts (e.g. CTRL+Z). In the wall application, we forward the pointer/key events to the slave machines, and we added a transparent overlay covering each screen to render large cursors at the position of the pucks (larger than the native cursors), that are visible at a distance.
There are obvious advantages to using Smarties over a traditional sharing application on a laptop or desktop computer: users can move freely and at any distance in front of the wall with their mobile, while “moving” the cursor interactively from screen to screen. They can also create several pucks, reserving some for certain areas of the wall, or associating them to some selection (as each puck has its own clipboard). Moreover, several users with their own pucks can interact in front of the wall, working with the native windowing system of the screens closest to them, thus transforming the wall into a computer lab. Users can also share their work via the clipboard by exchanging pucks. Note, however, that as our native window manager does not support multi-pointers, if two (or more) active pucks are on the same screen, then they will all send pointer events, leading to cursor jumping. This can be solved by introducing a priority rule on the server side such as “the oldest puck in a screen controls the cursor”.
Figure 6: A drop-down menu, triggered by the top-left button of the widget area, to choose one of three replication modes.
**Replication Extension:** What came naturally to mind when testing this prototype was to replicate the interaction done in one screen to others. Our wall consists of 16 identical machines, each with two graphic cards driving two screens (30” and 2560×1600 each). We added a drop-down menu to the client (Fig. 6) for choosing ways to replicate interaction done in one screen with a puck: (i) no-replication (as the examples so far); (ii) all-machine replication (i.e., on half of the screens); and (iii) all-screen replication.
Replication can be used for graphical administration of a cluster. We configured the window manager (FVWM) so that a right click (3-finger tap) pops up a menu of common applications (a terminal, a package manager, a browser, etc.). A user can then open 16 terminals simultaneously (one on each machine) and start typing commands. She can start the package manager and install an application on all the machines as she would on her desktop computer (we installed Gimp and VLC this way), or start simultaneously a web browser to search and download a video on all machines (as we did with the basketball clip shown in our video figure).
We used this replication mechanism to experiment with a collaborative artistic performance scenario using the image editor Gimp\(^7\). In all-screen mode an artist starts Gimp and draws simultaneously on all 32 canvases. The artist can switch to no-replication mode to draw on a specific canvas, or collaborate with other artists, each using their own puck in no-replication mode. At the end the artist saves the 32 created images in machine-replication mode and uses a script to upload and combine all the images into a single art piece.
We also experimented with a propagation delay in the interaction replication. We added to the Smarties clients a toggle button to switch this propagation delay on and off, and a slider to set the delay \(d\). When this mode is on, the interaction performed by the puck on one screen is replicated to its adjacent screens after \(d\) ms, to their adjacent screens after \(2 \times d\) ms, and so on. This leads to interesting effects, such as being able to observe one’s interaction history.
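The delay schedule amounts to replaying an event on a screen at grid distance k from the origin screen after k × d ms; the sketch below assumes screens addressed by grid coordinates and a Chebyshev distance (the actual metric used is our assumption).

```c
#include <stdlib.h>

// Delay (ms) before replaying on screen (sx, sy) an event performed on
// screen (ox, oy), with base delay d. Distance metric assumed.
int replicationDelayMs(int sx, int sy, int ox, int oy, int d) {
    int dx = abs(sx - ox), dy = abs(sy - oy);
    int k = dx > dy ? dx : dy; // grid hops, counting diagonal neighbors as adjacent
    return k * d;
}
```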
Although this example started as basic cursor support for the wall, the ease of programming and flexibility of Smarties allowed us to think of creative ways to interact beyond what we originally envisioned.
\(^7\) http://www.gimp.org/
**DISCUSSION AND FUTURE WORK**
As Greenberg [11] explains, building toolkits is an iterative process. Smarties went through at least three major development iterations (see Applications), informed by usability studies of the interface and observations of its use in complex tasks, while progressively hiding nonessential housekeeping and input managing tasks. It has several desired characteristics of groupware toolkits [11]: it works with common languages (C++, Java) and a known event programming paradigm, hides low level implementation details related to communication and sharing, and can be used in a few lines of code. More importantly, the flexibility of the library gave us the means to think creatively and come up with diverse and often playful application examples.
As researchers in wall display interaction and visualization, it gave us the freedom to:
- quickly set up and run pilot studies to determine detailed interaction needs and explore possibilities for applications and systems we develop;
- easily prototype and run studies where the main focus is not the interface design (e.g. studies on visual perception), without worrying about interaction choices and mobility constraints (e.g. how can we have moving participants that need to provide answers both by typing and pointing);
- conduct iterative interface design, by progressively discovering interaction needs, and slowly replacing functionalities developed for Smarties by other interaction means (e.g. the addition of motion tracking in the Lenses example).
As Olsen mentions [26], there are several ways to evaluate a toolkit. We hope that our application demonstrations showed the generality of Smarties for wall display input support. Our system is designed to reduce the effort of adding input control and goes beyond previous work on wall display input support, particularly during the early stages of wall application development. Future work includes a more thorough evaluation of our system by observing its use by other developers. We will also extend Smarties (protocol and libraries) to support complex Multi Display Environments (e.g., several walls and tabletops) and geographically distributed settings. Currently developers can customize the mobile Smarties interface using predefined widgets and events. We plan to extend our system to allow developers to create their own widgets and gestures.
Beyond the toolkit extensions and evaluation, we will study the potential of the Smarties interface as more than a prototyping input mechanism. This requires studies comparing this interface to other interaction techniques adapted to mobile settings, such as laser pointers mounted on mobile devices, mid-air gesture input, and direct touch input on the wall. In this context it is important to understand the cost of attention switching between the wall and the mobile when users manipulate widgets on the Smarties interface. Finally, we will investigate how the Smarties interface, which has a moveable ownership context (e.g. groups of objects attached to a moving puck), affects the perception of working territories [32], spatial separation [36] and coordination [23], and coworker and workspace awareness during collaboration.
RELATED WORK
There is a large body of work on wall display interaction techniques using touch or a pen to reach and manipulate remote content (e.g. [9, 4]), accessing and manipulating content using pointing (e.g. [22]), freehand gestures (e.g. [25, 37]), or combinations of pointing and mobile devices [25]. There is also work on using custom interaction props (e.g. a motion tracked brain prop to rotate virtual brains [10]), simple physical devices [2] or tangible widgets attached to a tablet [16]. This work often requires specialized hardware (e.g. markers, devices, or touch enabled walls), or at the very least a setup, training and calibration phase.
Mobile devices, such as smartphones and tablets, are widely available and familiar to users, and have been used as input for remote displays. Although there are several techniques that use the mobile’s camera to interact with large displays, they often require visual markers to identify the remote display (e.g. [30]). Recently, Touch Projector [6] allowed interaction via a live video captured with the mobile’s camera, without the need for markers on the remote screens. This work generally requires holding the mobile at eye level to look at the remote display through its camera, and is better suited for brief interactions than long term use.
Touch screens of new generation mobiles can be used to interact with wall displays without the need for additional tracking technology. They allow user mobility, while having a large enough interaction surface to accommodate more complex input. In Hachet et al. [13] multiple users can see on their devices a view of a 3D object displayed on the wall. Olwal et al. [27] use mobile devices to display and interact with parts of radiology material projected on a larger display. They support multi-touch navigation gestures for manipulating content. These approaches, following the idea of peephole displays [39], assume that the mobile device is aware of the content displayed on the wall, or can at the very least render part of it. On the other hand, the Overlay interface [31] uses the touch display only as input, by defining interaction areas for each user on a wall display. Similarly, ARC-Pad [20] uses only the mobile’s touchpad to combine absolute (tap) and relative (drag) cursor positioning on a wall display. Smarties falls in the middle: our touch area has pucks representing links to display content (but not the content itself), and it is configurable while being application agnostic. It is closer to older approaches using PDAs, that treat the mobile device as a personal tool palette (e.g. [29]), or as cursor and keyboard controllers (e.g. in Pebbles [21], discussed later).
The majority of this work on interaction supports only very specific tasks (e.g. point and select, pan and zoom), and needs to be re-thought in a fully operational environment where long term use may be tiring, text needs to be entered, and the wall includes interactive application windows [10, 38] rather than simple targets. The rest tend to be complex to implement and are highly application dependent. Our goal is to provide a means to easily add complex interaction support to wall applications that is easy to set up and use for prototyping.
Existing toolkits for developing interaction on walls focus on other aspects. The SDG Toolkit [35], and nowadays native operating system support for multi-user interaction (e.g. [14]), focus on managing classic input devices attached to a screen (mice, keyboards, touch input). Supporting user locomotion is challenging, even when we consider carrying mobile versions of these devices, as in real life users switch frequently between tasks that require different devices. jBricks [28], ZOIL [17] and iStuff [2] provide customizable bridges between remote displays and different possible input devices, but require programmers to define or use low level communication protocols to treat the input events.
Pebbles [21], although not a toolkit, is a concept close to our work: it includes two different mobile applications, one that sends cursor and keyboard events from a PDA to a remote machine and one that provides widget controllers. Smarties goes further: it is a development toolkit that requires programming only on the server side (not the mobile), with few and simple lines of code for communication, and thus allows quick input prototyping of collaborative apps. It has a larger input vocabulary (gestures and widgets), creates shortcuts to content on the wall beyond simple cursors, and allows storing and sharing of interactive work between users.
Work on Single Display Groupware (SDG) [33] has investigated the effects of such environments on user behavior (e.g. [36]), problems in following fast moving cursors [3], and multiple cursor awareness and identification (e.g. [15]). As pucks are often represented as cursors on the wall, this work has influenced some of our designs (for example, differently colored cursors [12]). These important research directions are orthogonal to our work, as we focus more on the control side (puck UI), but they need to be considered on the server side in real world applications that go beyond prototyping.
CONCLUSION
Smarties is an input system for wall sized display application prototyping. It consists of an application agnostic client that acts as the input interface and runs on multiple mobile devices, a communication protocol between the clients and the wall application, and a library that implements the protocol and handles input management.
The mobile application is made up of multiple interactive pucks and associated widgets (e.g. buttons, sliders, menus, text fields) that allow for command activation, and for changing properties of the wall application or of the puck behavior. A puck can be associated with content on the wall display (cursors, objects, groups of objects), and users can store and share pucks and thus their interaction work. Each wall application can customize a puck’s widgets to fit its particular needs. A few lines of code initialize and set up Smarties, with event and widget management handled through an event loop and callback functions.
Through three application examples, we demonstrated how Smarties supports very different wall applications, with different interaction needs, developed using different wall display software technology. We hope the ease and flexibility of Smarties will help wall application designers quickly add mobile multi-user interaction support to their systems.
The Smarties software is available at http://smarties.lri.fr/ under free software licenses.
Exercise 2
Mostly Qt
4 marks
(must be demonstrated in a laboratory class in week 4, 5, or 6)
Aims:
This exercise introduces the Qt packages that can be used to build graphical user interfaces for C++ programs.
Objectives:
On completion of this exercise, students should be able to:
- Create a simple GUI for a C++ program using procedural code to build an interface using basic Qt GUI classes;
- Use QtDesigner to compose interfaces within a graphical editor that generates the boiler-plate Qt/C++ code needed to populate windows with GUI widgets;
- Use the boost XML serialization libraries to save complex data structures to disk (see the sketch after this list);
- Implement simple interactive GUI based programs in C++.
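As a taste of the boost objective above, the save/load round trip usually looks like the sketch below; the class, member names, and file name are placeholders, not part of the exercise specification.

```cpp
#include <fstream>
#include <string>
#include <vector>
#include <boost/archive/xml_iarchive.hpp>
#include <boost/archive/xml_oarchive.hpp>
#include <boost/serialization/nvp.hpp>
#include <boost/serialization/string.hpp>
#include <boost/serialization/vector.hpp>

// A serializable record: BOOST_SERIALIZATION_NVP supplies the XML
// element names, and the same serialize() drives both save and load.
class Contact {
    friend class boost::serialization::access;
    std::string name;
    std::vector<std::string> phones;
    template <class Archive>
    void serialize(Archive& ar, const unsigned int /*version*/) {
        ar & BOOST_SERIALIZATION_NVP(name);
        ar & BOOST_SERIALIZATION_NVP(phones);
    }
public:
    Contact() {}
    explicit Contact(const std::string& n) : name(n) {}
};

int main() {
    Contact c("Ada");
    {   // save to XML
        std::ofstream ofs("contact.xml");
        boost::archive::xml_oarchive oa(ofs);
        oa << BOOST_SERIALIZATION_NVP(c);
    }
    {   // load it back
        Contact d;
        std::ifstream ifs("contact.xml");
        boost::archive::xml_iarchive ia(ifs);
        ia >> BOOST_SERIALIZATION_NVP(d);
    }
    return 0;
}
```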
Overview
Text based interfaces (menu-select etc) are so very 1980s. Nowadays, essentially all programs that have any need for direct user interaction will require graphical interfaces. Web-based interfaces are often required. For cases where web use is inappropriate, modern languages like C# and Java supply standard GUI classes and frameworks. It's a bit more problematical with C++.
Microsoft does its best to make Microsoft defined GUI libraries the “standard” for Windows applications built with Visual Studio C++. In the Linux world, there are several competing graphics libraries, but the most successful appears to be Qt (which can also be used on Windows).
Qt started as a project by a few Norwegians in need of a good C++ graphical user interface toolkit. The original TrollTech company that created Qt was taken over by Nokia. For several years, Nokia did most of the development work on this toolkit. Nokia’s problems competing with iToys led to the sale of the Qt software to a services company, Digia. Most of Qt is free; there are some commercial extensions; Digia can act as a consultancy for commercial developers.
Qt involves extensions to the standard C++ language. Qt classes contain new syntactic elements such as “slots” and “signals”. Qt code has to pass through a pre-compiler that converts the code into conventional C++. Editing, compilation, and linkage of Qt programs can be a little complex when done at the command line. Fortunately, IDEs like NetBeans provide automation tools that simplify the entire build process.
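A minimal Qt program gives the flavor of the signal/slot syntax: the button's clicked() signal is wired to the application's quit() slot with QObject::connect. (The moc pre-compiler only becomes involved once you declare your own Q_OBJECT classes with custom slots; this example uses only stock ones.)

```cpp
#include <QApplication>
#include <QPushButton>

int main(int argc, char *argv[]) {
    QApplication app(argc, argv);

    // One stock widget, one signal-slot connection: clicking the button
    // fires clicked(), which invokes the application's quit() slot.
    QPushButton quitButton("Quit");
    QObject::connect(&quitButton, SIGNAL(clicked()),
                     &app, SLOT(quit()));

    quitButton.show();
    return app.exec();
}
```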
Qt has many libraries. There are libraries for basic windows, for “widgets” (the standard text-boxes, check-boxes, radio-buttons etc as are used in all GUI systems), its own collection classes, classes that simplify the construction of multi-threaded programs, its own wrappers for TCP/IP communication, XML parsers, and a whole lot more. In addition to the libraries, a Qt installation will generally include a number of helper applications such as **QtAssistant** (demonstrated in exercise 1, this is essentially a tool for viewing reference documentation), **QtDesigner** (a GUI editor for building GUI interfaces for your programs), and **QtLinguist** (which helps create “resources”, e.g. sets of error messages and prompts, needed if you have to create a program that supports multiple languages English/French/German/...).
The QtAssistant program has a considerable amount of tutorial material (see below) along with the API documentation for Qt classes. You should work through some of these tutorials in addition to completing the tasks defined for this exercise. The actual code is all provided in the Qt tutorials, so getting the Qt examples to run is simply a matter of cutting and pasting code from the tutorial into files that you create within NetBeans projects. The tutorials take a very careful incremental approach, providing lots of explanatory commentary on each new feature introduced.
### The same old ...
You should already have built GUIs.
In CSCI110, you constructed GUI interfaces by composing HTML “form pages”. Your “code” was written in HTML, but it was still code. The code consisted of instructions for the browser to layout the GUI, handle most of the GUI events automatically, and call your Javascript code for a few specially chosen events. You will have used HTML `<table>` or HTML `<fieldset>` components to layout the elements of your GUI; maybe you will have used CSS as an alternative layout approach. You defined your event handling by adding `onEvent` (e.g. onclick, onmouseover, onsubmit, onchange) attributes to chosen interactive HTML elements. The attribute values would have been Javascript calls to functions that you had written to handle selection of an element, to show some highlighting, or to check entered data before sending the data to a remote web-server.
In CSCI213, you will have used the swing libraries and written Java code to create JFrame windows. You would have used instances of Java layout managers (GridLayout, GridBagLayout etc) to organise the placement of interactive components such as instances of JButton. You would have handled events by first defining classes that implement interfaces such as ActionListener (something that will handle action events such as those generated by clicking a JButton); your class would have had an `actionPerformed()` method that did whatever processing was required (comparable to your Javascript functions that handled events in a web page). In your overall Java program, you would have created an instance of your ActionListener class and then added that instance as an “event listener” on some element (JButton or other) in your GUI.
Hopefully, CSCI213 will also have covered the modern way of building a Java GUI using a GUI builder such as the one that is a part of NetBeans. Modern Java code relies entirely on auto-generated GUI handling elements that employ quite different layout managers from the long outdated GridBagLayout and others. In modern Java GUI building, an automatic code generator is used to create the code for all the low level mechanics of instantiating GUI widgets, placing them in a window, registering event listeners and so forth. The code generator leaves you with stub functions that you must complete by writing code to actually perform some action when the event occurs.
Qt is just the same:
• A few differences in terminology.
• A slightly different way of linking event sources (like action buttons) to handler functions.
Do try some of the Qt supplied tutorials!
The tasks given below for this exercise cover only a small aspect of Qt and don't really delve far into its event handling. The tasks focus on the kinds of GUI that you might need to build in a simple CSCI222 assignment or in a very basic CSCI321 project.
The initial Qt tutorial, (http://doc.qt.digia.com/4.3/tutorial-t14.html) shows how to build an interactive game – built in 14 steps (!) each adding a tiny bit more functionality, each step being fully explained in lengthy commentary. It's a great way to get a basic understanding of Qt's event handling, and to get a feel for some of its widget classes.
It is worth completing that tutorial, all of its steps! As the code is supplied complete, file-by-file, it's just a matter of cutting and pasting into a series of NetBeans Qt projects. It will take you about one hour; you should try it in your own time (not in the CSCI222 labs).
Next, look at the tutorials in QtAssistant (look for Qt Assistant in 'Dash Home' as Qt 4 Assistant):
The AddressBook tutorial should be attempted, again in your own time, as it illustrates different approaches from those taken in the tasks below and gives a feel for how to build editing and correction aspects into an application.
The Widgets examples contain many simple exercises:
- Analog Clock
- Calculator
- Calendar Widget
- Character Map
- Code Editor
- Digital Clock
- Group Box
- Icons
- Image Viewer
- Line Edits
- Movie
- Scribble
- Shaped Clock
- Sliders
- Soft Keys
- Spin Boxes
- Styles
- StyleSheet
- Tablet
- Tetris
- Tooltips
- Validators
- Wiggly
- Window Flags
More and more Qt tutorials -
Threading and Concurrent Programming
Qt makes use of threads, and the signals and slots mechanism can now be used to communicate between threads.
Tools
Qt is equipped with a range of capable tool classes, from containers and iterators to string handling and manipulation.
Multimedia Framework
Qt provides low-level audio support on Linux, Windows, and Mac platforms by default and audio plugin APIs to allow developers to implement their own audio support for custom devices and platforms. The Phonon Multimedia Framework brings multimedia support to Qt applications.
SQL
Qt provides extensive database interoperability, with support for products from both open source and proprietary vendors.
XML
XML parsing and handling is supported through SAX and DOM compliant APIs as well as streaming APIs. The QXmlQuery and QXmlSchema classes in the QtXmlPatterns module provide support for querying XML files and custom data models.
Networking
Qt is provided with an extensive set of network classes to support both client-side and server-side network programming.
Inter-Process Communication
Simple, lightweight inter-process communication can be performed using shared local sockets.
Wireless
Qt provides integrated support for wireless (WLAN) networking.
Help System
Support for interactive help is provided by the QtHelp framework; applications can use its classes to display specially prepared documentation.
State Machine
Qt provides a powerful hierarchical finite state machine through the Qt State Machine framework.
Animation Framework
These examples show how to use the animation framework to build highly animated, high-performance user interfaces.
Multi-touch Framework
Support for multi-touch input makes it possible for developers to create new intuitive user interfaces.
OpenGL and OpenVG Examples
Qt provides support for integration with OpenGL implementations on all platforms, giving developers the opportunity to display hardware-accelerated 3D graphics alongside a conventional user interface.
Desktop
Qt provides features to enable applications to integrate with the user’s preferred desktop environment. Features such as system-tray icons, access to the desktop widget, and support for drag and drop can be used to improve the appearance of applications and take advantage of the underlying desktop facilities.
Drag and Drop
Qt supports native drag and drop on all platforms via an extensible MIME-based system that enables applications to send data to each other in the most appropriate format. Drag and drop can also be implemented for internal use by applications.
If you do end up using C++ in a CSCI321 project, you will almost certainly have to delve into some of those tutorial examples.
In addition, QtAssistant contains an “Overview” section. This contains detailed explanations of the conceptual structure of GUI applications. For example, you will have heard a little about the “Model-View-Controller” paradigm in CSCI204/CSCI205/CSCI222. The commentary in QtAssistant is much more complete:
Task 1: A Simple Qt version of AddressBook
Create a new NetBeans project – this one is a C/C++ Qt Application:
This QtAddr1 project will use procedural code to generate a set of MyRecord objects, storing them in a STL vector. It then builds a simple Qt GUI that presents a tabular view of the data in the records.
Qt's approach to getting a tabular view of data is very similar to the approach in Java. The library (C++ Qt, or Java swing) provides a **generic table class**. The code for this class handles things like displaying column headers and, usually, working with a scrollbar mechanism if the number of data rows exceeds the number that can be displayed in the window. The generic table class must of course be able to determine how many columns will be needed, and must be able to access the data that are to go in any specific (row, column) cell of the table. The generic table class will work with an application defined **TableModel class** (the library will typically provide some AbstractTableModel base class). The programmer will define a suitable TableModel class; methods will supply the number of columns, the column headers, the values of cells (and, if editing is permitted on the table, there will be methods for changing values).
The actual data are going to be instances of some application defined record structure (they will be instances of MyRecord in this task). They will be held in some collection – e.g. STL vector, STL map, Java ArrayList, or whatever. The TableModel will have an instance member to store this collection. (My old [CSCI213 lab exercises](#) contain an example of how to build table models etc in Java.)
So you have:
- **Table** – a GUI library widget for display of data. It owns an instance of an application defined TableModel.
- **TableModel** – an auxiliary helper class that supplies the table class with information
such as the number of rows and columns in the table, the column headers, the values of cells. The TableModel will have an instance member that is some collection class that holds the actual data records.
- Collection class for the data.
- Many instances of an application defined data record.
The application structure should be as shown here:
You don't need to specify any extra include directories, nor do you need to add link libraries. These issues are handled by the “qmake” build process.
(You will need to create a folder for images, and add ~10 images of people of your own choice.)
Classes:
MyException and MyRecord:
- no change from previous.
MyTableModel:
Class MyTableModel is a subclass of – an extension of – the Qt library's QAbstractTableModel. Its definition includes a Qt feature – Q_OBJECT:
Q_OBJECT?
Q_OBJECT is a macro; it adds lots of code to this class – code that allows an instance of class MyTableModel to work with Qt signals (events). It's a necessary part of the class – but we don't (at this level of usage) need to understand anything about how it works.
QAbstractTableModel has numerous methods but we only need to override four in this simple example; we add an extra data member – recordsCollection – and provide a method to set this member.
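The class declaration itself is not reproduced here; a minimal sketch, consistent with the extended declaration listed in Task 2, would be:

```cpp
#include <QAbstractTableModel>
#include <vector>
#include "MyRecord.h"
using namespace std;

typedef MyRecord* RecordPtr;

class MyTableModel : public QAbstractTableModel {
    Q_OBJECT
public:
    MyTableModel(QObject *parent);
    // Give the model a pointer to the collection holding the records
    void addData(vector<RecordPtr> *data) { this->recordsCollection = data; }
    int rowCount(const QModelIndex &parent = QModelIndex()) const;
    int columnCount(const QModelIndex &parent = QModelIndex()) const;
    QVariant data(const QModelIndex &index, int role = Qt::DisplayRole) const;
    QVariant headerData(int section, Qt::Orientation orientation, int role) const;
private:
    vector<RecordPtr> *recordsCollection;
};
```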
The use of the rowCount() and columnCount() methods is obvious – it allows the generic table display mechanism to format the table correctly.
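For example (three columns, matching the headerData() listing that follows):

```cpp
int MyTableModel::rowCount(const QModelIndex & /* parent */) const {
    return recordsCollection->size();
}

int MyTableModel::columnCount(const QModelIndex & /* parent */) const {
    return 3;   // Image, Name, Roles
}
```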
The `headerData()` method will return `QString` data objects that represent the headers for the table columns.
```cpp
QVariant MyTableModel::headerData(int section, Qt::Orientation orientation, int role) const
{
    if (role == Qt::DisplayRole && orientation == Qt::Horizontal) {
        switch (section) {
        case 0:
            return QString("Image");
        case 1:
            return QString("Name");
        case 2:
            return QString("Roles");
        }
    }
    // Every path must return something; default to an empty QVariant
    return QVariant();
}
```
The `data()` method is used to retrieve cell data for a particular cell identified by the row and column arguments.
Qt differs a little here from the otherwise very similar Java JTable class. In Java, the display code makes a single request for cell data and determines how to display the data from the type of object returned. Qt makes multiple requests for cell data – the role argument identifies what the display code requires. It may be text data, or it may be decorations such as images. The implementation of the `data()` function must resolve these different requests. The code should return a suitable data object or, in some cases, an empty `QVariant` object.
One point to note in my implementation of the data() method is the use of the Boost library’s “for-each” construct. Here, I need to construct a single string that has strung together all the entries in the vector<string> roles attribute of a MyRecord. I could use STL iterators (vector<string>::const_iterator it=grps.begin() etc); but I'm really not a fan of STL iterators which I regard as clumsy and intrusive. I use them where I have to. I prefer a Perl style (or C#, or Java style) for-each loop. The Boost library supplies it.
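A sketch of a data() implementation along those lines (getName(), getRoles() and getImage() are assumed accessors on MyRecord; the decoration branch reverses the base64 encoding performed by the getImage() helper function shown below):

```cpp
#include <boost/foreach.hpp>
#include <QtGui/QImage>
#include <QtGui/QPixmap>

QVariant MyTableModel::data(const QModelIndex &index, int role) const {
    RecordPtr rec = recordsCollection->at(index.row());
    if (role == Qt::DisplayRole) {
        switch (index.column()) {
        case 1:
            return QString(rec->getName().c_str());
        case 2: {
            // String together all the roles, Perl-style, with BOOST_FOREACH
            string joined;
            BOOST_FOREACH(const string& r, rec->getRoles()) {
                if (!joined.empty()) joined += ", ";
                joined += r;
            }
            return QString(joined.c_str());
        }
        }
    }
    if (role == Qt::DecorationRole && index.column() == 0) {
        // Rebuild a pixmap from the base64 image text held in the record
        QByteArray coded(rec->getImage().c_str());
        QImage img;
        img.loadFromData(QByteArray::fromBase64(coded), "JPG");
        return QPixmap::fromImage(img);
    }
    return QVariant();
}
```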
Mainline:
The mainline code, nothing too hard:
I have an auxiliary function getImage() that uses code similar to the little Qt Image loading example from exercise 1; it converts the image into a STL string:
```cpp
#include <QtCore/QString>
#include <QtCore/QByteArray>
#include <QtCore/QBuffer>
#include <QtGui/QImage>

static string getImage(string filename) {
    // Qt library has its own string class; convert the STL string
    QString qtfilename(filename.c_str());
    QImage animage;
    bool readimage = animage.load(qtfilename);
    if (!readimage) {
        cout << "Image load failed for " << filename << endl << "Bye" << endl;
        exit(1);
    }
    QImage resized = animage.scaledToWidth(50, Qt::FastTransformation);
    QByteArray ba;
    QBuffer buf(&ba);
    resized.save(&buf, "JPG");          // encode the resized image as JPEG
    QByteArray coded = ba.toBase64();   // base64 so it can be held as text
    string result(coded);               // relying on QByteArray operator const char*
    return result;
}
```
My createData() method simply creates a series of MyRecord data structures and adds them to the global g_theRecords collection:
The really complex Qt display code:
All that is left to do is implement the code that builds the GUI – a window, containing a table, with scrollbars.
```cpp
static void createData()
{
// Hard code procedural creation of a few records so that can
// have some data to show in the Qt based GUI
RecordPtr nxt;
string id;
string name;
string aRole;
string imagestr;
string file;
// You will need to adjust filenames etc to match the image files
// that you provide
{
id = "tom";
name = "Thomas";
file = "./images/tom.jpg";
imagestr = getImage(file);
nxt = new MyRecord(id);
nxt->setName(name);
nxt->setImage(imagestr);
aRole="User";
nxt->addRole(aRole);
aRole="Manager";
nxt->addRole(aRole);
g_theRecords.push_back(nxt);
}
    {
        id = "dick";
        name = "Dick";
        file = "./images/dick.jpg";
        imagestr = getImage(file);
        nxt = new MyRecord(id);
        nxt->setName(name);
        nxt->setImage(imagestr);
        aRole = "Accountant";
        nxt->addRole(aRole);
        g_theRecords.push_back(nxt);
    }
    // ... further records created in the same pattern
}
```
Simple GUIs in Qt are really simple.
The code here:
• Create an instance of QApplication – essentially this means initialise the Qt graphics system.
• Create an instance of the QTableView, an instance of the table model class, and load the table model with some data. The 0 argument to the model is passed to its base class constructor. What it is actually doing here is saying that there will be no parent GUI window – this model is to be displayed in a tableview that is the entire window.
• The tableview is linked to the model, and one of its display options is adjusted: because the records include images, the view is told to determine the height of each row from its contents rather than use the default row height (the height of one line of text);
• The table is shown.
• The Qt application's exec() method is called – this starts all the interactive event handling, like responding to the scrollbars. The default behaviour is for app.exec()'s event-handling loop to terminate and for the function to return if the window's close box is clicked.
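Putting those bullets together, the mainline display code amounts to something like the following sketch (using the addData() setter from the table model class):

```cpp
#include <QtGui/QApplication>
#include <QtGui/QTableView>
#include "MyTableModel.h"

// getImage(), createData() and g_theRecords as defined earlier in this file

int main(int argc, char *argv[]) {
    createData();
    QApplication app(argc, argv);                    // initialise Qt
    QTableView tableview;                            // the view is the whole window
    MyTableModel *tablemodel = new MyTableModel(0);  // 0 = no parent window
    tablemodel->addData(&g_theRecords);
    tableview.setModel(tablemodel);
    tableview.resizeRowsToContents();                // rows sized for the images
    tableview.show();
    return app.exec();                               // run the interactive event loop
}
```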

**Task 1 – completion (1 mark)**
Demonstrate your working QtAddr1 project.
Task 2: A more complex Qt GUI
Simple GUIs are simple in Qt. But by the time you are getting to something more elaborate, the procedural code to create all the widgets and link them together is getting a bit tiresome.
The following is from Trolltech’s Qt AddressBook tutorial (it shows the interface and a part of the code needed to generate it):
It’s not the kind of code that is fun to write.
So, try using QtBuilder – an interactive visual editor that lets you build a GUI by selecting and placing widgets in a symbolic window. QtBuilder generates the messy boiler-plate code.
QtBuilder is integrated with NetBeans – you start a Qt project and then ask for a “new Qt form”. NetBeans temporarily passes control to QtBuilder; when you exit from QtBuilder, NetBeans picks up the generated files and adds them to your project.
So, what is the application for which we want a GUI?
Another variation on the ongoing MyRecord exercise – now we want the ability to create instances of MyRecord via a data entry form, and have the list of records displayed.
A common idiom for applications that have data entry forms, list displays, record displays etc is the “Tabbed Pane Interface”. The main window has tabs that will approximate different “Use Cases” of the program – a tab for record creation, a tab to view a list of records etc. These tab panes will hold instances of standard GUI widgets.
Fill in the form, add the new record, scroll down the display:
The interface consists of
- A main window that holds a “tab” widget; the tab widget has two tabs (container widgets) – with “currentTabText” fields holding the names “Create record” and “View list”;
- The first tab QWidget has a number of labels, line edit fields, and push buttons;
- The second tab has a tableview similar to that illustrated in the last task.
(It is all done at a fairly naïve level; things like the table do not grow if the overall window is enlarged.)
This interface can be built using the QtBuilder application invoked from within NetBeans.
As shown above, you start by adding a new “Qt Form”; there is a choice for the basic style (dialog, main window, etc.) – in this case it's best to start with a main window:
NetBeans creates three files – MyWindow.h, MyWindow.cpp, and MyWindow.ui; and starts QtBuilder. You can then edit the user interface in QtBuilder; on exit, it generates some more files that get hidden in your NetBeans project. If you need to modify your GUI, e.g. add some more input elements, you can later pick the MyWindow.ui file in NetBeans and “open” it – QtBuilder
again starts (sometimes it's a bit slow starting).
QtBuilder has a display with a work area, a palette of Widgets, and panes summarising the overall structure of the GUI and showing detailed properties of the currently selected GUI element.
The new Main window that you start with will be shown in the work area as blank apart from a “Menu Bar” at the top. We don't need the menu bar, so the first step should be to select it (right-click) and delete it.
Next, select the “Tab Widget” entry (in the Containers section of the Widget Box palette) and add a tab widget to the work area, resizing it to fill the window. A Tab Widget starts with two tabs; you should immediately change the display text to appropriate titles (“Create record” and “View List”).
Then it is a matter of adding labels (from Widget Box/Display Widgets), “line edits” (from Widget Box/Input Widgets) and “push buttons” (from Widget Box/Buttons) to the first tab. When you add a widget to a GUI, QtBuilder adds some code to the C++ files it is composing. It's actually defining a new C++ class to represent your GUI; each added widget becomes a new (public) data member of that class; code is added to the constructor for the class that will instantiate an instance of the appropriate Qt widget class and adjust its coordinates to match the placement on the work area. QtBuilder assigns names to the elements as they are added - “label”, “label_2”, “label_3”, “lineEdit”, “lineEdit_2” etc. You should rename fields that your program will be manipulating; names like “idField” and “nameField” make the code much easier to understand than “lineEdit” and “lineEdit_2” (you don't manipulate things like labels in your own code, so you don't have to rename them).
Layout is somewhat crude. QtBuilder does work with a pixel grid that helps align widgets, but things are simply positioned at absolute (x, y) coordinates. You should follow my design (you could try to be more ambitious – but then sort out your own problems!) and have the following:
- A label “Identifier” and a line edit (idField);
- A label “Name” with another line edit (nameField)
- A label “Picture” with a third line edit (pictureField); this line edit is not to allow direct editing (it will be for the name of an input file selected by a dialog), so go to the properties pane and deselect the “Enabled” checkbox;
- A Push Button with text “Select Image File”, renamed as imageSelector;
- A label “Roles” and another of those disabled input fields (renamed as roleList);
- Another Push Button with text “Add Role” renamed as roleButton;
- Another line edit, renamed as newRoleName;
- A final Push Button, addRecord.
The other tab is simpler – it just holds a Table View (from Widget Box/Item Views (Model Based)). (Note that there is another 'Table Widget' – don't pick that.)
When you have finished creating the GUI in QtBuilder, save and exit. NetBeans will resume; your NetBeans project should now be something like the following:
There doesn't seem to be much there:
But, we can edit main:
```cpp
#include <QtGui/QApplication>
#include "MyWindow.h"
int main(int argc, char *argv[]) {
// initialize resources, if needed
// Q_INIT_RESOURCE(resfile);
QApplication app(argc, argv);
// create and show your widgets here
return app.exec();
}
```
and add the code to create and show our widgets:
```cpp
QApplication app(argc, argv);
MyWindow win;
win.show();
return app.exec();
```
and it runs (sort of):
![Image of a window with a table view and line edit fields]
The line edit fields can be selected, and data can be edited; but of course, the buttons do nothing and the table view is empty.
But how does it work when there is no code there?
The code is hidden – if you switch from NetBeans project view to file view you can see the files:
The file ui_MyWindow.h contains the class declaration and constructor generated by QtBuilder (don't try editing this class file, just view it when you need the names of the widgets):
```cpp
#include <QtGui/QLabel>
#include <QtGui/QLineEdit>
#include <QtGui/QMainWindow>
#include <QtGui/QPushButton>
#include <QtGui/QStatusBar>
#include <QtGui/QTabWidget>
#include <QtGui/QTableView>

class Ui_MyWindow
{
public:
    QWidget *centralWidget;
    QTabWidget *tabWidget;
    QWidget *tab;
    QLabel *label;
    QLineEdit *idField;
    QLabel *label_2;
    QLineEdit *nameField;
    QLabel *label_3;
    QLineEdit *pictureField;
    QPushButton *imageSelector;
    QLabel *label_4;
    QLineEdit *roleList;
    QPushButton *roleButton;
    QLineEdit *newRoleName;
    QPushButton *addRecord;
    QWidget *tab_2;
    QTableView *tableView;
    QStatusBar *statusBar;

    void setupUi(QMainWindow *MyWindow)
    {
        if (MyWindow->objectName().isEmpty())
            MyWindow->setObjectName(QString::fromUtf8("MyWindow"));
        MyWindow->resize(750, 550);
        centralWidget = new QWidget(MyWindow);
        centralWidget->setObjectName(QString::fromUtf8("centralWidget"));
        QSizePolicy sizePolicy(QSizePolicy::Expanding, QSizePolicy::Expanding);
        sizePolicy.setHorizontalStretch(0);
        sizePolicy.setVerticalStretch(0);
        sizePolicy.setHeightForWidth(centralWidget->sizePolicy().hasHeightForWidth());
        centralWidget->setSizePolicy(sizePolicy);
        tabWidget = new QTabWidget(centralWidget);
        tabWidget->setObjectName(QString::fromUtf8("tabWidget"));
        tabWidget->setGeometry(QRect(0, 0, 750, 554));
        tab = new QWidget();
        // ... remaining widget creation and placement elided
    }
};
```
The generated MyWindow class has an instance data member “widget” of type Ui_MyWindow. Since all the data members in class Ui_MyWindow are public, code that will be written for class MyWindow will be able to directly manipulate the widgets.
So, we will be able to get a MyWindow object to handle a click on the QPushButton imageSelector by putting up an appropriate file dialog that asks the user to select an image file. Of course, we have to write that code – QtBuilder cannot guess what it is that we want the imageSelector button to do.
The working version of the program will also require classes and functions from the earlier
exercise. The classes MyException, MyRecord, MyTableModel should all be recreated in this new project. MyException and MyRecord should be unchanged; MyTableModel will require a few additions that will allow for editing of data (editing is only partially implemented in this example task).
The main line code should create some records as done in the last task so that there are data to display:
```cpp
#include <QtGui/QApplication>
#include "MyWindow.h"
#include <iostream>
#include <vector>
#include "MyRecord.h"
#include "MyTableModel.h"
using namespace std;

typedef MyRecord* RecordPtr;
vector<RecordPtr> g_theRecords;
string getImage(string filename) {...21 lines}
static void createData() {...164 lines}
int main(int argc, char *argv[]) {
// initialize resources, if needed
// Q_INIT_RESOURCE(resfile);
createData();
QApplication app(argc, argv);
MyWindow win(&g_theRecords);
win.show();
return app.exec();
}
```
Next, the MyWindow class must be more completely defined and some additions must be made to MyTableModel.
MyTableModel
The changes here are fairly limited – they involve the addition of functions that will allow data to be edited (actually, not much editing support is provided in this task – it was just the right time to add such functions even if their implementations are just stubs).
The class must now include versions of a few more public virtual functions defined in class QAbstractTableModel:
```cpp
class MyTableModel : public QAbstractTableModel {
Q_OBJECT
public:
MyTableModel(QObject *parent);
void addData(vector<RecordPtr> *data) {
this->recordsCollection = data;
}
int rowCount(const QModelIndex &parent = QModelIndex()) const;
int columnCount(const QModelIndex &parent = QModelIndex()) const;
QVariant data(const QModelIndex &index, int role = Qt::DisplayRole) const;
QVariant headerData(int section, Qt::Orientation orientation, int role) const;
bool setData(const QModelIndex &index, const QVariant &value, int role = Qt::EditRole);
bool insertRows(int position, int rows, const QModelIndex &index = QModelIndex());
bool removeRows(int position, int rows, const QModelIndex &index = QModelIndex());
    Qt::ItemFlags flags(const QModelIndex &index) const;
    void addRecord(RecordPtr newOne);
};
```
(Method addRecord() is application specific; for this simple example, it's a more limited but more convenient way of adding records than use of insertRows().)
The flags() method will return an indicator that elements in the table can be selected but cannot actually be edited in situ (you could be more ambitious and allow some editable columns). The other overridden methods are just stubs. The addRecord() method puts a pointer to new record in the vector<RecordPtr> used to store data.
```cpp
bool MyTableModel::insertRows(int position, int rows, const QModelIndex & /* index */) {
    beginInsertRows(QModelIndex(), position, position + rows - 1);
    // Add a row
    endInsertRows();
    return true;
}
bool MyTableModel::removeRows(int /* position */, int /* rows */, const QModelIndex & /* index */) {
return true;
}
bool MyTableModel::setData(const QModelIndex & /* index */, const QVariant & /* value */, int /* role */) {
return false;
}
```
Three of the overridden methods from QAbstractTableModel are just stubs (as shown above) – maybe later one might add more complete editing functionality.

The flags() method for this application simply returns a value indicating that elements are selectable – we want to be able to click on a view of the table and determine which row was selected:
```cpp
Qt::ItemFlags MyTableModel::flags(const QModelIndex &index) const {
    if (!index.isValid())
        return Qt::ItemIsEnabled;
    // return QAbstractTableModel::flags(index) | Qt::ItemIsEditable;
    return QAbstractTableModel::flags(index) | Qt::ItemIsSelectable;
}
```
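The application defined addRecord() method is not reproduced here; a minimal sketch consistent with the description in the next paragraph:

```cpp
void MyTableModel::addRecord(RecordPtr newOne) {
    int row = recordsCollection->size();
    beginInsertRows(QModelIndex(), row, row);
    recordsCollection->push_back(newOne);
    endInsertRows();
    emit layoutChanged();   // tell the attached table view to redraw
}
```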
The beginInsertRows() and endInsertRows() methods (from the base class) are used by Qt to coordinate updates; the emit operation (part of Qt's event-handling system) sends a signal to the table view object that is displaying the data in this model. On receipt of the signal, the table view object will redraw its display.
**MyWindow**
This is where the work of the application gets done. A few more methods (most are “slots” - i.e. event handling functions) will have to be added to those generated automatically:
The constructor for the class will take an argument that is a pointer to the vector<RecordPtr> collection for the data records; this gets used when constructing the instance of MyTableModel. The code in the constructor will initialise its instance Ui::MyWindow and then complete the configuration of the interface.
There are a few minor adjustments, e.g. the “selection mode” for the table view should allow for row selection rather than selection of individual cells.
The main setting up is the establishment of the event-handling links. As always, we have to connect the element that emits an event to the code that handles that event (so it's really just the same as adding an onEvent to a HTML tag). There are three buttons in the first tab pane of the interface – the imageSelector, the roleButton, and the addRecordButton. Buttons emit clicked signals. Such signals are to be routed to “slot” functions – the imageSelector's click should result in a call to the chooseFile() slot defined in this class.
```cpp
MyWindow::MyWindow(vector<RecordPtr> *theData) {
data = theData;
widget.setupUi(this);
tablemodel = new MyTableModel(0);
    tablemodel->addData(theData);
widget.tableView->setModel(tablemodel);
widget.tableView->resizeRowsToContents();
widget.tableView->setSelectionBehavior(QAbstractItemView::SelectRows);
// We have to put in the event handling links
connect(widget.imageSelector, SIGNAL(clicked()), this, SLOT(chooseFile()));
connect(widget.roleButton, SIGNAL(clicked()), this, SLOT(addRole()));
connect(widget.addRecord, SIGNAL(clicked()), this, SLOT(addRecord()));
connect(
widget.tableView,
SIGNAL(clicked(const QModelIndex&)), this,
SLOT(itemSelection(const QModelIndex& ))
);
}
```
Code like `connect(widget.imageSelector, ...)` looks like C++, but really it isn't. It's part of the Qt language extensions. It gets converted into genuine C++ by the pre-compiler. The SIGNAL and SLOT “macros” take the names of signals and slot methods (the argument list for a slot must match that defined for the signal).
The first connect statement is interpreted as follows:
- The emitter of the event will be the QPushButton object represented by the imageSelector data member in the widget (instance of Ui::MyWindow);
- The signature of the signal function is clicked();
- The object with the slot to handle the signal is this object (i.e. the MyWindow object);
- The slot method of that handler object is called chooseFile().
Of course, you must know the declarations of the signal functions; you find these in the documentation in QtAssistant.
The fourth of the connect actions sets up a handler for a mouse click in the table view. The clicked() signal is actually defined in an ancestor class – QAbstractItemView:
**Signals**
- `void activated (const QModelIndex & index )`
- `void clicked (const QModelIndex & index )`
- `void doubleClicked (const QModelIndex & index )`
- `void entered (const QModelIndex & index )`
- `void pressed (const QModelIndex & index )`
- `void viewportEntered ()`
The slot method that handles this “clicked” signal must have a signature that matches the clicked function – hence `void MyWindow::itemSelection(const QModelIndex & index)`.
Don't include variable names in the argument lists when specifying the connect statement. The following will not compile:
```cpp
connect(
widget.tableView,
SIGNAL(clicked(const QModelIndex& index)), this,
SLOT(itemSelection(const QModelIndex& index))
);
```
the statement must have the form:
```cpp
connect(
widget.tableView,
    SIGNAL(clicked(const QModelIndex&)), this,
    SLOT(itemSelection(const QModelIndex&))
);
```
Selecting a file with an image
```cpp
void MyWindow::chooseFile() {
// Display a standard file dialog and let user select a file
    // Dialog to open with MyWindow as parent, the message is Open Image
// Start in current directory, restrict to standard image file types
QString fileName = QFileDialog::getOpenFileName(this,
"Open Image", ".", "Image Files (*.png *.jpg *.bmp)");
if (!fileName.isEmpty()) {
// A file was picked, set the pictureField to match
widget.pictureField->setText(fileName);
}
}
```
The program should open a standard file dialog that allows a user to select jpg, png, or bmp files. If a file is selected, the dialog will return with a non-empty filename. This name is to be copied into the disabled line edit element “picture field”:
Selecting a row in the table view
This operation doesn't really do anything in this example. It shows how you would identify a selection in the table view; in a more complex example, there could be a third tab in the tab pane interface that gets used to display all details of a selected record. But for now, it simply prints a log message identifying the row selected:
```cpp
void MyWindow::itemSelection(const QModelIndex & index) {
cout << "Selected row " << index.row() << "\n";
RecordPtr p = data->at(index.row());
cout << p->getName() << endl;
}
```
Adding a role
When the “add role” button is clicked, the program should check for a string in the new role line edit widget. If there is a string, it is to be appended to the string currently in the role list (disabled) line edit:
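The listing for this slot is not reproduced here; a sketch of the behaviour just described:

```cpp
void MyWindow::addRole() {
    // Take the candidate role from the editable line edit, tidied up
    QString role = widget.newRoleName->text().trimmed();
    if (role.isEmpty())
        return;
    // Append it to the disabled role-list line edit
    QString current = widget.roleList->text();
    widget.roleList->setText(current.isEmpty() ? role : current + " " + role);
    widget.newRoleName->clear();
}
```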
Note the use of QString variables. Qt defines its own string class. (It seems that almost every C++ library defines its own string class.) You will usually end up with code converting instances of class String from library X into class String from library Y (or to STL string, and sometimes even to const char*!). Qt's string class at least has a trimmed() method built in!
(There are problems with using std::string and QString and char* etc; firstly, you keep having to convert formats; secondly, you increase the chance of memory leaks when you keep allocating and reassigning objects.)
Adding a record
This entails some real work via addRecord().
The various input fields must be checked – has the user provided all of: identifier, name, roles, and image file? Can the image be loaded (using the function in file main.cpp as an extern function)? Is the identifier already in use? If there are any problems with the data, the program should display an error dialog and then return without adding the record.
If the data are good, a new instance of MyRecord should be created and added to the collection, and then the input fields should be tidied up so that a subsequent record can be added.
(The code filling in the fields of the MyRecord object is making further use of Qt's string libraries. A QString has regex methods – such as split(); there are iterators that work through collections of strings, QStringList, etc. Qt has many...
other collection classes, e.g. QMap, QList. Generally, I find the classes in the Qt libraries more convenient than those in STL. They have greater functionality, clearer semantics, and are far less syntactically fussy than the STL templates. Of course, given your experience with STL in CSCI204 you may well prefer the STL classes.)
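The start of addRecord() is not reproduced here; a sketch of the validation and record construction just described (checkForId() is the helper shown below, getImage() is the extern function from main.cpp):

```cpp
void MyWindow::addRecord() {
    string id = widget.idField->text().trimmed().toStdString();
    string name = widget.nameField->text().trimmed().toStdString();
    string file = widget.pictureField->text().toStdString();
    QString roles = widget.roleList->text().trimmed();
    if (id.empty() || name.empty() || file.empty() ||
            roles.isEmpty() || checkForId(id)) {
        QMessageBox::warning(this, "Problem",
                "Please complete all fields, with an identifier not already in use");
        return;
    }
    string imagestr = getImage(file);
    RecordPtr newRec = new MyRecord(id);
    newRec->setName(name);
    newRec->setImage(imagestr);
    // Split the accumulated role list using QString's regex support
    foreach (const QString& r, roles.split(QRegExp("\\s+"))) {
        string aRole = r.toStdString();
        newRec->addRole(aRole);
    }
    // ...
```

The supplied fragment then completes the method: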
```cpp
tablemodel->addRecord(newRec);
widget.tableView->resizeRowsToContents();
QMessageBox report;
report.setWindowTitle("Success!");
report.setText("Record added");
report.exec();
widget.idField->clear();
widget.nameField->clear();
widget.pictureField->clear();
widget.newRoleName->clear();
widget.roleList->clear();
```
Finally, add the new MyRecord to the data collection and clear up the input fields.
The helper function checkForId() is used when checking whether the identifier for a new record is unique:
```cpp
bool MyWindow::checkForId(string anid) {
vector<RecordPtr>::iterator it;
for (it = data->begin(); it != data->end(); it++) {
RecordPtr p = *it;
if (p->getId() == anid) return true;
}
return false;
}
```
**Task 2 – completion (2 marks)**
Demonstrate your working QtAddr2 project.
(Note: when building applications that involve a “Qt form” class you may find that your code in the NetBeans editor is marked by numerous error flags even though you cannot see anything wrong. The problem is related to the way the “qmake” script works. It has to generate C++ files with the definitions of code for displaying the widgets, along with associated header files; these files get deleted and recreated on each build. If one of the generated header files is missing when the NetBeans editor loads one of your class files, the editor will end up flagging spurious errors. Usually these disappear when you build the application; occasionally, an odd error flag or two may get left. Don't worry – if the application builds it should be OK.)
**Task 3: Saving the records**
If we are going to the trouble of constructing a collection of address records, we would probably want to save them to a file on disk when the program ended, and reload them when the program was again run.
If you look at Qt's own address book example, you will see saving and loading files as one-liners. The Qt example has an address book that is a simple QMap<QString, QString>; both Qt classes QMap and QString have overloaded operator `<<` output functions (and input functions) making it easy to serialize the data and send them to a text file.
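For example, a sketch in that style (contacts is a hypothetical QMap<QString, QString> member matching Qt's example):

```cpp
#include <QtCore/QFile>
#include <QtCore/QDataStream>
#include <QtCore/QMap>
#include <QtCore/QString>

void saveContacts(const QMap<QString, QString>& contacts) {
    QFile file("addressbook.dat");
    if (file.open(QIODevice::WriteOnly)) {
        QDataStream out(&file);
        out << contacts;   // QMap and QString both overload operator<<
    }
}
```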
The MyRecord objects are a bit more complex. There are variable numbers of roles, and there are those map data elements (OK, we haven't used those since exercise 1, but they are meant to be there and to be used). The coding will inevitably be a little more complex.
Some approaches are discussed below – the one I favour (and which you should implement) is last.
**How to save? Define ostream& operator<<(ostream&, const MyRecord&)?**
How would you save the address book collection?
One approach would be to use text files. You would need output code that would start with an integer value for the number of MyRecord entries and then would have a series of MyRecords all serialized.
It would be a really bad idea to have a function that took a MyRecord argument and interrogated it for each data element and then wrote this out as text. A better approach would rely on definition of friend functions for the MyRecord class that used a private print function. The `ostream& operator<<` and `istream& operator>>` functions would get added to the MyRecord class declaration:
```cpp
friend ostream& operator<<(ostream& out, const MyRecord&);
void printOn(ostream& out) const;
};
inline ostream& operator<<(ostream& out, const MyRecord& rec) {
rec.printOn(out);
return out;
}
```
(The definition of the corresponding input function is left as an exercise!)
The `printOn()` function would output a text representation of a MyRecord:
```cpp
void MyRecord::printOn(ostream& out) const {
out << id << endl;
out << name << endl;
out << email << endl;
out << image << endl;
out << info << endl;
out << roles.size() << endl;
vector<string>::const_iterator it1;
for (it1 = roles.begin(); it1 != roles.end(); it1++) {
out << (*it1) << endl;
}
out << phones.size() << endl;
map<string, string>::const_iterator it2;
for (it2 = phones.begin(); it2 != phones.end(); it2++) {
out << (*it2).first << " " << (*it2).second << endl;
}
out << addresses.size() << endl;
map<string, string>::const_iterator it3;
for (it3 = addresses.begin(); it3 != addresses.end(); it3++) {
out << (*it3).first << " " << (*it3).second << endl;
}
out << other.size() << endl;
map<string, string>::const_iterator it4;
for (it4 = other.begin(); it4 != other.end(); it4++) {
out << (*it4).first << " " << (*it4).second << endl;
}
}
```
An address book (i.e. a vector<RecordPtr>) could be saved as follows:
```cpp
int main(int argc, char *argv[]) {
// Use the createData from earlier examples to populate the "address book"
createData();
ofstream ofile("archive.txt");
ofile << g_theRecords.size() << endl;
vector<RecordPtr>::const_iterator it;
for(it=g_theRecords.begin(); it != g_theRecords.end(); it++) {
RecordPtr nxt = *it;
ofile << (*nxt);
}
ofile.close();
return EXIT_SUCCESS;
}
```
This would create a text file with the record count on its first line, followed by the lines for each serialized record.
The file could be read back by a program given appropriate definitions of istream& operator>> and a readFrom(istream&) function.
But there would likely be all sorts of “Gotchas” - bugs holding up development.
Strings can contain newline characters –

The output file will have two lines (“Dick ...up” and “a company ... tax”); this would break a naively coded readFrom function that tried something like:
```cpp
void MyRecord::readFrom(istream& inputstream) {
string info;
inputstream >> info;
this->setInfo(info);
int rolecount;
inputstream >> rolecount;
}
```
Other problems would occur with things like spaces in the “keys” for the key-value collections that form part of a MyRecord:
```cpp
string email = "boss_tcm@ourcompany.com.au";
string phones = "Phones";
string mbl = "Mobile";
string phnnum = "0466666666";
this->addKeyValue(phones, mbl, phnnum);
```
```cpp
string others = "Other";
string key = "Height";
string value = "1.80m";
this->addKeyValue(others, key, value);
```
These data result in lines in the save file like:
```
Boss
Manager
1
Mobile 0466666666
0
2
golf handicap 6
Height 1.80m
```
which will break a naively coded readFrom function that had something like
```cpp
string key;
inputstream >> key;
string value;
inputstream >> value;
this->addKeyValue(others,key,value);
```
(Tom would end up with an “Other” attribute “Golf” with value “handicap” and there would be some remaining input on the line to disrupt the next read action.)
A correct implementation would be a lot more complex; strings would have to be delimited in some application defined way so that a readFrom() function could consume string data correctly.
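One common application defined scheme (a sketch, not the required solution) is to length-prefix every string so that readFrom() knows exactly how many characters to consume:

```cpp
static void writeString(ostream& out, const string& s) {
    // Count first, then the raw characters (newlines and spaces included)
    out << s.size() << '\n' << s << '\n';
}

static string readString(istream& in) {
    size_t n;
    in >> n;
    in.get();                  // consume the newline after the count
    string s(n, '\0');
    if (n > 0)
        in.read(&s[0], n);     // read exactly n characters
    in.get();                  // consume the trailing newline
    return s;
}
```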
Why not change to a scheme that inherently provides delimiters for data elements?
**XML!**
**Do it yourself with XML!**
XML data may be verbose but they have the great advantage of being self describing, with properly delimited elements.
It's quite easy to generate an XML file as output – you just remember to add output statements to put the begin tag and end tag XML tokens around each data element.
**Output of the address book would go something like:**
```cpp
out << "<address_book>" << endl;
vector<RecordPtr>::const_iterator it;
for(it=g_theRecords.begin(); it!= g_theRecords.end(); it++) {
RecordPtr p = (*it);
p->writeAsXML(out);
}
out << "</address_book>";
```
The output function writeAsXML() defined for the MyRecord class would similarly tag each data element.
This would result in a file something like:
```
<address_book>
<myrecord>
<id>tom</id>
<name>Thomas</name>
<email>boss_tom@ourcompany.com.au</email>
<image>....</image>
<info>Thomas ... </info>
<roles>
<role>Boss</role>
<role>Manager</role>
</roles>
<phones>
<phonerec>
<type>Mobile</type>
<number>0466666666</number>
</phonerec>
</phones>
<addresses></addresses>
<others>
```
The self-describing nature of XML makes the file structure much clearer. An entry `<addresses></addresses>` is easier to interpret than a blank line (“no addresses given”).
**But how would you read the data file back and recreate a collection of MyRecord objects?**
You certainly would not want to write your own XML parser. There are implementations of the W3C DOM parser for all languages, so you could get a C++ DOM parser class from some library and use it to create a “Document Object Model” tree-structure from the data in your save file.
You would then have to write code to traverse the DOM and extract the data needed to recreate a collection of MyRecord structures. Most of you will have written some limited code to do such tasks in CSCI110 where you wrote Javascript code to do things like `document.getElementById(...)`. But it’s hard work.
**The Larry Wall approach – *laziness and impatience rule ok***
It is only at universities that people write programs from scratch. In the real world, developers create useful working programs by assembling a large number of pre-built components, threading these together, and adding the tiny amount of code that really is unique to their application.
You aren’t the first to need to transfer data structures to and from files. You aren’t the first to have complex nested data structures with nested collections that will best be saved in XML format. Others have dealt with these problems before and created utilities to help you.
If you need to save and restore data using XML files, then one solution is to use the “serialization” libraries in the “boost” suite. One of the advantages of the boost libraries is that they already incorporate code to handle the STL data collections like vector and map.
The boost serialization libraries make it very easy to save and restore arbitrarily complex collections of records. (A cyclic graph data structure might present some challenges – could be difficult to represent as an XML tree-structure; but there are ways of dealing with such things.)
For this task, you will create two new NetBeans projects, both being made up almost entirely from code written for earlier tasks. The first project “BoostXMLExperiment1” will create an XML file from the standard data that you have been using to initially populate the data collection for your exercises on creating Qt displays. The second project “BoostXMLExperiment2” is a minor reworking of the program that you completed in task 2; instead of having fixed code to initialise the data collection it will start by reading an archive file, and it will write a new version of the archive before terminating.
The project requires “includes” for the C++ compiler, and libraries for the C++ linker:
The main() function calls your createData() function (extend your code so that some of the MyRecord instances have “info”, “phones”, “addresses” and “other” data added to them). It then opens an output file and uses the boost serialization libraries to save the collection:
#include "MyRecord.h"
typed MyRecord* RecordPtr;
vector<RecordPtr> g_theRecords;
string getImage(string filename) {...21 lines }
static void createData() {...189 lines }
int main(int argc, char *argv[]) {
createData();
ofstream ofile("archive.xml");
boost::archive::xml_oarchive oa(ofile);
oa & BOOST_SERIALIZATION_NVP(g_theRecords);
return EXIT_SUCCESS;
}
The line oa & BOOST_SERIALIZATION_NVP(g_theRecords) does look weird – but it's simply a matter of the guys who wrote the library defining an overloaded operator & function for the xml_oarchive class. (Namespaces get a bit complex with boost, so it is easiest if you use fully qualified class names – hence boost::archive::...)
That part worked because the boost library has code to handle the output of a STL vector object. But what about the MyRecords?
It's easy.
You add a public function to your class (after including a lot more header files):
The function is an inline “serialize” template that acts as both a “save” function and a “reload” function. (The BOOST_SERIALIZATION_NVP “macro” stands for “save/restore name-value pair”.)
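A sketch of that function, with member names matching the printOn() listing above (it goes inside the MyRecord class declaration, after including headers such as boost/serialization/nvp.hpp, string.hpp, vector.hpp and map.hpp):

```cpp
friend class boost::serialization::access;

template<class Archive>
void serialize(Archive& ar, const unsigned int version) {
    // The same statements save to an xml_oarchive and reload from an xml_iarchive
    ar & BOOST_SERIALIZATION_NVP(id);
    ar & BOOST_SERIALIZATION_NVP(name);
    ar & BOOST_SERIALIZATION_NVP(email);
    ar & BOOST_SERIALIZATION_NVP(info);
    ar & BOOST_SERIALIZATION_NVP(roles);
    ar & BOOST_SERIALIZATION_NVP(phones);
    ar & BOOST_SERIALIZATION_NVP(addresses);
    ar & BOOST_SERIALIZATION_NVP(other);
    ar & BOOST_SERIALIZATION_NVP(image);
}
```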
That's all there is to it!
Your serialize function just has transfer statements for each data member that is part of the persistent state of your object. (It would be quite common for a class that you define to have other data
members such as pointers to instances of collaborating classes; you just don't include such
“transient” data members in your list of serialization actions.)
The one function handles both input and output. The boost library guys have overloaded the
operator & for their xml_oarchive class to mean output, while for their xml_iarchive it means input.
Your program should run and produce an output file like:
```
<?xml version="1.0" encoding="UTF-8" standalone="yes" ?>
<!DOCTYPE boost_serialization>
<boost_serialization signature="serialization::archive" version="5">
<g_theRecords class_id="0" tracking_level="0" version="0">
<count>2</count>
<item_version>0</item_version>
<item class_id="1" tracking_level="1" version="1" object_id="0">
<id>tom</id>
<name>Thomas</name>
<email>boss_tom@ourcompany.com.au</email>
<info>Thomas is the founder and CEO of the company</info>
<roles class_id="2" tracking_level="0" version="0">
<count>2</count>
<item_version>0</item_version>
<item>Boss</item>
<item>Manager</item>
</roles>
<phones class_id="3" tracking_level="0" version="0">
<count>1</count>
<item_version>0</item_version>
<item class_id="4" tracking_level="0" version="0">
<first>Mobile</first>
<second>0466666666</second>
</item>
</phones>
<addresses>
<count>0</count>
<item_version>0</item_version>
</addresses>
<other>
<count>2</count>
<item_version>0</item_version>
<item>
<first>golf handicap</first>
<second>6</second>
</item>
<item>
<first>Height</first>
<second>1.80m</second>
</item>
<image>/9j/4AAQSkZJRg... (base64 image data elided)
</image>
<item class_id_reference="1" object_id="1">
<id>dick</id>
<name>Dick</name>
<email>Dick@yourcompany.com.au</email>
<info>Dick was recruited from Starbucks and so knows how to set up
a company so that pays no tax</info>
<roles>
<count>1</count>
<item_version>0</item_version>
<item>Accountant</item>
</item>
```
Finally (!), create another version of your program for editing and viewing the collection. It is to read in an xml file when it starts and write out an updated version when it finishes.
Recreate your QtAddr2 project. (Reminder: you cannot simply duplicate the directory at the command level or within NetBeans – the configuration files in the nbproject sub-directory will not be updated properly, and you will end up trying to build with the versions of header files from the old project.)
Edit the include files for the compiler, and the library files for the linker, in the project's properties. You do not need to specify any Qt files (this is a Qt project so they are sorted out automatically). You do have to add the references to the boost include directory and the boost serialization library.
Add #include <boost...> statements and the declaration of the serialize() method in your MyRecord.h file. You will also need to add a no-argument constructor – public: MyRecord() {}.
Change the main.cpp:
```cpp
#include <fstream>
#include <boost/archive/xml_iarchive.hpp>
#include <boost/archive/xml_oarchive.hpp>
#include <boost/serialization/vector.hpp>
#include <QtGui/QApplication>
#include "MyWindow.h"
using namespace std;

typedef MyRecord* RecordPtr;
vector<RecordPtr> g_theRecords;

int main(int argc, char *argv[]) {
    // initialize resources, if needed
    // Q_INIT_RESOURCE(resfile);
    {
        ifstream inputFile("archive.xml");
        boost::archive::xml_iarchive ia(inputFile);
        ia & BOOST_SERIALIZATION_NVP(g_theRecords);
    }
    QApplication app(argc, argv);
    MyWindow win(&g_theRecords);
    win.show();
    int result = app.exec();
    {
        // Scope the output archive so its destructor writes the closing
        // XML tags before the file is closed
        ofstream outputFile("archive.xml");
        boost::archive::xml_oarchive oa(outputFile);
        oa & BOOST_SERIALIZATION_NVP(g_theRecords);
    }
    return result;
}
```
It should all work!
You never imagined things could be this simple.
You can learn more about the boost serialization libraries at:
http://www.boost.org/doc/libs/1_52_0/libs/serialization/doc/tutorial.html
2015
**Task 3 – completion (1 mark)**
Demonstrate that your application can create persistent “address books”.
(Don't forget to “clean” all the projects to recover disk space when you have completed all the tasks in this exercise.)
|
{"Source-Url": "http://www.uow.edu.au/~nabg/222/Exercises/Exercise2.pdf", "len_cl100k_base": 12534, "olmocr-version": "0.1.53", "pdf-total-pages": 43, "total-fallback-pages": 0, "total-input-tokens": 61435, "total-output-tokens": 14890, "length": "2e13", "weborganizer": {"__label__adult": 0.0004477500915527344, "__label__art_design": 0.0003676414489746094, "__label__crime_law": 0.0002181529998779297, "__label__education_jobs": 0.0027637481689453125, "__label__entertainment": 9.238719940185548e-05, "__label__fashion_beauty": 0.0001575946807861328, "__label__finance_business": 0.00012767314910888672, "__label__food_dining": 0.0004200935363769531, "__label__games": 0.0008420944213867188, "__label__hardware": 0.0006618499755859375, "__label__health": 0.00023627281188964844, "__label__history": 0.0002219676971435547, "__label__home_hobbies": 0.00011688470840454102, "__label__industrial": 0.0002803802490234375, "__label__literature": 0.00023746490478515625, "__label__politics": 0.0001819133758544922, "__label__religion": 0.0004973411560058594, "__label__science_tech": 0.0018768310546875, "__label__social_life": 0.0001621246337890625, "__label__software": 0.00412750244140625, "__label__software_dev": 0.98486328125, "__label__sports_fitness": 0.0003364086151123047, "__label__transportation": 0.0004818439483642578, "__label__travel": 0.00026416778564453125}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 55955, 0.01733]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 55955, 0.46175]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 55955, 0.83295]], "google_gemma-3-12b-it_contains_pii": [[0, 2246, false], [2246, 6085, null], [6085, 7152, null], [7152, 7836, null], [7836, 10514, null], [10514, 10950, null], [10950, 12826, null], [12826, 13421, null], [13421, 13925, null], [13925, 14253, null], [14253, 15515, null], [15515, 16097, null], [16097, 17007, null], [17007, 18057, null], [18057, 19177, null], [19177, 20007, null], [20007, 20636, null], [20636, 21742, null], [21742, 23959, null], [23959, 24759, null], [24759, 25407, null], [25407, 27724, null], [27724, 28740, null], [28740, 31272, null], [31272, 32180, null], [32180, 34641, null], [34641, 35910, null], [35910, 37520, null], [37520, 38720, null], [38720, 38965, null], [38965, 40874, null], [40874, 42922, null], [42922, 44462, null], [44462, 44714, null], [44714, 45594, null], [45594, 47389, null], [47389, 50049, null], [50049, 50415, null], [50415, 51350, null], [51350, 51791, null], [51791, 53836, null], [53836, 55731, null], [55731, 55955, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2246, true], [2246, 6085, null], [6085, 7152, null], [7152, 7836, null], [7836, 10514, null], [10514, 10950, null], [10950, 12826, null], [12826, 13421, null], [13421, 13925, null], [13925, 14253, null], [14253, 15515, null], [15515, 16097, null], [16097, 17007, null], [17007, 18057, null], [18057, 19177, null], [19177, 20007, null], [20007, 20636, null], [20636, 21742, null], [21742, 23959, null], [23959, 24759, null], [24759, 25407, null], [25407, 27724, null], [27724, 28740, null], [28740, 31272, null], [31272, 32180, null], [32180, 34641, null], [34641, 35910, null], [35910, 37520, null], [37520, 38720, null], [38720, 38965, null], [38965, 40874, null], [40874, 42922, null], [42922, 44462, null], [44462, 44714, null], [44714, 45594, null], [45594, 47389, null], [47389, 50049, null], [50049, 50415, null], [50415, 
51350, null], [51350, 51791, null], [51791, 53836, null], [53836, 55731, null], [55731, 55955, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 55955, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 55955, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 55955, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 55955, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], [5000, 55955, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 55955, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 55955, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 55955, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 55955, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 55955, null]], "pdf_page_numbers": [[0, 2246, 1], [2246, 6085, 2], [6085, 7152, 3], [7152, 7836, 4], [7836, 10514, 5], [10514, 10950, 6], [10950, 12826, 7], [12826, 13421, 8], [13421, 13925, 9], [13925, 14253, 10], [14253, 15515, 11], [15515, 16097, 12], [16097, 17007, 13], [17007, 18057, 14], [18057, 19177, 15], [19177, 20007, 16], [20007, 20636, 17], [20636, 21742, 18], [21742, 23959, 19], [23959, 24759, 20], [24759, 25407, 21], [25407, 27724, 22], [27724, 28740, 23], [28740, 31272, 24], [31272, 32180, 25], [32180, 34641, 26], [34641, 35910, 27], [35910, 37520, 28], [37520, 38720, 29], [38720, 38965, 30], [38965, 40874, 31], [40874, 42922, 32], [42922, 44462, 33], [44462, 44714, 34], [44714, 45594, 35], [45594, 47389, 36], [47389, 50049, 37], [50049, 50415, 38], [50415, 51350, 39], [51350, 51791, 40], [51791, 53836, 41], [53836, 55731, 42], [55731, 55955, 43]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 55955, 0.0]]}
|
olmocr_science_pdfs
|
2024-12-09
|
2024-12-09
|
03f8c1ca933b7bb91fb502f9cc90acb6d3bee012
|
MLitB: Machine Learning in the Browser
Meeds, E.; Hendriks, R.; Al Faraby, S.; Bruntink, M.; Welling, M.
Published in:
PeerJ Computer Science
DOI:
10.7717/peerj-cs.11
Citation for published version (APA):
General rights
It is not permitted to download or to forward/distribute the text or part of it without the consent of the author(s) and/or copyright holder(s), other than for strictly personal, individual use, unless the work is under an open content license (like Creative Commons).
Disclaimer/Complaints regulations
If you believe that digital publication of certain material infringes any of your rights or (privacy) interests, please let the Library know, stating your reasons. In case of a legitimate complaint, the Library will make the material inaccessible and/or remove it from the website. Please Ask the Library: https://uba.uva.nl/en/contact, or a letter to: Library of the University of Amsterdam, Secretariat, Singel 425, 1012 WP Amsterdam, The Netherlands. You will be contacted as soon as possible.
MLitB: machine learning in the browser
Edward Meeds, Remco Hendriks, Said Al Faraby, Magiel Bruntink and Max Welling
Informatics Institute, University of Amsterdam, Amsterdam, The Netherlands
ABSTRACT
With few exceptions, the field of Machine Learning (ML) research has largely ignored the browser as a computational engine. Beyond an educational resource for ML, the browser has vast potential to not only improve the state-of-the-art in ML research, but also, inexpensively and on a massive scale, to bring sophisticated ML learning and prediction to the public at large. This paper introduces MLitB, a prototype ML framework written entirely in Javascript, capable of performing large-scale distributed computing with heterogeneous classes of devices. The development of MLitB has been driven by several underlying objectives whose aim is to make ML learning and usage ubiquitous (by using ubiquitous compute devices), cheap and effortlessly distributed, and collaborative. This is achieved by allowing every internet capable device to run training algorithms and predictive models with no software installation and by saving models in universally readable formats. Our prototype library is capable of training deep neural networks with synchronized, distributed stochastic gradient descent. MLitB offers several important opportunities for novel ML research, including: development of distributed learning algorithms, advancement of web GPU algorithms, novel field and mobile applications, privacy preserving computing, and green grid-computing. MLitB is available as open source software.
INTRODUCTION
The field of Machine Learning (ML) currently lacks a common platform for the development of massively distributed and collaborative computing. As a result, there are impediments to leveraging and reproducing the work of other ML researchers, potentially slowing down the progress of the field. The ubiquity of the browser as a computational engine makes it an ideal platform for the development of massively distributed and collaborative ML. Machine Learning in the Browser (MLitB) is an ambitious software development project whose aim is to bring ML, in all its facets, to an audience that includes both the general public and the research community.
By writing ML models and algorithms in browser-based programming languages, many research opportunities become available. The most obvious is software compatibility: nearly all computing devices can collaborate in the training of ML models by contributing
some computational resources to the overall training procedure and can, with the same code, harness the power of sophisticated predictive models on the same devices (see Fig. 1). This goal of ubiquitous ML has several important consequences: training ML models can now occur on a massive, even global scale, with minimal cost, and ML research can now be shared and reproduced everywhere, by everyone, making ML models a freely accessible, public good. In this paper, we present both a long-term vision for MLitB and a light-weight prototype implementation of MLitB, that represents a first step in completing the vision, and is based on an important ML use-case, Deep Neural Networks.
In Section ‘MLITB: Vision’ we describe in more detail our vision for MLitB in terms of three main objectives: (1) make ML models and algorithms ubiquitous, for both the public and the scientific community, (2) create a framework for cheap distributed computing by harnessing existing infrastructure and personal devices as novel computing resources, and (3) design research closures, software objects that archive ML models, algorithms, and parameters to be shared, reused, and in general, support reproducible research.
In Section ‘MLITB: Prototype’ we describe the current state of the MLitB software implementation, the MLitB prototype. We begin with a description of our design choices,
including arguments for using JavaScript and the other modern web libraries and utilities. Then we describe a bespoke map-reduce synchronized event-loop, specifically designed for training a large class of ML models using distributed stochastic gradient descent (SGD). Our prototype focuses on a specific ML model, Deep Neural Networks (DNNs), using an existing JavaScript implementation (Karpathy, 2014), modified only slightly for MLitB. We also report results of a scaling experiment, demonstrating the feasibility, but also the engineering challenges of using browsers for distributed ML applications. We then complete the prototype description with a walk-through of using MLitB to specify and train a neural network for image classification.
MLitB is influenced and inspired by current volunteer computing projects. These and other related projects, including those from machine learning, are presented in Section ‘Related Work.’ Our prototype has exposed several challenges requiring further research and engineering; these are presented in Section ‘Opportunities and Challenges,’ along with discussion of interesting application avenues MLitB makes possible. The most urgent software development directions follow in Section ‘Future MLitB Development.’
**MLITB: VISION**
Our long-term vision for MLitB is guided by three overarching objectives:
**Ubiquitous ML:** models can be trained and executed in any web browsing environment without any further software installation.
**Cheap distributed computing:** algorithms can be executed on existing grid, cloud, etc., computing resources with minimal (and possibly no) software installation, and can be easily managed remotely via the web; additionally, small internet enabled devices can contribute computational resources.
**Reproducibility:** MLitB should foster reproducible science with research closures, universally readable objects containing ML model specifications, algorithms, and parameters, that can be used seamlessly to achieve the first two objectives, as well as support sharing of ML models and collaboration within the research community and the public at large.
**Ubiquitous machine learning**
The browser is the most ubiquitous computing platform of our time, running, in some shape or form, on all desktops, laptops, and mobile devices. Software for state-of-the-art ML algorithms and models, on the other hand, consists of very sophisticated libraries written in highly specific programming languages within the ML research community (Bastien et al., 2012; Jia et al., 2014; Collobert, Kavukcuoglu & Farabet, 2011). As research tools, these software libraries have been invaluable. We argue, however, that making ML truly ubiquitous requires writing ML models and algorithms in web programming languages and using the browser as the computational engine.
The software we propose can run sophisticated predictive models on cell phones or super-computers; for the former, this extends the distributed nature of ML to a global internet. By further encapsulating the algorithms and model together, the benefit of powerful predictive modeling becomes a public commodity.
Cheap distributed computing
The usage of web browsers as compute nodes provides the capability of running sophisticated ML algorithms without the expense and technical difficulty of using custom grid or super-computing facilities (e.g., Hadoop cloud computing; Shvachko et al., 2010). It has long been a dream to use volunteer computing to achieve real massive-scale computing. Successes include SETI@Home (Anderson et al., 2002) and protein folding (Lane et al., 2013). MLitB is being developed not only to run natively on browsers but also for scaled distributed computing on existing cluster and/or grid resources and, by harnessing the capacity of non-traditional devices, for extremely massive-scale computing with a global volunteer base. In the former set-up, low communication overhead and homogeneous devices (a “typical” grid computing solution) can be exploited. In the latter, volunteer computing via the internet opens the scaling possibilities tremendously, albeit at the cost of unreliable compute nodes, variable power, limited memory, etc. Both have serious implications for the user, but, most importantly, both are implemented by the same software.
Although the current version of MLitB does not provide GPU computing, it does not preclude its implementation in future versions. It is therefore possible to seamlessly provide GPU computing when available on existing grid computing resources. Using GPUs on mobile devices is a more delicate proposition since power consumption management is of paramount importance for mobile devices. However, it is possible for MLitB to manage power intelligently by detecting, for example, if the device is connected to a power source, its temperature, and whether it is actively used for other activities. A user might volunteer periodic “mini-bursts” of GPU power towards a learning problem with minimal disruption to or power consumption from their device. In other words, MLitB will be able to take advantage of the improvements and breakthroughs of GPU computing for web engines and mobile chips, with minimal software development and/or support.
Reproducible and collaborative research
Reproducibility is a difficult yet fundamental requirement for science (McNutt, 2014). Reproducibility is now considered just as essential for high-quality research as peer review; simply providing mathematical representations of models and algorithms is no longer considered acceptable (Stodden, Guo & Ma, 2013). Furthermore, merely replicating other work, despite its importance, can be given low publication priority (Casadevall & Fang, 2010), even though reproducibility is considered a prerequisite for publication. In other words, submissions must demonstrate that their research has been, or could be, independently reproduced.
For ML research there is no reason for not providing working software that allows reproduction of results (for other fields in science, constraints restricting software publication may exist). Currently, the main bottlenecks are the time cost to researchers for making research available, and the incompatibility of the research (i.e., code) for others, which further increases the time investment for researchers. One of our primary goals for MLitB is to provide reproducible research with minimal to no time cost to both the
primary researcher and other researchers in the community. Following (Stodden, Borwein & Bailey, 2013), we support “setting the default to reproducible.”
For ML disciplines, this means other researchers should not only be able to use a model reported in a paper to verify the reported results, but also retrain the model using the reported algorithm. This higher standard is difficult and time-consuming to achieve, but fortunately the approach is being adopted more and more often, in particular by a sub-discipline of machine learning called deep learning. In the deep learning community, the introduction of new datasets and competitions, along with innovations in algorithms and modeling, has produced rapid progress on many ML prediction tasks. Model collections (also called model zoos), such as those built with Caffe (Jia et al., 2014), make this collaboration explicit and easy to access for researchers. However, there remains a significant time investment to run any particular deep learning model (due to compilation, library installation, platform dependencies, GPU dependencies, etc.). We argue that these are real barriers to reproducible research, and that choosing ubiquitous software and compute engines lowers them. For example, during our testing we converted a very performant computer vision model (Lin, Chen & Yan, 2013) into JSON format; it can now be used on any browser with minimal effort.
1. JSON: JavaScript Object Notation, json.org
In a nod to the concept of closures common in functional programming, our approach treats a learning problem as a research closure: a single object containing model and algorithm configuration plus code, along with model parameters, that can be executed (and therefore tested and analyzed) by other researchers.
**MLITB: PROTOTYPE**
The MLitB project and its accompanying software (application programming interfaces (APIs), libraries, etc.) are built entirely in JavaScript. We have taken a pragmatic software development approach to achieve as much of our vision as possible. To leverage our software development process, we have chosen, wherever possible, well-supported and actively developed external technology. By making these choices we have been able to quickly develop a working MLitB prototype that not only satisfies many of our objectives, but is as technologically future proof as possible. To demonstrate MLitB on a meaningful ML problem, we have similarly incorporated an existing JavaScript implementation of a Deep Neural Network into MLitB. The full implementation of the MLitB prototype can be found on GitHub (https://github.com/software-engineering-amsterdam/MLitB).
**Why JavaScript?**
JavaScript is a pervasive web programming language, embedded in approximately 90% of websites (W3Techs, 2014). This pervasiveness means it is highly supported (Can I Use, 2014), and is actively developed for efficiency and functionality (Chrome V8, 2014; asm.js, 2014). As a result, JavaScript is the most popular programming language on GitHub and its popularity is continuing to grow (Ray et al., 2014).
The main challenge for scientific computing with JavaScript is the lack of high-quality scientific libraries compared to platforms such as Matlab and Python. With the potential of native computational efficiency (or better, GPU computation) becoming available
for JavaScript, it is only a matter of time before JavaScript bridges this gap. A recent set of benchmarks showed that numerical JavaScript code can be competitive with native C (Khan et al., 2014).
**General architecture and design**
*Design considerations*
The minimal requirements for MLitB are based on the scenario of running the network as *public resource computing*. The downside of public resource computing is the lack of control over the computing environment. Participants are free to leave (or join) the network at any time and their connectivity may be variable with high latency. MLitB is designed to be robust to these potentially destabilizing events. The loss of a participant results in the loss of computational power and data allocation. Most importantly, MLitB must robustly handle new and lost clients, re-allocation of data, and client variability in terms of computational power, storage capacity, and network latency.
Although we are agnostic to the specific technologies used to fulfill the vision of MLitB, in practice we are guided by both the requirements of MLitB and our development constraints. Therefore, as a first step towards implementing our vision, we chose technology pragmatically. Our choices also follow closely the design principles for web-based big data applications (Begoli & Horey, 2012), which recommend popular standards and light-weight architectures. As we will see, some of our choices may be limiting at large scale, but they have permitted a successful small-scale MLitB implementation (with up to 100 clients).
Figure 2 shows the high-level architecture and web technologies used in MLitB. Modern web browsers provide functionality for two essential aspects of MLitB: Web Workers (W3C, 2014) for parallelizing program execution with threads and Web Sockets (IETF, 2011) for fast bi-directional communication channels to exchange messages more quickly between server and browser. To maintain compatibility across browser vendors, there is little choice for alternatives to Web Workers and Web Sockets. These same choices are also used in another browser-based distributed computing platform (Cushing et al., 2013).
On the server-side, there are many choices that can be made based on scalability, memory management, etc. However, we chose Node.js for the server application (http://nodejs.org). Node.js provides several useful features for our application: it is lightweight, written in JavaScript, handles events asynchronously, and can serve many clients concurrently (Tilkov & Vinoski, 2010). Asynchronous events occur naturally in MLitB as clients join/leave the network, client computations are received by the server, users add new models and otherwise interact with the server. Since the main computational load is carried by the clients, and not the server, a light-weight server that can handle many clients concurrently is all that is required by MLitB.
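To make the server-side event handling concrete, the following is a minimal sketch of a Node.js master server accepting client connections over Web Sockets. The third-party `ws` package and the message shapes (`join`, `gradient`) are assumptions for illustration; the prototype's actual protocol may differ.

```javascript
// Minimal sketch of an event-driven master server (assumes the
// third-party "ws" package; message types here are illustrative).
const { WebSocketServer } = require('ws');

const wss = new WebSocketServer({ port: 8080 });
const clients = new Set();
const pendingGradients = []; // buffered until the synchronized reduce step

wss.on('connection', (socket) => {
  clients.add(socket);
  socket.on('message', (raw) => {
    const msg = JSON.parse(raw);
    if (msg.type === 'join') {
      // A new trainer wants to join; data allocation happens in the event loop.
      socket.send(JSON.stringify({ type: 'ack' }));
    } else if (msg.type === 'gradient') {
      pendingGradients.push({ grad: msg.grad, count: msg.count });
    }
  });
  // A lost participant triggers re-allocation of its data.
  socket.on('close', () => clients.delete(socket));
});
```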
**Design overview**
The general design of MLitB is composed of several parts. A *master server* hosts ML problems/projects and connects clients to them. The master server also manages the *main event loop*, where client triggered events are handled, along with the reduce steps
Figure 2 MLitB architecture and technologies. (1) Servers are Node.js applications. The master server is the main server controlling communication between clients and hosts ML projects. (2) Communication between the master server and clients occurs over Web Sockets. (3) When heterogeneous devices connect to the master server they use Web Workers to perform different tasks. Upon connection, a UI worker, or boss, is instantiated. Web Workers perform all the other tasks on a client and are controlled by the boss. See Fig. 3 for details. (4) A special data worker on the client communicates with the data server using XHR. (5) The data server, also a Node.js application, manages uploading of data in zip format and serves data vectors to the client data workers. Icon made by Freepik from www.flaticon.com.
of a (bespoke) map-reduce procedure used for computation. When a browser (i.e., a heterogeneous device) makes an initial connection to the master server, a user-interface (UI) client (also known as a boss) is instantiated. Through the UI, clients can add workers that can perform different tasks (e.g., train a model, download parameters, take a picture, etc.). An independent data server serves data to clients using zip files and prevents the master server from blocking while serving data. For efficiency, data transfer is performed using XHR. Trained models can be saved into JSON objects at any point in the training process; these can later be loaded in lieu of creating new models.
Master server
The master node (server) is implemented in Node.js with communication between the master and slave nodes handled by Web Sockets. The master server hosts multiple ML
problems/projects simultaneously along with all clients’ connections. All processes within the master are event-driven, triggered by actions of the slave nodes. Calls from slave nodes to the appropriate functions on the master node are handled by the router. The master must perform its tasks (data reallocation and distribution, reduce steps) efficiently because the clients are idle awaiting new parameters before their next work cycle. New clients must also wait until the end of an iteration before joining a network. The MLitB network is dynamic and permits slave nodes to join and leave during processing. The master monitors its connections and is able to detect lost participants. When this occurs, data that was allocated to the lost client is re-allocated to the remaining clients, if possible; otherwise it is marked as unallocated.
Data server
The data server is a bespoke application intended to work with our neural network use-case model and can be thought of as a lightweight replacement for a proper image database. The data server is an independent Node.js application that can, but need not, live on the same machine as the master. Users upload data in zip files before training begins; currently, the data server handles zipped image classification datasets (where sub-directory names define class labels). Data is then downloaded from the data server: zipped files are sent to clients using XHR and unzipped and processed locally. XHR is used instead of Web Sockets because it communicates large zip files more efficiently. A redundant cache of data is stored locally in the client browser’s memory. For example, a client may store 10,000 data vectors, but at each iteration it may only have the computational power to process 100 data vectors within its scheduled iteration duration. The data server uses the specialized JavaScript APIs unzip.js and redis-server.
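As a sketch of this data path, the snippet below shows how a data worker might fetch a zip of its allocated data vectors over XHR from inside a Web Worker. The `/data` endpoint and the id-list query format are hypothetical.

```javascript
// Hypothetical data worker request: download allocated vectors as a zip.
function fetchZip(ids, onZip) {
  const xhr = new XMLHttpRequest();
  xhr.open('GET', '/data?ids=' + encodeURIComponent(ids.join(',')));
  xhr.responseType = 'arraybuffer'; // binary zip payload
  xhr.onload = function () {
    if (xhr.status === 200) onZip(xhr.response); // bytes go to the unzip step
  };
  xhr.send();
}
```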
Clients
Clients are browser connections from heterogeneous devices that visit the master server’s URL. Clients interact through a UI worker, called a boss, and can create slave workers to perform various tasks (see Workers). The boss is the main worker running in a client’s browser. It manages the slave workers and the data worker, and functions as a bridge between the downloader and slaves. A simple wrapper handles UI interactions and provides input/output to the boss. Client bosses use a data worker to download data from the data server; the data worker and the data server communicate using XHR, passing zip files in both directions. The boss handles unzipping and decoding data for slaves that request data. Clients therefore require no software installation other than their native browser. Clients can contribute to any project hosted by the master server. Clients can trigger several events through the UI worker, including adjusting hyper-parameters, adding data, and adding slave workers (Fig. 3). Most tasks are run in a separate Web Worker thread (including the boss), ensuring a non-blocking and responsive client UI. Data downloading is a special task that, via the boss and the data worker, uses XHR to download from the data server.
Figure 3 MLitB client workers. Each client connection to the master server initiates a UI worker, also known as a boss. For uploading data from a client to the data server and for downloading data from the data server to a client, a separate Web Worker called the data worker is used. Users can add slaves through the UI worker; each slave performs a separate task using a Web Worker. Icon made by Freepik from www.flaticon.com.
Workers
In Fig. 3 the tasks implemented using Web Worker threads are shown. At the highest level is the client UI, with which the user interacts with ML problems and controls their slave workers. From the client UI, a user can create a new project, load a project from file, upload data to a project, or add slave workers for a project. Slaves can perform several tasks; most important is the trainer, which connects to an event loop of a ML project and contributes to its computation (i.e., its map step). Each slave worker communicates directly with the master server using Web Sockets. For the latter three tasks, the communication is mainly for sending requests for model parameters and receiving them. The training slave has more complicated behavior because it must download data then perform computation
as part of the main event loop. To begin training, the user sets the slave task to train and selects start/restart. This triggers a join event at the master server; model parameters and data will be downloaded and the slave will begin computation upon completion of the data download. The user can remove a slave at any time. Other slave tasks are tracking, which requires receiving model parameters from the master, and allows users to monitor statistics of the model on a dataset (e.g., classification error) or to execute the model (e.g., classify an image on a mobile device).
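The boss–slave relationship maps directly onto the Web Worker API. The sketch below shows a boss spawning a trainer slave and exchanging messages with it; the file name `trainer.js` and the message fields are assumptions, not the prototype's exact interface.

```javascript
// Sketch: the boss spawns a trainer slave as a Web Worker.
let currentParams = null; // refreshed on every master broadcast
const trainer = new Worker('trainer.js'); // hypothetical worker script

trainer.onmessage = function (e) {
  if (e.data.type === 'needData') {
    // Forward the request to the data worker, then reply with vectors.
  } else if (e.data.type === 'gradient') {
    // Relay the computed gradient to the master over the Web Socket.
  }
};

// Kick off one training cycle with the current weights and a work budget.
trainer.postMessage({ type: 'train', params: currentParams, runForMs: 4000 });
```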
**Events and software behavior**
The MLitB network is constructed as a master–slave relationship, with one server and multiple slave nodes (clients). The setup for computation is similar to a MapReduce network (Dean & Ghemawat, 2008); however, the master server performs many tasks during an iteration of the master event loop, including not only a reduce step but also several other important tasks.
The specific tasks will be dictated by events triggered by the clients, such as requests for parameters, new client workers, removed/lost clients, etc. Our master event loop can be considered a synchronized map-reduce algorithm with a user-defined iteration duration $T$, where values of $T$ may range from 1 to 30 s, depending on the size of the network and the problem. MLitB is not limited to a map-reduce paradigm and in fact we believe that our framework opens the door to peer-to-peer or gossip algorithms (Boyd et al., 2006). We are currently developing asynchronous algorithms to improve the scalability of MLitB.
**Master event loop**
The master event loop consists of five steps and is executed by the master server node as long as there is at least one slave node connected. Each loop includes one map-reduce step and runs for at least $T$ seconds. The following steps are executed in order (a minimal sketch of the loop follows the list):
(a) New data uploading and allocation.
(b) New client trainer initialization and data allocation.
(c) Training workers reduce step.
(d) Latency monitoring and data allocation adjustment.
(e) Master broadcasts parameters.
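A minimal sketch of the loop, assuming hypothetical helper functions standing in for each of the five steps, might look as follows.

```javascript
// Sketch of the synchronized master event loop (steps a-e).
// All helper functions are assumptions standing in for the real steps.
const T = 4000; // user-defined iteration duration in milliseconds

async function masterEventLoop() {
  while (connectedSlaves() > 0) {
    allocateNewData();                        // (a) register uploads, balance indices
    initializeNewTrainers();                  // (b) handle join requests
    const grads = await collectGradients(T);  //     slaves' map step runs for ~T
    reduceAndUpdate(grads);                   // (c) weighted average + parameter step
    adjustWorkloads();                        // (d) latency-based adjustment
    broadcastParameters();                    // (e) send new weights to all bosses
  }
}
```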
**(a) New data uploading and allocation**
When a client boss uploads data, it communicates directly with the data server using XHR. Once the data server has received the zip file, it sends the data indices and classification labels to the boss. The boss then registers the indices with the master server. Each data index is managed: MLitB stores an allocated index (the worker that is allocated the ID) and a cached index (the worker that has cached the ID). The master ensures that the data allocation is balanced amongst its clients. Once a data set is allocated on the master server, the master allocates indices and sends the sets of IDs to workers. Workers can then request data from the boss, which in turn uses its data worker to download those worker-specific IDs from the data server. The data server sends a zipped file to the data worker, which is then unzipped and processed by the boss (e.g., JPEG decoding for images). The zip file transfers are fast but the decoding can be slow. We therefore allow workers to begin computing before the entire dataset is downloaded and decoded, allowing projects to start training almost immediately while data gets cached in the background.
**(b) New client trainer initialization and data allocation**
When a client boss adds a new slave, a request to join the project is sent to the master. If there is unallocated data, a balanced fraction of the data is allocated to the new worker. If there is no unallocated data, a pie-cutter algorithm is used to remove allocated data from other clients and assign it to the new client; this prevents unnecessary data transfers. The new worker is sent the set of data IDs it will need to download from the client’s data worker. Once the data has been downloaded and put into the new worker’s cache, the master will add the new worker to the computation performed at each iteration. The master server is immediately informed when a client or one of its workers is removed from the network, and can therefore manage the newly unallocated data (that was allocated to the lost client).
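One possible reading of the pie-cutter policy is sketched below: each existing worker gives up part of its allocation until the new worker holds a balanced share, so only the reassigned IDs need to be transferred. The exact policy in the prototype may differ.

```javascript
// Sketch of a pie-cutter re-allocation (assumed policy, for illustration).
function pieCutter(workers, newWorker) {
  const total = workers.reduce((sum, w) => sum + w.ids.length, 0);
  const target = Math.floor(total / (workers.length + 1)); // balanced share
  newWorker.ids = [];
  for (const w of workers) {
    while (w.ids.length > target && newWorker.ids.length < target) {
      newWorker.ids.push(w.ids.pop()); // reassign one data ID
    }
  }
  return newWorker.ids; // IDs the new worker must download and cache
}
```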
**(c) Training workers’ reduce step**
The reduce step is completely problem specific. In our prototype, workers compute gradients with respect to model parameters over their allocated data vectors, and the reduce step sums over the gradients and updates the model parameters.
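For the neural network use-case, the reduce step amounts to averaging the workers' summed gradients, weighted by how many data vectors each worker processed. A sketch, with assumed field names:

```javascript
// Sketch of the reduce step: average the summed gradients over all
// data vectors processed in this iteration (field names assumed).
function reduceGradients(results) {
  const dim = results[0].grad.length;
  const avg = new Float64Array(dim);
  let n = 0;
  for (const r of results) {   // r = { grad: summed gradient, count: #vectors }
    for (let i = 0; i < dim; i++) avg[i] += r.grad[i];
    n += r.count;
  }
  for (let i = 0; i < dim; i++) avg[i] /= n; // per-vector average
  return { avgGrad: avg, totalExamples: n };
}
```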
**(d) Latency monitoring and data allocation adjustment**
The interval $T$ represents both the time of computation and the latency between the client and the master node. The synchronization is stochastic and adaptive. At each reduce step, the master node estimates the latency between the client and the master and informs the client worker how long it should run for. A client does not need a batch size because it simply clocks its own computation and returns results at the end of its scheduled work time. Under this setting, it is possible to have mobile devices that compute only a few gradients per second alongside a powerful desktop machine that performs hundreds or thousands. This simple approach also allows the master to account for unexpected user activity: if the user’s device slows or has increased latency, the master will decrease the load on the device for the next iteration. Generally, devices with a cellular network connection communicate with longer delays than hardwired machines. In practice, this means the reduction step in the master node receives delayed responses from slave nodes, forcing it to run the reduction function after the slowest slave node (the one with the largest latency) has returned. This is called asynchronous reduction callback delay.
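The scheduling rule can be sketched as follows; the exponential smoothing factor and the 500 ms floor are illustrative assumptions, not values from the prototype.

```javascript
// Sketch of latency-aware work scheduling for one client.
function scheduleWorkTime(client, T) {
  const rtt = Date.now() - client.lastDispatchTime;           // measured round trip
  client.latency = 0.8 * (client.latency || rtt) + 0.2 * rtt; // smoothed estimate
  // Leave room for communication: compute for T minus the estimated latency.
  return Math.max(500, T - client.latency);
}
```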
**(e) Master broadcasts parameters**
An array of model parameters is broadcast to each client’s boss worker using XHR; when the boss receives new parameters, they are given to each of its workers, who then start another computation iteration.
ML use-case: deep neural networks
The current version of the MLitB software is built around a pervasive ML use-case: deep neural networks (DNNs). DNNs are the current state-of-the-art prediction models for many tasks, including computer vision (Krizhevsky, Sutskever & Hinton, 2012; Lin, Chen & Yan, 2013), speech recognition (Hinton et al., 2012), and natural language processing and machine translation (Liu et al., 2014; Bahdanau, Cho & Bengio, 2014; Sutskever, Vinyals & Le, 2014). Our implementation only required superficial modifications to an existing JavaScript implementation (Karpathy, 2014) to fit into our network design.
Scaling behavior of MLitB
We performed an experiment to study the scaling behavior of the MLitB prototype. Using up to 32 4-core workstation machines connected on a local area network through a single router, we trained a simple convolutional NN on the MNIST dataset for 100 iterations (with 4 seconds per iteration/synchronization event). The number of slave nodes doubled from one experiment to the next (i.e., 1, 2, 4, ..., 96). We are interested in the scaling behavior of two performance indicators: (1) power, measured in data vectors processed per second, and (2) latency in milliseconds between slaves and the master node. Of secondary interest is the generalization performance on the MNIST test set. As a feasibility study of a distributed ML framework, we are most interested in scaling power while minimizing latency effects during training, but we also want to ensure the correctness of the training algorithm. Since optimization of the ML JavaScript library using compiled JS and/or GPUs is possible, but not our focus, we are less concerned with the power performance of a single slave node.
Results for power and latency are shown in Fig. 4. Power increases linearly up to 64 slave nodes, at which point a large increase in latency limits additional power gains from new nodes. This is due to a single server reaching the limit of its capacity to process incoming gradients synchronously. Solutions include using multiple server processes, asynchronous updates, and partial gradient communication. Test error, as a function of the number of nodes, is shown in Fig. 5 after 50 iterations (200 s) and 100 iterations (400 s); i.e., each point represents the same wall-clock computation time. This demonstrates the correctness of MLitB for a given model architecture and learning hyperparameters.
Due to the data allocation policy that limits the data vector capacity of each node to 3,000 vectors, experiments with more nodes process more of the training set during the training procedure. For example, using only 1 slave node trains on 3/60 of the full training set. With 20 nodes, the network is training on the full dataset. This policy could easily be modified to include data refreshment when running with unallocated data.
The primary latency issue is due to all clients simultaneously sending gradients to the server at the end of each iteration. Three simple scaling solutions are: (1) increasing the number of master node processes that receive gradients, (2) using asynchronous update rules (each slave computes for a random amount of time, then sends updates), reducing the load on any one master node process, and (3) partial communication of gradients (decreasing bandwidth).
Figure 4: Effects of scaling on power and latency. Power—measured as the number of data vectors processed per second—scales linearly until 64 nodes, when the increase in latency jumps. The ideal linear scaling is shown in grey.
Figure 5: Effects of scaling on optimization. Convergence of the NN is measured in terms of test error after 50 and 100 iterations. Each point represents approximately the same wall-clock time (200/400 s for 50 and 100 iterations, respectively).
Walk-through of MLitB prototype
We briefly describe how MLitB works from a researcher’s point of view.
**Specification of neural network and training parameters**
Using a minimalist UI (not shown), the researcher can specify their neural network; for example, they can add/remove layers of different types, and adjust regularization parameters (L1/L2/dropout) and learning rates. Alternatively, the researcher can load a previously saved neural network in JSON format (that may or may not have already been trained). Once a NN is specified (or loaded), it appears in the display, along with other neural networks also managed by the master node. By selecting a specific neural network, the researcher can then add workers and data (e.g., project `cifar10` in Fig. 6).
**Specification of training data**
Image classification data is simple to upload using named directory structures for image labels. For example, for CIFAR10 all files in the “apple” subdirectory will be given label “apple” once loaded (e.g., the image file `/cifar10/apple/apple_apple_s_000022.png`). The entire “cifar10” directory can be zipped and uploaded. MLitB processes JPEG and PNG formats. A test set can be uploaded in *tracker* mode.
**Training mode**
In the *training* mode, a training worker performs as many gradient computations as possible within the iteration duration \( T \) (i.e., during the *map* step of the main event loop). The total gradient and the number of gradients are sent to the master, which then in the *reduce* step computes a weighted average of gradients from all workers and takes a gradient step using AdaGrad (*Duchi, Hazan & Singer, 2011*). At the end of the main event loop, new neural network weights are sent via Web Sockets to both trainer workers (for the next map step) and tracking workers.
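The AdaGrad step itself is a simple per-parameter scaled update; a sketch with illustrative hyperparameter values follows.

```javascript
// Sketch of the AdaGrad update (Duchi, Hazan & Singer, 2011).
// Learning rate and epsilon are illustrative values, not the prototype's.
function adagradStep(w, grad, cache, lr = 0.01, eps = 1e-8) {
  for (let i = 0; i < w.length; i++) {
    cache[i] += grad[i] * grad[i];                        // accumulate squared gradients
    w[i] -= (lr * grad[i]) / (Math.sqrt(cache[i]) + eps); // scaled step
  }
}
```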
Figure 7 Tracking model (model execution). The label of a test image is predicted using the latest NN parameters. Users can execute a NN prediction using an image stored on their device or using their device’s camera. In this example, an image of a horse is correctly predicted with probability 0.687 (the class-conditional predictive probability).
**Tracking mode**
There are two possible functions in tracking mode: (1) executing the neural network on test data, and (2) monitoring classification error on an independent data set. For (1), users can predict class labels for images taken with a device’s camera or locally stored images. Users can also learn a new classification problem on the fly by taking a picture and giving it a new label; this is treated as a new data vector, and a new output neuron is added dynamically to the neural network if the label is also new. Figure 7 shows a test image being classified by the cifar10-trained neural network. For (2), users create a statistics worker and can upload test images and track their error over time; after each complete evaluation of the test images, the latest neural network received from the master is used. Figure 8 shows the error for cifar10 using a small test set for the first 600 parameter updates.
**Archiving trained neural network model**
The prototype does not include a research closure specification. However, it does provide easy archiving functionality. At any moment, users can download the entire model specification and current parameter values in JSON format. Users can then share the JSON object or initialize a new training session with it by uploading it during the model specification phase, which represents a high level of reproducibility. Although the JSON object fully specifies the model, it does not include training or testing code. Despite this shortcoming, using a standard protocol is a simple way of providing a lightweight archiving system.
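Conceptually, the archive is just a JSON round trip; the sketch below uses hypothetical field and function names, not the prototype's exact schema.

```javascript
// Sketch of lightweight model archiving (schema is hypothetical).
function saveModel(net) {
  return JSON.stringify({
    layers: net.layerSpecs,   // architecture definition
    params: net.getParams(),  // current weights as plain arrays
  });
}

function loadModel(json) {
  const spec = JSON.parse(json);
  const net = buildNetwork(spec.layers); // hypothetical constructor
  net.setParams(spec.params);            // resume training or predict
  return net;
}
```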
Limitations of MLitB prototype
In this section we briefly discuss the limitations of the current prototype; later in Section ‘Opportunities and Challenges’ we will discuss the challenges we face in scaling MLitB to a massive level.
Our scaling experiment demonstrates that the MLitB prototype can accommodate up to 64 clients before latency significantly degrades its performance. Latency, however, is primarily affected by the length of an iteration and by the size of the neural network. For longer iterations, latency will become a smaller portion of the main event loop. For very large neural networks, latency will increase due to bandwidth pressure.
As discussed previously, the main computational efficiency loss is due to the synchronization requirement of the master event loop. This requirement causes the master server to be idle while the clients are computing and the clients to wait while the master processes all the gradients. As the size of the full gradients can be large (at least >1 MB for small neural networks), the network bandwidth is quickly saturated at the end of a computation iteration and during the parameter broadcast. By changing to an asynchronous model, the master can continuously process gradients and the bandwidth can be maximally utilized. By communicating partial gradients, further efficiency can be attained. We leave this for future work.
There is a theoretical limit of 500 MB data storage per client (the viable memory of a web browser). In our experience, the practical limit is closer to 100 MB, at which point performance is lost due to memory management issues. We found that 1 MB/s bandwidth was achievable on a local network, which meant that it could handle images from MNIST and CIFAR-10 easily, but would stall for larger images. With respect to Deep Neural Networks, the data processing ability of a single node was limited (especially when compared to sophisticated GPU-enabled libraries (Bastien et al., 2012)). Although we were most interested in the scaling performance, we note that naïve convolution implementations significantly slow performance. We found that reasonably sized images, up to $100 \times 100 \times 3$ pixels, can be processed on mobile devices in less than a second without convolutions, but can take several seconds with convolutions, limiting their usefulness. In the future, near-native or better implementations will be required for the convolutional layers.
**RELATED WORK**
MLitB has been influenced by several different technologies and ideas presented by previous authors and by work in different specialization areas. We briefly summarize this related work below.
**Volunteer computing**
BOINC (Anderson, 2004) is an open-source software library used to set up a grid computing network, allowing anyone with a desktop computer connected to the internet to participate in computation; this is called public resource computing. Public resource or volunteer computing was popularized by SETI@Home (Anderson et al., 2002), a research project that analyzes radio signals from space in the search for signs of extraterrestrial intelligence. More recently, protein folding has emerged as a significant success story (Lane et al., 2013). Hadoop (Shvachko et al., 2010) is an open-source software system for storing very large datasets and executing user application tasks on large networks of computers. MapReduce (Dean & Ghemawat, 2008) is a general solution for performing computation on large datasets using computer clusters.
**JavaScript applications**
In Cushing et al. (2013), a network of distributed web browsers called WeevilScout is used for complex computation (regular expression matching and binary tree modifications) using a JavaScript engine. It uses similar technology (Web Workers and Web Sockets) to MLitB. ConvNetJS (Karpathy, 2014) is a JavaScript implementation of a convolutional neural network, developed primarily for educational purposes, which is capable of building diverse neural networks that run in a single web browser and are trained using stochastic gradient descent; it can be seen as the non-distributed predecessor of MLitB.
**Distributed machine learning**
The most performant deep neural network models are trained with sophisticated scientific libraries written for GPUs (Bergstra et al., 2010; Jia et al., 2014; Collobert, Kavukcuoglu & Farabet, 2011) that provide orders-of-magnitude computational speed-ups compared to CPUs. Each implements some form of stochastic gradient descent (SGD) (Bottou, 2010) as the training algorithm. Most implementations are limited to running on the cores of a single machine and, by extension, the memory limitations of the GPU. Exceptionally, there are distributed deep learning algorithms that use a farm of GPUs (e.g., Downpour SGD (Dean et al., 2012)) and farms of commodity servers (e.g., COTS-HPS (Coates et al., 2013)). Other distributed ML algorithm research includes the parameter server model (Li et al., 2014), parallelized SGD (Zinkevich et al., 2010), and distributed SGD (Ahn, Shahbaba & Welling, 2014). MLitB could potentially push commodity computing to the extreme using pre-existing devices, some of which may be GPU capable, with and without an organization’s existing computing infrastructure. As we discuss below, there are still many open research questions and opportunities for distributed ML algorithm research.
OPPORTUNITIES AND CHALLENGES
In tandem with our vision, there are several directions the next version of MLitB can take, both in terms of the library itself and the potential kinds of applications a ubiquitous ML framework like MLitB can offer. We first focus on the engineering and research challenges we have discovered during the development of our prototype, along with some we expect as the project grows. Second, we look at the opportunities MLitB provides, not only based on the research directions the challenges uncovered, but also novel application areas that are perfect fits for MLitB. In Section ‘Future MLitB Development’ we preview the next concrete steps in MLitB development.
Challenges
We have identified three key engineering and research challenges that must be overcome for MLitB to achieve its vision of learning models at a global scale.
Memory limitations
State-of-the-art Neural Network models have huge numbers of parameters, which prevents them from fitting onto mobile devices. There are two possible solutions to this problem. The first solution is to learn or use smaller neural networks. Smaller NN models have shown promise on image classification performance: in particular, the Network in Network (Lin, Chen & Yan, 2013) model from the Caffe model zoo is 16 MB yet outperforms AlexNet, which is 256 MB (Jia et al., 2014). It is also possible to first train a deep neural network and then use it to train a much smaller, shallow neural network (Ba & Caruana, 2014). Another solution is to distribute the NN (during training and prediction) across clients. An example of this approach is Downpour SGD (Dean et al., 2012).
Communication overhead
With large models, large numbers of parameters are communicated regularly. This is a similar issue to the memory limitation and could benefit from the same solutions. However, given a fixed bandwidth and asynchronous parameter updates, we can ask which parameter updates (from master to client) and which gradients (from client to master) should be communicated. An algorithm could transmit a random subset of the weight gradients, or send the most informative ones. In other words, given a fixed bandwidth budget, we want to maximize the information transferred per iteration.
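One simple instantiation of "send the most informative" is a top-k rule: transmit only the k largest-magnitude gradient entries as sparse (index, value) pairs. A sketch of this selection rule, offered as one of several possibilities the text mentions:

```javascript
// Sketch: keep only the k largest-magnitude gradient entries.
function topKGradient(grad, k) {
  const indexed = Array.from(grad, (g, i) => [i, g]); // (index, value) pairs
  indexed.sort((a, b) => Math.abs(b[1]) - Math.abs(a[1]));
  return indexed.slice(0, k); // sparse gradient within the bandwidth budget
}
```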
Performance efficiency
Perhaps the biggest argument against scientific computing with JavaScript is its computational performance. We disagree that this should prevent the widespread adoption of browser-based scientific computing: several groups aim to achieve native performance in JavaScript (Chrome V8, 2014; asm.js, 2014), and GPU kernels are becoming part of existing web engines (e.g., WebCL by Khronos: www.khronos.org/webcl). These can be seamlessly incorporated into existing JavaScript libraries, though they have yet to be written for ML.
**Opportunities**
**Massively distributed learning algorithms**
The challenges just presented are obvious areas of future distributed machine learning research (and are currently being addressed for the next version of MLitB). Perhaps more interesting, at a higher level, is that the MLitB vision raises novel questions about what it means to train models on a global scale. For instance, what does it mean for a model to be trained across a global internet of heterogeneous and unreliable devices? Is there a single model or a continuum of models that are consistent locally, but different from one region to another? How should a model adapt over long periods of time? These are largely untapped research areas for ML.
**Field research**
Moving data collection and predictive models onto mobile devices makes it easy to bring models into the field. Connecting users with mobile devices to powerful NN models can aid field research by bringing the predictive models to the field, e.g., for fast labeling and data gathering. For example, a pilot program of crop surveillance in Uganda currently uses bespoke computer vision models for detecting pestilence (insect eggs, leaf diseases, etc.) (Quinn, Leyton-Brown & Mwebaze, 2011). Projects like these could leverage publicly available, state-of-the-art computer vision models to bootstrap their field research.
**Privacy preserving computing and mobile health**
Our MLitB framework provides a natural platform for the development of real privacy-preserving applications (Dwork, 2008) by naturally protecting user information contained on mobile devices, yet allowing the data to be used for valuable model development. The current version of MLitB does not provide privacy-preserving algorithms such as (Han et al., 2010), but these could be easily incorporated into MLitB. It would therefore be possible for a collection of personal devices to collaboratively train machine learning models using sensitive data stored locally and with modified training algorithms that guarantee privacy. One could imagine, for example, using privately stored images of a skin disease to build a classifier based on a large collection of disease exemplars, yet with the data always kept on each patient’s mobile device, thus never shared, and trained using privacy-preserving algorithms.
**Green computing**
One of our main objectives was to provide simple, cheap, distributed computing capability with MLitB. Because MLitB runs with minimal software installation (in most cases requiring none), it is possible to use this framework for low-power consumption distributed computing. By using existing organizational resources running in low-energy states (dormant or near dormant) MLitB can wake the machines, perform some
computing cycles, and return them to their low-energy states. This is in stark contrast to a data center approach which has near constant, heavy energy usage (Natural Resources Defense Council, 2014).
**FUTURE MLITB DEVELOPMENT**
The next phases of development will focus on the following directions: a visual programming user interface for model configuration, development of a library of ML models and algorithms, development of performant scientific libraries in JavaScript with and without GPUs, and model archiving with the development of a research closure specification.
**Visual programming**
Many ML models are constructed as chains of processing modules. This lends itself to a visual programming paradigm, where the chains can be constructed by dragging and dropping modules together. This way models can be visualized and compared, dissected, etc. Algorithms are tightly coupled to the model and a visual representation of the model can allow interaction with the algorithm as it proceeds. For example, learning rates for each layer of a neural network can be adjusted while monitoring error rates (even turned off for certain layers), or training modules can be added to improve learning of hidden layers for very deep neural networks, as done in Szegedy et al. (2014). With a visual UI it would be easy to pull in other existing, pre-trained models, remove parts, and train on new data. For example, a researcher could start with a pre-trained image classifier, remove the last layer, and easily train a new image classifier, taking advantage of an existing, generalized image representation model.
**Machine learning library**
We currently have built a prototype around an existing JavaScript implementation of DNNs (Karpathy, 2014). In the near future we plan on implementing other models (e.g., latent Dirichlet allocation) and algorithms (e.g., distributed MCMC (Ahn, Shahbaba & Welling, 2014)). MLitB is agnostic to learning algorithms and therefore is a great platform for researching novel distributed learning algorithms. To do this, however, MLitB will need to completely separate machine learning model components from the MLitB network. At the moment, the prototype is closely tied to its neural network use-case. Once separated, it will be possible for external modules to be added by the open-source community.
**GPU implementations**
Implementation of GPU kernels can bring MLitB performance up to the level of current state-of-the-art scientific libraries such as Theano (Bergstra et al., 2010; Bastien et al., 2012) and Caffe (Jia et al., 2014), while retaining the advantages of using heterogeneous devices. For example, balancing computational loads during training is very simple in MLitB and any learning algorithm can be shared by GPU powered desktops and mobile devices. Smart phones could be part of the distributed computing process by permitting the training algorithms to use short bursts of GPU power for their calculations, and therefore limiting battery drain and user disruption.
Design of research closures
MLitB can save and load JSON model configurations and parameters, allowing researchers to share and build upon other researchers’ work. However, it does not quite achieve our goal of a research closure, where all aspects (code, configuration, parameters, etc.) are saved into a single object. In addition to research closures, we hope to develop a model zoo, akin to Caffe’s, for posting and sharing research. Finally, some kind of system for verifying models, like recomputation.org, would further strengthen the case for MLitB being truly reproducible (and provide backwards compatibility).
CONCLUSION
In this paper we have introduced MLitB: Machine Learning in the Browser, an alternative framework for ML research based entirely on using the browser as the computational engine. The MLitB vision is based upon three overarching objectives: ubiquitous ML capability on every computing device, cheap distributed computing, and reproducible research. The MLitB prototype is written entirely in JavaScript and makes extensive use of existing JavaScript libraries, including Node.js for servers, Web Workers for non-blocking computation, and Web Sockets for communication between clients and servers. We demonstrated the potential of MLitB on a ML use-case: Deep Neural Networks trained with distributed Stochastic Gradient Descent using heterogeneous devices, including dedicated grid-computing resources and mobile devices, using the same interface and with no client-side software installation. Clients simply connect to the server and computing begins. This use-case has provided valuable information for future versions of MLitB, exposing both existing challenges and interesting research and application opportunities. We have also advocated for a framework which supports reproducible research; MLitB naturally provides this by allowing models and parameters to be saved to a single object which can be reloaded and used by other researchers immediately.
ADDITIONAL INFORMATION AND DECLARATIONS
Funding
The authors acknowledge funding support from Amsterdam Data Science and computing resources from SurfSara. M Welling acknowledges support from Facebook, Google, and Yahoo. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Grant Disclosures
The following grant information was disclosed by the authors:
SurfSara.
Facebook.
Google.
Yahoo.
Competing Interests
The authors declare there are no competing interests.
Author Contributions
- Edward Meeds conceived and designed the experiments, analyzed the data, wrote the paper, prepared figures and/or tables, reviewed drafts of the paper.
- Remco Hendriks conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, performed the computation work, reviewed drafts of the paper.
- Said Al Faraby conceived and designed the experiments, performed the experiments, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work.
- Magiel Bruntink and Max Welling wrote the paper, reviewed drafts of the paper.
Data Availability
The following information was supplied regarding the deposition of related data:
GitHub: github.com/software-engineering-amsterdam/MLitB.
The Palcom Device Web Bridge
Thomas Sandholm
Boris Magnusson
Björn A Johnsson
Technical report, LU-CS-TR:2012-251
ISSN 1404-1200, Report 100, 2012
Lund University
Department of Computer Science, Lund University, Sweden
{thomass,boris.magnusson,bjornaj}@cs.lth.se
ABSTRACT
In this report we present the design of an application development toolkit for constructing web user interfaces to arbitrary device services. These device services may produce real-time flows of data that need to be analyzed and monitored. Additionally, they may allow control of physical equipment. For example, a medical pump may be controlled remotely to inject a precise dose of some pain-relief pharmaceutical in a cancer patient, based on monitored indications of pain. We focus our discussion around three main components of the toolkit: first, an event bus for efficiently communicating device-generated notifications and data; second, a web widget platform; and third, a firewall-traversal solution for real-time, peer-to-peer communication in constrained organizational networks, such as a hospital site.
Categories and Subject Descriptors
H.4 [Information Systems Applications]: Miscellaneous; H.5.3 [Information Interfaces and Presentation]: Group and Organization Interfaces—Web-based interaction
### 1. INTRODUCTION
With the recent proliferation of Internet-connected personal as well as organizational devices, many integration opportunities as well as challenges surface. The first and most obvious challenge is how to make all devices use a common protocol to simplify orchestration and workflow configuration. Our previous work on the Palcom infrastructure [7] to a large extent tackles these issues by providing a common service platform on top of a uniform UDP protocol for discovering and communicating with device services. Palcom also provides an assembly tool and a programming language that allow developers to compose new services from pre-existing ones. Furthermore, the platform has the ability to tunnel communication between device services across networks in a secure and efficient way. The main difference from traditional software development in the area of device control is the bottom-up development paradigm: the devices, their services and, in many cases, a full workflow already exist, and the main issue remaining is how to present these capabilities to end users.
The work described here focuses on providing building blocks for developers to easily compose end-user interfaces that interact with Palcom device services. Since both devices and the people who need access to these devices are distributed in nature, accessibility and availability are key requirements. The ubiquity of web browsers on both old and new consumer devices, as well as the rapid evolution of advanced new standards such as HTML5, led us to a design around web protocols and web-based user interfaces.
The report is organized as follows. First (Section 2), we discuss a web event bus for communicating asynchronous device events to web clients. Second (Section 3), we present a web widget platform designed to communicate with device services. Third (Section 4), we give an overview of a novel solution implemented to tunnel real-time web data, such as audio and video, across firewalls. Finally, we conclude the report with lessons learned and how our solution relates to other known efforts.
This work was done as part of a project to provide IT-based support for home-care treatment of cancer patients 1.
1http://itaci.cs.lth.se/itACiH/itACiH.html
### 2. WEB EVENT BUS
This section presents the design of a bridge from the Palcom service platform to browser clients, capable both of communicating asynchronous events generated by device services and of controlling potential actuators on devices. Fundamentally, this meant building a Web server speaking standard web protocols (HTTP/HTTPS) on one end and the Palcom UDP protocol on the other. The bridge serves as the foundation for our integration work and for higher-level features such as the web widget framework and the firewall traversal service, described in Section 3 and Section 4 respectively.
The first challenge was to overcome the inherently poll-based nature of HTTP/AJAX user interfaces prominent on the WWW today. Instead of polling a server for events, we want the server to push events directly to all the browser clients interested in some event channel. Using notification technologies such as RSS/Atom is not appropriate due to the real-time nature of the events; imagine delivering a patient's heart rate via RSS. Simply using polling AJAX calls could work, but it would not be very efficient, and the time to receive an update would be tied to whatever poll frequency was configured. Configuring an appropriate interval depends not only on the application semantics but also on the network and the client and server load, and is thus non-trivial.
HTML5 [4] comes to the rescue with the introduction of standards such as Server-Sent Events (SSE) and WebSockets. The main issue with these standards is that they are too immature to provide any real cross-platform portability, which is the reason we turned to the web in the first place; legacy browsers as well as legacy devices are important for us to support in our work. A collection of HTML4 hacks known under the umbrella of COMET [3] has been used to overcome this problem in legacy applications. We build our solution around the technique of long polling, as it is the most widely supported technique across all browsers: it only relies on the standard XMLHttpRequest (XHR) API known as AJAX [1].
The basic idea with long polling is to keep connections to the Web server alive until an event comes in from the server. The server then writes the response and closes the connection, and the client immediately turns around and issues a new request. If there has not been any event for a specific period of time, the client times out and creates a new connection to the server as before. The main issue to be solved is avoiding too many reconnections while making sure that inactive clients don't consume server resources. In applications that don't accept any message loss, it is also important to maintain a buffer on the server to capture events that arrive while the client is reconnecting. The main attraction of this approach is that it is simple, works trivially even in legacy browsers with minimal code, and is easy to configure to work efficiently in many network settings. A number of open-source long-polling frameworks exist, such as SockJS 2, Socket.IO 3, and Tunguska 4, but they rely on immature backends and complex protocol negotiations between older and newer standards, as well as COMET hacks and fallbacks such as Flash. Our requirement was to build an easily embeddable server-side solution in Java, as the Palcom protocol has its most mature implementation in Java for portability reasons. We hence decided to build our own long-polling solution on top of the Java Netty toolkit. Apart from the simple yet powerful architecture, there are also a number of novel protocol features in our solution, described below.
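To make the loop concrete, here is a minimal sketch of the client side using only the XHR API. The endpoint path matches the poll request described in Section 2.1, but the parsing callback is a hypothetical stand-in for the real client library, and a production client would also back off on errors:

```javascript
// Minimal long-polling loop (sketch). handleEvents is a hypothetical
// application callback that parses the response and returns the
// timestamp/sequence of the last event seen.
function longPoll(clientId, timestamp, sequence) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/poll?id=' + clientId +
                  '&timestamp=' + timestamp +
                  '&sequence=' + sequence, true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState !== 4) return;
    if (xhr.status === 200 && xhr.responseText) {
      var last = handleEvents(xhr.responseText); // app-specific parsing
      timestamp = last.timestamp;                // last event seen
      sequence = last.sequence;
    }
    longPoll(clientId, timestamp, sequence);     // reconnect immediately
  };
  xhr.send();
}
```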
These added features were designed to make the client simple yet cross-platform compatible, and include event ordering, event batching, client-controlled history retrieval, payload-based server-side filtering, and client subscription reuse. Furthermore, we define a payload format in JSON to make it easy to parse self-contained event data and metadata, and a simple header that lets clients accept or reject messages before parsing the payload. This more structured payload also allows us to do more structured event filtering.
### 2.1 Basic protocol
The basic protocol is based on three different timestamps sent in a header, plus a payload that may be parsed separately. Many different payload formats may be plugged into the basic protocol, but to get all the benefits of batching, ordering and filtering described below we also use a well-defined structured payload, described in the next section. Three HTTP GET requests are defined in the basic protocol: to join groups (aka channels), to poll client queues, and to leave groups. To receive any events when polling a client queue, the client id in the request needs to have been subscribed to groups that produce events. To subscribe to one or more new groups, the following HTTP GET request is issued:
```
HTTP GET join?id=CLIENT_ID&groups=GROUP_IDS
```
where CLIENT_ID is the id of a client browser, which may be a generated GUID stored in a cookie in the browser, and GROUP_IDS is a comma-separated list of group ids. Similarly, to unsubscribe from one or more groups, the following HTTP GET request is issued:
```
HTTP GET leave?id=CLIENT_ID&groups=GROUP_IDS
```
To poll for events the following request is issued:
```
HTTP GET poll?id=CLIENT_ID&timestamp=TIMESTAMP&sequence=SEQUENCE
```
where CLIENT_ID is the id used in the join and leave calls, TIMESTAMP is the time in milliseconds of the last event seen (which should not be replayed), and SEQUENCE is a sequence number used to order events occurring in the same millisecond. All timestamps are generated on the server before events are queued. The response of the poll call is structured as follows.
2https://github.com/sockjs/sockjs-client
3http://socket.io
4http://www.sitepen.com/blog/2010/07/19/real-time-comet-applications-on-node-with-tunguska/
The first line of the HTTP response payload is a header that contains a PROTOCOL_MARKER used to identify the header information and potentially version it, and a LAST_EVENT_TIMESTAMP and LAST_EVENT_SEQUENCE that indicate the timestamp and sequence number of the last event returned in the payload. The LAST_EVENT_TIMESTAMP is 0 in case no new events were found, i.e. the poll timed out. The SERVER_TIMESTAMP may be used by the client to reliably determine whether the events returned are too old to be of interest: the difference between SERVER_TIMESTAMP and LAST_EVENT_TIMESTAMP indicates how many milliseconds ago the most recent event happened. All this information is available so the client can decide whether it makes sense to parse the payload at all. The last event timestamp and sequence values should also be used when sending the next poll message, to ensure that no messages are missed without affecting any other polling clients. The SERVER_TIMESTAMP may also be used to determine how old individual events are in the potential batch of events returned in the payload.
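A client might use the header like this. Note that the report does not reproduce the exact wire layout of the header line, so the space-separated field order below is an assumption for illustration:

```javascript
// Sketch of client-side header handling. We assume the first response line
// contains: PROTOCOL_MARKER LAST_EVENT_TIMESTAMP LAST_EVENT_SEQUENCE SERVER_TIMESTAMP
// separated by spaces (an assumption; the exact layout is not shown here).
function handlePollResponse(text, maxAgeMs) {
  var newline = text.indexOf('\n');
  var fields = text.slice(0, newline).split(' ');
  var lastTs = parseInt(fields[1], 10);
  var serverTs = parseInt(fields[3], 10);
  if (lastTs === 0) return null;                  // poll timed out, no events
  if (serverTs - lastTs > maxAgeMs) return null;  // too old to be of interest
  return JSON.parse(text.slice(newline + 1));     // parse payload only if fresh
}
```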
### 2.2 Payload format
The payload is a JSON object with the following format:
```json
{"events": [{"timestamp":EVENT_TIMESTAMP,"message":MESSAGE}], "timestamp":LAST_TIMESTAMP}
```
The events array is an ordered list of events from oldest to newest, adhering to the timestamp specification in the request. If the timestamp in the header is not 0, the events array contains at least one element. The outer timestamp is the most recent of all the event timestamps in the array. The message is a string that may in turn be a JSON-formatted message. We also define a message format with some metadata as follows:
```json
{"deviceID":DID, "serviceID":SID,"instanceID":IID, "command":COMMAND,"elements": [{"type":TYPE,"name":NAME,"data":DATA}]
```
where DID is the device id of the device generating the event, SID is the service id on the device that generated the event, and IID is the instance id of that service. COMMAND is the command or event name, and elements is a list of parameters of the command. Each parameter has a TYPE, which is typically a MIME type, and a NAME used for payload filtering, as well as the raw DATA. This data may in turn be JSON, but it is then application specific.
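As a concrete, purely illustrative example (the device, service and parameter names below are hypothetical), a heart-rate event could look like:

```json
{"deviceID":"monitor-7", "serviceID":"heartrate-service", "instanceID":"1",
 "command":"reading",
 "elements": [
   {"type":"text/plain", "name":"bpm", "data":"72"},
   {"type":"text/plain", "name":"recordId", "data":"patient-42"}
 ]}
```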
### 2.3 Event ordering
As we have seen above, the protocol allows for simple client-side reasoning about which events to process based on timestamps, and the events are strictly ordered to make replays safe. To achieve this, the server serializes the addition of all events and timestamps them with a millisecond timestamp. If the timestamp of an incoming event is the same as an already timestamped event in the same client queue, the server bumps up the sequence number but uses the same timestamp. The first event with a given timestamp always has sequence number 0.
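The serialization logic amounts to something like the following sketch. The actual server is written in Java on Netty; the logic is shown in JavaScript here for uniformity with the other examples:

```javascript
// Sketch of serialized event timestamping (the real implementation is Java).
var lastTimestamp = 0;
var lastSequence = 0;

function stampEvent(event) {
  var now = Date.now();
  if (now === lastTimestamp) {
    lastSequence += 1;   // same millisecond: bump the sequence number
  } else {
    lastTimestamp = now; // new millisecond: first event gets sequence 0
    lastSequence = 0;
  }
  event.timestamp = lastTimestamp;
  event.sequence = lastSequence;
  return event;
}
```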
### 2.4 Event batching
As can be inferred from the payload format, a single poll request may return more than one event. This is crucial for scalability reasons, and it significantly speeds up bootstrapping, where a client may have missed many events when starting up. Also, during very high load the server buffer may fill up fast while the client reconnects, and since the client needs to reconnect for each poll, this could lead to thrashing where the client is just reconnecting and getting further and further behind in consuming the latest events. The server ensures not only that the returned events adhere to the timestamps in the poll request, but also that the events in the returned batch are ordered by the time they occurred, which simplifies replay.
### 2.5 Client-controlled history
The timestamp parameter of the poll request described above may be 0, in which case all events not previously returned to that CLIENT_ID, as known by the server, will be retrieved. The timestamp parameter may also be a negative number, in which case the absolute value of the timestamp is taken to mean the maximum number of events to be returned from the top (the most recent end) of the queue. Zero and negative values of the timestamp parameter in the poll call may be used to bootstrap clients, e.g. after a browser page refresh. Using 0 continuously is a bit dangerous, since it would for instance mean that multiple tabs in the browser may not all get the events if they share the same CLIENT_ID. So after the initial bootstrap, it is recommended that the latest timestamp of the events returned is passed into the next poll call. A negative number may be used to replay a few messages in a stream, avoiding caching them on the client while still displaying them in something like a positioning trace or a historical graph. If a client has been offline for some time, it may also be useful to restrict the maximum number of events returned, which could otherwise overload the client whether timestamp 0 or a last-seen timestamp is used.
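The three modes of the timestamp parameter therefore look like this (the client id is hypothetical):

```
// resume after the last event seen (values from the previous poll response)
poll?id=abc&timestamp=1351254000123&sequence=2
// bootstrap: replay everything not yet delivered to this CLIENT_ID
poll?id=abc&timestamp=0&sequence=0
// replay at most the 10 most recent events
poll?id=abc&timestamp=-10&sequence=0
```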
### 2.6 Payload-based server-side filtering
As alluded to above in the payload section, the well-defined message structure allows us to reason about which payloads we are interested in on the client side. We make use of this in a filter that may be attached to a join request. By default, when you join a group you receive all events on that group (assuming they are not only sent to a single client in the backend device, which is also possible). To restrict the events further on the server, and thereby save bandwidth, a filter may be appended to the group name in the join call. The format of the filter is:

```
GROUP_ID|PARAM_1|VALUE_1||PARAM_2|VALUE_2||...||PARAM_n|VALUE_n
```

The first "|" separates the filter specification from the group id. Then each filter is separated by a "||", and each filter comprises a parameter name and a filter value separated by a "|". All filters must match for the event to be sent to the client, so the "||" may be interpreted as a logical AND. If you want OR behavior for filters, the same group id must be specified again with a different filter. The PARAM_n names must match the NAME of a message element in the message payload described above in order to be evaluated. The evaluation is currently a direct string comparison (but could just as easily be a regexp) on one of the parameters, e.g. a record id.
### 2.7 Client channel subscription reuse
A browser client may create many subscriptions on a single id, create many subscriptions on a set of different ids, or share subscriptions by sharing ids with other browser clients. The typical case is that a unique id is generated for each browser instance, so that different tabs or page refreshes may read events from the same channel and share subscriptions. The pages and tabs may still read from different positions in the channel queues by making use of the timestamp parameter, as long as positioning is not fully delegated to the server (i.e. the timestamp is not always 0).
### 3. WEB WIDGET PLATFORM
Building on the long-polling model that we just described, we now present the components we designed to make it easy to build complete web application user interfaces. There are three parts to our design. First, we provide core APIs to connect backend device service notifications to DOM UI elements; second, we provide an application grid container to lay out content in predefined panes and to provide generic panes; and last, we provide a widget plugin model and a gallery of high-level generic widgets. These three parts are also supported by a number of generic backend services that allow for richer and more customizable interactions. We describe some of them below, and we dedicate a separate section to the service that powers the web conferencing widget, as it is more involved due to the firewall traversal capabilities.
### 3.1 Connecting device services to DOM elements
A device contains services. Both the device and its services are addressable using a UDP-based messaging middleware called Palcom. A message in Palcom is called a command, and each command has a list of parameters, which in turn are all typed and named. The command structure looks the same both for messages sent to the device services and for events produced by the device sensors.

Sending a command through the web application and the Web server to a device is straightforward: we simply need to specify the device and service ids and then send over the parameters via standard HTTP. The message is then translated by the Web server into a Palcom UDP message and sent to the correct device and service. As a simplification, we let each Web server have a default device to allow easy discovery of available services and to be able to connect to it without specifying a device id.

For outgoing commands produced by the device services, we leverage our long-polling framework. Browser clients can subscribe to events from a service by specifying the group name of the long-polling channel as a concatenation of service id and event. In our design this is achieved by specifying the id of a DOM element and the group name together with a callback function, plus some options to pass context from subscriber to event receiver. A number of standard callback functions are provided to populate, e.g., the value attribute of textarea, input text, button, etc. automatically when an incoming event is received. Checkboxes, radio buttons, and image tags may also be set automatically with standard callbacks. Below are some examples of callback functions for standard DOM elements.
```javascript
// ui is the (jQuery-wrapped) target element; data is the event's elements array.
// Sets the value of a text-like element (textarea, input, button).
Palcom.ValueSetter = function(ui, data, options) {
  ui.val(decodeURIComponent(data[0].data));
};

// Checks a checkbox or radio button when the event data equals options.equals.
Palcom.ValueChecker = function(ui, data, options) {
  ui.attr('checked', (options.equals === data[0].data));
};

// Marks an option element as selected when any event arrives.
Palcom.ValueSelector = function(ui, data, options) {
  ui.attr('selected', true);
};

// Points an image tag at the URL carried in the event data.
Palcom.ImageSetter = function(ui, data, options) {
  ui.attr('src', data[0].data);
};
```
Here are some examples of how to use these callbacks:
```javascript
Palcom.connect(BUTTON_ID, SERVICE_ID, COMMAND, Palcom.ValueSetter);
Palcom.connect(TEXT_ID, SERVICE_ID, COMMAND, Palcom.ValueSetter);
Palcom.connect(RADIO_ID, SERVICE_ID, COMMAND, Palcom.ValueChecker, {equals: 'true'});
Palcom.connect(CHECKBOX_ID, SERVICE_ID, COMMAND, Palcom.ValueChecker, {equals: 'true'});
Palcom.connect(SELECT_OPTION, SERVICE_ID, COMMAND, Palcom.ValueSelector);
Palcom.connect(IMAGE_ID, SERVICE_ID, COMMAND, Palcom.ImageSetter);
```
The first ID in each call is the id of the HTML element to connect, and the second id is part of the service identifier. To send a command to a service on a device, you issue a Palcom.sendCommand call, sketched below.
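The exact Palcom.sendCommand signature is not reproduced in this report; the sketch below assumes a plausible form, based on the usage mentioned in the widget plugin section, with hypothetical parameter names:

```javascript
// Hypothetical sketch: we assume sendCommand takes a service id, a command
// name, and a list of named parameters. The actual API may differ.
Palcom.sendCommand(SERVICE_ID, COMMAND, [
  {name: 'dose', data: '2.5'},  // illustrative parameter
  {name: 'unit', data: 'ml'}
]);
```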
### 3.2 Application grid

Now with the basic connection established between UI components and device services, we wanted to provide some tools that allow complete applications to be designed more rapidly. The most foundational support to that effect is the application grid. The application grid allows the application developer to position elements in predefined sections of a Web page. Some sections are prepopulated with standard content, and sections have predefined dependencies and interaction patterns. Apart from styling sections, such as the logo and application title, and a toolbar to toggle to and from fullscreen, there are four important sections in all applications written with our toolkit.
- **Navbar.** This is an area that displays record elements, such as patients or products, i.e. items you select for further exploration and interaction. It is implemented as an accordion widget.
- **Menu.** This is a functional menu that is application specific. The idea is that each function operates on the currently selected Navbar element. Each of these functions gets a separate tab in a tabbed menu to display content. Each tab is also associated with a device service. When a new navbar item is selected, the tab pane updates. When a new tab is selected, the same navbar item is used for the new service associated with the selected tab.
- **Alert Bar.** This is an application-wide notification area. All the alerts sent out will be broadcast to everyone currently displaying content from the application. There are three sections: Alarm, Warning and Info. Each section has a queue of events, and timestamps are displayed in a real-time updating, human-friendly format, e.g. "two days ago" or "two weeks ago". Anyone using the app can submit messages of any of the three alert types directly in the alert bar. A backend service persists the messages and potentially does some filtering to make sure everyone sees the same lists regardless of browser restarts.
These areas can be put in a west, north, center, east, south layout where the center pane (the tab content area) adapts its size to the browser window. Other application-wide features include selecting a UI theme and a web font family for the application. This configuration, as well as the Navbar and Menu content, is provided by a UI service that may be extended to provide more dynamic content than the default service, which simply reads the content from configuration files in a JSON format. An example of the layout for a typical application can be seen in Figure 1. This feature was implemented using the JQuery Layout plugin.\(^5\)
### 3.3 Widgets
To complete the picture and render a fully functioning web interface, we also need to provide support for rendering the services mapped to the various menu panes. Developers may write their own HTML/JS/CSS code against our core API to interact with services, but we wanted to provide some higher-level tools, both for debugging and for styling and composition of standard widgets. Widgets may be configured with the Menu configuration: a standard widget is mapped to a menu item, which in turn is mapped to a service. The service then needs to support the protocol of the widget in order to function properly. We thus define some standard protocols, with payloads in JSON, where services are responsible for implementing some commands and returning the data in a standard format. Some of these services may be general enough to just be plugged in, whereas others require you to plug in your own data stream to become interesting. Nevertheless, these standard widget protocols allow us to develop more complete, higher-level, and more reusable widgets than the typical widgets seen in JavaScript libraries such as JQuery UI\(^6\) and YUI\(^7\).

All the widgets are data driven, meaning that they can be reused without changing any of the UI components for a large variety of data sets. For instance, in the medical domain there are many data sets that are historical streams of patient device measurements. All of these data sets can be rendered through the same widgets, which allow for easy discovery, navigation and graphing of time series. Another widget, the table widget, allows for display, search, navigation, selection, and styling of any tabular data, including HTML, text, numbers, images and wiki content. There is also a form widget, making it easy to hook up input elements to a series of sendCommand calls, as described above, when a user clicks a submit button. Finally, a debug widget allows an entire service to be rendered automatically, including the ability to send events to available device services and receive events from them. A number of media-related widgets are also provided, but they are described separately as part of the firewall services in Section 4.
A gallery of widgets supported is shown in Figure 2. The Table and History widgets are used to present and navigate tabular data and data time series respectively. The Debug widget can be automatically generated from any service available in the Web server without configuration to debug events and controls. As a meta widget we provide the ability to configure nested menu bars in the Tabs widget. All the widgets may be arbitrarily nested inside menu panes.
Application developers may also write and plug in reusable widgets by implementing a couple of callbacks and registering them with the core library, as follows:
---
\(^5\)http://layout.jquery-dev.net/
\(^6\)http://jqueryui.com
\(^7\)http://yuilibrary.com
```javascript
var ExampleWidget;
if (!ExampleWidget) {
  ExampleWidget = {};
}

(function () {
  ExampleWidget.setupWidget = function (service, instance, ui, value, options) {
    // display widget in ui here (DOM element ID where ui should be rendered)
    // and call the service with the core API (Palcom.connect, Palcom.sendCommand)
  };
  ExampleWidget.destroyWidget = function () {
    // clean up widget state here
  };
})();

(function () {
  Palcom.registerWidget(EXAMPLE, ExampleWidget);
})();
```
EXAMPLE is the name of the widget as used in the menubar configuration to specify that a service should be rendered using this widget.
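For instance, a trivial, purely hypothetical widget built on the core API could mirror a device event into a text element (the element id, command name, and widget name below are invented for illustration):

```javascript
// Hypothetical plugin that renders a read-only text field and connects it
// to a service's "reading" event via the standard ValueSetter callback.
var HeartRateWidget = {};

(function () {
  HeartRateWidget.setupWidget = function (service, instance, ui, value, options) {
    document.getElementById(ui).innerHTML =
      '<input type="text" id="bpm" readonly>';
    Palcom.connect('bpm', service, 'reading', Palcom.ValueSetter);
  };
  HeartRateWidget.destroyWidget = function () {
    // nothing to clean up in this sketch
  };
})();

(function () {
  Palcom.registerWidget('HEARTRATE', HeartRateWidget);
})();
```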
### 3.4 Widget Data Payload
Both the general purpose history and table widgets use the same standard JSON payloads to communicate graph point and table cell data respectively. This payload has the following format:
```json
{
 "headers": [
   {"name":ROW_HEADER_TEXT,"id":"header"},
   {"name":COL_1_HEADER_TEXT,"id":COL_1_HEADER_ID},
   {"name":COL_2_HEADER_TEXT,"id":COL_2_HEADER_ID},
   ...
   {"name":COL_c_HEADER_TEXT,"id":COL_c_HEADER_ID}],
 "rows": [
   [ROW_1_HEADER_TEXT,VAL_1_1,VAL_1_2,...,VAL_1_c],
   [ROW_2_HEADER_TEXT,VAL_2_1,VAL_2_2,...,VAL_2_c],
   ...
   [ROW_r_HEADER_TEXT,VAL_r_1,VAL_r_2,...,VAL_r_c]],
 "rowids": [ROW_ID_1,ROW_ID_2,...,ROW_ID_r],
 "selected": [[SELECTED_ROW_ID_1,SELECTED_COL_ID_1],...,
              [SELECTED_ROW_ID_s,SELECTED_COL_ID_s]]
}
```
This is an example with r rows, c columns and s selected cells. Each SELECTED_ROW_ID must match an id in rowids, and each SELECTED_COL_ID must match an id in the headers list. The number of elements in each row array must be the same as the number of elements in the headers array. The first item in the headers array and the first element in each row array are by default used to denote table header labels. The header ids are used for pagination and search: "give me 5 items after HEADER_ID X" is a valid protocol request, and "give me all items between HEADER_ID X and HEADER_ID Y" is another example. All requests are also associated with a record id, typically selected from the navbar items.
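A concrete, hypothetical instance for a two-column blood-pressure table could be:

```json
{
 "headers": [{"name":"Date","id":"header"},
             {"name":"Systolic","id":"sys"},
             {"name":"Diastolic","id":"dia"}],
 "rows": [["2012-05-01", 121, 80],
          ["2012-05-02", 118, 76]],
 "rowids": ["r1","r2"],
 "selected": [["r2","sys"]]
}
```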
### 3.5 Widget Data API
Depending on which APIs a service exposes, different UI features will be available. A widget typically requires a minimal set of APIs to be implemented in order to at least initialize correctly.
```
// Gets an index of years, months, and days with data, including
// average values, for record id record
GetIndex(record) -> Index

// Gets all data records between epoch dates from and to.
// Data follows the format described in the previous section.
GetHistory(record,from,to) -> Data

// Gets period number of data records starting at position id.
// period may be negative, in which case records before position
// id will be retrieved. If period starts with "+" it is an
// inclusive search; otherwise the record at position id
// will not be included in the Data returned.
GetData(record,id,period) -> Data

// Searches through the data like the GetData function,
// but only periods with items marked as selected will
// be returned.
GetSelectedData(record,id,period) -> Data

// Select a cell in a record table at a column id and
```
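To illustrate the pagination semantics (the record and position ids are hypothetical):

```
// the 5 records immediately before position "2012-05-02" (exclusive)
GetData("patient-42", "2012-05-02", -5)
// position "2012-05-02" itself plus the 4 records after it (inclusive "+")
GetData("patient-42", "2012-05-02", "+5")
// all records between two epoch dates (milliseconds)
GetHistory("patient-42", 1335830400000, 1336435200000)
```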
### 4. FIREWALL TRAVERSAL
A relay such as a TURN server could be used to traverse firewalls, but relay ports must be allocated on demand, and there is one port that needs to be burned through the firewall for each active peer. The current WebRTC implementations don't make use of TURN out-of-the-box for these reasons. Unfortunately, most corporate environments have very restrictive firewall policies and would not allow one port to be opened per peer. This is the case for our hospital setting, and thus we developed a new firewall traversal protocol that is designed to incur as little overhead as possible and at the same time be easy to deploy; if it is more complicated to deploy than installing a Skype client, we have lost part of the purpose of web video conferencing. Our solution only requires a single port to be opened, corporate-network wide, for all media sessions, both incoming and outgoing. It does require the deployment of a gateway, separate from the Web server and web browsers, but only one gateway is needed per local network, and all browsers on that network can reuse it. It is also very easy to deploy and could potentially be deployed automatically, e.g. in a corporate cloud, since it only relies on a JVM being installed. For home users behind a firewall on their own DSL/cable WiFi network, it could become more of a burden to install a gateway, and you could lose part of the benefit of the no-deployment, no-plugin web conferencing envisioned with WebRTC.
For this scenario we allow the clients to use remote gateways, but then we are back to the issue of many ports being open to the outside world. Most such home networks only have NATs, though. In the case of a very restrictive home firewall, we could provide on-demand remote gateway provisioning, where one IP on a well-known port is allocated per peer and session on a short-lived VM in the public cloud. This is however future work, as it would also require a billing structure for buying or renting virtual machines from a cloud provider.
Now to our solution in more detail. It comprises three components: a tunnel, an rtc gateway service, and a central rtc service (hereafter simply referred to as the rtc service). The rtc gateway services are typically deployed within local networks behind firewalls, and the rtc service is deployed on the public Internet. The tunnel is responsible for multiplexing the traffic that comes into it through a single well-known port to another tunnel peer sitting on a different network, such as the public Internet. One could imagine tunneling directly between the peer networks, but that makes the configuration more volatile, and it may also be a security issue to give direct access to services running in a remote local network (such as a hospital or corporate network).
To simplify deployment, we by default deploy the rtc service in the default device hooked up to our long-polling Web server, so that only local gateway configurations need to be set to establish a session. If the rtc service detects that both peers use the same local gateway, no gateway redirection will be performed and the standard WebRTC protocol will be used. However, if the peers have two different local gateways, the following setup logic is applied:
1. The WebRTC runtime in browser A generates an SDP (Session Description Protocol) offer through JSEP (Javascript Session Establishment Protocol), containing among other things an IP and port where it wishes to receive remote media traffic (RTP and RTCP packets).
2. We intercept the offer in Javascript locally, before it is broadcast to potential peers. The signalling plane of WebRTC is not defined in the spec, so we can treat the JSEP payload as a black box: no knowledge of the payload is necessary, we just need to pass the data from the WebRTC runtime in browser A to browser B by some means. This makes it easy for us to attach additional information to the signalling payload, without the knowledge of the respective WebRTC runtimes, while still being fully compliant with the JSEP handshake protocol. The information we add is simply the identity of the local rtc gateway.
3. The enhanced offer is then passed through the rtc service and broadcast to everyone listening on the channel where browser A sent the offer.
4. When browser B receives an offer payload, we intercept the SDP data before it reaches the WebRTC runtime and rewrite it using the browser B rtc gateway. This is done as follows. The host and port of the offer are extracted, together with the remote (browser A in this case) rtc gateway. A remote allocate call is then issued to the rtc service containing the browser A port, host and gateway, and the browser B gateway. This remote allocate command is channeled through the tunnel that was set up on the browser B network to the browser B rtc gateway. The browser B gateway then allocates a port for the incoming session on a lease basis and passes back its own IP together with the allocated port. The idea is that all traffic on that port will be forwarded through the central service to the correct remote rtc gateway service, where it will be sent out to the IP and port that were in the original SDP offer that the WebRTC runtime in browser A generated. Browser B now gets back the new IP and port, pointing to the local gateway, and the SDP is rewritten with this info as the remote peer data channel.
5. The browser B WebRTC runtime will now generate an SDP answer in accordance with JSEP. The answer follows the exact same procedure as the offer: first attachment of the local gateway, and then a remote allocate call, now on the browser A side of the network, before the answer reaches the browser A runtime.
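As a rough illustration of step 4, the SDP rewrite amounts to swapping the connection address and media port before the description reaches the runtime. The helper below is a simplified sketch with hypothetical names; the real implementation also performs the remote allocate call described above:

```javascript
// Sketch only: rewrite the SDP connection line (c=) and the media ports
// (m=) to point at the locally allocated gateway endpoint. gatewayIp and
// gatewayPort would come back from the remote allocate call.
function rewriteSdp(sdp, gatewayIp, gatewayPort) {
  return sdp
    .replace(/^c=IN IP4 .*$/m, 'c=IN IP4 ' + gatewayIp)
    .replace(/^(m=audio) \d+/m, '$1 ' + gatewayPort)
    .replace(/^(m=video) \d+/m, '$1 ' + gatewayPort);
}
```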
The general architecture of the firewall setup process is depicted in Figure 3.

After the session has been established following the above procedure, the relay is ready to transfer data packets from browser A to the local rtc gateway, through the tunnel to the central server, through the second tunnel to the rtc gateway in the browser B network, and vice versa for data packets from browser B to browser A. There is only one missing piece: the WebRTC runtime does not send data to a remote IP and port socket before it has been authenticated with a series of STUN messages. These STUN messages are also used continuously as a keep-alive mechanism, and as soon as they don't receive a reply, the data traffic stops. So our rtc gateways also have to implement the STUN protocol used by the WebRTC runtime. The good news is that all information needed to generate both the STUN responses and the STUN requests required by the authentication and keep-alive handshakes is already available in the rtc gateways. The bad news is that there is a chicken-and-egg problem: browser A does not send out data traffic until it has sent STUN requests that received valid replies and has also received STUN requests from the remote browser. At the point where it has both received STUN responses and compatible STUN requests, data traffic must be generated immediately for the channel not to be rejected. Hence we need not only to reply with the correct STUN responses to incoming STUN requests and generate compatible STUN requests at the correct time in the handshake, but also to be aware of when the remote browser is actually ready to send data packets.

This apparent chicken-and-egg scenario is resolved by the fact that the browsers generate data packets a bit in advance, as a test, before the full authentication has happened. However, if we don't manage to establish the channel quickly enough, the attempt will be dropped and the data traffic will stop, and once it stops on one end it stops on both ends (both from browser A and browser B). This complicated dance has a relatively simple solution. Our gateway replies to STUN messages properly for a period of time and also generates STUN requests, so that the local browser will start sending test data packets. After some point it stops replying to STUN messages, to allow the remote browser to react. When the same test data packets start showing up from the remote browser, the local gateway starts replying to STUN messages and generating STUN requests again, to authenticate the channel and stay compliant with the keep-alive protocol. The nice thing about this solution is that the very high frequency STUN messages never have to travel beyond the local network and never have to be sent through the tunnels to the other peer. One word of caution: the STUN heartbeats may be used to detect the latency of the connection, which of course won't be accurate in our case, since the STUN pings only happen between the browser and the local rtc gateway. For this reason we also allow a delay to be injected in the STUN replies, to simulate a remote peer over a slower connection; in practice this is however rarely needed.

We are currently only aware of one limitation of this protocol: the handshake does not seem to work over ssh port-forwarding setups. It does work reliably over wired, WiFi and 3G networks, though. And most importantly, the audio and video packets stay in sync, thanks to the WebRTC use of RTP, where both audio and video frames are synced at the source. We should also mention that WebRTC uses a single port for RTP and RTCP, instead of the usual odd and even port numbers, to simplify NAT traversal; this also helps us in the firewall traversal setup, since otherwise the same setup would have to be duplicated for each of the two ports. Finally, the WebRTC RTP payloads are all encrypted, so one only needs to make sure that the SDP signalling plane, where the credentials are included, is encrypted to secure the protocol. This means that if someone else receives a data packet, e.g. by sniffing our tunnel or rtc gateway traffic, they cannot do anything with it. We also have the option of encrypting our tunnel traffic, but in this case it is overkill. To encrypt the signalling traffic, we simply rely on HTTPS, which is supported trivially by our long-polling implementation since it only relies on the browser XHR (XMLHttpRequest, aka AJAX) API.
Figure 4 shows how RT(C)P UDP traffic flows through the WebRTC browser runtimes and the Palcom tunnel during an audio/video conference session.
### 5. CONCLUSIONS
We have described three parts of the Palcom Web bridge toolkit that makes it easy to build rich web clients to communicate with Palcom device services such as medical pumps. The main contributions of this work comprise:
- a long polling solution with server side filtering and history features,
- a web widget tool-box for real-time data stream monitoring and device control, and
- a firewall traversal service to tunnel WebRTC audio and video data traffic across networks protected by NATs and firewalls.
The main challenges we addressed were how to communicate efficiently between devices and web front-ends, how to build rich web clients for monitoring and controlling devices in real-time, and how to facilitate real-time audio and video conferencing through secure networks such as a hospital firewall.
To our surprise, the browser runtime was very efficient at processing and rendering large amounts of streamed data: rendering 2000 data points in our time-series widget every 2 seconds demonstrated some of the opportunities of this solution. If more data needs to be streamed in real time, simple data-reduction techniques such as averaging may be applied.
The second positive surprise was the audio/video data stream tunneling performance. Despite a fairly complex route through networks and tunnels, the WebRTC protocol was very resilient to changing rates, and the impact on the user experience was negligible. We were even able to run the traffic through a 3G cellular network with acceptable performance. WebRTC does, however, require a lot of CPU power to run smoothly. Five to six year old Windows XP PCs, which our cross-platform design still supported, were a bit too slow to give a good user experience. On the other hand, modern laptops such as the MacBook Air and MacBook Pro provided an excellent audio and video experience.
Future work includes integrating our work in the hospital field study and providing custom solutions for smaller form factors such as smartphones and tablets.
Acknowledgments
We would like to thank John Sturk, Lars Nilsson and Karl Kullberg for their help with testing and developing some of the infrastructure our work relies on.
### 6. REFERENCES
[1] XMLHttpRequest, W3C. http://www.w3.org/TR/XMLHttpRequest/.
|
{"Source-Url": "http://portal.research.lu.se/ws/files/3326454/3954413.pdf", "len_cl100k_base": 8636, "olmocr-version": "0.1.50", "pdf-total-pages": 14, "total-fallback-pages": 0, "total-input-tokens": 31256, "total-output-tokens": 9823, "length": "2e13", "weborganizer": {"__label__adult": 0.0003919601440429687, "__label__art_design": 0.0006570816040039062, "__label__crime_law": 0.00031757354736328125, "__label__education_jobs": 0.00066375732421875, "__label__entertainment": 0.00010162591934204102, "__label__fashion_beauty": 0.0001829862594604492, "__label__finance_business": 0.00027298927307128906, "__label__food_dining": 0.0003650188446044922, "__label__games": 0.0004467964172363281, "__label__hardware": 0.0059661865234375, "__label__health": 0.00124359130859375, "__label__history": 0.00031185150146484375, "__label__home_hobbies": 0.0001099705696105957, "__label__industrial": 0.0004911422729492188, "__label__literature": 0.00019121170043945312, "__label__politics": 0.00016129016876220703, "__label__religion": 0.00046181678771972656, "__label__science_tech": 0.07965087890625, "__label__social_life": 6.717443466186523e-05, "__label__software": 0.0198211669921875, "__label__software_dev": 0.88671875, "__label__sports_fitness": 0.00031256675720214844, "__label__transportation": 0.000621795654296875, "__label__travel": 0.0002529621124267578}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 44104, 0.01894]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 44104, 0.19566]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 44104, 0.91676]], "google_gemma-3-12b-it_contains_pii": [[0, 1140, false], [1140, 1306, null], [1306, 1306, null], [1306, 4761, null], [4761, 10635, null], [10635, 16144, null], [16144, 21137, null], [21137, 26659, null], [26659, 28014, null], [28014, 29771, null], [29771, 33875, null], [33875, 37431, null], [37431, 40993, null], [40993, 44104, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1140, true], [1140, 1306, null], [1306, 1306, null], [1306, 4761, null], [4761, 10635, null], [10635, 16144, null], [16144, 21137, null], [21137, 26659, null], [26659, 28014, null], [28014, 29771, null], [29771, 33875, null], [33875, 37431, null], [37431, 40993, null], [40993, 44104, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 44104, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 44104, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 44104, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 44104, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 44104, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 44104, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 44104, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 44104, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 44104, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 44104, null]], "pdf_page_numbers": [[0, 1140, 1], [1140, 1306, 2], [1306, 1306, 3], [1306, 4761, 4], [4761, 10635, 5], [10635, 16144, 6], [16144, 21137, 7], [21137, 26659, 8], [26659, 28014, 9], [28014, 29771, 10], [29771, 33875, 11], [33875, 37431, 12], [37431, 40993, 13], [40993, 44104, 14]], 
"pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 44104, 0.0]]}
|
olmocr_science_pdfs
|
2024-11-29
|
2024-11-29
|
9a617c703c59df726ccf88135333a0924cf0d8a0
|
[REMOVED]
|
{"Source-Url": "https://hal.archives-ouvertes.fr/file/index/docid/117053/filename/PORSCHE2006.pdf", "len_cl100k_base": 9462, "olmocr-version": "0.1.53", "pdf-total-pages": 10, "total-fallback-pages": 0, "total-input-tokens": 31750, "total-output-tokens": 11014, "length": "2e13", "weborganizer": {"__label__adult": 0.00039267539978027344, "__label__art_design": 0.0012674331665039062, "__label__crime_law": 0.0006275177001953125, "__label__education_jobs": 0.003154754638671875, "__label__entertainment": 0.0002579689025878906, "__label__fashion_beauty": 0.0002732276916503906, "__label__finance_business": 0.0013322830200195312, "__label__food_dining": 0.00038695335388183594, "__label__games": 0.0009226799011230468, "__label__hardware": 0.0008349418640136719, "__label__health": 0.00055694580078125, "__label__history": 0.0007891654968261719, "__label__home_hobbies": 0.000186920166015625, "__label__industrial": 0.0008006095886230469, "__label__literature": 0.0011749267578125, "__label__politics": 0.00043582916259765625, "__label__religion": 0.0006079673767089844, "__label__science_tech": 0.38818359375, "__label__social_life": 0.0002837181091308594, "__label__software": 0.0633544921875, "__label__software_dev": 0.53271484375, "__label__sports_fitness": 0.0002689361572265625, "__label__transportation": 0.0009050369262695312, "__label__travel": 0.0002994537353515625}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 41846, 0.04201]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 41846, 0.5156]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 41846, 0.85179]], "google_gemma-3-12b-it_contains_pii": [[0, 913, false], [913, 6005, null], [6005, 11981, null], [11981, 17679, null], [17679, 21257, null], [21257, 26627, null], [26627, 29556, null], [29556, 32993, null], [32993, 38332, null], [38332, 41846, null]], "google_gemma-3-12b-it_is_public_document": [[0, 913, true], [913, 6005, null], [6005, 11981, null], [11981, 17679, null], [17679, 21257, null], [21257, 26627, null], [26627, 29556, null], [29556, 32993, null], [32993, 38332, null], [38332, 41846, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 41846, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 41846, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 41846, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 41846, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 41846, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 41846, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 41846, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 41846, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 41846, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 41846, null]], "pdf_page_numbers": [[0, 913, 1], [913, 6005, 2], [6005, 11981, 3], [11981, 17679, 4], [17679, 21257, 5], [21257, 26627, 6], [26627, 29556, 7], [29556, 32993, 8], [32993, 38332, 9], [38332, 41846, 10]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 41846, 0.06883]]}
|
olmocr_science_pdfs
|
2024-12-07
|
2024-12-07
|
3201d09deff6baa82f695c517a728930fbb88c6d
|
INTERACTIVE CONSULTING
via Natural Language
Stuart C. Shapiro
Stanley C. Kwasny
Computer Science Department
Indiana University
Bloomington, Indiana
TECHNICAL REPORT NO. 12
JUNE, 1974
Appeared (without appendices) in Communications of the ACM 18, 8 (August, 1975), 459-462.
Abstract
Interactive programming systems often contain help commands to give the programmer on-line instruction regarding the use of the various system commands. We argue that it would be relatively easy to make these help commands significantly more helpful by having them accept requests in natural language. As a demonstration, we have provided Weizenbaum's ELIZA program with a script that turns it into a natural language system consultant.
Key Words and Phrases: interactive programming, time-sharing systems, natural language processing, computer assisted instruction.
Introduction
Many interactive systems include a mechanism for automatic dissemination of information regarding the use of their commands. Typically, the user gets this information by entering a basic "help" command and providing the name of the command he wants information about. For example, on the DECsystem-10 [3], the user may type HELP, and get information on the HELP command; HELP *, and get the names of documented features; or HELP <name>, and get information on the feature <name>. Figure 1 shows the results of typing HELP and HELP * on the system available at Indiana University.
The problem with such help commands is that the user must know which command he wants information about. If, instead, he only knows what he wants to do and wants to find out the proper command to use, he is reduced to a sequence of guessing command names. Help commands should be more user-oriented, allowing the user to describe in his own terms what he wants to do. The system would interpret the request and provide information on how to accomplish the desired task.
Interactive systems consultants (help commands) are excellent applications for natural language understanding programs. Since the context which the systems consultant must deal with is limited, even unsophisticated natural language programs are capable of dealing with it. The ease with which such consultants may be programmed and their usefulness argue that large interactive systems be provided with natural language consultants.
A Natural Language Consultant
Lest the reader fear that we are proposing an extensive research project rather than a program well within the state of the art, let us explain the minimal requirements of a natural language understanding system and why the systems consultant is a good application.
```
HELP
HELP COMMAND (12/27/71) ===
THE HELP COMMAND PRINTS OUT HELPFUL DOCUMENTATION ON
VARIOUS SYSTEM FEATURES. THE COMMAND
HELP
WILL PRINT OUT THIS MESSAGE.
HELP *
WILL PRINT OUT THE NAMES OF ALL CURRENTLY AVAILABLE INFO.
HELP <NAME>
WILL LOOK FOR, AND PRINT OUT THE INFO ABOUT THE SYSTEM
FEATURE NAMED IN <NAME>, FOR EXAMPLE
HELP DIRECT
WILL PRINT OUT INFO ON THE DIRECTORY COMMAND.
ONLY THE FIRST 6 CHARACTERS OF THE ARGUMENT ARE
LOOKED AT, THEY MUST BE A-Z, 0-9, OR *.
HELP *
HELP IS AVAILABLE FOR THE FOLLOWING:
ABACUS BASIC BATCOM BLIS10 BOOT11 CDIRSTK COBEDIT COBOL
COBPG CREF DELFIL DIRECT DSKRAT DUMPER FAILSA FGEN
FILCOM FORTRAN FUGE2 GLOB GRIPEDIT HELP IMPORT ISAM
LIBRARY LINK LPTSLPL OMOUNT OPSEX PIPE PLTSLPL FTPSPL
QUEUE QUOLST RE_RUN SETSRC SORT SOUP SPACE SPRINT
SYSPR SYSPR ETEC OMOUNT 2741
THE MONITOR HAS THE FOLLOWING COMMANDS:
ASSIGN ATTACH BACKSP CCOUNT CCONTI CLOSE COMPIL CONTIN COPY
CORE CPUNCH CREATE CREF CSTART CTEST D DAYTIM
DCORE DMT DEASSI DEBUG DELETE DETACH DIRECT DISMOU
DSK DUMP E EDIT EDF EXECUT FILE FINISH
FUDGE GET HALT HELP INITIA JCONTI KJOB LABEL
LIST LOAD LOGIN MAKE MOUNT PJOB PLEASE PLOT
PRESER PRINT PROTEC PUNCH QUEUE R REASSI REENTE
RENAMER RESOUR REMIND RUN SAVE SCHEDU SEND SET
SKIP SSAVE START SUBMIT SYSTAT TECO TIME TPUNCH
TTY TYPE UNLOAD VERSIO VI VROAS ZERO
THE MONITOR HAS THE FOLLOWING SET COMMANDS:
BLOCKS CIP CORMAX COPMIN CPU CTEST DATE DAYTIM
DENSIT DISKFUL DISKPR1 HP0 NOMESS OPR SCHEDU SPOOL
TIME TTY WATCH
```
Figure 1. Help on the DECsystem-10.
We will say that a system understands natural language if a user who knows what the system is capable of (i.e. its domain of competence), but who has not been specifically trained in the system's input language, can phrase an input to the system and, possibly after some clarificatory dialogue (see, for example, [1]), have his input satisfactorily handled. The sophistication and complexity required of the system depend on its domain of competence. Relatively sophisticated systems have been written to obey commands to manipulate blocks on a tabletop [12] and to retrieve scientific information on lunar rocks [13]. Newell, et al. [8] discuss varying degrees of sophistication needed for understanding spoken language for various tasks, among which is the systems consultant. Their version of the systems consultant, called Voice-CC, requires a much more sophisticated system than ours, because understanding spoken language is a more difficult and less understood task than understanding language written in machine-readable form. In one respect their task is easier: Voice-CC communicates with the user over a voice channel at the same time the user is trying to use the system over a conventional terminal, so the system can know what the user has been doing, and this can be a great help in understanding what he is asking. We are proposing a consultant which operates via standard terminals. We will discuss a consultant that is independent of the system monitor, so that it has no auxiliary source of information on what the user might be attempting (though if the consultant were part of the monitor, it could have this information). On the other hand, since the user is using the terminal to ask his questions, he is presumed to know such things as what the end-of-transmission character is, while the sample protocol in Newell, et al. [op. cit., pp. 69-71] has a significant number of interactions on such topics.

In either case the task is much easier than for a general natural language understanding system, because the system's domain of competence is so limited, viz. the commands and features of the interactive system. We can assume that the user of the consultant wants information about these commands and that the request will be phrased in terms of the operations which they can perform. It is only necessary to recognize these terms and respond with a discussion of the relevant command and, possibly, related commands. The system need not understand the fine details of the user's request, just the gist of what he would like to do. Therefore, building the consultant is not much more difficult than writing a manual and providing a good index/thesaurus.
There is a controversy over whether natural language is an appropriate query language [5; 6; 7]. The opposing views seem to stress the ambiguities and general sloppiness of natural language. We trust that we have adequately explained that this is not an issue for the limited context we are discussing. There is another opposing view, however, that questions the usefulness of natural language input. This view is that habitual users of any system will prefer to use a terse formal language rather than natural language, which is generally verbose. The common response to this is that natural language input is best suited to "casual users". But do casual users exist? If so, who are they and what systems do (would) they use? Our answer is, "We have met the casual users and they are we." Experienced programmers,
when faced with a new system or with the need to use an unfamiliar feature on their old system, are casual users of the "help" program (system consultant). They use the system consultant because they do not know the command language, and they use it only until they learn the command language. What such a user wants is to be able to describe the operation he would like to perform and to be told the correct command to use in the given system. This is the natural language system consultant we are proposing.
The ELIZA Helper
A natural language system consultant has been described briefly elsewhere [9]. To further demonstrate its feasibility, we have made Weizenbaum's ELIZA program [4; 10; 11] into a partial implementation.
The ELIZA program is actually quite simple and involves no sophisticated parsing, analysis or "understanding" of language. The input sentence is searched for predefined keywords, substitutions are made where specified, and the sentence is broken into phrases which can be used in the output sentence. Associated with each keyword are a level number, which determines the preference of a response related to that keyword, and a list of reassembly rules to be used in the response. As the input sentence is scanned, a list of keywords in the sentence is constructed with the most preferred keyword at the front of the list. When the scan is completed, the decomposition pattern for the most preferred keyword is applied to the input sentence. If this pattern matches, parts of the input may be concatenated with a rule for that keyword to form the output sentence. Where necessary, subsequent keywords from the input sentence are used. If no keywords
are found or all decomposition patterns fail to match, a stock sentence is chosen as the response.
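To make the mechanism concrete, the following is a minimal Python sketch of the scan-and-respond cycle just described. It is an illustration only: the keyword levels and replies are abbreviated stand-ins loosely modeled on the script excerpts in Appendix II, and the real implementation is in Cal SNØBØL (Appendix III).

    import re

    # Hypothetical script fragment: keyword -> (level, decomposition
    # pattern, reassembly rule).  A higher level is more preferred.
    SCRIPT = {
        "SAVE": (20, r"\bSAVE\b",
                 "THE SAVE COMMAND CREATES AN INDIRECT ACCESS PERMANENT FILE."),
        "FILE": (18, r"\bFILES?\b",
                 "FILES MAY BE MADE PERMANENT BY USING DEFINE OR SAVE."),
    }
    STOCK = "CAN YOU BE MORE SPECIFIC?"  # stock reply when nothing matches

    def respond(sentence):
        text = sentence.upper()
        # Scan the input, collecting keywords ordered by preference level.
        found = [k for k in SCRIPT if re.search(SCRIPT[k][1], text)]
        found.sort(key=lambda k: SCRIPT[k][0], reverse=True)
        for key in found:
            level, pattern, rule = SCRIPT[key]
            if re.search(pattern, text):   # decomposition pattern matches
                return rule                # apply the reassembly rule
        return STOCK  # no keyword found, or every decomposition pattern failed

    print(respond("I WANT TO SAVE A FILE"))  # SAVE (level 20) dominates FILE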
The keywords, preference numbers, decomposition patterns and reassembly rules are written on a file called the "script" which is read and used by the ELIZA program itself. By changing scripts, ELIZA can be made to perform different roles. ELIZA has been made to perform the system consultant role by choosing the keywords from the commands, uses of the commands and their synonyms. The reassembly rules are paraphrases or quotes from sections of the manual designed to provide information about the system and to answer the presumed questions.
The current implementation uses a SNØBØL version of ELIZA [4] (we actually use Cal SNØBØL, a smaller, faster version of SNØBØL4) as a consultant for the KRONOS 2.1 operating system on the CDC 6600 computer at Indiana University. Specifically, the script is designed to provide help with permanent file (PFILE) commands used interactively, as well as knowledge about logging off the system, submitting files of control cards to the input queue, routing files to printers or card punches at various remote facilities, recovering an interactive job after being disconnected from the system, copying files, and other useful procedures as described in CDC's Time-Sharing User's Manual [2].
A user accesses the consultant by typing -HELP which activates a KRONOS procedure file, which then calls the ELIZA helper. It also copies every conversation for later study. Such sample conversations permit the maintenance of the script in a dynamic state to fit the specific needs of users and to provide navigational tools in the task of predicting conversation flow. A typical conversation is shown in Figure 2.
-Help
A COPY OF THIS CONVERSATION IS BEING
MADE FOR FURTHER STUDY...
TO TERMINATE THIS PROGRAM, TYPE QUIT,
THANKS, OR THANK YOU...
HOW DO YOU DO. WHAT IS YOUR PROBLEM?
? I NEED SOME INFORMATION ABOUT PFILES.
FILES MAY BE ASSOCIATED WITH A JOB BY USING EITHER
THE ATTACH (DIRECT FILE) OR GET (INDIRECT FILE)
COMMANDS. FILES MAY BE MADE PERMANENT BY USING EITHER
THE DEFINE (DIRECT FILE) OR SAVE (INDIRECT FILE) COMMANDS
? I'M NOT SURE IF I WANT DIRECT OR INDIRECT FILES.
A DIRECT ACCESS FILE IS A PERMANENT FILE WHICH, UPON
REQUEST, BECOMES ASSOCIATED DIRECTLY WITH THE USERS
JOB. UNLIKE REQUESTS FOR INDIRECT ACCESS FILES, A
COPY OF THE PERMANENT FILES IS NOT CREATED FOR USER
ACCESS. USE OF A DIRECT ACCESS FILE IS REQUESTED BY
ENTERING THE ATTACH COMMAND, AND ALL SUBSEQUENT OPERATIONS
ARE PERFORMED ON THE PERMANENT FILE ITSELF. A DIRECT
ACCESS FILE IS CREATED BY USE OF THE DEFINE COMMAND.
? HOW IS THAT DIFFERENT FROM AN INDIRECT ACCESS FILE?
AN INDIRECT ACCESS FILE IS A PERMANENT FILE THAT CANNOT
BE ACCESSED DIRECTLY BY THE USER. WHEN ACCESS IS REQUESTED
BY A GET COMMAND, A COPY OF THE PERMANENT FILE IS
CREATED AND ASSOCIATED WITH THE JOB MAKING THE REQUEST.
NOTE THAT IF THE WORKING FILE IS ALTERED, THE USER
MUST ENTER THE REPLACE COMMAND TO UPDATE THE PERMANENT
FILE. AN INDIRECT ACCESS FILE IS CREATED BY USE OF
THE SAVE COMMAND.
? WHAT ELSE CAN YOU TELL ME ABOUT INDIRECT ACCESS FILES?
CAN YOU BE MORE SPECIFIC?
? SUPPOSE I WANT TO SAVE INDIRECT FILES.
THE SAVE COMMAND CREATES AN INDIRECT ACCESS PERMANENT
FILE. THIS COMMAND ALLOWS A USER TO RETAIN A COPY
OF THE SPECIFIED WORKING FILE IN THE PERMANENT FILE
SYSTEM.
? HOW DO I USE THE SAVE COMMAND?
THE FORM OF THE SAVE COMMAND IS -
SAVE, LFN=PFN/PW=PASSWORD, CT=N, M=M
? WHAT IS MEANED BY CT? I DON'T UNDERSTAND.
CT REPRESENTS THE PERMANENT FILE CATEGORY. THE FILE
CATEGORY DETERMINES WHICH USERS MAY ACCESS A FILE.
THERE ARE THREE CATEGORIES OF PERMANENT FILES - PRIVATE,
SPRIV, AND PUBLIC.
Figure 2a. A session with the ELIZA Helper. User's input is on
lines beginning with "-" or "?".
? Who may use a file if it is saved as a public file?
PU represents a public file. These files are available for access by all users knowing the file name, password, and user number. Only the total number of accesses to files of this category are recorded for the originator of the file.
? Please tell me what you mean by M=M on the SAVE command.
M represents the file or user permission mode. There are several possible modes: WRITE, MODIFY, APPEND, READ, READMOD, READAP, EXECUTE, OR NULL.
? What am I able to do with a file saved in WRITE mode?
W represents the WRITE mode and allows the user to WRITE, MODIFY, APPEND, READ, EXECUTE, OR PURGE the file.
? You mentioned working files earlier and I didn't understand.
A working file is either a new file, created by the user; or a copy of an indirect access file that already exists in the system. All working files are temporary in nature and can exist no longer than the user is logged into the system. A working file is a local file. The parameter LFN on most time-sharing job commands and permanent file commands signifies a local file name.
? Yesterday I lost a file because I hadn't used it in awhile.
Do you want to know about PRGLIST, which gives a listing of your purged files, or do you want to know about PRSCOPY, which will allow you to retrieve a copy of a purged file?
? Tell me about PRSCOPY.
PRSCOPY is a routine which allows the user to retrieve a permanent file which has been purged in accordance with the PFILE PURGING ALGORITHM.
? How do I use PRSCOPY?
The user may access the PRSCOPY routine by the command PRSCOPY,DATE,PFNAME, where DATE represents the numeric day of the month when the file was purged, and PFNAME represents the name of the PFILE which is to be recovered. The date is a required parameter and is available from PRGLIST. If PFNAME is omitted, all of the users' PFILES are recovered from that day's purge tape. Note that PFILES may not be recovered after one month. Also note that to accomplish this feat from the teletype, a SUBMIT command must be used.
Figure 2b. A session with the ELIZA Helper.
? What does the submit command do?
The submit command allows a validated user to create
a batch job deck image and submit it to the local batch
queue as a deferred batch job. The user cannot interact
with a job submitted for processing using this command.
Rather, the job is treated in the same manner as a
job submitted from the card reader at WCC, Ballantine,
Memorial, or Business.
? How do I submit a job in this way?
The form of the submit command is:
SUBMIT (LFN, Q, NR) C
? What is Q?
Q represents the queue specified in a submit command.
It may be either B, N, or E. B means that the job
output is disposed to the batch queue to be printed,
punched, etc., at WCC. N means that the job output
is disposed to the batch queue, but is dropped at job
termination. (Default value). E means that the job
output is disposed to export/import queue for printing
at a remote site.
? What is NR?
NR refers to the no rewind option. If present, this
parameter will inhibit rewind after execution of associated
command.
? One more question - How do I turn this machine off?
To logoff from telex, type goodbye or bye. An alternate
way to logoff is through the hello or login commands.
These have the same effect as the bye command except
the login sequence is reinitialized.
? Thank you for your help.
My pleasure.
Figure 2c. A session with the ELIZA Helper.
Organization of the consultant script follows the general theme that when a command name, a synonym of the command name or a word implying some use of that command is recognized in a user input, the user is presumed to be asking for information about that command. The initial response is a general description of the usage of the command. It gives enough information that the user's question is probably answered or at least the proper terminology is provided to rephrase the question.
If the same keyword reappears, the system responds with more specific information until the feature is completely described. The next use of the keyword is responded to with:
- CAN YOU BE MORE SPECIFIC? or
- PLEASE DESCRIBE YOUR PROBLEM IN MORE DETAIL or
- WHAT DO YOU MEAN BY _____? I DON'T UNDERSTAND
where _____ represents the input string. Further uses of the keyword are ignored, allowing less preferred keywords to determine the response.
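This escalation is straightforward to model. Below is a small, hypothetical Python sketch (the keyword name and replies are invented for illustration, not taken from the implementation) in which each use of a keyword consumes the next reply in an ordered list, after which the keyword is retired:

    # Hypothetical escalation model: replies go from general to specific
    # to the SPEC-style prompts; once the list is exhausted the keyword
    # is retired (returns None) so that a less preferred keyword can
    # determine the response instead.
    REPLIES = {
        "SAVE": ["THE SAVE COMMAND CREATES AN INDIRECT ACCESS PERMANENT FILE.",
                 "THE FORM OF THE SAVE COMMAND IS - SAVE,LFN=PFN...",
                 "CAN YOU BE MORE SPECIFIC?"],
    }
    uses = {}

    def escalate(keyword):
        n = uses.get(keyword, 0)
        uses[keyword] = n + 1
        replies = REPLIES.get(keyword, [])
        return replies[n] if n < len(replies) else None  # None = retired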
Preference numbers determine dominance among keywords. Requests for information about parameters on control cards always dominate, since these keywords have a higher precedence than the simple name of a control card. If an input sentence were:
What does PW=PASSWORD mean on an ATTACH card?
the system would respond relative to the keyword PW and describe what password should be specified when manipulating a file rather than explaining more about the ATTACH command itself.
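Using the earlier sketch with the levels actually assigned in Appendix II (ATTACH at level 21 would be wrong; the script lists ATTACH at level 20 and PW at level 21), the dominance works out as described; the reply strings below are abbreviated stand-ins:

    # Levels taken from Appendix II: parameter keywords outrank commands.
    SCRIPT["ATTACH"] = (20, r"\bATTACH\b",
                        "THE FORM OF AN ATTACH COMMAND IS - ATTACH,LFN=PFN...")
    SCRIPT["PW"] = (21, r"\bPW\b", "PW REPRESENTS A PASSWORD.")
    print(respond("WHAT DOES PW=PASSWORD MEAN ON AN ATTACH CARD?"))
    # -> the PW reply: the level-21 parameter dominates the level-20 command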
A more problematic situation occurs whenever the same keyword has differing interpretations, depending on the context. A partial solution is provided by assuming the user will remain within the overall context of a given script (an underlying assumption throughout ELIZA's history). Even with this assumption ambiguities arise. For example, the permanent file structure under KRONOS permits the specification of a mode under which a file may be accessed. These include a READ, WRITE, and APPEND mode. But in many situations, an input sentence may contain one of these keywords, though the user is not requesting mode information. An answer to this problem is provided in the ELIZA system through the use of more complex decomposition patterns. A phrase such as READ MODE may be specified as part of the pattern associated with the keyword READ so that responses relative to that word are not given indiscriminately. It is important to note that such disambiguation cannot always be accomplished in this manner. In some instances, ELIZA is made to respond with a question formulated to resolve the ambiguous keyword. For example, if an input sentence were:
How do I find the turnaround time at Marshall H. Wrubel Computing Center?
the system would respond:
WOULD YOU LIKE TO DROP OR SUBMIT A JOB OR WOULD YOU LIKE TO SEE A STATUS OF THE QUEUES AT WCC?
Thus, a user is encouraged to use unambiguous keywords and is led to the eventual solution to his problem.
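A minimal sketch of this kind of disambiguation, again hypothetical and in Python: the READ keyword fires only when its decomposition pattern finds an unambiguous context such as "READ MODE"; otherwise it yields, as the real script does, so that other keywords or a clarifying question can take over.

    import re

    def read_rule(sentence):
        text = sentence.upper()
        # Fire only in an unambiguous context, e.g. the phrase READ MODE.
        if re.search(r"\bREAD\s+MODE\b", text):
            return "R REPRESENTS THE READ MODE AND ALLOWS A USER TO READ THE FILE."
        return None  # pattern failed: let less preferred keywords respond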
Summary
An excellent application for natural language understanding systems is an interactive system consultant. This is true for several reasons. The user of a system consultant is, ipso facto, not well-versed in the system command language, and will cease using the consultant precisely when he does learn the command language. He is, therefore, precisely the kind of user best served by a natural language input system. On the other hand, the system consultant operates on a very restricted domain, viz. the system commands and the uses to which they may be put. At this time, natural language understanding systems have been successful when applied to restricted domains and they have been successful only in such applications. Furthermore, the system consultant does not require a fine understanding of the input. It is acceptable if the consultant merely recognizes what command or feature is being inquired about and launches into a discussion of that feature. To demonstrate the feasibility of a natural language system consultant, we have implemented one using ELIZA, a keyword oriented conversation program.
References
6. Hill, I.D. Wouldn't it be nice if we could write computer programs in ordinary English—or would it? The Computer Bulletin 16, 6 (June, 1972), 306-12.
11. -----. Contextual understanding by computers. CACM 10, 8 (August, 1967), 474-80.
Appendix I: Keywords used for ELIZA Helper
APPEND        EDITOR        MODE          RA
AT            EJ            MODIFY        READ
AT=AT         ELIZA         NAME          READAP
ATTACH        END           NAMES         READM
BA            EQ            NEVER         RECORD
BAL           EQ=EQ         NEW           RECORDS
BALLANTINE    EVICT         NPN           RECOVER
BALLENTINE    EXECUTE       NPN=OPN       RECOVERY
BALLINTINE    F=LPN         NO            RELEASE
BRF           FILE          NODROP        REPLACE
BUSINESS      FILES         NR            RETRIEVE
BYE           FN            OFF           RETURN
C=CC          FN=PFN        OPN           RM
CAN'T         GET           OLD           ROUTE
CATALOG       GOODBYE       OPERATING     S
CATALOGUE     HALT          OPTIONS       SAVE
CATEGORY      HELP          P             SEMI
CATLIST       INDIRECT      PARAMETER     SEMI-PRIVATE
CC            KRONOS        PARAMETERS    SEND
CDC           LOC           PASSWORD      SORRY
CERTAINLY     LFN           PASSWRD       SPEC
CHANGE        LPN=PFN       PERHAPS       SPRINT
COMPUTER      LIST          PERMANENT     STATUS
COMPUTERS     LISTING       PERMIT        STOP
COPY          LNH           PFILE         SUBMIT
COPYBF        LO            PFILES        SYSTEM
COPYBR        LO=OPTIONS    PROCOPY       TELEX
COPIESBF      LOC           PPN           UN
COPYCF        LOCATION      PRCOPY        UN=USERNUM
COPYCR        LOCATIONS     PROGLIST      W
CT            LOGOFF        PRIMARY       WCC
CT=N          LOST          PRIVATE       WONT
DEFINE        M             PU            WORKING
DIRECT        M=M           PUBLIC        WRIG
DISPOSE       MACHINE       PURGE         WRUBEL
DON'T         MACHINES      PURGED        WRUBLE
DROP          MAYBE         PW            XXX
DROPPED       MEM           PW=PASSWORD   YES
E             MEM           R             YOU
EDIT          MEMORIAL      R=R           6000
Appendix II: Script for ELIZA Helper
AM L /2/ S /ARE/
APPEND L /21/ D /APPEND MODE/
'A REPRESENTS THE APPEND MODE WHICH ALLOWS THE USER TO APPEND
INFORMATION AT THE END OF THE FILE(ED1):' CF SPEC=NEWKEY=NEWKEY:
AT=AT L /21/ D /*CF AT:*/
AT L /1/ D /*
'AT REFERS TO THE ATTRIBUTE OF THE EQUIPMENT TO BE USED WHEN ROUTING
A FILE. THE DEFAULT IS NONE. CURRENTLY, THE ONLY USE OF THIS FEATURE
IS TO SPECIFY THE UCS PRINT CHAIN AT BALLANTINE:': CF SPEC=NEWKEY=NEWKEY:
ATTACH L /20/ D /*
CF FILE: 'THE FORM OF AN ATTACH COMMAND IS -
ATTACH,LFN=PFN/UN=USERNAME,PW=PASSWORD': CF SPEC=NEWKEY=/
BA L /19/ D /*CF BALLANTINE:*/
BAL L /19/ D /*CF BALLANTINE:*/
BRF L /19/ D /*CF BALLANTINE:*/
BALLENTINE L /19/ D /*CF BALLANTINE:*/
BALLINTINE L /19/ D /*CF BALLANTINE:*/
BALLANTINE L /19/ D /*
'DO YOU WANT TO SUBMIT, ROUTE, OR DROP A JOB AT BALLANTINE, OR DO YOU
WANT TO SEE A STATUS OF JOBS IN THE BALLANTINE QUEUES?': CF SPEC=NEWKEY=NEWKEY/
BUSINESS L /19/ D /*
'DO YOU WANT TO SUBMIT, ROUTE, OR DROP A JOB AT BUSINESS, OR DO YOU
WANT TO SEE A STATUS OF JOBS IN THE BUSINESS QUEUES?': CF SPEC=NEWKEY=/
BYE L /20/ D /*CF LOGOFF:*/
C L /21/ D /*
'C REFERS TO THE ESCAPE CHARACTER USED TO IDENTIFY REFORMATTING
DIRECTIVES IN THE FILE TO BE SUBMITTED UNDER A SUBMIT COMMAND. IF
OMITTED, THE SYSTEM ASSUMES C='''/NEWKEY=NEWKEY:/
CANT S /CANT/
CATALOG L /19/ D /*CF CATLIST:*/
CATALOGUE L /19/ D /*CF CATLIST:*/
CATEGORY L /20/ D /*CF CT:*/
CATLIST L /20/ D /*
'THE CATLIST COMMAND SELECTS A LISTING OF PERTINENT INFORMATION ABOUT
EACH FILE IN THE USERS CATALOG. IF AN ALTERNATE USER NUMBER IS SPECIFIED,
THE USER OBTAINS A LISTING OF ALL FILES THAT HE CAN ACCESS IN THE
ALTERNATE USERS CATALOG:': CF SPEC=NEWKEY=/
C=CC L /21/ D /*CF CC:*/
CDC L /18/ D /*CF 6600:*/
CC L /21/ D /*
'CC REFERS TO THE COPY COUNT ON A ROUTE CARD. THIS MUST BE EXPRESSED
AS A DECIMAL NUMBER BETWEEN 1 AND 63 INCLUSIVE. THE DEFAULT VALUE IS 1:'/
CERTAINLY L /0/ D /*CF YES:*/
CHANGE L /20/ D /*
'THE CHANGE COMMAND ALLOWS THE ORIGINATOR OF A FILE TO ALTER ANY OF
SEVERAL PARAMETERS WITHOUT HAVING TO ATTACH AND REDEFINE THE FILE OR
RETRIEVE AND SAVE IT:': CF SPEC=NEWKEY=/
COMPUTER L /10/ D //
WHAT DO YOU THINK MACHINES HAVE TO DO WITH YOUR PROBLEM?:
DONT YOU THINK COMPUTERS CAN HELP PEOPLE?:
DO COMPUTERS WORRY YOU?; "WHY DO YOU MENTION COMPUTERS?;"
WHAT IS IT ABOUT MACHINES THAT WORRIES YOU?;"
WHAT DO YOU THINK ABOUT MACHINES?;"
COMPUTERS L /10/ D //CF COMPUTER;/
COPY L /20/ D /COPY/
COPYING ONE FILE TO ANOTHER MAY BE ACCOMPLISHED BY USE OF ANY OF
THE FOLLOWING COMMANDS - COPYBF, COPYBR, COPYSBF, COPYCF,
COPYCR, OR COPYSCEF: CF SPEC: NEWKEY: /
COPYBF L /21/ D //
COPYBF, COPYBR, AND COPYSBF ALLOW THE USER TO DUPLICATE A FILE OR
RECORD. COPYBF CAN BE USED TO COPY MOST FILES IF AN EXACT COPY IS DESIRED.
THE DIFFERENCE BETWEEN COPYC AND COPYB IS THE PARITY ON TAPE COPIES.
COPYBR IS USED TO COPY UP TO THE FIRST RECORD MARK ENCOUNTERED ON THE
FILE, AND COPYSBF IS THE SAME AS COPYBF, BUT THE COPY IS SHIFTED RIGHT BY
ONE CHARACTER, THUS AVOIDING THE CARRIAGE CONTROL CHARACTER: "THE FORM OF
COPY COMMANDS IS - COPY**NAME1, NAME2. WHERE COPY**
REFERS TO THE APPROPRIATE COPY COMMAND: "CF SPEC: NEWKEY:; /
COPYBR L /21/ D //CF COPYBF; /
COPYSBF L /21/ D //CF COPYBF; /
COPYCF L /21/ D //
COPYCF, COPYCR, AND COPYSCEF ALLOW THE USER TO DUPLICATE A FILE OR
RECORD. COPYCF CAN BE USED TO COPY MOST FILES IF AN EXACT COPY IS DESIRED.
THE DIFFERENCE BETWEEN COPYC AND COPYB IS THE PARITY ON TAPE COPIES.
COPYCR IS USED TO COPY UP TO THE FIRST RECORD MARK ENCOUNTERED ON THE
FILE, AND COPYSCEF IS THE SAME AS COPYCF, BUT THE COPY IS SHIFTED RIGHT BY
ONE CHARACTER, THUS AVOIDING THE CARRIAGE CONTROL CHARACTER: "THE FORM OF
COPY COMMANDS IS - COPY**NAME1, NAME2. WHERE COPY**
REFERS TO THE APPROPRIATE COPY COMMAND: "CF SPEC: NEWKEY:; /
COPYCR L /21/ D //CF COPYCF; /
COPYSCEF L /21/ D //CF COPYCF; /
CT=N L /21/ D //CF CT;
CT L /21/ D //
CT REPRESENTS THE PERMANENT FILE CATEGORY. THE FILE CATEGORY
DETERMINES WHICH USERS MAY ACCESS A FILE. THERE ARE THREE CATEGORIES
OF PERMANENT FILES - PRIVATE, SPRIV, AND PUBLIC; /
DIRECT L /21/ D //
A DIRECT ACCESS FILE IS A PERMANENT FILE WHICH, UPON REQUEST,
BECOMES ASSOCIATED DIRECTLY WITH THE USERS JOB. UNLIKE REQUESTS FOR
INDIRECT ACCESS FILES, A COPY OF THE PERMANENT FILES IS NOT
CREATED FOR USER ACCESS. USE OF A DIRECT ACCESS FILE IS REQUESTED
BY ENTERING THE ATTACH COMMAND, AND ALL SUBSEQUENT OPERATIONS ARE
PERFORMED ON THE PERMANENT FILE ITSELF. A DIRECT ACCESS FILE IS CREATED
BY USE OF THE DEFINE COMMAND: "CF SPEC: NEWKEY:;
DEFINE L/20/ D //
'THE DEFINE COMMAND ALLOWS A USER TO CREATE A DIRECT ACCESS PERMANENT
FILE (PFN) AND ATTACH IT IN WRITE MODE': THE FORM OF THE DEFINE COMMAND IS -
DEFINE LFN=PFN PW=PASSWD CT=N M=M //
CF SPEC: NEWKEY: //
DISPOSE L/20/ D //CF ROUTE: //
DON'T S //DONT //
DROP L/20/ D //
'A JOB MAY BE DROPPED AT SEVERAL PLACES AROUND CAMPUS AND RETURNED
BY WAY OF THE COURIER SERVICE. THIS IS ACCOMPLISHED BY SPECIFYING A DROP
PARAMETER ON THE JOB CARD. THIS PARAMETER CONSISTS OF ONE OF THE FOLLOWING
DSW(SWAIN), DCH(Chemistry), DEE(EDUCATION), DME(MEMORIAL), DBA(BALLANTINE),
DLI(LINDLEY). ANOTHER FORM OF THIS COMMAND (FOR THE CONVENIENCE OF
INTERACTIVE USERS) IS DROP;DP WHERE DP REPRESENTS A DROP POINT IDENTIFIER.
DP CAN BE ANY OF THE FOLLOWING - V(SWAIN), C(Chemistry), X(EDUCATION),
A(MEMORIAL), E (BALLANTINE), Z(LINDLEY)' // CF SPEC: NEWKEY: //
DROPPED L/18/ D //CF DROP: //
DUNNO S //DONT KNOW //
EDIT L/20/ D // 'I HAVE NO INFORMATION ABOUT EDITOR COMMANDS': NEWKEY: NEWKEY: //
EDITOR L/20/ D // CF EDIT: //
EJ L/21/ D //
'WHEN EJ APPEARS ON A ROUTE CARD, THE FILE TO BE ROUTED WILL BE
SENT ONLY AT THE COMPLETION OF THE JOB IN WHICH THE COMMAND OCCURS':
'EJ MEANS END-OF-JOB': NEWKEY: NEWKEY: //
ELIZA L/16/ S // D //
'HOW DID YOU KNOW MY NAME?': CF NAME: CF YOU: NEWKEY: //
END L/19/ D //
'DO YOU WANT TO KNOW HOW TO LOGOFF OR HOW TO STOP THE EXECUTION OF A
PROGRAM?': CF SPEC: NEWKEY: //
EQ=EQ L/21/ D // CF EQ: //
EQ L/21/ D //
'EQ REFERS TO THE EQUIPMENT SPECIFIED ON A ROUTE COMMAND. THE DEFAULT
FOR THIS PARAMETER IS ANY PRINTER, BUT THE USER MAY SPECIFY OTHER DEVICES':
'SOME OF THE EQUIPMENT THAT MAY BE SPECIFIED ON A ROUTE CARD IS 501 FOR
SPECIFYING THE CDC 501 PRINTER AT WCC, 1403 FOR THE IBM 1403 PRINTER
AT WCC, PH FOR SPECIFYING THAT A FILE IS TO BE PUNCHED AS A HOLLERITH
FILE, PT FOR SPECIFYING THAT THE FILE IS TO BE PUNCHED ON PAPER TAPE,
PL FOR SPECIFYING THAT THIS FILE IS A PLOT': CF SPEC: NEWKEY: //
E L/21/ D // CF EXECUTE: //
EVT L/20/ D // CF RETURN: //
EXECUTE L/21/ D // EXECUTE MODE:
'E REPRESENTS THE EXECUTE MODE AND ALLOWS THE USER TO EXECUTE THE FILE':
CF SPEC: NEWKEY: NEWKEY: NEWKEY: //
FILE L/18/ D //
'FILES MAY BE ASSOCIATED WITH A JOB BY USING EITHER THE ATTACH (DIRECT
FILE) OR GET (INDIRECT FILE) COMMANDS. FILES MAY BE MADE PERMANENT BY
USING EITHER THE DEFINE (DIRECT FILE) OR SAVE (INDIRECT FILE) COMMANDS':
CF SPEC: NEWKEY: NEWKEY: NEWKEY: //
FILES L /18/ D //CF FILE://
FN=PFN L /21/ D //CF PFN://
FN L /21/ D //CF PFN://
GET L /20/ D //
CF FILE: THE FORM OF A GET COMMAND IS -
GET:LFN=PFN/UN=USERNAME, PW=PASSWORD:CF SPEC: NEWKEY://
HALT L /19/ D //CF STOP://
GOODBYE L /20/ D //CF LOGOFF://
HELP L /17/ D //
'CAN YOU DESCRIBE YOUR PROBLEM?': 'HOW CAN I HELP YOU?':
'WHAT IS REALLY YOUR PROBLEM?': 'PLEASE STATE YOUR PROBLEM IN A DIFFERENT WAY':
'I CAN'T HELP YOU WITH THAT PROBLEM':
'WOULD YOU LIKE TO KNOW MORE ABOUT PFILES?':CF SPEC://
'I'M SORRY YOU ARE:
INDIRECT L /21/ D //
'AN INDIRECT ACCESS FILE IS A PERMANENT FILE THAT CANNOT BE ACCESSED
DIRECTLY BY THE USER, WHEN ACCESS IS REQUESTED BY A GET COMMAND, A COPY
OF THE PERMANENT FILE IS CREATED AND ASSOCIATED WITH THE JOB MAKING
THE REQUEST. NOTE THAT IF THE WORKING FILE IS ALTERED, THE USER MUST
ENTER THE REPLACE COMMAND TO UPDATE THE PERMANENT FILE. AN INDIRECT
ACCESS FILE IS CREATED BY USE OF THE SAVE COMMAND:CF SPEC: NEWKEY://
KRONOS L /17/ D //
'KRONOS IS THE OPERATING SYSTEM USED AT I.U. ON THE CDC 6600:
KRONOS WAS A TITAN AND THE FATHER OF ZEUS':CF SPEC: NEWKEY://
F=LFN L /21/ D //CF LFN://
L=LOC L /21/ D //CF LOC://
LFN=PFN L /21/ D //CF LFN:CF PFN://
LFN L /21/ D //LFN REPRESENTS LOCAL FILE NAME (PRIMARY OR WORKING FILE)//
LIST L /21/ D //
'THE LIST COMMAND PRINTS THE CONTENTS OF THE PRIMARY FILE AT THE
TERMINAL. IF THE F OPTION IS SPECIFIED, THEN WORKING FILE LFN IS PRINTED:CF SPEC: NEWKEY://
LISTING L /16/ D //CF CATLIST://
LNH L /20/ D //
'LNH IS AN OPTIONAL FORM OF THE LIST COMMAND AND HAS THE SAME PARAMETERS.
THE LISTING IS MADE WITHOUT A HEADER. THE FORM OF THE LNH COMMAND IS -
LO=OPTIONS L /21/ D //CF LO://
LO L /21/ D //
'LO REFERS TO LIST OPTIONS ON THE CATLIST COMMAND. IT MAY BE SET TO
ANY OF THE FOLLOWING - F<FULL>, FP<PERMISSION INFORMATION ONLY>, P
<List of USERNUMBERS THAT HAVE ACCESSED THE FILE>, OR O<ZERO>://
LOC L /21/ D //
'LOC REPRESENTS A REMOTE LOCATION. THIS MAY BE EITHER
BAL<BALLANTINE>, MEM<MEMORIAL>, OR BUS<BUSINESS>. IF SPECIFYING A
LOCATION ON A ROUTE COMMAND, ANY OF THE FOLLOWING MAY BE USED -
WCC<WRUBEL>, IU<IU/PU INDIAN>, BAL<BALLANTINE>, CHEM<CHEMISTRY>,
MEM<MEMORIAL>, IU<IU/GARY>, IUSB<SOUTH BEND>, BUS<BUSINESS>,
IUSE<JEFFERSONVILLE>, IU<IU/FORT WAYNE>, LILLY<ELI LILLY CO.//CF SPEC: NEWKEY://
LOCATION L /19/ D //CF LOC://
LOCATIONS L /19/ D //CF LOC://
LOGOFF L /20/ D //
TO LOGOFF FROM TELEX, TYPE GOODBYE OR BYE. AN ALTERNATE WAY TO LOGOFF IS THROUGH THE HELLO OR LOGIN COMMANDS. THESE HAVE THE SAME EFFECT AS THE BYE COMMAND EXCEPT THE LOGIN SEQUENCE IS REINITIALIZED //CF SPEC:NEWKEY://
LOST L /19/ D //CF PURGED://
M=M L /21/ D //CF MODE://
M L /21/ D //CF MODE://
MACHINE L /10/ D //CF COMPUTER://
MACHINES L /10/ D //CF COMPUTER://
MAYBE L /2/ D //CF PERHAPS://
MEM S /YOU://
MEM L /19/ D //CF MEMORIAL://
MRF L /19/ D //CF MEMORIAL://
MEMORIAL L /19/ D //
/DO YOU WANT TO SUBMIT, ROUTE, OR DROP A JOB AT MEMORIAL, OR DO YOU WANT TO SEE A STATUS OF JOBS IN THE MEMORIAL QUEUES? //CF SPEC:NEWKEY://
MODE L /20/ D //
/M REPRESENTS THE FILE OR USER PERMISSION MODE. THERE ARE SEVERAL POSSIBLE MODES - WRITE, MODIFY, APPEND, READ, READMD, READAP, EXECUTE, OR NULL //CF SPEC:NEWKEY:NEWKEY://
MODIFY L /21/ D //MODIFY MODE://
/M REPRESENTS THE MODIFY MODE WHICH ALLOWS THE USER TO MODIFY INFORMATION WITHIN A DIRECT ACCESS FILE AND/OR APPEND INFORMATION AT THE END OF THE FILE. THE USER MAY ALSO READ OR EXECUTE THE FILE //CF SPEC:NEWKEY://
NAME L /15/ D //
/I AM NOT INTERESTED IN NAMES //I'VE TOLD YOU BEFORE, I AM NOT INTERESTED IN NAMES - PLEASE CONTINUE //CF HELP://
NAME S /PLEASE CONTINUE://
NAME L /15/ D //CF NAME://
NEVER L /0/ D //
NEW L /20/ D //
/THE NEW COMMAND ALLOWS THE USER TO CREATE A NEW PRIMARY FILE.
THE FORM OF THIS COMMAND IS NEW,LFN. THE FILE NAME SPECIFIED BECOMES THE NEW PRIMARY FILE AND ALL CURRENT WORKING FILES ARE RELEASED UNLESS NODROP IS THE NEXT COMMAND ENTERED //CF SPEC:NEWKEY://
NFM=OFN L /21/ D //CF NFN://
NFM L /21/ D //NFM REPRESENTS NEW FILE NAME IN CHANGE COMMAND //CF SPEC:NEWKEY://
NO L /0/ D //
/WHY -NO- //CF SPEC:NEWKEY://
/ARE YOU SAYING NO JUST TO BE NEGATIVE? //CF SPEC:NEWKEY://
/YOU ARE BEING RATHER NEGATIVE //CF SPEC:NEWKEY://
NODROP L /20/ D //
/THE NODROP COMMAND PREVENTS THE SYSTEM FROM RELEASING CURRENT WORKING FILES WHEN THE USER ISSUES THE OLD, NEW, OR LIB COMMAND TO OBTAIN A NEW PRIMARY FILE. THIS COMMAND MUST BE ENTERED IMMEDIATELY AFTER THE OLD, NEW, OR LIB COMMAND SEQUENCE IS COMPLETE //CF SPEC:NEWKEY://
NR
L /21/ D //
'REFER TO THE NO REWIND OPTION. IF PRESENT, THIS PARAMETER WILL
INHIBIT REWIND AFTER EXECUTION OF ASSOCIATED COMMAND':NEWKEY:NEWKEY://
OFF
L /20/ D //CF LOGOFF://
DFN
L /21/ D //DFN REPRESENTS OLD FILE NAME IN CHANGE COMMAND://
OLD
L /20/ D //
'THE OLD COMMAND RETRIEVES A COPY OF THE SPECIFIED PERMANENT FILE
(INDIRECT) FOR USE AS THE PRIMARY FILE':THE FORM OF THE OLD COMMAND IS -
OLD*LFN=PFN/UN=USERNUM,PW=PASSWORD://
CF SPEC:NEWKEY://
OPERATING
L /18/ D //CF KRONOS://
OPTIONS
L /19/ D //CF L0://
P
L /21/ D //CF PRIVATE://
PARAMETER
L /19/ D //CF SPEC://
PARAMETERS
L /19/ D //CF SPEC://
PASSWORD
L /20/ D //CF PW://
PASSWRD
L /20/ D //CF PW://
PERHAPS
L /2/ D //
'YOU DONT SEEM TO BE QUITE CERTAIN.': 'WHY THE UNCERTAIN TONE?':
'CANT YOU BE MORE DEFINITE?': 'YOU ARENT SURE?': 'DONT YOU KNOW?://
PERMANENT
L /18/ D //CF FILES://
PERMIT
L /20/ D //
'THE PERMIT COMMAND IS USED TO GRANT PERMISSION FOR A USER UNDER
A SPECIFIED NUMBER TO ACCESS A PRIVATE FILE': THE FORM OF THE PERMIT
COMMAND IS -
PERMIT,PFN,USERNUM1=M1,USERNUM2=M2,...,
USERNUM=MNN/R=R. WHERE THE M IN EACH CASE REPRESENTS THE PERMISSION
MODE':CF SPEC:NEWKEY://
PFILE
L /18/ D //CF FILE://
PFILES
L /18/ D //CF FILE://
PFN
L /21/ D //PFN REPRESENTS PERMANENT FILE NAME://
PRSCOPY
L /20/ D //
'PRSCOPY IS A ROUTINE WHICH ALLOWS THE USER TO RETRIEVE A PERMANENT
FILE WHICH HAS BEEN PURGED IN ACCORDANCE WITH THE PFILE PURGING ALGORITHM':
'THE USER MAY ACCESS THE PRSCOPY ROUTINE BY THE COMMAND PRSCOPY,DATE,PFNAME.
WHERE DATE REPRESENTS THE NUMERIC DAY OF THE MONTH WHEN THE FILE WAS
PURGED; AND PFNAME REPRESENTS THE NAME OF THE FILE WHICH IS TO BE RECOVERED.
THE DATE IS A REQUIRED PARAMETER AND IS AVAILABLE FROM PRGLIST. IF
PFNAME IS OMITTED, ALL OF THE USERS PFILES ARE RECOVERED FROM THAT DAYS
PURGE TAPE. NOTE THAT PFILES MAY NOT BE RECOVERED AFTER ONE MONTH.
ALSO NOTE THAT TO ACCOMPLISH THIS FEAT FROM THE TELETYPE, A SUBMIT COMMAND
MUST BE USED':CF SPEC:NEWKEY://
PRGLIST
L /20/ D //
'PRGLIST IS A ROUTINE WHICH ALLOWS THE USER TO FIND OUT WHICH OF HIS
PFILES HAVE BEEN PURGED AND WHICH OF THEM ARE TO BE PURGED SOON':
PRGLIST MAY BE USED IN ANY OF THE FOLLOWING THREE FORMS:
PRGLIST - WHICH LISTS FILES PURGED TODAY AND THOSE SCHEDULED FOR TOMORROW,
PRGLIST,DATE - (WHERE DATE IS OF THE FORM YR/MO/DT) WHICH LISTS ALL
PFILES PURGED SINCE THE DATE SPECIFIED; AND PRGLIST,DT - WHICH LISTS
ALL FILES PURGED IN THE PAST MONTH':CF SPEC:NEWKEY://
PRIMARY L /21/ D //
'THE PRIMARY FILE IS ONE TYPE OF WORKING FILE. IT HAS SPECIAL
SIGNIFICANCE IN CERTAIN TIME-SHARING COMMANDS. A PRIMARY FILE IS OBTAINED
WITH THE OLD OR LIBRARY COMMAND WHICH RETRIEVES A COPY OF AN INDIRECT
ACCESS PERMANENT FILE. A PRIMARY FILE IS CREATED WITH THE NEW COMMAND.
THERE IS ONLY ONE PRIMARY FILE ACTIVE OR AVAILABLE TO THE USER AT ANY
GIVEN TIME':CF SPEC:NEWKEY://
PRIVATE L /20/ D //
'P REPRESENTS A PRIVATE FILE. THESE FILES ARE AVAILABLE
FOR ACCESS ONLY BY THE ORIGINATING USER OR BY THOSE EXPLICITLY GRANTED
PERMISSION (REFER TO PERMIT COMMAND)':/:
PU L /21/ D //CF PUBLIC://
PUBLIC L /20/ D //
'PU REPRESENTS A PUBLIC FILE. THESE FILES ARE AVAILABLE FOR ACCESS
BY ALL USERS KNOWING THE FILE NAME, PASSWORD, AND USER NUMBER. ONLY
THE TOTAL NUMBER OF ACCESSSES TO FILES OF THIS CATEGORY ARE RECORDED
FOR THE ORIGINATOR OF THE FILE':/
PURGE L /20/ D //
'THE PURGE COMMAND REMOVES THE SPECIFIED PERMANENT FILE FROM STORAGE.
FILES REMOVED IN SUCH A MANNER CANNOT BE RECOVERED BY PRSCOPY':
'THE FORM OF THE PURGE COMMAND IS ---
PURGE;PFN/UN=USERNAME;
PW=PASSWORD:CF SPEC:NEWKEY://
PURGED L /19/ D //
'DO YOU WANT TO KNOW ABOUT PRGLIST, WHICH GIVES A LISTING OF YOUR
PURGED FILES, OR DO YOU WANT TO KNOW ABOUT PRSCOPY, WHICH WILL ALLOW
YOU TO RETRIEVE A COPY OF A PURGED FILE?:CF SPEC:NEWKEY://
PW=PASSWORD L /21/ D //CF PW://
PW=PASSWRD L /21/ D //CF PW://
PW L /21/ D //
'PW REPRESENTS A PASSWORD. THE USER HAS THE OPTION OF SPECIFYING A
ONE-TO-SEVEN CHARACTER PASSWORD FOR A FILE. THIS PASSWORD MUST BE
SPECIFIED WHENEVER ALTERNATE USERS ACCESS THE FILE':/
Q L /21/ D //
'Q REPRESENTS THE QUEUE SPECIFIED IN A SUBMIT COMMAND. IT MAY BE
EITHER B, N, OR E. B MEANS THAT THE JOB OUTPUT IS DISPOSED TO THE BATCH
QUEUE TO BE PRINTED, PUNCHED, ETC., AT WCC. N MEANS THAT THE JOB OUTPUT
IS DISPOSED TO THE BATCH QUEUE, BUT IS DROPPED AT JOB TERMINATION.
(DEFAULT VALUE). E MEANS THAT THE JOB OUTPUT IS DISPOSED TO EXPORT/IMPORT
QUEUE FOR PRINTING AT A REMOTE SITE':CF NEWKEY:NEWKEY://
R=R L /21/ D //CF R://
R L /21/ D //
'R, IF PRESENT IN THE LIST COMMAND, INDICATES THAT END-OF-RECORD
AND END-OF-FILE MARKS ARE TO BE INDICATED IN THE LISTING IF PRESENT':
NEWKEY:NEWKEY:NEWKEY://
READ L /21/ D //READ MODE/
'R REPRESENTS THE READ MODE AND ALLOWS A USER TO READ AND/OR EXECUTE
THE FILE':CF SPEC:NEWKEY://
RA L /21/ D //CF READAP://
READAP L /21/ D /-
'READAP REPRESENTS THE READ/APPEND MODE AND ALLOWS THE USER TO READ A
DIRECT ACCESS FILE WITH THE IMPLICATION THAT ANOTHER USER MAY CURRENTLY
BE ACCESSING THE FILE IN APPEND MODE. THE FILE MAY ALSO BE EXECUTED IN
THIS MODE: CF SPEC: NEWKEY:
RM L /21/ D /CF READMD/;
READMD L /21/ D /-
'READMD REPRESENTS THE READ/MODIFY MODE AND ALLOWS THE USER TO READ A
DIRECT ACCESS FILE WITH THE IMPLICATION THAT ANOTHER USER MAY CURRENTLY
BE ACCESSING THE FILE IN MODIFY MODE. THE FILE MAY ALSO BE EXECUTED IN
THIS MODE: CF SPEC: NEWKEY:
RECORD L /19/ D /CF FILE;/
RECORDS L /19/ D /CF FILE;/
RECOVER L /20/ D /-
'THE RECOVER FEATURE ENABLES THE USER AT A TIME-SHARING TERMINAL TO
RESUME PROCESSING AFTER HAVING BEEN ACCIDENTALLY DISCONNECTED FROM THE
SYSTEM OR WHEN A SYSTEM MALFUNCTION REQUIRES THAT THE LOG-IN SEQUENCE
BE REINITIALIZED. THE USER IS PLACED IN RECOVERY STATE WHENEVER HE IS
DISCONNECTED FROM THE SYSTEM WITHOUT LOGGING OFF, PROVIDING THAT HE IS
NOT ALREADY IN RECOVERY STATE: RECOVERY MUST BE INITIATED WITHIN 10
MINUTES OF BEING DISCONNECTED. THIS IS DONE IN THE LOGIN' SEQUENCE. IN
RESPONSE TO THE PROMPT RECOVER/SYSTEM THE USER ENTERS RECOVER, NNN
WHERE NNN REFERS TO THE TERMINAL BEING USED WHEN THE FAILURE OCCURRED.
THIS IS THE SAME NUMBER INDICATED WHEN THE USER INITIALLY LOGGED IN.
IF THE SAME TERMINAL NUMBER IS INDICATED WHEN THE USER LOGS IN TO RECOVER,
THIS PARAMETER IS NOT REQUIRED: CF SPEC: NEWKEY:
RECOVERY L /20/ D /CF RECOVER/;
RELEASE L /20/ D /CF RETURN/;
REPLACE L /20/ D /-
'THE REPLACE COMMAND ALLOWS A USER TO REPLACE THE CONTENTS OF A
PERMANENT FILE <PFN> WITH THE CONTENTS OF A WORKING FILE <LFN>:
THE FORM OF THE REPLACE COMMAND IS -
REPLACE,LFN=PFN/UN=
USERNUM, PW=PASSWRD /CF SPEC: NEWKEY/;
RETRIEVE L /18/ D /CF GET/;
RETURN L /20/ D /-
'TO RELEASE WORKING FILE LFN; ENTER RETURN; LFN ' + NEWKEY: NEWKEY:
REWIND L /20/ D /-
'TO POSITION WORKING FILE LFN AT THE BEGINNING-OF-INFORMATION
(BOI) ENTER REWIND, LFN ' + NEWKEY: NEWKEY:
ROUTE L /20/ D /-
'THE ROUTE COMMAND CAN BE USED TO SEND A FILE TO A SPECIFIC LOCATION OR
PIECE OF EQUIPMENT OR TO MAKE MULTIPLE COPIES OF THE SAME FILE:
THE FORM OF THE ROUTE COMMAND IS -
ROUTE(LFN,EJ;C=CC,
L=LOC;EQ=EQ;AT=AT) /CF SPEC: NEWKEY/;
S L /21/ D /CF SPRIV/;
SAVE L /20/ D /-
'THE SAVE COMMAND CREATES AN INDIRECT ACCESS PERMANENT FILE. THIS
COMMAND ALLOWS A USER TO RETAIN A COPY OF THE SPECIFIED WORKING FILE IN THE
PERMANENT FILE SYSTEM: THE FORM OF THE SAVE COMMAND IS -
SAVE,LFN=PFN/PW=PASSWRD, CT=N, M=M / CF SPEC: NEWKEY/;
SEMI L /20/ D /CF SPRIV:
SEMI-PRIVATE L /20/ D /CF SPRIV:
SEND L /19/ D /CF ROUTE:
SORRY L /2/ D
'PLEASE DO NOT FEEL APOLOGETIC: 'APOLOGIES ARE NOT NECESSARY':
'IVE TOLD YOU THAT APOLOGIES ARE NOT REQUIRED':/:
SPEC L /5/ D /CAN YOU BE MORE SPECIFIC?:
'PLEASE DESCRIBE YOUR PROBLEM IN MORE DETAIL: 'IT IS NOT CLEAR WHAT
YOU MEAN: 'WHAT DO YOU MEAN BY 'POST ': I DONT UNDERSTAND':
SPRIV L /20/ D
'S REPRESENTS A SEMI-PRIVATE FILE. THESE FILES ARE AVAILABLE
FOR ACCESS BY ALL USERS KNOWING THE FILE NAME, PASSWORD, AND USER
NUMBER. ACCESS BY ALTERNATE USERS FOR FILES OF THIS CATEGORY ARE
RECORDED FOR THE ORIGINATOR OF THE FILE. THIS INCLUDES THE USER NUMBER
OF THE ALTERNATE USER, THE NUMBER OF ACCESSES MADE, AND THE DATE AND
TIME OF THE LAST ACCESS (REFER TO 'CATLIST COMMAND'):
STATUS L /20/ D
'THE STATUS COMMAND REQUESTS THE CURRENT JOB STATUS. AN IMPORTANT
FEATURE OF THIS COMMAND IS THAT IT MAY BE ENTERED DURING JOB EXECUTION:
BESIDES THE SIMPLE STATUS COMMAND WITH NO PARAMETERS, THERE ARE FOUR
ALTERNATE COMMANDS -
STATUS,F  STATUS,T  STATUS,J=JOBNAME  STATUS,L=LOC.
STATUS,F IS THE SAME AS STATUS BUT ALSO LISTS ALL WORKING FILES;
STATUS, T REQUESTS THE ACCUMULATED CPU TIME FOR THIS SESSION;
STATUS, J=JOBNAME REQUESTS THE STATUS OF A REMOTE BATCH JOB OR JOB
SUBMITTED AT WCC, BALLANTINE, BUSINESS, OR MEMORIAL; AND STATUS, L=LOC
REQUESTS THE JOB STATUS OF ALL JOBS AT LOCATION LOC /CF SPEC:NEWKEY/
STOP L /20/ D
'THE STOP COMMAND TERMINATES ANY PROGRAM THAT IS CURRENTLY EXECUTING
OR WAITING FOR INPUT FROM THE TERMINAL /CF SPEC:NEWKEY:
SUBMIT L /20/ D
'THE SUBMIT COMMAND ALLOWS A VALIDATED USER TO CREATE A BATCH JOB DECK
IMAGE AND SUBMIT IT TO THE LOCAL BATCH QUEUE AS A DEFERRED BATCH JOB. THE
USER CANNOT INTERACT WITH A JOB SUBMITTED FOR PROCESSING USING THIS
COMMAND. RATHER, THE JOB IS TREATED IN THE SAME MANNER AS A JOB
SUBMITTED FROM THE CARD READER AT WCC, BALLANTINE, MEMORIAL, OR BUSINESS:
'THE FORM OF THE SUBMIT COMMAND IS -
SUBMIT(LFN,Q,NR)C:
CF SPEC:NEWKEY:NEWKEY:
SYSTEM L /13/ D /CF KRONOS:
TELEX L /17/ D
'TELEX IS THAT PART OF KRONOS WHICH DEALS WITH INTERACTIVE USERS:
CF SPEC:NEWKEY:NEWKEY:
UN=USERNAME L /21/ D /CF UN:
UN L /21/ D
'UN REPRESENTS A FOUR-DIGIT USER NUMBER:
WCC L /19/ D
'DO YOU WANT TO SUBMIT, ROUTE, OR DROP A JOB AT WCC?: /CF SPEC:NEWKEY:
WONT S /WONT:
Appendix III: Cal SNØBØL Version of ELIZA
AN EXPLANATION OF CRYPTIC PATTERNS:
P.1: BREAKS A SENTENCE AT WORD BOUNDARIES
P.3: BREAKS CONTENT FROM A STRING OF CUES OR A STRING IN MEMORY
P.4: SHORTENS STRINGS BY PRE AND POST TRIMMING
P.5: MATCHES 'S', 'L', OR 'D' INDICATOR
P.6: MATCHES SPECIAL INDICATOR
P.7: MATCHES ASSOCIATED WORD IN MEMORY
P.8: EXTRACTS SENTENCE FROM WITHIN MEMORY QUEUE
P.9: EXTRACTS SENTENCE FROM FRONT OF QUEUE
FUNCTION FOR FINDING INTEGERS...
DEFINE('INTEGER(I)') : (INTOUT)
INTEGER I POS(0) SPAN('0123456789') RPOS(0) : $ (RETURN) F (RETURN)
INTOUT ANCHOR(1)
DEFINE INPUT FILES...
DETACH('INPUT')
INPUT('INPUT','INPUT',80)
INPUT('FILEA','SCRIPTH',80)
DEFINE OUTPUT FILE...
OUTPUT('SAVER','ROUTIT')
PATTERNS AND OTHER MEMORABILIA...
PRE.TRIM = SPAN(' ') I NULL
THANKS = FENCE 'THANK' (SPAN(' ') 'YOU' I 'S')
QUEST.TRIM = RTAB(1) I PHRASE.
'
P.1 = PRE.TRIM BREAK(' ') I WORD.
P.3 = BREAK(' ') I CONTENT.
P.4 = PRE.TRIM REM .. LESS.
P.5 = PRE.TRIM ANY('S/LD') I WORD.
P.6 = PRE.TRIM ANY('ABCDEFGHIJKLMNOPQRSTUVWXYZ') I BRANCH.
P.7 = SPAN(' ') I WORD. SPAN(' ') BREAK(' ') I CONTENT. I REM . STR.
P.8 = (ARNO(BREAK(' ') ' ')) I STR1. (ARNO(NOTANY(' '))
SPAN(' ') I WORD. SPAN(' ') BREAK(' ') I CONTENT. I REM . STR2.
P.9 = (BREAK(' ') I OUTPUT SAVER ' ') I STR1. REM . STR2.
OUTPAT = LEN(50) BREAK(' ') SPAN(' ') I OUTPUT SAVER
SHORTEM. = BREAK(' ') I PHRASE. ANY(' ') I REM TRAILER.
X.REF = PRE.TRIM FENCE 'CF'
BUMP. = PRE.TRIM 'NEWKEY'
PAREN. = PRE.TRIM ' ( ' I BREAK(' ') I CONTENT.
CALL.TO.SNOBOL = PRE.TRIM 'SNOBOL'
FAM1 = 'MOTHER' I 'FATHER' I 'SISTER' I 'BROTHER' I 'DAUGHTER'
FAM2 = 'MOM' I 'DAD' I 'WIFE' I 'CHILDREN' I 'HUSBAND' I 'SON'
FAMILY = (FAM1 I FAM2) I RELATIVE
BELIEF = ('FEEL' I 'THINK' I 'BELIEVE' I 'WISH'). PENSE
HIGH = ('HAPPY' I 'ELATED' I 'GLAD' I 'BETTER' I 'HIGH'). BIEN
MULTI = ('EVERYONE' I 'EVERYBODY' I 'NOBODY' I 'NOONE'). ALLES
ICKY = ('SAD' I 'UNHAPPY' I 'DEPRESSED' I 'SICK'). MALADE
IMPORTANT STRINGS...
INTRODUCTION = "HOW DO YOU DO. WHAT IS YOUR PROBLEM?"
CLUELESS = ".. VERY INTERESTING"
"WILL YOU MIND REPEATING THAT?"
"I DONT SEEM TO UNDERSTAND WHAT YOU ARE SAYING"
"COULD YOU CLARIFY THAT STATEMENT PLEASE?"
RETAIL = ".. MY. YOUR. I. YOU."
WE NOW READ THE SCRIPT AND FORM STRINGS AS FOLLOWS...
FOR EACH KEY WORD 'XXXX' WE FORM THE FOLLOWING VARIABLES:
RPL.XXXX IS A REPLACEMENT WORD. (OPTIONAL)
LEV.XXXX IS A LEVEL NUMBER (IF ABSENT KEY IS IGNORED)
N.XXXX A COUNT OF THE NUMBER OF DECOMPOSITIONS
DEC.I.XXXX IS THE I/TH DECOMPOSITION PATTERN
RULE.I.XXXX IS A STRING OF DECOMPOSITION RULES FOR THE I/TH
DECOMPOSITION. RULES ARE SEPARATED BY '/'.
KEYWORDS. = ""
READ IN SCRIPT...
HIGGINS SCRIPT. = TRIM(FILEA)
IDENT SCRIPT. <END> :$ INTRO
EXTRACT KEY WORD FROM SCRIPT...
SCRIPT. P.1 = :F(HIGGINS)
PLACE KEY WORD ON LIST OF KEY WORDS IF APPROPRIATE...
KEY. = "" / WORD,
KEYWORDS. KEY. = "$LESSON"
KEYWORDS. = KEY. KEYWORDS.
EXTRACT 'S', 'L', OR 'D' AND BRANCH ACCORDINGLY...
LESSON SCRIPT. P.5 = :F(HIGGINS)$($WORD.)
ERR OUTPUT = 'SCRIPT ERROR: ' / WORD. / ' SCRIPT. :$HIGGINS'
SUBSTITUTION RULE - EXTRACT STRING AND STORE...
STORE. = 'RPL' / KEY.
SCRIPT. PRE.TRIM / BREAK(/) . $STORE. / ' = :F(ERR)
$LESSON
LEVEL NUMBER - EXTRACT STRING, CHECK IF NUMBER, AND STORE...
SCRIPT. PAREN. = :F(ERR)
$(<LEV. KEY>) = INTEGER(CONTENT.) CONTENT. :LESSON
DECOMPOSITION - SET UP DECOMPOSITION NUMBER AND PATTERN...
N.N = $(<N' KEY>) + 1
$(<N KEY>) = N.N
SCRIPT. PAREN. = :F(ERR)
CHECK IF SPECIAL RULE IS TO BE STORED AND BRANCH WHERE NECESSARY...
CONTENT. CALL TO SYMBOL = :$(SPECIAL)
$(DEC. N.N KEY) = ARB CONTENT. REM. POST
RULES
STORE. = 'RULE.' N,N KEY.
STORE. = DIFFER(SCRIPT.) SCRIPT. :F(NEW.CARD)
LOOP
STORE. RTAB(1) /* */ :S(HIGGINS)
NEW.CARD ITHOLD = TRIM(FILEA)
IDENT(ITHOLD, 'END') :S(INTRO)
STORE. = 'STORE. ITHOLD' : (LOOP)
+
* THE FOLLOWING ARE SPECIAL SCRIPT-HANDLING STATEMENTS
SPECIAL CONTENT. P.6 = :F(ERR1) S($BRANCH.)
ERR1 OUTPUT = 'SCRIPT ERROR: / BRANCH, / HIGGINS'
A $<DEC. N,N KEY.> = ARB MULTI REM . POST : (RULES)
B $<DEC. N,N KEY.> = ARB 'YOU' ( 'WANT' I 'NEED' ) REM . POST
: (RULES)
C $<DEC. N,N KEY.> = ARB 'YOU ARE' ARB ICKY : (RULES)
E $<DEC. N,N KEY.> = ARB 'YOU ARE' ARB HIGH : (RULES)
F $<DEC. N,N KEY.> = ARB 'YOU' BELIEF 'YOU' REM . POST
: (RULES)
G $<DEC. N,N KEY.> = ARB 'YOU' ( 'CANNOT' I 'CANT' ) REM . POST
: (RULES)
H $<DEC. N,N KEY.> = ARB 'YOU' ARB . POST 'I' : (RULES)
I $<DEC. N,N KEY.> = ARB 'AM' I 'IS' I 'ARE' I 'WAS'
ARB 'LIKE' : (RULES)
J $<DEC. N,N KEY.> = ARB 'YOUR' ARB FAMILY REM . POST : (RULES)
K $<DEC. N,N KEY.> = ARB 'I' ARB . POST 'YOU' : (RULES)
+
* WE NOW HOLD A CONVERSATION. FIRST WE READ A SENTENCE AND
* SEARCH FOR KEY WORDS REPLACING APPROPRIATE ONES
* AND STACKING THE KEYS IN A QUASI-ORDERED LIST (STRING).
+
INTRO
OUTPUT = 'A COPY OF THIS CONVERSATION IS BEING'
OUTPUT = 'MADE FOR FURTHER STUDY...'
OUTPUT = 'TO TERMINATE THIS PROGRAM, TYPE QUIT,'
OUTPUT = 'THANKS, OR THANK YOU...'
OUTPUT = INTRODUCTION
SAVER = INTRODUCTION
+
GET INPUT STRING...
HEAR
PHRASE. = TRIM(INPUT) :F(END)
SAVER =
SAVER = '++INPUT: ' PHRASE.
SAVER =
PHRASE. QUEST.TRIM
PHRASE. = PHRASE. '/'
IDENT(PHRASE., 'QUIT') :S(END)
IDENT(PHRASE., 'REPUN') :S(INTRO)
PHRASE. THANKS :S(NICE.END)
ANCHOR(1)
LOOKBACK =
LOOK.B =
ASS.FLAG =
SHORTEN INPUT STRING WHEN APPROPRIATE...
HEARSELESS PHRASE. SHORTEN.
PHRASE. = PHRASE.
IMAGE. =
REMEMBER CUES FROM PREVIOUS SENTENCE, INCLUDING KEY CUE ONLY IF NECESSARY.
OLD.CUES = DIFFER(OLD.CUES, ) CUES.
(CUE. "/" DIFFER(NEW.CUE. ) / / RTOAB1 ) / / . OLD.CUES
OMIT.CUE
CUE.LEVEL = 0
ANCHOR();
GET WORD.
SPLIT PHRASE. P.1 =
CHECK IF WORD IS A KEYWORD.
KEYWORDS. /
MAKE SUBSTITUTION IF REQUIRED
NEW.WORD = "%RPL. WORD." IMAGE.;
IMAGE. = DIFFER(TRIM(NEW.WORD) IMAGE. NEW.WORD "IS STACK" IMAGE. WORD."
NOTHING ELSE IS DONE IF NO LEVEL NUMBER
STACK NEW.LEVEL = DIFFER(%LEV. WORD.)
CUE.LEVEL = GT(NEW.LEVEL, CUE.LEVEL)
NEW.LEVEL =
CUES. = CUES. /
LOCUE
CUES = CUES.
KEEP
IMAGE. = IMAGE. WORD."
CHECK MEMORY FOR ASSOCIATION:
MEMORY. GT($SIZE(WORD.), 4) P.7
ASS.FLAG = "YES"
MEMORY. P.8 = STR1, STR2. :F(ERR3) S(SPLIT)
ERR3 OUTPUT = "ERROR IN PATTERN P.8:
MEM OUTPUT = "MEMORY:"
OUTPUT = MEMORY.
THIS PART FORMS OUR REPLY TO THE INPUT SENTENCE
REPLY1 IDENTITY(ASS.FLAG) :F(ASOC) S(REPLY)
NEWCUE CONTENT. P.4 = TRIM(LESS.)
REPLY CUES. P.3 =
NEXTCUE CUE. = "." CONTENT.
N.N = 0
N.MAX. = %N. CUE.
ANALYSE N.N = LT(N.N+N.MAX.) N.N + 1
IMAGE. %DEC( N.N CUE. )
$"RULE. N.N CUE. ) P.2 =
CONTENT. POS(0) /
$"RULE. N.N CUE. ) = %"RULE. N.N CUE. ) CONTENT. ":
CONTENT. X.REF =
CONTENT. BUMP. :S(NEWCUE)
CONTENT. BUMP. :S(REPLY)
THE RECOMPOSITION RULES ARE JOINED WITH THE PATTERN AND PUT TO OUTPUT
ANCHOR
BEFORE =
AFTER =
HOLD =
DELETE LEADING AND TRAILING BLANKS FROM CONTENT...
CONTENT. P.4 = TRIM(LESS.)
DECOMPOSITION RULES MUST BE QUOTED...
CONTENT. POS(0) (''''' ''''''), QUOD = :F(ERR2)$<NXT>
ERR2 OUTPUT = 'ERROR IN RECOMPOSITION RULE / CONTENT. ':<HEAR>
DEAL WITH UNQUOTED PARTS OF THE DECOMPOSITION RULE...
NXT CONTENT. BREAK(QUOD) , BEFORE QUOD =
CONTENT. BREAK(''' ''') , HOLD. = DIFFER(HOLD.) :F<OUT>
CONTENT.('''''''''') $$ QUOD RTAB(1) . AFTER *QUOD =
CLEAN HOLD. FOR INDIRECT IF NEEDED...
OUT HOLD. P.4 = TRIM(LESS.)
ARE WE CURRENTLY LOOKING BACK TO A PREVIOUS SENTENCE?
LOOK.B = DIFFER(LOOKBACK) OLD.HOLD
REMEMBER HOLD. STRING IF NEEDED FOR LOOKING BACK LATER..:
OLD.HOLD = DIFFER(HOLD.) $<HOLD. :F<NO.HOLD>
IF LOOKING BACK, RESET VALUE OF $<HOLD ....
$<HOLD. = DIFFER(LOOKBACK) LOOK.B
AN ANSWER ESCAPES...
OUTS. = DIFFER(HOLD.) BEFORE $<HOLD. AFTER :$<PRINT>
NO.HOLD OUTS. = BEFORE AFTER
PRINT OUTS. OUTPUT = :$<PRINT>
OUTPUT = OUTS.
SAVER = OUTS.
SETA ANCHOR(0)
DOES ELIZA WANT TO REMEMBER THIS SENTENCE?
RETAIN IDENT(LOOKBACK) CUE. :F<HEAR>
MEMORY. = LT(SIZE(MEMORY.),200) MEMORY. IMAGE. :-' :<HEAR>
THIS IS WHAT WE DO IF THERE ARE NO KEY WORDS IN THE INPUT
DO WE HAVE MORE OF THE INPUT SENTENCE TO CONSIDER...
NO.CUE PHRASE. = DIFFER(TRAILER.) TRAILER. :$<HEARSELESS>
IF NEEDED, REVIVE CUES FROM PREVIOUS SENTENCE...
CUES. = DIFFER(OLD.CUES) OLD.CUES :F<RECALL>
OLD.CUES =
LOOKBACK = 'YES'
:(REPLY)
LOOK FOR EARLIER TOPIC FROM MEMORY QUEUE...
RECALL MEMORY. P.3 =
OUTS. = '.. EARLIER YOU SAID / CONTENT. ':<PRINT>
MAKE ASSOCIATION WITH EARLIER WORD...
ASSOC OUTS. = 'DOES THAT HAVE ANYTHING TO DO WITH THE FACT /
' THAT / TRIM(CONTENT.) ''':<PRINT>
EVERYTHING HAS FAILED AT THIS POINT AND ELIZA STAMMERS...
ER.AH.UM CLUELESS P.9 = STR2. STR1. (HEAR)
NICE.END OUTPUT = "MY PLEASURE"
SAVER = "MY PLEASURE"
AN EXPLANATION OF THE ORGANIZATION OF SCRIPT FILES:
THE FIRST LINE OF A FILE ENTRY CONTAINS THE KEY WORD
FOLLOWED BY AN 'L' AND ITS LEVEL NUMBER ENCLOSED IN SLASHES
FOLLOWED BY AN 'S' AND ITS SUBSTITUTION STRING ENCLOSED IN
SLASHES; FOLLOWED BY A 'D' AND THE STRING TO BE USED IN THE
DECOMPOSITION PATTERN. (EACH OF THESE PARTS ARE OPTIONAL AND
THE ORDER IN WHICH THEY APPEAR IS NOT IMPORTANT EXCEPT THAT THE
KEY WORD MUST APPEAR FIRST.)
SUBSEQUENT LINES OF SCRIPT MAY CONTAIN ANY NUMBER OF
DECOMPOSITION RULES ENCLOSED IN SINGLE QUOTES (') AND DELIMITED
BY COLONS (:), ENDING IN COLON SLASH (:/). (THIS PART IS ALSO
OPTIONAL.)
WITHIN EACH RULE, THE VARIABLE POST MAY APPEAR UNQUOTED
WHEREVER THE SUBSTRING OF THE INPUT SENTENCE (REMAINING AFTER THE
DECOMPOSITION PATTERN WAS APPLIED) IS TO APPEAR IN THE OUTPUT
SENTENCE.
TWO OTHER METHODS OF SPECIFYING A DECOMPOSITION RULE MAY BE
USED. ONE WAY IS TO SIMPLY USE THE WORD 'NEWKEY' (UNQUOTED)
AND THE ASSOCIATED KEY WORD WILL BE IGNORED. A NEW KEY WORD
WILL BE TAKEN FROM THE CUES, STACK AND SUBSEQUENT ACTION
PERFORMED. THE SECOND WAY IS TO USE THE LETTERS CF (UNQUOTED)
FOLLOWED BY ANOTHER KEY WORD SYNONYMOUS WITH THIS ONE. THUS,
WORDS CAN BE IGNORED EXCEPT FOR THEIR SUBSTITUTIONS;
WORDS CAN EVOKE THE EXACT RESPONSES OF ANOTHER WORD; OR
WORDS CAN TRIGGER ORIGINAL RESPONSES OCCASIONALLY AND BE IGNORED
THE REST OF THE TIME.
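AS AN ILLUSTRATION, HERE IS THE SAVE ENTRY FROM APPENDIX II, WHICH COMBINES THE PARTS JUST DESCRIBED - THE KEY WORD, AN 'L' LEVEL NUMBER, A 'D' DECOMPOSITION STRING, QUOTED RULES, AND THE CF AND NEWKEY DIRECTIVES:

    SAVE L /20/ D /-
    'THE SAVE COMMAND CREATES AN INDIRECT ACCESS PERMANENT FILE. THIS
    COMMAND ALLOWS A USER TO RETAIN A COPY OF THE SPECIFIED WORKING FILE IN THE
    PERMANENT FILE SYSTEM: THE FORM OF THE SAVE COMMAND IS -
    SAVE,LFN=PFN/PW=PASSWRD, CT=N, M=M / CF SPEC: NEWKEY/;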
END
Agile Requirements, Estimation and Planning
-- Iteration Zero --
James W Grenning
james@wingman-sw.com
Presented at the Embedded Systems Conference
San Jose, CA 2012, Boston, MA 2013
Introduction
Agile product development is designed to improve visibility and predictability of schedule performance, as well as overall product quality. A product team transitioning from a phased and plan driven product development model will find it challenging to change to the iterative and incremental development model of Agile. This paper shows readers how to get started with Agile, and prepare for their first development iteration. The techniques described in this paper are useful for getting ready to do your first iteration, hence the name Iteration Zero.
This paper mainly focuses on the planning aspects of Iteration Zero, but touches on other important activities to prepare the team for iteration one and beyond. We start by looking at the input needed by iteration zero and the outputs produced.
The paper describes Product Stories, and how they are used to manage project scope to plan and track the product development effort. You will also learn what makes a good Product Story. We explore a series of techniques a team can use to estimate the backlog of stories. With estimated stories a plan is formed to incrementally develop the product and track their progress, giving the team the information to manage to a successful completion.
Because iteration zero prepares the team for iteration one, I’ll briefly describe the mechanics, planning and tracking of iteration one and beyond. A big part of Agile development is the self-organizing team. So we briefly look at how a team has to make an agreement about work practices in iteration zero and must continually refine their work practice each iteration thereafter.
Although this is written from the perspective of a new development effort, this is applicable to any team transitioning to an incremental and iterative agile planning model.
What is Iteration Zero
Iteration Zero is a focused set of activities that a team does to get ready to begin a series of product development iterations. Each product development iteration delivers some demonstrable part of the system.
Needed documents are created incrementally, as we learn what is needed and how to build it. Each iteration should deliver some functionality. In an embedded development effort, when there is no hardware early in the development cycle, the early demonstrations might be executing test cases in a virtual platform environment, or on prototypes.
In making product development visible, Agile relies on demonstrable progress rather than a document trail. This lets the team and business stakeholders see and adjust the evolving product.
In Iteration Zero the team explores the product ideas, customer needs, development practices, hardware and software architecture. The team breaks down their vision of the features needed and work to be done. They get a common understanding of the goals of the development effort, including the market need, the business needs, the product content and the effort needed to develop the product. They develop an initial plan.
The activity should be focused. It should not drag on. For many development efforts, involving teams of 10 or so people, a focused Iteration Zero should take only a few days or a couple of weeks. Some of the more intense activities might best be done off-site, away from day-to-day distractions.
Some organizations delay getting product development started because they don't know everything yet. Don't let the lack of crystal clarity keep your organization from starting. You can't know everything when you set off to develop a new product or product release, so it is a good idea to avoid spending too much time in the "fuzzy front end"\(^1\). The time, once spent, cannot be recovered. It will be very difficult to catch up later in the development effort.
---
\(^1\) [MCCONNELL]
The Bright Idea
To begin Iteration Zero you need a bright idea, a product to build, or a product to evolve. This is the initial vision of the product. Expect the vision to evolve over the development lifecycle as the development effort and opportunity become clearer.
Critical Dates
In any development effort it is normal to have target or mandated dates. Business runs on dates. Collect those critical dates and bring them to Iteration Zero. Also, know why those dates are critical. Are dates externally imposed like the trade-show that won’t reschedule just because we’re not ready? Are dates tied to manufacturing realities, vendor dates, customer contracts, revenue goals, market windows, competitive threats or those motivational stretch goals? Visibility needs to run both directions. It’s motivating for development to know why the dates are critical. Management needs to know what is possible and where the time is being spent.
Product Needs
With the bright idea, there is always a list of important product needs. Usually these come pre-trimmed so that all development sees are the absolute must-haves. They are all priority one. They are also at a high level of abstraction; they are not too precise. In these imprecise and highly important features lie the detailed features that are needed, along with others that are not needed and everything in between. When the realities of development set in, we will see that some feature aspects are more valuable than others; some are needed before others; and some won't ever be developed at all.
Technology and Architecture Goals
Maybe the new product has some specific technology drivers, like compatibility with industry standards, downloadable code through the cloud, 3-D graphics, or transition to an embedded Linux.
Who Participates in Iteration Zero?
Iteration Zero is a team activity. You’d like the whole team to participate. Some situations will make that impractical, like a very large team, multiple teams building one product, or a distributed team.
Generally you’d like to have these people present:
- People with a vision of the product being developed
- People that understand why features are needed and how they will be used
- People that will build the system
- People that will test the system
- People that fund the system
- Technology and Domain Experts
The attendees should include everyone involved in developing, specifying, and testing the product. You can probably see that the intention is to be inclusive. In a game company I’ve worked with, the participants included game designers, producers, software engineers, sound engineers, artists, testers, systems engineers and project managers.
Outputs From Iteration Zero
At the end of Iteration Zero the team is ready to start iterating. Knowledge won’t be perfect, it never is. After Iteration Zero we have enough to get started on some of the high priority work. Avoid wasting time at the fuzzy front end. With iteration 1, the team can start working on what is clear and important, which provides time to figure out the things we don’t understand. Let’s look at some of the outputs from Iteration Zero.
Product Vision
The product vision should provide initial answers to questions like these:
- Why build it?
- What is it?
- What are the critical dates?
- What problem or need does the product meet?
- What are the key business drivers?
- What are the target markets?
- Who are our external, or internal, customers and suppliers?
Record the product vision on a Big Visible Chart (BVC). It does not have to be pretty, just visible. The team needs the vision to make tradeoffs during product development.
Expect the product vision to change as the development effort evolves. Some of your initial assumptions will be right, others wrong, and others will just evolve. As the vision changes, update the BVC. Make sure the team knows when the vision changes. It is motivating to be working toward a valuable end, and a shared vision contributes to that. It is demotivating to work towards a goal and discover that it stopped being critical a month ago.
Architectural Vision
The architectural vision is not a formal design document. It is the initial partitioning of the system from a hardware and software perspective. The goal is not to figure it all out up front, but to make appropriate provisional decisions about the partitioning so that the learning can begin and the ideas can be tried. The depth and the effort needed for the vision will vary for different teams and products. A distributed team will need more of the vision documented to communicate to remote members. A collocated team can keep the vision on white boards while it evolves, and commit it to a document as they need.
Product Team
It is important to have the right people on the team to avoid cross-team handovers. They cause delay. It’s helpful to think of the team as consisting of two major roles, product owner and developer. I’ll sometimes refer to the product owner as the customer.\(^2\)

\(^2\) Product Owner comes from Scrum; Customer comes from Extreme Programming.

The development team is made up of the people that design and build the product.

The product owner could be a single person, though usually in product development there is a product owner team, led by the product owner. The customer team specifies the product, and is the customer of the development team. In product development this is usually an internal customer, rather than an end user. The voice of the end users must be represented by the customer team. Every development effort is different, but a customer team could be made up of these people:
- Product manager
- Systems engineer
- Test engineer
- Tester
- Business or Product analyst
- Marketing specialist
The customer team may change over time. Their duty is to speak with a unified voice to the development team. Having multiple people on the team does not mean that developers hear multiple voices. Sometimes, when a product being developed is a platform or will serve different parts of the market, customer team members have different goals and priorities. The customer team should work out those differing product and business goals before engaging the developers. I am not suggesting slipping back into making all product decisions before engaging development and throwing them over the wall, keeping development out of business decisions. I am suggesting that the customer team should be respectful of development’s time and work out what they can independently. Development will still have their say when the customer team shares their current view of features, priorities and goals.
Sometimes the customer/developer relationship is not so obvious. For example, when hardware is being developed concurrently with software, the hardware developer may act as a customer to the software developers during board bring-up. At other times the software developer might be the customer of a hardware engineer.
Having test people as part of the customer team is critical because many detailed specifications are written as test cases. The customer defines done, and done means the story passes its tests.
Avoid sharing people across teams. Specialties might make this inevitable, but if you do share, the shared person should work to transfer knowledge so the team can become more self-sufficient.
Product Stakeholders
It is important to keep communications open with the stakeholders of the product. Upper management is investing a lot in the product development, so it is critical to make it easy for them to follow the progress. Be proactive. You will see later that having visible planning artifacts, and regularly scheduled planning meetings facilitate open communications with product stakeholders.
Product Story Backlog
The product backlog is made up of all the work that is needed to build the product, and then some. The backlog is made up of Product Stories. Most materials on agile development talk about User Stories as the unit of work. Because in product development many stories are not really visible to the end user, I prefer to call them Product Stories, or simply Stories.
Product stories come in a variety of sizes, though they need to be fairly small to be chosen for the current iteration. Stories start out big, what Mike Cohn calls epics [COHN1].
Epics might be like the headings in a requirements document, or feature specifications. The following diagram illustrates how the epics are broken down into stories.
Each rectangle represents a story, usually written on a note card. Use note cards in story decomposition and planning. They are easy to manipulate, and somehow make the work more tangible. For distributed teams and long-term memory, some teams put their stories into a spreadsheet or some sort of agile backlog management software. I recommend that you first learn the techniques with note cards and later find a tool if one is needed.
You are probably used to a work breakdown structure. A work breakdown usually focuses on the tasks to be performed, often leaving big integration activities for later in the development timeline. The story approach is a feature breakdown, where the focus is on delivering pieces of what the product must do. Integration happens much more regularly.
Each story has a value and a cost, represented on the cards in the above diagram as !/$, and read as bang for the buck. The sizes of ! (bang) and $ (buck) show that some stories are more valuable than others, while independently stories have differing relative costs. We know we’ve broken the epics down to the right level when we start to see work that we won’t do, work whose bang is not worth the buck. The fine-grained backlog allows the product owner to select the most valuable parts of the epic features for development.
You will see later that we quantify the cost of stories, but let the value be a judgement call without putting numbers on the cards. Some teams will also put values on the cards; consider that advanced agile.
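To make the bang-for-the-buck idea concrete, here is a minimal sketch, in TypeScript, of ordering a backlog by value per point. The Story type, the numeric value scale and the sample data are all invented for illustration; as noted above, many teams keep value as a judgement call rather than a number on the card.

```typescript
// Hypothetical story card: cost in story points (the buck) and a
// made-up 1..5 relative business value (the bang).
interface Story {
  name: string;
  cost: number;
  value: number;
}

const backlog: Story[] = [
  { name: "save and restore product configuration", cost: 8, value: 5 },
  { name: "turn on the flash memory device's LED", cost: 1, value: 2 },
  { name: "port embedded database", cost: 13, value: 2 },
];

// Order by bang for the buck (!/$): highest value per point first.
const prioritized = [...backlog].sort(
  (a, b) => b.value / b.cost - a.value / a.cost
);

for (const s of prioritized) {
  console.log(`${(s.value / s.cost).toFixed(2)}  ${s.name}`);
}
```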
MuSCoW Analysis
When the customer team understands the bang for the buck (!/$), well-informed decisions on story priority can be made. Anyone who has been in development for a while knows that we always want to put more into a product than we have time or money for. When we look at requirements only from the high level, all the requirements are priority one. When we start to break features down into smaller pieces, we start to see stratification in priorities. Let’s look at a technique called MuSCoW analysis. [MUSCOW]
MuSCoW analysis is a very useful technique for prioritizing work. Stories can be classified into these categories:
- Mu - Must haves
- S - Should haves
- Co - Could haves
- W - Won’t haves (very soon anyway)
When choosing stories for an iteration, or planning the next release, MuSCoW comes in handy. Stories let us manage project scope at a fine grain, as the sketch below illustrates. In order to apply MuSCoW, we need well-structured stories. Let’s look at how we get them.
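As a sketch of how MuSCoW can feed iteration planning, the following TypeScript (hypothetical types and a greedy selection rule of my own, not a prescribed algorithm) fills an iteration up to the team’s velocity, taking Must haves before Should haves before Could haves, and leaving Won’t haves in the backlog. In practice the product owner makes the call; the code only illustrates the budget arithmetic.

```typescript
type Muscow = "Must" | "Should" | "Could" | "Wont";

interface Story {
  name: string;
  points: number;
  priority: Muscow;
}

// Greedily fill one iteration: higher MuSCoW categories first,
// never exceeding the velocity budget.
function planIteration(backlog: Story[], velocity: number): Story[] {
  const rank: Record<Muscow, number> = { Must: 0, Should: 1, Could: 2, Wont: 3 };
  const chosen: Story[] = [];
  let budget = velocity;
  const byPriority = [...backlog].sort(
    (a, b) => rank[a.priority] - rank[b.priority]
  );
  for (const s of byPriority) {
    if (s.priority !== "Wont" && s.points <= budget) {
      chosen.push(s);
      budget -= s.points;
    }
  }
  return chosen;
}
```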
INVEST in Stories
Mike Cohn describes the acronym INVEST as a reminder of six important attributes of a story [COHN1]:
- I - Independent
- N - Negotiable
- V - Valuable
- E - Estimable
- S - Small
- T - Testable
It is difficult at first to get features into INVEST-sized stories. The best stories are not centered on one layer of the system, but rather cut across layers of the system. This is counter to how most embedded systems software is developed, where individual engineers own specific layers or components. We want layers and components in the system, but we don’t want our team members to specialize in just one single area. This apparent optimization causes bottlenecks, delays in integration, and a less flexible and knowledgeable team.
Stories cut across layers and components so that integration happens earlier and more often. I am not saying we’ll never slice a story at a layer; it’s just not the first choice.
When we’re trying to get stories that adhere to INVEST, we sometimes need to apply these techniques: Split, Stub, Spike, and Time-box. [GRENNING1]
- Split - cut out some functionality, forming two or more stories. Split the known from the unknown. Split at a component boundary when you must.
- Stub - make a stub implementation of a dependency so that the lack of the dependent item does not block the rest of the story.
- Spike - a spike is an experiment. Some stories are large because we don’t know enough to estimate them. Use a spike story to go learn something so that the story can be estimated or split.
- Time-box - leave the story as is, but agree to limit the amount of time spent on it. This is usually applied to spikes.
Now let’s look at each of the INVEST attributes.
Independent
Stories should be independent. The order of development should not matter. Like many objectives, this cannot be met all the time, though you will probably be surprised at how many stories can be kept independent. Another interpretation of ‘I’ is Immediately actionable: it’s something we can do now.
Negotiable
Stories are negotiable: we can negotiate what is in and what is out of the story. We usually negotiate stories when they represent too much work to take on at once. When a story is too big, split it. If it is too small, aggregate it with another. If part of a story is known but part is unknown, split the known from the unknown. Stories can be split by the test cases they have to support. Consider splitting the happy path of a given feature from the error paths.
Valuable
Ideally stories should deliver immediate value to the customer. This may be unrealistic for the stories of an embedded software product, where much of the product must come together to show anything. So we often have to settle for the second interpretation of ‘V’, Visible. Sometimes visible is all you can get.
In embedded development, there are often technology and architecture goals. The connection between the goal and the user is not always direct. The technology and architecture goals often influence the viability of the product. Sometimes the goals can be realized just by having the architectural vision to guide development. I’d prefer to not have a story like “port embedded database” and prefer to have a story about “save and restore product configuration” that pulls in the embedded database porting work.
When the infrastructure work is just too big, find a way to express it as a series of demonstrations rather than having no visible progress for several weeks or months. If you had a system that needed a flash file system and there was not even a flash driver yet, one of the early visible stories to demonstrate progress would be to turn on the flash memory device’s LED.
One of my clients is building a system with a robotic arm. We wrote stories about some of the specific movements that need to be made like, open the grabber, close the grabber, move arm up, or move arm to home position. You can see that these are not valuable in themselves, but show visible progress toward the value.
Estimable
Stories are the unit of work for agile product development. Stories support planning and tracking, and consequently must be estimable. Stories that are too big are hard to estimate for a number of reasons: they are often vague and contain too much; the developers may not have the technical knowledge to make an estimate; the product owner may not know exactly what they want. These are all natural and normal when a team is inventing something new, or adding a major new capability to a system.
People are not so good at estimating big complex features and systems, which brings us to the next attribute of INVEST.
Small
Stories should be small, small enough that many fit in an iteration. Consequently, multiple stories should be completed in each iteration. People are better at estimating smaller pieces of work. Let's compare bigger stories to smaller stories using the next two diagrams.
When stories are big, there is more risk and less feedback. If the timeline represents one month, we only get a really good data point once a month on the feature progress. Smaller stories let developers get feedback from customers so the right features get developed. Developers, have you ever spent a month only to find you delivered what was asked for but it was not what was needed? Small stories also provide more regular management information to help determine if development is on track.
Another thing happens when we break stories into smaller pieces as this diagram shows.
After the first few stories are delivered, we might discover the final story is not needed, or something else is needed more. A big part of agile development is finding the work you don’t have to do.
Making stories smaller is a challenge for embedded development teams. Use split, stub, spike and time-boxing to cut the stories down to size.
Testable
For a story to be done, it has to pass its tests. That means that stories must be testable. Stories are tokens for the work that has to be done, and they are usually vague and ambiguous. They are the name of the functionality, a promise for a conversation. The tests provide the detailed requirements just in time for the development team. A helpful way to clarify a story is to ask the product owner and associated test people, what tests will demonstrate that the story is done.
Stories are the fine grained work that makes up the product backlog. Let’s see how stories are used in agile planning and tracking.
Agile Planning and Tracking
Barry Boehm, a pioneer in software development, drew this diagram to show the uncertainty involved in development efforts. Whether or not the axes are exactly right, the graph shows that early in the development effort there is great uncertainty. It is not until the product is delivered that the actual scope, cost and date are known with certainty.
What we’d like is an estimation and planning mechanism that accepts this law of software physics and helps us to close the gap of uncertainty.
Agile estimation and planning is designed to help the estimate converge more rapidly with reality by having feedback in the system. It’s not that agile planning methods ignore dates and the developers have a take-it-or-leave-it attitude. Dates are critical. We have to manage to the dates while being realistic about what can be achieved by the people doing the work. A release plan is one of the tools. It can be used to manage to a specific delivery date, or to deliver specific content. Most often a plan is a combination of the two.
Release Plan
Stories are laid out in a release plan. This diagram represents the product backlog with specific stories in each iteration. Each iteration is a set of stories, viewed edge on with each story written on a notecard. The boxes represent larger epic stories that have not yet been broken down.
Notice that the first few iterations are broken down into small stories, ready to be developed. Generally, the stories in the next few iterations should be more detailed, as the work is close at hand. Stories later in the plan can be bigger and more ambiguous. This is just like planning in your daily life. You probably know in great detail what’s for dinner tonight, and only have a vague idea of what you’ll prepare over the next two weeks. Sometimes you will break down the work for a more detailed plan, but often deciding too soon has little advantage.
One key idea is that the system is releasable at any iteration boundary. In a concurrent hardware/software development effort, this probably cannot be true in the early iterations, but can be true in the later ones. So a limited-functionality prototype can be scheduled for the end of iteration 5, while a field-ready product is scheduled for the end of iteration 10. With the give and take from one iteration to the next, and the big ambiguous epics out in the second half of the plan, the exact content of each release is not predictable, but we can use MuSCoW to have the most interesting releases we can on those dates.
How far to break down the epics is a judgement call. It is important to have a few iterations of work that is ready and important. If your backlog does not have ample small stories, it is likely that the stories with the most bang for the buck are not being worked on.
Break down the epic when you want a better view of the work to come, or reduce schedule risk. Break them down if there is a high risk or some of the epic content is critical to the product success. Realize that making the plan more detailed will take effort away from working on the product. The increased planning effort might not be worth delaying development.
Velocity
Velocity is the measure of progress made to deliver the product. Initially velocity is estimated; later it is measured. This graph represents a team's velocity as measured over six iterations.

Velocity is measured in story points per iteration. We’ll talk more about story points and estimation in the next section. Story points measure the effort needed to complete a story. In the above chart the team initially had a velocity of about 14, and as of iteration 6 the team is consistently getting 30 to 35 points completed each iteration.
Burn-down
If every story in the backlog has a story point estimate, we can create a burn-down chart, like this.
With a reasonably consistent velocity, the burn down chart can be used to see when all the identified stories will be complete. Assuming the goal is to release after the 10th iteration, this chart shows the usual situation that there is more work to do than there is time.
Dates are critical to business, so it is natural to manage a development effort to dates. We always want to put more into the product, and usually there is not enough time. What the burn-down chart provides is an early warning of variation between desired scope and date and the most likely scope and date.
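The early-warning arithmetic behind a burn-down chart is simple enough to show in a few lines. This TypeScript sketch (the numbers are invented to match the shape of the situation described above) projects how many iterations the remaining backlog will take at the last measured velocity, Yesterday's Weather style, and compares that to the target.

```typescript
// Story points remaining at the end of each completed iteration.
const remaining = [300, 286, 270, 240, 210, 180];
const targetIterations = 10;

// Yesterday's Weather: use the most recent measured velocity.
const n = remaining.length;
const velocity = remaining[n - 2] - remaining[n - 1]; // 30 points/iteration

const iterationsDone = n - 1;                                  // 5
const iterationsLeft = Math.ceil(remaining[n - 1] / velocity); // 6
const projected = iterationsDone + iterationsLeft;             // 11

if (projected > targetIterations) {
  console.log(`Early warning: projected ${projected} iterations against a ` +
    `target of ${targetIterations}. Add people, move the date, or trim scope.`);
}
```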
Early warning means you have options
With this early warning, teams have options, unlike the 11th-hour “we're not going to make it” moments, where there are no options but to delay or ship poor quality. Early warning gives time to adjust development.
Add people
Fred Brooks said that adding people to a late project makes it later. [BROOKS] This is because the newly added people take time away from those already working to deliver the product. There is an initial productivity loss.
If you add people early enough, it is much more likely that the team's short-term velocity hit will be more than made up during the remaining part of the timeline.
Change the date
Some dates can’t be moved. An industry trade show is not going to change its date just because your product is not ready. But not all dates are immovable. Some dates are arbitrary; others may be unpleasant to change, but can be changed when the evidence of progress and likely outcome is compelling. Fine-grained scope control with stories, velocity and burn-down gives that kind of evidence.
Adjust the scope
Developing iteratively, using stories to drive the plan, allows a team to do the most critical features first. The most valuable stories can be chosen from the backlog. Priorities can be set using MuSCoW analysis. The trade show can be attended with the most important demonstrable features ready on the date.
In a traditional up-front plan and design approach, valuable time may be spent on features that get cut to meet a date. If the stories drive an incremental delivery plan, along with sound incremental development engineering practices, effort is used only for the features that are delivered. The up-front work for features that would have been cut in a traditional development effort is not done; the effort is instead spent on features that are delivered.
Estimation
Estimation is educated guesswork. You are inventing something, so you cannot expect perfect future vision. Beginning a product development effort by fixing the scope, the people, and the exact delivery date is close to insanity, especially if you keep doing it and keep getting the same high-stress result.
Ask an engineer for an estimate for a new feature. Promise that it will only be used for budgetary and planning purposes. Assure him that he won’t be held to it. You’ll see arms crossed and an effort to back away. People don’t like giving time estimates because even with all the assurances, the number they give too often goes right into the plan and very soon it is viewed as a commitment.
We want the people doing the work to estimate it, and we want estimation to accept that it is educated guesswork.
Problems with traditional estimation efforts:
- People reluctantly give time estimates.
- When the estimator is not the person doing the work, the estimate will probably be wrong.
- When the estimator is not the person doing the work, the person that has to do the work will not own the estimate.
- Estimates are viewed as commitments.
Agile estimation and planning are designed to avoid some of these problems. Mike Cohn, in Agile Estimating and Planning, describes these estimation steps [COHN2]:
- Estimate the relative sizes of each story (story points)
- Estimate the velocity for the team (story points/iteration)
- Measure the actual velocity and feed that back into the plan.
One of the key points here is that estimates are not made in the units of time we are used to, like developer-hours or developer-days. Estimates are made in story points, a unit-less number. Typically the stories requiring the least effort, all about the same effort as each other, are assigned the story point value of 1. The rest of the stories are estimated as integer multiples of those least-effort stories. For example, stories estimated at eight points are expected to take eight times as long to develop as stories with a one-point estimate. Stories are small enough when you can envision numerous stories being completed in an iteration.
Story points work because:
- The pressure of time estimates is removed.
- People are good at relative estimates for smaller items.
- It does not matter who will do the work; the estimates are not in calendar time, so differences in personal velocity do not affect the story point total.
- Developers own the estimates.
- They can be used, along with velocity, to create and evolve a realistic plan.
- With the team’s run rate as feedback, it is a self-correcting system.
Warnings about velocity misuse:
- Don’t create incentives for velocity goals.
- Don’t make stretch goals.
- Don’t measure individual velocity.
- Don’t compare the velocity from one team to another.
With each story having an estimate, and the team an estimated velocity, a release plan can be formed. The product owner team uses their judgement, their sense of the relative value of the stories, the date goals of the team, and the cost of each story to lay out a release plan. Developers sometimes suggest pulling some stories forward to reduce risk and manage dependencies (though often dependencies can be managed through stubbing).
Traditional development plans, where much effort is placed on nailing down time estimates, take much longer to create than a story-point-based plan. A traditional plan appears to be more precise, but it is still guesswork. In an agile development effort, depending upon scope, a plan can be put together very quickly so that work can begin, and the plan revised as we learn by doing. I’ve coached many teams through initial story creation and estimation where a good first-draft plan is completed in a couple of days. In one case the product development was also planned by traditional means, and we came up with the same answer. The traditional plan took many weeks of effort to develop. (I admit that the agile plan converging with the traditional one could in part be due to the ground work done on the traditional plan, but the team was still surprised to arrive at a similar estimate through such a different means.)
With a traditional plan, we struggle to be precisely right, but usually end up precisely wrong. In agile we prefer generally correct to precisely wrong. We can get to generally correct more quickly too, and then refine as we go.
We’ve talked about estimates long enough. So how do we get them? Let’s see.
Planning Poker
Planning Poker is the most popular estimation technique used by agile teams. It was invented to solve a deadlocked planning meeting. In a conference room in American Fork, Utah, in 2002, we had a backlog to estimate. I was the coach. The customer read a story. Two senior engineers discussed the impact of the story on the system. Reluctantly, an estimate was tossed out on the table. They went back and forth for quite a while. Everyone else in the room was drifting off, definitely not engaged. The discussion oscillated from one potential solution to another, avoiding putting a number on the card. When the discussion finally ended, the estimate had not really changed over all that discussion: 20 minutes wasted. A few more stories went through the same pattern. With 25 more stories to estimate, how could we get this meeting moving? We took a break, and when we returned I had everyone pick up a note card and listen to the next story. Next they wrote their estimates on their cards and placed them face down on the table. Then all revealed their estimates simultaneously. We converged sometimes, but not always. There was discussion when we did not agree. After a while each player had a hand of cards; it looked like a poker game. Planning Poker was born. We got through the stories with time to spare. [GRENNING2]
The problem with that meeting was that only two people were engaged, and even when they agreed they chose to talk about it, and talk about it. Everyone else nodded and slept. It’s the team’s estimate, and the whole team needs to be engaged.
The mechanics are easy. Each developer has a set of planning poker cards with a sparse set of numbers. Mike Cohn, in Agile Estimating and Planning [COHN2], suggests using a modified Fibonacci sequence (1, 2, 3, 5, 8, 13, 21, 40, and so on). In my original paper I suggested a sparse sequence where, as the numbers get bigger, the gaps get larger. I wanted to avoid arguing 10 vs. 11. Remember, we’re going for generally right, not precisely wrong. I prefer a set of numbers that are easy to add up for quick estimates (1, 2, 3, 5, 8, 10, 15, 20, 30, 50, and so on), as the snippet below illustrates.
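A small illustration of why a sparse deck avoids splitting hairs: any raw gut-feel estimate snaps to the nearest card, so arguing 10 versus 11 is not even expressible. The deck values come from the text; the rounding helper is mine.

```typescript
// The sparse, easy-to-add deck suggested above.
const deck = [1, 2, 3, 5, 8, 10, 15, 20, 30, 50];

// Snap a raw estimate to the nearest card in the deck.
function nearestCard(estimate: number): number {
  return deck.reduce((best, card) =>
    Math.abs(card - estimate) < Math.abs(best - estimate) ? card : best
  );
}

console.log(nearestCard(11)); // 10 - the 10 vs. 11 argument is moot
console.log(nearestCard(26)); // 30
```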
The product owner reads a story. Developers ask questions about the story if it is not clear, or to determine what is included and what is not. A good question to ask is “How will it be tested?” Then all developers, playing their cards close to their chests, choose a card and place it face down on the table. When all players are down, they roll over their cards. If they all have the same estimate, the estimate is written on the story and the next story is read. It does not matter if each developer has a different implementation in mind; if they agree on the effort, the estimate is agreed.
When developers do not agree, the outliers discuss their estimates. The low outlier describes why the story is so easy; the high outlier describes why the story is so hard. Maybe the product owner will chime in and clarify what is included in the story. The whole team plays again. Usually estimates converge quickly. If they don’t, pull the card out to discuss later, or average the estimates, or take the high or the low. Whatever the team decides is OK.
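A round of planning poker reduces to a reveal-and-compare step. The sketch below (hypothetical names and a helper of my own) checks a set of revealed estimates for consensus and, failing that, names the low and high outliers who should speak first.

```typescript
// Revealed planning poker cards for one story, by developer name.
const reveal: Record<string, number> = { Ann: 5, Raj: 5, Lee: 13, Mia: 5 };

const values = Object.values(reveal);
const consensus = values.every((v) => v === values[0]);

if (consensus) {
  console.log(`Consensus: write ${values[0]} on the story card.`);
} else {
  const entries = Object.entries(reveal);
  const low = entries.reduce((a, b) => (b[1] < a[1] ? b : a));
  const high = entries.reduce((a, b) => (b[1] > a[1] ? b : a));
  console.log(`${low[0]} (low, ${low[1]}) and ${high[0]} (high, ${high[1]}) ` +
    `explain their estimates, then the team plays again.`);
}
```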
Planning poker is not always the right tool for the job. It goes much more smoothly when a team has a set of baseline story estimates, so that they have a feel for the size of a story point. Planning poker is not the best tool for estimating a large backlog; the Planning Poker Party is the right tool for that.
Planning Poker Party
The Planning Poker Party is designed for estimating a large batch of stories. Typically an Iteration Zero backlog has 100-200 stories. In my workshops we can typically create an initial estimate in half a day. With a large batch of stories we go through a sequence of planning steps, described in the following subsections. It’s best to do this activity with the whole team around a large table. There is a more detailed description on my blog [GRENNING3].
High-low Showdown
In high-low showdown we’re trying to reduce the number of stories in play for each round of the next game. Lay out five cards on the table, marked like this:
- Low
- Medium
- Hard
- More info
- You must be kidding!
The markings represent the relative effort for the stories placed under that heading. Before play begins, the product owner will have gone through the backlog with the team, so everyone is familiar with the stories. The dealer reads a story; some developer (or developers) offers which pile to put it into based on a guess of the relative effort. This should be fast. Don’t waste time discussing which pile a specific story lands in. You will get some wrong, but that’s OK; it will be evident in the next activity. The real objective is to end up with about one third of the cards in each of the high, medium and low stacks. Of course there will be some on the “more info” and “you must be kidding” stacks. Avoid long discussion; just get the cards into a stack that is close enough. We refine estimates in the next activity.
If you have fewer than 75 stories, you can skip high-low showdown and go right to deal and slide.
Deal and Slide
Start deal and slide with the low-effort pile of stories (or the whole pile if you skipped high-low showdown). Spread them out on the table. Depending on the size of your backlog, you might need a big table before you are done. I prefer a pool table.
Developers, in silence, start sliding the story cards around the table, putting the easiest stories to the left and the hardest to the right, forming columns of similar difficulty. Anyone can slide any card any number of times, but if a card won’t settle down, a Nervous Nellie, it should be taken out of play. Once the cards that can settle down do settle down, talk about the Nervous Nellie cards and see if you can settle them. If you can’t, it is a sign that more info is needed, or someone is kidding.
Repeat the process with the next stack of cards. Don’t worry if you find some lows in the mediums, mediums in the highs or vice versa. It will all work out. The high-medium-low sorting was just a way to avoid having 200 cards on the table at the start of the game.
With all the low, medium, and high stories laid out, it is probably not a bad idea to go back and look over the columns and decide if the stories are placed correctly. Next, the columns need headings.
Planning Poker for Groups
In this round, label the headings of the columns with their relative estimates using a planning-poker-like approach. The columns to the left should have small numbers, and to the right, larger numbers. You might find that you join some adjacent columns to avoid splitting hairs over some of the estimates. Remember: generally right, not precisely wrong. If your product owner wants a ballpark on the “more info” and “you must be kidding” stories, lay them out and put the appropriately large estimates on them that they deserve.
Once the cards are down and columns have estimates, write the estimate on each card.
Developer Guts
In developer guts, developers estimate their velocity. The product owner is supposed to choose the stories for each iteration, but in developer guts I let the developers pretend that they get to choose the content of the first couple of iterations. Here’s how it goes:
Developers choose a set of stories that they think they can complete in a two-week period (assuming a two-week iteration). Then they choose another set of stories. Add up the numbers. Are the sums similar? Discuss, repeat, and somehow show some guts and choose an estimated velocity.
Once you have completed a few iterations, you will stop playing Developer Guts and use the measured velocity (the actual number of points completed in the last iteration). The practice of using the last velocity as the velocity for the coming iterations is called Yesterday’s Weather [BECK], named for a simple but effective forecasting technique: the most likely weather today is the same as yesterday’s.
Customer Guts
All the stories are estimated; the developers have made a guess at their velocity; now the customer (product owner team) must choose a series of iterations. The customer might accept the first two iterations that the developers put together during developer guts, but it is the customer’s call.
The next few iterations should be largely made up of smaller stories, usually ones with estimates in the single digits. These are well understood, or at least focused, bits of work. Bigger stories should be broken into smaller ones no later than the start of an iteration. When a big story is not fully delivered, no value is delivered to the customer, and no points are added to the velocity. Smaller stories improve both of those situations.
Replanning
Locking into a plan means that all the future things you learn cannot be incorporated into the plan. Agile plans are alive, so plan to replan. Times to replan:
- After a few iterations
- When the velocity widely varies for no good reason
- When new stories are added to the backlog
- When you know the current plan is not right
Get Ready to Start Iterating
Although the product owner can, in an emergency, change the priorities of the team, it’s best not to change an iteration in progress. Valuable work in progress and context are lost. In the non-agile approach, a new field issue or urgent change would have to sit around for weeks or months so as not to interrupt the big chunks of work in process. With a two-week iteration, an urgent issue will on average only have to wait one week until it can move to the head of the priority list.
If an iteration has to be changed, estimate the emergency work and remove an equal amount of work from the iteration. There are other strategies for dealing with regular but reactive maintenance work, like having one developer each iteration be the lead for new emergencies, or keeping an unspecified support story in each iteration to reserve some bandwidth.
Iteration Planning Meeting
Each iteration there is a planning meeting that wraps up the previous iteration and starts the next. Some teams break this into two meetings. The product owner team should have decided on the stories for the iteration before the meeting, so the customer team can speak to development with one voice. The meeting should have a regular time slot and be well attended. Once you get good at these meetings, they should take only a couple of hours.
Here is a short list of activities to wrap up each iteration:
- Demonstrate the completed work
- Record the team velocity
- Update the burn down chart
- Do an iteration retrospective for the prior iteration
- What worked? What were the problems? What should we do differently?
Here are the activities to plan the next iteration:
- Product owner presents the stories
- Developers double check estimates
- Some stories might get split
- Developers discuss architectural impact
- Developers choose which stories they will be responsible for
- Update the iteration race track
Iteration Race Track
To monitor the work in progress we put up an iteration race track in a visible area, preferably where we also do the daily standup meeting. A cork board, a white board, or a cubicle wall works fine as a race track. Initially it looks like this:
As stories are implemented, the race track will show the state of the work in progress.
If this information radiator looked like this at the iteration midpoint, the team should be concerned. Half the time is used, but half the story points are not accepted, or even claimed to be done by the developers.
Team Agreement on Working Practices and Learning Goals
Teams starting an agile development effort should also form an agreement on the practices and standards they will follow. Agile is about working in self-managing teams and making the process your own.
Many teams draw from the planning practices of Scrum and the engineering practices of Extreme Programming. Some of the choices are mechanical:
- Daily standup meeting time
- Iteration start day of the week
- Iteration length
- Where the team will meet
Some other practices and decisions are not so easy to make:
- How to apply automated testing to legacy code
- Adopting Test-Driven Development
- Setting up a continuous integration server
- Coding standards
- Pair programming, code reviews, or both
- How to automate acceptance tests
- Making a shared workspace and/or team room
- Working with other teams or remote developers
Other Iteration Zero Activities
In this paper I have focused on the planning aspects of iteration zero. Teams often use iteration zero to get tooling and training in place so that iteration one can focus on building a slice of the product. Some of the activities are:
- Training in Agile Development and Test-Driven Development
- Setting up tools for Test-Driven Development
- Setting up tools for story testing (acceptance testing)
- Setting up a continuous integration server (Hudson or CruiseControl, for example)
- Setting up reporting conventions and tools
Summary
Agile planning is adaptive planning. The plan is centered on what is important to the business and the end user of the product. All plans are guesses and they have to be continually adjusted and evaluated. Making the work visible to non-engineers as stories and demonstrations lets stakeholders see the progress in things they understand. The backlog is made up of stories. The progress is measured in velocity and visualized in the burn-down chart. These are simple but powerful tools.
Development organizations always want more in their systems than time or effort will allow, so it is natural to have to make tradeoffs to meet date or feature-content goals. The difference with an adaptive agile plan is that it provides early warning when the plan execution deviates from the goal, and the plan is easy to demonstrate to non-developers. Early warning usually means that there are more options. It is a significant management advantage to have more reliable plans, and plans that give early warning of problems. It is valuable for product owners to decide which feature slices to add first, rather than waiting for infrastructure to be built and only seeing visible progress near the end of the development lifecycle, when there is little time to react.
A pilot project at a recent client had a track record of typically being three months late, usually only discovering the schedule gap very late in the development cycle. On their first agile project, the stories gave them credible evidence of the scope of the product and its duration. They worked a few iterations to establish a velocity. The news was not good. They took options to the product owner: we can deliver on time with these features, but these other features cannot be done by the deadline. The scope in jeopardy was laid out in iterations. After a little MuSCoW analysis, the product owner settled on adding two iterations and adjusting the content. The business could adjust to the new plan because there was ample time, and flexibility in the organization.
Bibliography
[MCCONNELL] McConnell, Steve, Rapid Development
[BROOKS] Brooks, Fred, The Mythical Man-Month
[BECK] Beck, Kent, Extreme Programming Explained
[COHN1] Cohn, Mike, User Stories Applied
[COHN2] Cohn, Mike, Agile Estimating and Planning
[GRENNING1] Grenning, James, Planning Poker, http://www.renaissancesoftware.net/blog/archives/48
[GRENNING2] Grenning, James, Planning Poker, http://www.renaissancesoftware.net/papers/44-planing-poker.html
[GRENNING3] Grenning, James, Planning Poker Party, http://www.renaissancesoftware.net/blog/archives/36
MLitB: Machine Learning in the Browser
Edward Meeds, Remco Hendriks, Said Al Faraby, Magiel Bruntink and Max Welling
Informatics Institute, University of Amsterdam, Amsterdam, The Netherlands
Published in PeerJ Computer Science, DOI: 10.7717/peerj-cs.11
ABSTRACT
With few exceptions, the field of Machine Learning (ML) research has largely ignored the browser as a computational engine. Beyond an educational resource for ML, the browser has vast potential to not only improve the state-of-the-art in ML research, but also, inexpensively and on a massive scale, to bring sophisticated ML learning and prediction to the public at large. This paper introduces MLitB, a prototype ML framework written entirely in Javascript, capable of performing large-scale distributed computing with heterogeneous classes of devices. The development of MLitB has been driven by several underlying objectives whose aim is to make ML learning and usage ubiquitous (by using ubiquitous compute devices), cheap and effortlessly distributed, and collaborative. This is achieved by allowing every internet capable device to run training algorithms and predictive models with no software installation and by saving models in universally readable formats. Our prototype library is capable of training deep neural networks with synchronized, distributed stochastic gradient descent. MLitB offers several important opportunities for novel ML research, including: development of distributed learning algorithms, advancement of web GPU algorithms, novel field and mobile applications, privacy preserving computing, and green grid-computing. MLitB is available as open source software.
INTRODUCTION
The field of Machine Learning (ML) currently lacks a common platform for the development of massively distributed and collaborative computing. As a result, there are impediments to leveraging and reproducing the work of other ML researchers, potentially slowing down the progress of the field. The ubiquity of the browser as a computational engine makes it an ideal platform for the development of massively distributed and collaborative ML. Machine Learning in the Browser (MLitB) is an ambitious software development project whose aim is to bring ML, in all its facets, to an audience that includes both the general public and the research community.
By writing ML models and algorithms in browser-based programming languages, many research opportunities become available. The most obvious is software compatibility: nearly all computing devices can collaborate in the training of ML models by contributing some computational resources to the overall training procedure and can, with the same code, harness the power of sophisticated predictive models on the same devices (see Fig. 1). This goal of ubiquitous ML has several important consequences: training ML models can now occur on a massive, even global scale, with minimal cost, and ML research can now be shared and reproduced everywhere, by everyone, making ML models a freely accessible, public good. In this paper, we present both a long-term vision for MLitB and a light-weight prototype implementation of MLitB that represents a first step in completing the vision and is based on an important ML use-case, Deep Neural Networks.

Figure 1: A researcher sets up a learning problem in his/her browser. Through the internet, grid and desktop machines contribute computation to solve the problem. Heterogeneous devices, such as mobile phones and tablets, connect to the same problem and contribute computation. At any time, connected clients can download the model configuration and parameters, or use the model directly in their browsing environment. Icon made by Freepik from www.flaticon.com.
In Section ‘MLITB: Vision’ we describe in more detail our vision for MLitB in terms of three main objectives: (1) make ML models and algorithms ubiquitous, for both the public and the scientific community, (2) create a framework for cheap distributed computing by harnessing existing infrastructure and personal devices as novel computing resources, and (3) design research closures, software objects that archive ML models, algorithms, and parameters to be shared, reused, and, in general, support reproducible research.
In Section ‘MLITB: Prototype’ we describe the current state of the MLitB software implementation, the MLitB prototype. We begin with a description of our design choices,
including arguments for using JavaScript and the other modern web libraries and utilities. Then we describe a bespoke map-reduce synchronized event-loop, specifically designed for training a large class of ML models using distributed stochastic gradient descent (SGD). Our prototype focuses on a specific ML model, Deep Neural Networks (DNNs), using an existing JavaScript implementation (Karpathy, 2014), modified only slightly for MLitB. We also report results of a scaling experiment, demonstrating the feasibility, but also the engineering challenges of using browsers for distributed ML applications. We then complete the prototype description with a walk-through of using MLitB to specify and train a neural network for image classification.
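As an illustration of the synchronized, distributed SGD just described, here is a minimal TypeScript sketch of the master's side of one synchronization step, under simplifying assumptions of my own (a flat parameter vector and equally weighted workers). It is not MLitB's actual code; the real implementation also redistributes data, copes with clients joining and leaving, and works with the neural network representation of Karpathy (2014).

```typescript
type Vector = number[];

// One synchronized step: average the workers' gradients (the "reduce"
// of the map-reduce loop), then update the shared parameters.
function sgdSyncStep(
  params: Vector,
  workerGrads: Vector[],
  learningRate: number
): Vector {
  const n = workerGrads.length;
  return params.map((p, i) => {
    const avgGrad = workerGrads.reduce((sum, g) => sum + g[i], 0) / n;
    return p - learningRate * avgGrad;
  });
}

// Example: three browser clients reporting gradients for a two-parameter model.
let params: Vector = [0.5, -0.2];
const grads: Vector[] = [[0.1, 0.3], [0.2, 0.1], [0.0, 0.2]];
params = sgdSyncStep(params, grads, 0.01);
console.log(params); // new parameters, broadcast back to all clients
```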
MLitB is influenced and inspired by current volunteer computing projects. These and other related projects, including those from machine learning, are presented in Section ‘Related Work.’ Our prototype has exposed several challenges requiring further research and engineering; these are presented in Section ‘Opportunities and Challenges,’ along with discussion of interesting application avenues MLitB makes possible. The most urgent software development directions follow in Section ‘Future MLitB Development.’
MLITB: VISION
Our long-term vision for MLitB is guided by three overarching objectives:
Ubiquitous ML: models can be trained and executed in any web browsing environment without any further software installation.
Cheap distributed computing: algorithms can be executed on existing grid, cloud, etc., computing resources with minimal (and possibly no) software installation, and can be easily managed remotely via the web; additionally, small internet-enabled devices can contribute computational resources.
Reproducibility: MLitB should foster reproducible science with research closures, universally readable objects containing ML model specifications, algorithms, and parameters, that can be used seamlessly to achieve the first two objectives, as well as support sharing of ML models and collaboration within the research community and the public at large.
Ubiquitous machine learning
The browser is the most ubiquitous computing device of our time, running, in some shape or form, on all desktops, laptops, and mobile devices. Software for state-of-the-art ML algorithms and models, on the other hand, consists of very sophisticated libraries written in highly specific programming languages within the ML research community (Bastien et al., 2012; Jia et al., 2014; Collobert, Kavukcuoglu & Farabet, 2011). As research tools, these software libraries have been invaluable. We argue, however, that making ML truly ubiquitous requires writing ML models and algorithms in web programming languages and using the browser as the computational engine.
The software we propose can run sophisticated predictive models on cell phones or super-computers; for the former, this extends the distributed nature of ML to a global internet. By further encapsulating the algorithms and model together, the benefit of powerful predictive modeling becomes a public commodity.
Cheap distributed computing
The usage of web browsers as compute nodes provides the capability of running sophisticated ML algorithms without the expense and technical difficulty of using custom grid or super-computing facilities (e.g., Hadoop cloud computing; Shvachko et al., 2010). It has long been a dream to use volunteer computing to achieve real massive scale computing; successes include SETI@home (Anderson et al., 2002) and protein folding (Lane et al., 2013). MLitB is being developed not only to run natively in browsers but also for scaled distributed computing on existing cluster and/or grid resources and, by harnessing the capacity of non-traditional devices, for extremely massive scale computing with a global volunteer base. In the former set-up, low communication overhead and homogeneous devices (a “typical” grid computing solution) can be exploited. In the latter, volunteer computing via the internet opens the scaling possibilities tremendously, albeit at the cost of unreliable compute nodes, variable power, limited memory, etc. Both have serious implications for the user, but, most importantly, both are implemented by the same software.
Although the current version of MLitB does not provide GPU computing, it does not preclude its implementation in future versions. It is therefore possible to seamlessly provide GPU computing when available on existing grid computing resources. Using GPUs on mobile devices is a more delicate proposition since power consumption management is of paramount importance for mobile devices. However, it is possible for MLitB to manage power intelligently by detecting, for example, if the device is connected to a power source, its temperature, and whether it is actively used for other activities. A user might volunteer periodic “mini-bursts” of GPU power towards a learning problem with minimal disruption to or power consumption from their device. In other words, MLitB will be able to take advantage of the improvements and breakthroughs of GPU computing for web engines and mobile chips, with minimal software development and/or support.
Reproducible and collaborative research
Reproducibility is a difficult yet fundamental requirement for science (McNutt, 2014). Reproducibility is now considered just as essential for high-quality research as peer review; simply providing mathematical representations of models and algorithms is no longer considered acceptable (Stodden, Guo & Ma, 2013). Furthermore, merely replicating other work, despite its importance, can be given low publication priority (Casadevall & Fang, 2010) even though it is considered a prerequisite for publication. In other words, submissions must demonstrate that their research has been, or could be, independently reproduced.
For ML research there is no reason for not providing working software that allows reproduction of results (for other fields in science, constraints restricting software publication may exist). Currently, the main bottlenecks are the time cost to researchers for making research available, and the incompatibility of the research (i.e., code) for others, which further increases the time investment for researchers. One of our primary goals for MLitB is to provide reproducible research with minimal to no time cost to both the
primary researcher and other researchers in the community. Following (Stodden, Borwein & Bailey, 2013), we support “setting the default to reproducible.”
For ML disciplines, this means other researchers should not only be able to use a model reported in a paper to verify the reported results, but also retrain the model using the reported algorithm. This higher standard is difficult and time-consuming to achieve, but fortunately this approach is being adopted more and more often, in particular by a sub-discipline of machine learning called deep learning. In the deep learning community, the introduction of new datasets and competitions, along with innovations in algorithms and modeling, has produced rapid progress on many ML prediction tasks. Model collections (also called model zoos), such as those built with Caffe (Jia et al., 2014), make this collaboration explicit and easy to access for researchers. However, there remains a significant time investment to run any particular deep learning model (including compilation, library installations, platform dependencies, GPU dependencies, etc.). We argue that these are real barriers to reproducible research, and that choosing ubiquitous software and compute engines lowers them. For example, during our testing we converted a very performant computer vision model (Lin, Chen & Yan, 2013) into JSON (JavaScript Object Notation; json.org) format, and it can now be used in any browser with minimal effort.
In a nod to the concept of closures common in functional programming, our approach treats a learning problem as a research closure: a single object containing model and algorithm configuration plus code, along with model parameters, that can be executed (and therefore tested and analyzed) by other researchers.
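To make the idea concrete, the sketch below shows what such a closure might look like as a plain JavaScript object. The field names and layer specification are illustrative assumptions, not the MLitB file format.

```javascript
// Illustrative sketch only: the field names and layer specification are hypothetical,
// not the MLitB file format.
const researchClosure = {
  name: "cifar10-demo",
  model: {                                   // model specification
    layers: [
      { type: "input", width: 32, height: 32, depth: 3 },
      { type: "conv", filterSize: 5, filters: 16, activation: "relu" },
      { type: "pool", size: 2, stride: 2 },
      { type: "softmax", classes: 10 }
    ]
  },
  algorithm: {                               // training algorithm and hyper-parameters
    method: "adagrad",
    learningRate: 0.01,
    l2Decay: 0.001
  },
  parameters: [],                            // flat arrays of trained weights, one per layer
  code: "/* optionally, the training and testing source as a string */"
};

// Because it is plain JSON, the closure can be serialized, shared, and re-loaded:
const serialized = JSON.stringify(researchClosure);
const restored = JSON.parse(serialized);
```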
**MLITB: PROTOTYPE**
The MLitB project and its accompanying software (application programming interfaces (APIs), libraries, etc.) are built entirely in JavaScript. We have taken a pragmatic software development approach to achieve as much of our vision as possible. To leverage our software development process, we have chosen, wherever possible, well-supported and actively developed external technology. By making these choices we have been able to quickly develop a working MLitB prototype that not only satisfies many of our objectives, but is as technologically future proof as possible. To demonstrate MLitB on a meaningful ML problem, we have similarly incorporated an existing JavaScript implementation of a Deep Neural Network into MLitB. The full implementation of the MLitB prototype can be found on GitHub (https://github.com/software-engineering-amsterdam/MLitB).
**Why JavaScript?**
JavaScript is a pervasive web programming language, embedded in approximately 90% of web-sites (W3Techs, 2014). This pervasiveness means it is highly supported (Can I Use, 2014), and is actively developed for efficiency and functionality (Chrome V8, 2014; asm.js, 2014). As a result, JavaScript is the most popular programming language on GitHub and its popularity is continuing to grow (Ray et al., 2014).
The main challenge for scientific computing with JavaScript is the lack of high-quality scientific libraries compared to platforms such as Matlab and Python. With the potential of native computational efficiency (or better, GPU computation) becoming available
for JavaScript, it is only a matter of time before JavaScript bridges this gap. A recent set of benchmarks showed that numerical JavaScript code can be competitive with native C (Khan et al., 2014).
General architecture and design
Design considerations
The minimal requirements for MLitB are based on the scenario of running the network as public resource computing. The downside of public resource computing is the lack of control over the computing environment. Participants are free to leave (or join) the network at any time, and their connectivity may be variable with high latency. MLitB is designed to be robust to these potentially destabilizing events. The loss of a participant results in the loss of computational power and data allocation. Most importantly, MLitB must robustly handle new and lost clients, re-allocation of data, and client variability in terms of computational power, storage capacity, and network latency.
Although we are agnostic to the specific technologies used to fulfill the vision of MLitB, in practice we are guided by both the requirements of MLitB and our development constraints. Therefore, as a first step towards implementing our vision, we chose technology pragmatically. Our choices also follow closely the design principles for web-based big data applications (Begoli & Horey, 2012), which recommend popular standards and light-weight architectures. As we will see, some of our choices may be limiting at large scale, but they have permitted a successful small-scale MLitB implementation (with up to 100 clients).
Figure 2 shows the high-level architecture and web technologies used in MLitB. Modern web browsers provide functionality for two essential aspects of MLitB: Web Workers (W3C, 2014) for parallelizing program execution with threads and Web Sockets (IETF, 2011) for fast bi-directional communication channels to exchange messages more quickly between server and browser. To maintain compatibility across browser vendors, there is little choice for alternatives to Web Workers and Web Sockets. These same choices are also used in another browser-based distributed computing platform (Cushing et al., 2013).
On the server-side, there are many choices that can be made based on scalability, memory management, etc. However, we chose Node.js for the server application (http://nodejs.org). Node.js provides several useful features for our application: it is lightweight, written in JavaScript, handles events asynchronously, and can serve many clients concurrently (Tilkov & Vinoski, 2010). Asynchronous events occur naturally in MLitB as clients join/leave the network, client computations are received by the server, users add new models and otherwise interact with the server. Since the main computational load is carried by the clients, and not the server, a light-weight server that can handle many clients concurrently is all that is required by MLitB.
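As a rough illustration of this server-side pattern, the snippet below sketches an event-driven master accepting Web Socket connections. It uses the third-party `ws` package and hypothetical message types; the prototype's actual server code may be organized differently.

```javascript
// Rough sketch of an event-driven master accepting Web Socket connections.
// Uses the third-party "ws" package and hypothetical message types.
const WebSocket = require("ws");

const wss = new WebSocket.Server({ port: 8080 });
const clients = new Set();

wss.on("connection", (socket) => {
  clients.add(socket);                                  // a new boss or slave joined

  socket.on("message", (raw) => {
    const msg = JSON.parse(raw.toString());             // e.g. { type: "gradient", ... }
    // ...route the event: join, gradient report, parameter request, ...
  });

  socket.on("close", () => clients.delete(socket));     // lost client: its data must be re-allocated
});
```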
Design overview
The general design of MLitB is composed of several parts. A master server hosts ML problems/projects and connects clients to them. The master server also manages the main event loop, where client-triggered events are handled, along with the reduce steps
of a (bespoke) map-reduce procedure used for computation. When a browser (i.e., a heterogeneous device) makes an initial connection to the master server, a user-interface (UI) client (also known as a boss) is instantiated. Through the UI, clients can add workers that can perform different tasks (e.g., train a model, download parameters, take a picture, etc.). An independent data server serves data to clients using zip files and prevents the master server from blocking while serving data. For efficiency, data transfer is performed using XHR. Trained models can be saved into JSON objects at any point in the training process; these can later be loaded in lieu of creating new models.
**Master server**
The master node (server) is implemented in Node.js with communication between the master and slave nodes handled by Web Sockets. The master server hosts multiple ML
problems/projects simultaneously along with all clients’ connections. All processes within the master are event-driven, triggered by actions of the slave nodes. Requests from slave nodes are dispatched to the appropriate functions on the master node by the router. The master must efficiently perform its tasks (data reallocation and distribution, reduce steps) because the clients are idle awaiting new parameters before their next work cycle. New clients must also wait until the end of an iteration before joining a network. The MLitB network is dynamic and permits slave nodes to join and leave during processing. The master monitors its connections and is able to detect lost participants. When this occurs, data that was allocated to the lost client is re-allocated to the remaining clients, if possible; otherwise it is marked as awaiting allocation.
Data server
The data server is a bespoke application intended to work with our neural network use-case model and can be thought of as a lightweight replacement for a proper image database. The data server is an independent Node.js application that can, but need not, live on the same machine as the master. Users upload data in zip files before training begins; currently, the data server handles zipped image classification datasets (where sub-directory names define class labels). Data is then downloaded from the data server: zipped files are sent to clients using XHR and are unzipped and processed locally. XHR is used instead of Web Sockets because it communicates large zip files more efficiently. A redundant cache of data is stored locally in the client browser’s memory. For example, a client may store 10,000 data vectors, but at each iteration it may only have the computational power to process 100 data vectors in its scheduled iteration duration. The data server uses the specialized JavaScript APIs unzip.js and redis-server.
Clients
Clients are browser connections from heterogeneous devices that visit the master server’s URL. Clients interact through a UI worker, called a boss, and can create slave workers to perform various tasks (see Workers). The boss is the main worker running in a client’s browser. It manages the slave workers and the image download worker and functions as a bridge between the downloader and the slaves. A simple wrapper handles UI interactions and provides input/output to the boss. Client bosses use a data worker to download data from the data server using XHR. The data worker and server communicate using XHR and pass zip files in both directions. The boss handles unzipping and decoding data for slaves that request data. Clients therefore require no software installation other than their native browser. Clients can contribute to any project hosted by the master server. Clients can trigger several events through the UI worker, including adjusting hyper-parameters, adding data, and adding slave workers (Fig. 3). Most tasks are run in a separate Web Worker thread (including the boss), ensuring a non-blocking and responsive client UI. Data downloading is a special task that, via the boss and the data worker, uses XHR to download from the data server.
Figure 3 Each client connection to the master server initiates a UI worker, also known as a boss. For uploading data from a client to the data server and for downloading data from the data server to a client, a separate Web Worker called the data worker is used. Users can add slaves through the UI worker; each slave performs a separate task using a Web Worker. Icon made by Freepik from www.flaticon.com.
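A minimal sketch of this pattern follows: a boss spawns a slave as a Web Worker, hands it a task, and forwards results to the master. The file name, message format, and `sendToMaster` helper are illustrative assumptions.

```javascript
// Minimal sketch of a boss spawning a slave as a Web Worker. The file name, message
// format, and sendToMaster helper are illustrative assumptions.
const slave = new Worker("slave.js");

// Hand the slave its task and the data IDs allocated by the master.
slave.postMessage({ task: "train", dataIds: [0, 1, 2, 3] });

// Results (e.g. summed gradients) come back asynchronously, keeping the UI responsive.
slave.onmessage = (event) => {
  sendToMaster(event.data);   // assumed helper forwarding results over a Web Socket
};
```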
**Workers**
In Fig. 3 the tasks implemented using Web Worker threads are shown. At the highest level is the client UI, with which the user interacts with ML problems and controls their slave workers. From the client UI, a user can create a new project, load a project from file, upload data to a project, or add slave workers for a project. Slaves can perform several tasks; the most important is the trainer, which connects to an event loop of a ML project and contributes to its computation (i.e., its map step). Each slave worker communicates directly with the master server using Web Sockets. For the latter three tasks, the communication mainly consists of sending requests for model parameters and receiving them. The training slave has more complicated behavior because it must download data and then perform computation
as part of the main event loop. To begin training, the user sets the slave task to train and selects start/restart. This triggers a join event at the master server; model parameters and data will be downloaded, and the slave will begin computation upon completion of the data download. The user can remove a slave at any time. The other slave task is tracking, which requires receiving model parameters from the master and allows users to monitor statistics of the model on a dataset (e.g., classification error) or to execute the model (e.g., classify an image on a mobile device).
Events and software behavior
The MLitB network is constructed as a master–slave relationship, with one server and multiple slave nodes (clients). The setup for computation is similar to a MapReduce network (Dean & Ghemawat, 2008); however, the master server performs many tasks during an iteration of the master event loop, including a reduce step, but also several other important tasks.
The specific tasks will be dictated by events triggered by the client, such as requests for parameters, new client workers, removed/lost clients, etc. Our master event loop can be considered a synchronized map-reduce algorithm with a user-defined iteration duration \( T \), where values of \( T \) may range from 1 to 30 s, depending on the size of the network and the problem. MLitB is not limited to a map-reduce paradigm, and in fact we believe that our framework opens the door to peer-to-peer or gossip algorithms (Boyd et al., 2006). We are currently developing asynchronous algorithms to improve the scalability of MLitB.
Master event loop
The master event loop consists of five steps and is executed by the master server node as long as there is at least one slave node connected. Each loop includes one map-reduce step and runs for at least \( T \) seconds. The following steps are executed, in order (a schematic sketch of the loop follows the list):
(a) New data uploading and allocation.
(b) New client trainer initialization and data allocation.
(c) Training workers reduce step.
(d) Latency monitoring and data allocation adjustment.
(e) Master broadcasts parameters.
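The sketch below summarizes the structure of this loop in JavaScript. It is schematic only: `project` is assumed to expose one method per step, and these method names are hypothetical rather than the prototype's actual API.

```javascript
// Schematic of the master event loop; method names on `project` are hypothetical.
async function masterEventLoop(project, T) {
  while (project.hasSlaves()) {
    project.allocateNewData();                          // (a) register newly uploaded data
    project.initializeNewTrainers();                    // (b) parameters + data IDs for joiners
    const reports = await project.collectGradients(T);  // map step: workers compute for ~T seconds
    project.reduce(reports);                            // (c) sum gradients, update parameters
    project.adjustWorkloads();                          // (d) latency-based re-balancing
    project.broadcastParameters();                      // (e) send new parameters to every boss
  }
}
```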
**(a) New data uploading and allocation**
When a client boss uploads data, it communicates directly with the data server using XHR. Once the zip file has been uploaded, the data server sends the data indices and classification labels to the boss. The boss then registers the indices with the master server. Each data index is managed: MLitB stores an allocated index (the worker that is allocated the ID) and a cached index (the worker that has cached the ID). The master ensures that the data allocation is balanced amongst its clients. Once a data set is allocated on the master server, the master allocates indices and sends the set of IDs to workers. Workers can then request data from the boss, which in turn uses its data downloader worker to download those
worker-specific IDs from the data server. The data server sends a zipped file to the data downloader, which is then unzipped and processed by the boss (e.g., JPEG decoding for images). The zip file transfers are fast but the decoding can be slow. We therefore allow workers to begin computing before the entire dataset is downloaded and decoded, allowing projects to start training almost immediately while data is cached in the background.
**(b) New client trainer initialization and data allocation**
When a client boss adds a new slave, a request to join the project is sent to the master. If there is unallocated data, a balanced fraction of the data is allocated to the new worker. If there is no unallocated data, a pie-cutter algorithm is used to remove allocated data from other clients and assign it to the new client. This prevents unnecessary data transfers. The new worker is sent a set of data IDs it will need to download from the client’s data worker. Once the data has been downloaded and put into the new worker’s cache, the master will add the new worker to the computation performed at each iteration. The master server is immediately informed when a client or one of its workers is removed from the network. Because of this, it can manage the newly unallocated data (that was allocated to the lost client).
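The sketch below illustrates one possible pie-cutter-style re-allocation: each existing worker gives up a proportional slice of its data IDs to the newcomer. It illustrates the idea only; the prototype's exact algorithm may differ.

```javascript
// Hedged sketch of a "pie-cutter" style re-allocation: each existing worker gives up
// a proportional slice of its data IDs to the newcomer. Illustrative only.
function pieCutter(workers, newWorker) {
  const total = workers.reduce((n, w) => n + w.dataIds.length, 0);
  if (total === 0) { newWorker.dataIds = []; return []; }
  const targetShare = Math.floor(total / (workers.length + 1));
  const slice = [];
  for (const w of workers) {
    const give = Math.min(w.dataIds.length,
                          Math.ceil(w.dataIds.length * targetShare / total));
    slice.push(...w.dataIds.splice(0, give));  // IDs removed here are handed to the newcomer
  }
  newWorker.dataIds = slice;                   // the newcomer downloads only these IDs
  return slice;
}

// Example: two workers with four IDs each; the newcomer receives a balanced share.
const existing = [{ dataIds: [1, 2, 3, 4] }, { dataIds: [5, 6, 7, 8] }];
const joiner = { dataIds: [] };
pieCutter(existing, joiner);   // joiner.dataIds is now [1, 5]
```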
**(c) Training workers’ reduce step**
The reduce step is completely problem specific. In our prototype, workers compute gradients with respect to model parameters over their allocated data vectors, and the reduce step sums over the gradients and updates the model parameters.
**(d) Latency monitoring and data allocation adjustment**
The interval $T$ represents both the time of computation and the latency between the client and the master node. The synchronization is stochastic and adaptive. At each reduce step, the master node estimates the latency between the client and the master and informs the client worker how long it should run for. A client does not need to have a batch size because it just clocks its own computation and returns results at the end of its scheduled work time. Under this setting, it is possible to have mobile devices that compute only a few gradients per second and a powerful desktop machine that performs hundreds or thousands. This simple approach also allows the master to account for unexpected user activity: if the user’s device slows or has increased latency, the master will decrease the load on the device for the next iteration. Generally, devices with a cellular network connection communicate with longer delays than hardwired machines. In practice, this means the reduction step in the master node receives delayed responses from slave nodes, forcing it to run the reduction function after the slowest slave node (with largest latency) has returned. This is called asynchronous reduction callback delay.
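A minimal sketch of this adaptive scheduling idea follows; the field names and the smoothing constant are assumptions, not the prototype's implementation.

```javascript
// Sketch of latency-aware scheduling: subtract the estimated round-trip latency from
// the iteration budget T to get the time a worker should spend computing.
// Field names and the smoothing constant are illustrative assumptions.
function scheduleWork(worker, T, now = Date.now()) {
  const observed = now - worker.lastDispatchTime;            // compute time + network delay
  const latency = Math.max(0, observed - worker.reportedComputeTime);
  worker.latencyEstimate = 0.8 * (worker.latencyEstimate || 0) + 0.2 * latency;
  // Tell the worker how many milliseconds to compute so results arrive within ~T seconds.
  return Math.max(0, T * 1000 - worker.latencyEstimate);
}
```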
**(e) Master broadcasts parameters**
An array of model parameters is broadcast to each client’s boss worker using XHR; when the boss receives new parameters, they are given to each of its workers, which then start another computation iteration.
ML use-case: deep neural networks
The current version of the MLitB software is built around a pervasive ML use-case: deep neural networks (DNNs). DNNs are the current state-of-the-art prediction models for many tasks, including computer vision (Krizhevsky, Sutskever & Hinton, 2012; Lin, Chen & Yan, 2013), speech recognition (Hinton et al., 2012), and natural language processing and machine translation (Liu et al., 2014; Bahdanau, Cho & Bengio, 2014; Sutskever, Vinyals & Le, 2014). Our implementation only required superficial modifications to an existing JavaScript implementation (Karpathy, 2014) to fit into our network design.
Scaling behavior of MLitB
We performed an experiment to study the scaling behavior of the MLitB prototype. Using up to 32 4-core workstation machines connected on a local area network using a single router, we trained a simple convolutional NN on the MNIST dataset for 100 iterations (with 4 seconds per iteration/synchronization event). The number of slave nodes doubled from one experiment to the next (i.e., 1, 2, 4, ..., 96). We are interested in the scaling behavior of two performance indicators: (1) power, measured in data vectors processed per second, and (2) latency in milliseconds between slaves and the master node. Of secondary interest is the generalization performance on the MNIST test set. As a feasibility study of a distributed ML framework, we are most interested in scaling power while minimizing latency effects during training, but we also want to ensure the correctness of the training algorithm. Since optimization of the ML JavaScript library using compiled JS and/or GPUs is possible, but not our focus, we are less concerned with the power performance of a single slave node.
Results for power and latency are shown in Fig. 4. Power increases linearly up to 64 slave nodes, at which point a large increase in latency limits additional power gains from new nodes. This is due to a single server reaching the limit of its capacity to process incoming gradients synchronously. Solutions include using multiple server processes, asynchronous updates, and partial gradient communication. Test error, as a function of the number of nodes, is shown in Fig. 5 after 50 iterations (200 s) and 100 iterations (400 s); i.e., each point represents the same wall-clock computation time. This demonstrates the correctness of MLitB for a given model architecture and learning hyperparameters.
Due to the data allocation policy that limits the data vector capacity of each node to 3,000 vectors, experiments with more nodes process more of the training set during the training procedure. For example, using only 1 slave node trains on 3/60 of the full training set. With 20 nodes, the network is training on the full dataset. This policy could easily be modified to include data refreshment when running with unallocated data.
The primary latency issue is due to all clients simultaneously sending gradients to the server at the end of each iteration. Three simple scaling solutions are (1) increasing the number of master node processes that receive gradients, (2) using asynchronous update rules (each slave computes for a random amount of time, then sends updates), reducing the load on any one master node process, and (3) partial communication of gradients (decreasing bandwidth).
Figure 4 Effects of scaling on power and latency. Power (measured as the number of data vectors processed per second) scales linearly up to 64 nodes, at which point latency increases sharply. The ideal linear scaling is shown in grey.
Figure 5 Effects of scaling on optimization. Convergence of the NN is measured in terms of test error after 50 and 100 iterations. Each point represents approximately the same wall-clock time (200/400 s for 50 and 100 iterations, respectively).
Walk-through of MLitB prototype
We briefly describe how MLitB works from a researcher’s point of view.
**Specification of neural network and training parameters**
Using a minimalist UI (not shown), the researcher can specify their neural network, for example, they can add/remove layers of different types, and adjust regularization parameters (L1/L2/dropout) and learning rates. Alternatively, the researcher can load a previously saved neural network in JSON format (that may or may not have already been trained). Once a NN is specified (or loaded), it appears in the display, along with other neural networks also managed by the master node. By selecting a specific neural network, the researcher can then add workers and data (e.g., project **cifar10** in Fig. 6).
**Specification of training data**
Image classification data is simple to upload using named directory structures for image labels. For example, for CIFAR10 all files in the “apple” subdirectory will be given label “apple” once loaded (e.g., the image file /cifar10/apple/apple_apple_s_000022.png). The entire “cifar10” directory can be zipped and uploaded. MLitB processes JPEG and PNG formats. A test set can be uploaded in **tracker** mode.
**Training mode**
In the training mode, a training worker performs as many gradient computations as possible within the iteration duration $T$ (i.e., during the map step of the main event loop). The total gradient and the number of gradients are sent to the master, which in the reduce step computes a weighted average of the gradients from all workers and takes a gradient step using AdaGrad (*Duchi, Hazan & Singer, 2011*). At the end of the main event loop, the new neural network weights are sent via Web Sockets to both trainer workers (for the next computation iteration) and tracking workers.
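The reduce step can be sketched as follows: average the workers' gradient sums, weighted by the number of data vectors each processed, and apply an AdaGrad update. Variable names and the default learning rate are illustrative, not the prototype's exact code.

```javascript
// Hedged sketch of the reduce step: weighted average of gradient sums, then an
// AdaGrad update (Duchi, Hazan & Singer, 2011). Names and defaults are illustrative.
function reduceStep(weights, cache, reports, learningRate = 0.01, eps = 1e-8) {
  // Each report is { gradientSum: Float64Array, count: number } from one worker.
  const totalCount = reports.reduce((n, r) => n + r.count, 0);
  for (let i = 0; i < weights.length; i++) {
    let g = 0;
    for (const r of reports) g += r.gradientSum[i];
    g /= totalCount;                                    // average gradient over all data vectors
    cache[i] += g * g;                                  // accumulate squared gradients
    weights[i] -= learningRate * g / (Math.sqrt(cache[i]) + eps);  // AdaGrad step
  }
}
```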
Figure 7 Tracking mode (model execution). The label of a test image is predicted using the latest NN parameters. Users can execute a NN prediction using an image stored on their device or using their device’s camera. In this example, an image of a horse is correctly predicted with probability 0.687 (the class-conditional predictive probability).
Tracking mode
There are two possible functions in tracking mode: (1) executing the neural network on test data, and (2) monitoring classification error on an independent data set. For (1), users can predict class labels for images taken with a device’s camera or locally stored images. Users can also learn a new classification problem on the fly by taking a picture and giving it a new label; this is treated as a new data vector, and a new output neuron is added dynamically to the neural network if the label is also new. Figure 7 shows a test image being classified by the cifar10-trained neural network. For (2), users create a statistics worker and can upload test images and track their error over time; after each complete evaluation of the test images, the latest neural network received from the master is used. Fig. 8 shows the error for cifar10 using a small test set for the first 600 parameter updates.
Archiving trained neural network model
The prototype does not include a research closure specification. However, it does provide easy archiving functionality. At any moment, users can download the entire model specification and current parameter values in JSON format. Users can then share the JSON object or initialize a new training session with it by uploading it during the model specification phase, which represents a high level of reproducibility. Although the JSON object fully specifies the model, it does not include training or testing code. Despite this shortcoming, using a standard protocol is a simple way of providing a lightweight archiving system.
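One common browser-side way to implement such archiving is sketched below; this is an illustration, not necessarily how the prototype triggers the download.

```javascript
// Illustrative browser-side archiving: serialize the network and offer it as a download.
function saveNetwork(net, filename = "network.json") {
  const blob = new Blob([JSON.stringify(net)], { type: "application/json" });
  const link = document.createElement("a");
  link.href = URL.createObjectURL(blob);
  link.download = filename;
  link.click();                       // prompts the user to save the JSON file
  URL.revokeObjectURL(link.href);
}
```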
Figure 8 Tracking mode (classification error). A test dataset can be loaded and its classification error rate tracked over iterations; here using a NN trained on CIFAR-10.
Limitations of MLitB prototype
In this section we briefly discuss the limitations of the current prototype; later in Section 'Opportunities and Challenges' we will discuss the challenges we face in scaling MLitB to a massive level.
Our scaling experiment demonstrates that the MLitB prototype can accommodate up to 64 clients before latency significantly degrades its performance. Latency, however, is primarily affected by the length of an iteration and by the size of the neural network. For longer iterations, latency will become a smaller portion of the main event loop. For very large neural networks, latency will increase due to bandwidth pressure.
As discussed previously, the main computational efficiency loss is due to the synchronization requirement of the master event loop. This requirement causes the master server to be idle while the clients are computing and the clients to wait while the master processes all the gradients. As the size of the full gradients can be large (at least >1 MB for small neural networks), the network bandwidth is quickly saturated at the end of a computation iteration and during the parameter broadcast. By changing to an asynchronous model, the master can continuously process gradients and the bandwidth can be maximally utilized. By communicating partial gradients, further efficiency can be attained. We leave this for future work.
There is a theoretical limit of 500 MB data storage per client (the viable memory of a web browser). In our experience, the practical limit is closer to 100 MB, at which point performance is lost due to memory management issues. We found that 1 MB/s bandwidth was achievable on a local network, which meant that it could handle images from MNIST and CIFAR-10 easily, but would stall for larger images. With respect to deep neural networks, the data processing ability of a single node was limited (especially when compared
to sophisticated GPU-enabled libraries (Bastien et al., 2012)). Although we were most interested in the scaling performance, we note that naive convolution implementations significantly slow performance. We found that reasonably sized images, up to $100 \times 100 \times 3$ pixels, can be processed on mobile devices in less than a second without convolutions, but can take several seconds with convolutions, limiting their usefulness. In the future, near-native or better implementations will be required for the convolutional layers.
**RELATED WORK**
MLitB has been influenced by several different technologies and ideas presented by previous authors and by work from different specialization areas. We briefly summarize this related work below.
**Volunteer computing**
BOINC (Anderson, 2004) is an open-source software library used to set up a grid computing network, allowing anyone with a desktop computer connected to the internet to participate in computation; this is called *public resource computing*. Public resource or volunteer computing was popularized by SETI@Home (Anderson et al., 2002), a research project that analyzes radio signals from space in search of signs of extraterrestrial intelligence. More recently, protein folding has emerged as a significant success story (Lane et al., 2013). Hadoop (Shvachko et al., 2010) is an open-source software system for storing very large datasets and executing user application tasks on large networks of computers. MapReduce (Dean & Ghemawat, 2008) is a general solution for performing computation on large datasets using computer clusters.
**JavaScript applications**
In (Cushing et al., 2013) a network of distributed web-browsers called WeevilScout is used for complex computation (regular expression matching and binary tree modifications) using a JavaScript engine. It uses similar technology (Web Workers and Web Sockets) as MLitB. ConvNetJS (Karpathy, 2014) is a JavaScript implementation of a convolutional neural-network, developed primarily for educational purposes, which is capable of building diverse neural networks to run in a single web browser and trained using stochastic gradient descent; it can be seen as the non-distributed predecessor of MLitB.
**Distributed machine learning**
The most performant deep neural network models are trained with sophisticated scientific libraries written for GPUs (Bergstra et al., 2010; Jia et al., 2014; Collobert, Kavukcuoglu & Farabet, 2011) that provide orders of magnitude computational speed-ups compared to CPUs. Each implements some form of stochastic gradient descent (SGD) (Bottou, 2010) as the training algorithm. Most implementations are limited to running on the cores of a single machine and by extension the memory limitations of the GPU. Exceptionally, there are distributed deep learning algorithms that use a farm of GPUs (e.g., Downpour SGD (Dean et al., 2012)) and farms of commodity servers (e.g., COTS-HPS (Coates et al., 2013)). Other distributed ML algorithm research includes the parameter server model (Li...
et al., 2014), parallelized SGD (Zinkevich et al., 2010), and distributed SGD (Ahn, Shahbaba & Welling, 2014). MLitB could potentially push commodity computing to the extreme using pre-existing devices, some of which may be GPU capable, with and without an organization's existing computing infrastructure. As we discuss below, there are still many open research questions and opportunities for distributed ML algorithm research.
**OPPORTUNITIES AND CHALLENGES**
In tandem with our vision, there are several directions the next version of MLitB can take, both in terms of the library itself and the potential kinds of applications a ubiquitous ML framework like MLitB can offer. We first focus on the engineering and research challenges we have discovered during the development of our prototype, along with some we expect as the project grows. Second, we look at the opportunities MLitB provides, not only based on the research directions the challenges uncovered, but also novel application areas that are perfect fits for MLitB. In Section ‘Future MLitB Development’ we preview the next concrete steps in MLitB development.
**Challenges**
We have identified three key engineering and research challenges that must be overcome for MLitB to achieve its vision of learning models at a global scale.
**Memory limitations**
State-of-the-art neural network models have huge numbers of parameters, which prevents them from fitting onto mobile devices. There are two possible solutions to this problem. The first solution is to learn or use smaller neural networks. Smaller NN models have shown promise in image classification performance; in particular, the Network in Network model (Lin, Chen & Yan, 2013) from the Caffe model zoo is 16 MB and outperforms AlexNet, which is 256 MB (Jia et al., 2014). It is also possible to first train a deep neural network and then use it to train a much smaller, shallow neural network (Ba & Caruana, 2014). Another solution is to distribute the NN (during training and prediction) across clients. An example of this approach is Downpour SGD (Dean et al., 2012).
**Communication overhead**
With large models, large numbers of parameters are communicated regularly. This is a similar issue to the memory limitation and could benefit from the same solutions. However, given a fixed bandwidth and asynchronous parameter updates, we can ask which parameter updates (from master to client) and which gradients (from client to master) should be communicated. An algorithm could transmit a random subset of the weight gradients, or send the most informative ones. In other words, given a fixed bandwidth budget, we want to maximize the information transferred per iteration.
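As a simple illustration of the "random subset or most informative" idea, the sketch below selects the k largest-magnitude gradient entries and returns them as (index, value) pairs; this is one possible scheme, not an existing MLitB feature.

```javascript
// Illustrative sketch of partial gradient communication: send only the k entries with
// the largest magnitude as (index, value) pairs. One possible scheme, not an MLitB feature.
function topKGradient(grad, k) {
  const indexed = Array.from(grad, (value, index) => ({ index, value }));
  indexed.sort((a, b) => Math.abs(b.value) - Math.abs(a.value));
  return indexed.slice(0, k);         // e.g. [{ index: 42, value: -0.13 }, ...]
}
```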
**Performance efficiency**
Perhaps the biggest argument against scientific computing with JavaScript is its computational performance. We disagree that this should prevent the widespread adoption of browser-based scientific computing: several groups aim to achieve native performance in JavaScript (Chrome V8, 2014; asm.js, 2014), GPU kernels are becoming part of existing web engines (e.g., WebCL by Khronos: www.khronos.org/webcl), and these can be seamlessly incorporated into existing JavaScript libraries, though they have yet to be written for ML.
Opportunities
Massively distributed learning algorithms
The challenges just presented are obvious areas of future distributed machine learning research (and are currently being developed for the next version of MLitB). Perhaps more interesting is, at a higher level, that the MLitB vision raises novel questions about what it means to train models on a global scale. For instance, what does it mean for a model to be trained across a global internet of heterogeneous and unreliable devices? Is there a single model or a continuum of models that are consistent locally, but different from one region to another? How should a model adapt over long periods of time? These are largely untapped research areas for ML.
Field research
Moving data collection and predictive models onto mobile devices makes it easy to bring models into the field. Connecting users with mobile devices to powerful NN models can aid field research by bringing the predictive models to the field, e.g., for fast labeling and data gathering. For example, a pilot program of crop surveillance in Uganda currently uses bespoke computer vision models for detecting pestilence (insect eggs, leaf diseases, etc.) (Quinn, Leyton-Brown & Mwebaze, 2011). Projects like these could leverage publicly available, state-of-the-art computer vision models to bootstrap their field research.
Privacy preserving computing and mobile health
Our MLitB framework provides a natural platform for the development of real privacy-preserving applications (Dwork, 2008) by naturally protecting user information contained on mobile devices, yet allowing the data to be used for valuable model development. The current version of MLitB does not provide privacy-preserving algorithms such as that of Han et al. (2010), but these could easily be incorporated into MLitB. It would therefore be possible for a collection of personal devices to collaboratively train machine learning models using sensitive data stored locally, with modified training algorithms that guarantee privacy. One could imagine, for example, using privately stored images of a skin disease to build a classifier based on a large collection of disease exemplars, with the data always kept on each patient’s mobile device, thus never shared, and trained using privacy-preserving algorithms.
Green computing
One of our main objectives was to provide simple, cheap, distributed computing capability with MLitB. Because MLitB runs with minimal software installation (in most cases requiring none), it is possible to use this framework for low-power consumption distributed computing. By using existing organizational resources running in low-energy states (dormant or near dormant) MLitB can wake the machines, perform some
computing cycles, and return them to their low-energy states. This is in stark contrast to a data center approach which has near constant, heavy energy usage (Natural Resources Defense Council, 2014).
**FUTURE MLITB DEVELOPMENT**
The next phases of development will focus on the following directions: a visual programming user interface for model configuration, development of a library of ML models and algorithms, development of performant scientific libraries in JavaScript with and without GPUs, and model archiving with the development of a research closure specification.
**Visual programming**
Many ML models are constructed as chains of processing modules. This lends itself to a visual programming paradigm, where the chains can be constructed by dragging and dropping modules together. This way models can be visualized and compared, dissected, etc. Algorithms are tightly coupled to the model and a visual representation of the model can allow interaction with the algorithm as it proceeds. For example, learning rates for each layer of a neural network can be adjusted while monitoring error rates (even turned off for certain layers), or training modules can be added to improve learning of hidden layers for very deep neural networks, as done in Szegedy et al. (2014). With a visual UI it would be easy to pull in other existing, pre-trained models, remove parts, and train on new data. For example, a researcher could start with a pre-trained image classifier, remove the last layer, and easily train a new image classifier, taking advantage of an existing, generalized image representation model.
**Machine learning library**
We currently have built a prototype around an existing JavaScript implementation of DNNs (Karpathy, 2014). In the near future we plan on implementing other models (e.g., latent Dirichlet allocation) and algorithms (e.g., distributed MCMC (Ahn, Shahbaba & Welling, 2014)). MLitB is agnostic to learning algorithms and therefore is a great platform for researching novel distributed learning algorithms. To do this, however, MLitB will need to completely separate machine learning model components from the MLitB network. At the moment, the prototype is closely tied to its neural network use-case. Once separated, it will be possible for external modules to be added by the open-source community.
**GPU implementations**
Implementation of GPU kernels can bring MLitB performance up to the level of current state-of-the-art scientific libraries such as Theano (Bergstra et al., 2010; Bastien et al., 2012) and Caffe (Jia et al., 2014), while retaining the advantages of using heterogeneous devices. For example, balancing computational loads during training is very simple in MLitB and any learning algorithm can be shared by GPU powered desktops and mobile devices. Smart phones could be part of the distributed computing process by permitting the training algorithms to use short bursts of GPU power for their calculations, and therefore limiting battery drain and user disruption.
Design of research closures
MLitB can save and load JSON model configurations and parameters, allowing researchers to share and build upon other researchers’ work. However, it does not quite achieve our goal of a research closure, where all aspects (code, configuration, parameters, etc.) are saved into a single object. In addition to research closures, we hope to develop a model zoo, akin to Caffe’s, for posting and sharing research. Finally, some kind of system for verifying models, like recomputation.org, would further strengthen the case for MLitB being truly reproducible (and provide backwards compatibility).
CONCLUSION
In this paper we have introduced MLitB: Machine Learning in the Browser, an alternative framework for ML research based entirely on using the browser as the computational engine. The MLitB vision is based upon the overarching objectives of providing ubiquitous ML capability to every computing device, cheap distributed computing, and reproducible research. The MLitB prototype is written entirely in JavaScript and makes extensive use of existing JavaScript libraries, including Node.js for servers, Web Workers for non-blocking computation, and Web Sockets for communication between clients and servers. We demonstrated the potential of MLitB on a ML use-case: deep neural networks trained with distributed stochastic gradient descent using heterogeneous devices, including dedicated grid-computing resources and mobile devices, using the same interface and with no client-side software installation. Clients simply connect to the server and computing begins. This use-case has provided valuable information for future versions of MLitB, exposing both existing challenges and interesting research and application opportunities. We have also advocated for a framework which supports reproducible research; MLitB naturally provides this by allowing models and parameters to be saved to a single object which can be reloaded and used by other researchers immediately.
ADDITIONAL INFORMATION AND DECLARATIONS
Funding
The authors acknowledge funding support from Amsterdam Data Science and computing resources from SurfSara. M Welling acknowledges support from Facebook, Google, and Yahoo. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Grant Disclosures
The following grant information was disclosed by the authors:
SurfSara.
Facebook.
Google.
Yahoo.
Competing Interests
The authors declare there are no competing interests.
Author Contributions
• Edward Meeds conceived and designed the experiments, analyzed the data, wrote the paper, prepared figures and/or tables, reviewed drafts of the paper.
• Remco Hendriks conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, performed the computation work, reviewed drafts of the paper.
• Said Al Faraby conceived and designed the experiments, performed the experiments, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work.
• Magiel Bruntink and Max Welling wrote the paper, reviewed drafts of the paper.
Data Availability
The following information was supplied regarding the deposition of related data:
GitHub: github.com/software-engineering-amsterdam/MLitB.
REFERENCES
1 Introduction
As software systems become more complex, it becomes economically desirable to re-use existing pieces (or components) of software. Similarly, with the expanding space of applications, together with the ever-increasing ubiquity and mobility of devices, there is an ever-greater need for software which can be composed in novel and unanticipated ways, by users, at run time. For these practices to become widespread, they must be feasible in cases where the components have been developed independently of one another. In such cases, software interfaces are almost certain to be mismatched in some way, meaning that they cannot be correctly composed directly. To compose such mismatched components, therefore, some kind of adaptation is necessary.
Current development practices are far from optimal in their predispositions towards re-use and composition. This proposal concerns research into new ways by which existing software artifacts may be combined, and by which new software artifacts may be written so as later to be more amenable to adaptation and re-use. The key to these improvements is effecting a separation of concerns, between code which implements functionality and code which implements integration (i.e. communication). In the following sections I will motivate and outline the design of a system which enables and promotes this separation.
2 Motivation
When pieces of software are developed independently, yet have logically compatible functionalities, making them work together is often non-trivial. There are effectively two variants of this problem: firstly, incorporating a selection of re-used artifacts amid a novel environment, which can be tailored to support those artifacts; secondly, combining multiple re-used artifacts together more-or-less directly, such that additional code would ideally be unnecessary. I will call these the “re-usability” and “re-use” problems respectively. Both are interesting, but I will focus on the latter.
2.1 Open-source development
Perhaps the best evidence of the problem comes from the world of open-source software. Even when source code is fully and freely available, we anecdotally observe two phenomena. Firstly, most large programs’ request-trackers contain requests for features which are already implemented in some other similar open-source project. This indicates that the effort required to port existing code for some logically compatible functionality is frequently non-trivial. Secondly, functionality is frequently duplicated in programs whose only distinction is in incidental implementation details irrelevant to their functionality: the choice of operating system, windowing toolkit, desktop environment, programming language, network protocol, storage abstraction, and so on. A major motivation for such duplication is invariably that the added homogeneity aids integration with other software.
As further illustration, here are some simple compositional use-cases which are currently impracticable without considerable coding effort.
- sharing bookmarks or history logs across multiple web browsers, or with other classes of application;
- adding a button to invoke a web-based natural language translator from within an e-mail client;
- writing a script (e.g. a *make* rule) which invokes the Postscript generator of an interactive graphical document editor;
- sharing a calendar between multiple applications concurrently, each notifying others of updates;
- pausing a media player application whenever another process requests the sound device.
Although we can imagine what code we might write to solve each problem individually, my concern is to invent the necessary supporting tools and runtime services to make the *entire class* of problem significantly more tractable. This means it should be *cheaper* for developers and *easier* for users to solve most common problems concerning communication between mismatched code. I do not expect these tasks to become fully automated nor, necessarily, trivial.
Similarly, although we can imagine how to implement a solution to each problem *for a particular codebase* (e.g. for Firefox, or for Emacs), my concern is rather to enable the developer to implement the solution only once, or at worst a small number of times, and have that implementation be cheaply composable with the large number of external codebases for which that feature is semantically meaningful.
(A final example might be the advertisement for Apple’s iPhone product, showing in cinemas at the time of writing. A user begins with an instant messaging application, suggesting a cinema trip to a friend. He or she then uses a web browser to browse film descriptions, and finds candidate cinemas using a
mapping application backed by a web search engine. Finally, the user clicks a cinema on the map, which passes its phone number to the dialler application. Communicating these units of application-level meaning between continuously running applications currently requires considerable integration effort specific to the applications being combined.)
2.2 The need for adaptation
Adaptation is synonymous with resolving mismatch. Given our requirement that composition may be specified dynamically by end users, I propose a form of adaptation which is performed most often at load-time or run-time, is applicable to a wide range of existing software, and is intended to support evolutionary adoption. In the following sections we will overview the nature of adaptation, sketch a design of the system, and outline a plan for its implementation and evaluation.
3 What is adaptation?
I define adaptation as any process which modifies the form or behaviour of a subsystem to enable or improve communication with the surrounding parts of the system (which I call its environment). Adaptation may take many forms. It may be done ahead-of-time, at load time or at run time; it may be invasive (i.e. modifying target code) or non-invasive (i.e. supplementing or interposing additional code; I use invasive and non-invasive synonymously with white-box and black-box respectively); it may be automatic or manual, binary- or source-level; and it may be done for correctness (i.e. the very ability to compose functional units of software) or for optimisation.
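As a toy illustration of non-invasive adaptation (here in JavaScript rather than at the binary level this proposal targets), a wrapper can be interposed to provide the interface an environment expects while leaving the mismatched component untouched; all names below are purely illustrative.

```javascript
// Toy illustration of non-invasive adaptation: the mismatched component is left
// untouched and a wrapper is interposed to provide the interface its environment
// expects. Names are purely illustrative.
const legacyTranslator = {
  // existing component: expects a single object argument
  translate({ text, from, to }) { return `[${from}->${to}] ${text}`; }
};

// adaptor: exposes the positional-argument interface the caller was written against
function makePositionalAdaptor(component) {
  return (text, from, to) => component.translate({ text, from, to });
}

const translate = makePositionalAdaptor(legacyTranslator);
console.log(translate("hello", "en", "nl"));   // "[en->nl] hello"
```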
3.1 What needs adapting?
I assume familiarity with the problem space of adaptation. Unfamiliar readers should refer to Appendix A.
3.2 Existing practices
The problem of adaptation, while often not explicitly acknowledged by name, is certainly not a new one. Aside from writing “glue code” in conventional languages, many other established practices have particular relevance: scripting languages, service-oriented computing, aspect-oriented programming, interface definition languages, automatic marshalling in component middleware, configuration languages (e.g. in “inversion of control” frameworks), unified programming interfaces (e.g. Unix’s “everything is a file”), unified binary interface (e.g. Microsoft’s Common Language Runtime) and code metadata. None of these
techniques is specifically designed to tackle the problem of mismatched programming interfaces. For more detailed consideration of each technique, see Appendix B.
4 The idea
I propose to investigate the thesis that “the complexity of composing heterogeneous mismatched components can be substantially reduced by adaptation abstractions which enable the separation of functionality from integration”. To do so, I will devise and implement a linkage model which enables and encourages such separation, including a linking language containing adaptation features, and implementations of both static and dynamic linking. This amounts to a form of non-invasive manual binary adaptation.
The observations informing my approach are summarised in the following sections.
4.1 Separating functionality from integration
Code frequently incorporates knowledge about how, with whom and using what conventions it is to communicate. The avoidance of inlining such details is precisely the established good coding practice of “low coupling”. However, even with greater programmer discipline and foresight, it is simply not possible to communicate without assuming some details of communication. The essential feature of my approach is therefore to provide not only a separate domain, distinct from the programming language, in which to specify integration details, but also convenient ways to work around such details from the outside when mismatch does occur. The separate domain is a configuration language, and these “convenient ways” are adaptation primitives.
4.2 Hierarchical configuration
Some configuration languages, such as Darwin [?], have an explicit hierarchical structure, whereas others such as Reo [?] have a more general graph structure. Although hierarchy may seem an unnecessary restriction, it mirrors both human problem-solving and the recursive nature of the re-use paradigm—where new artifacts are created, recursively, as combinations of pre-existing and re-used ones. Providing a logical containment hierarchy might also tend to delineate those pieces of a system which turn out later to be convenient units of re-use.
4.3 Pragmatism
It is essential that we support adaptation of existing code, since the benefits of re-use are negated if the wealth of existing code cannot be exploited. Also, as with most outputs of research, the potential for impact is much greater if a technology can be adopted in an evolutionary fashion. Therefore, we must
support multiple code representations and languages, preferably in an extensible manner, and prioritise the support of popular languages (including C and Java).
Another pragmatic distinction comes from scripting languages. I have mentioned that hierarchy is useful for expressing logical groupings and hence aiding re-use. Notwithstanding this, some adaptations are too small to be practically re-usable. A benefit of scripting languages is their brevity, which makes them suitable for invasive adaptation, i.e. for altering code which is frequently changed [?]—perhaps during rapid prototyping, or for end-user customisation. In cases where adaptation logic is not complex enough to justify re-use, we must instead focus on making it brief to express in-line (as “ad-hoc” adaptation), and hence convenient to change. To do so, my configuration language must aim for brevity and expressivity comparable to that of scripting languages.
Finally, we observe that different object code representations contain differing levels of metadata. If we are to support many of these representations, including many popular ones, we must support those which provide little type information or other semantic annotations. It follows that we will not be able to guarantee safety of the compositions generated, unless their constituents happen to provide the necessary annotations—which we will not mandate. In other words, my system will value composability over safety or other property-checking. (The addition of pluggable property checking is discussed in Section 8.)
4.4 Extensible adaptation primitives
Rather than providing a fixed set of adaptation primitives, as with systems such as Nimble [?], I propose that this set should be extensible, i.e. that new adaptation primitives must be definable outside of the configuration language.
One important class of externally-definable primitives is that of generated adaptation. Much adaptation can be captured re-usably as adaptor generation algorithms, where a one-size-fits-all adaptor implementation would be inefficient. The inputs to such algorithms are the target pieces of code or their interface descriptions, perhaps supplemented with additional semantic specification. The output is the required adaptor code. Examples include adaptor synthesis algorithms [?], wrapper generators [?], IDL compilers, and convenient constructors for translation tables and parsers. It is crucial to have convenient support for invoking these generative adaptors from within the linking language; they may be seen as adaptation functions, ranging over units of linkage.
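As a flavour of what such a generator might emit for the simplest case of argument permutation (mirroring the `output(a, b) <- out(b, a)` wiring used later in Figure 2), consider the following sketch; the function names are hypothetical, and the code is illustrative rather than the output of any real tool.

```c
/* Hypothetical generated adaptor: exposes output(bits, seq) in terms of
   an existing out(seq, bits). A generator would derive this from the
   two interface descriptions. */
extern void out(int seq, const char *bits);   /* provided interface */

void output(const char *bits, int seq)        /* required interface */
{
    out(seq, bits);
}
```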
5 Implementation
I intend to implement a configuration language, first as a static linking tool and second as an extension to the dynamic loader of a conventional modern operating system such as GNU/Linux. Specifically, I will devise and implement the following.
**Configuration language** A configuration language should be defined, which is capable of expressing adaptation over binary component representations. To support heterogeneity, the set of representations should be extensible. Likewise, the set of adaptation primitives must be extensible, meaning that new primitives may be added outside the language (analogously to how installing new Unix commands extends the language of the shell). A unifying model of components, linkage and adaptation must necessarily underpin the language’s design.
**Static implementation** The language should be implemented, for ahead-of-time use, as a linker supporting adaptation. The linker should accept the configuration language, and generate fully-linked executables out of pre-existing object files and the like.
**Dynamic implementation** The language should be implemented, for runtime use, as a dynamic loader (in the sense of the C library’s `dlopen()` et al) supporting adaptation. The loader must accept a variant of the configuration language, describing the components and adaptations to load.
**Refactoring engine** To complement the linking language, and demonstrate an alternative application of the separation between integration and functionality, a refactoring engine should be implemented. This will semi-automatically (i.e. with user assistance) refactor single source files in some popular existing language (either C or Java) into two refactored files: “core functionality”, which will remain in the target language, and “integration logic”, which will be captured in the configuration language.
5.1 Illustration
To illustrate the linking language, consider a toy system composed of two components, one of which is written to generate data as a series of tuples, and the other of which expects to read data as a stream. Figure 1 shows a configuration which might implement such a system.
The top-level configuration combines two components, one which outputs tuples of the form `(sequence_no, bit_string)`, and the other wishing to read a bit-stream. Such an arrangement might be used to handle out-of-order packet delivery in a network. The components are connected by a third component, which is a configuration of some ad-hoc adaptation and two smaller components: a tuple store and a re-usable adaptation component implementing a `take_contiguous(n)` procedure for the tuple space. The `take_contiguous` procedure retrieves a list of two-tuples with sequential sequence numbers, whose bit-strings do not exceed n in combined length. The ad-hoc adaptation projects out the bit-strings and flattens the resulting structure into a single string, which is handed to the stream reader as the output of a `read` call. Thick black lines indicate connector bindings.
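For concreteness, a minimal C sketch of the `take_contiguous(n)` behaviour just described is given below; the tuple-store operations `ts_peek` and `ts_take` are hypothetical, invented here for illustration.

```c
#include <stddef.h>

struct tuple { int seq; const unsigned char *bits; size_t nbits; };

/* Hypothetical tuple-store API: peek at / remove the tuple with the
   given sequence number, returning nonzero on success. */
extern int ts_peek(int seq, struct tuple *out);
extern int ts_take(int seq, struct tuple *out);

/* Take tuples with consecutive sequence numbers starting at next_seq,
   stopping before the combined bit-string length would exceed n. */
size_t take_contiguous(size_t n, int next_seq,
                       struct tuple *out, size_t max_out)
{
    size_t used = 0, total = 0;
    struct tuple t;
    while (used < max_out && ts_peek(next_seq, &t) && total + t.nbits <= n) {
        ts_take(next_seq, &out[used++]);
        total += t.nbits;
        next_seq++;
    }
    return used;
}
```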
Some plausible code for the example system is shown in Figure 2. The syntax is similar to that of Knit [?], a tool which allows precise specification of linkage
graphs under the traditional C linkage model. Note the following features in the code, referring back to the system diagram in Figure 1:
- the subdivision of each imported and exported interface into “roles”, such as prod and cons;
- the differences in names of compatible roles (e.g., cons versus dest) and the explicit mapping between the two using the <import> <- <export> syntax;
- the inline (or “ad-hoc”) use of adaptation primitives map and project, respectively for function application over a list and for tuple element selection;
- the use of various constructors for different binary code representations, here showing obj_elf("C", ...) for an ELF object file with C linkage and exec_elf(...) for an ELF executable.
5.2 Novelty
I believe that the following contributions of the proposed work will be substantially novel:
- support for adaptation in a linking language;
- the pragmatic distinction between ad-hoc and re-usable adaptation;
- support for an extensible set of adaptation primitives in a configuration language;
- extension of an existing operating system’s dynamic loading interface for implementing adaptation;
- refactoring to separate integration from functionality.
```
/* This syntax is simplified from Knit, with added adaptation features.
   Reserved words are in bold, and denote either syntactic block kinds
   ("unit" or "compound"), compositional operators ("<-" and "as") or
   built-in types. Identifiers refer either to logical modules,
   interfaces (i.e. "roles"), linkage standards, symbols or files. */

unit myTupleSpace {
    exports [ prod { (int, bit list) in ordered() },
              cons { void out(int, bit list) } ];
    files { obj_elf("C", "tuplespace.o") }
}
unit myTupleProducer {
    imports [ dest { void output(bit list, int) } ];
    exports [ /* ... */ ];
    files { obj_elf("C", "tupleprod.o") }
}
unit myStreamConsumer {
    imports [ streamProvider { int read(byte addr, int) } ];
    exports [ /* ... */ ];
    files { obj_elf("C", "streamcons.o") }
}
unit takeContiguous {
    imports [ source { (int, bit list) in monotonic() } ];
    exports [ listProvider { (int, bit list) list get() } ];
    files { obj_elf("C", "takecontig.o") }
}

/* The stanzas above are simply declarations for existing object files, and
   could have been autogenerated from source. The next two blocks link them,
   using the wiring operator '<-' and adaptation primitives 'map', 'project'
   and 'flatten'. */

compound tupleStreamAdapter {
    exports [ tupin  { void out(int, bit list) },
              strout { int read(byte addr, int) } ];
    link obj_elf("C", "tupstream.o") |
        myTupleSpace, takeContiguous |
    |
        takeContiguous.source <- myTupleSpace.prod { in monotonic <- in ordered }
        tupin as myTupleSpace.cons;
        strout as takeContiguous.listProvider {
            read as flatten(map (project #2) get)
        }
}
compound Process {
    exports [ /* ... */ ];
    link exec_elf("process") |
        myTupleProducer, tupleStreamAdapter, myStreamConsumer |
    |
        myTupleProducer.dest <- tupleStreamAdapter.tupin {
            output(a, b) <- out(b, a)
        }
        myStreamConsumer.streamProvider <- tupleStreamAdapter.strout
}
```
Figure 2: Example code for the configuration shown in Figure 1.
In addition, certain differences of approach or emphasis differentiate my proposal from existing work. These include the very strong practical emphasis, an insistence that the system will be useful for a wide variety of highly heterogeneous components, and a preference for composability ahead of safety or verifiability.
5.3 Practical approach
5.3.1 Static implementation and basic case-studies
The static linker implementation will be based on Knit [2], in the first instance, and the process of extending Knit will be used to refine a suitable linkage model, notation, and set of basic adaptation primitives. This may or may not lead to a complete re-design which dispenses with the Knit implementation entirely.
Case-study evaluation of this implementation will use well-known open-source codebases including Mozilla Firefox. Once the codebase’s linkage relation has been captured in Knit code (generated mechanically), we may extract sub-graphs corresponding to individual features and, using the adaptation features, integrate them with a separate codebase. For example, we might try extracting the browser history feature from Firefox and integrating it into a file manager. The primary success criterion is that this should be possible entirely from the linkage domain, without changing any existing source code of the file manager, and without introducing any adaptation primitives which would not be widely re-usable. From the experience gained during this work, I will develop a library of common adaptations, utilising the support for an extensible set of convenient adaptation primitives described in Section 4.4.
5.3.2 Dynamic implementation
Dynamic loading is a logical extension to the static linking case, and is essential for dynamic user-directed composition. Dynamic loading is a highly general mechanism, which can add arbitrary new code into the running process. This loaded code might function entirely within the current process, or it might be stub code for communication with external processes (or the kernel). As such, dynamic loading may serve as the unique mechanism by which any new communication channel is defined or brought into scope. For example, one could imagine rewriting the traditional C code
```c
FILE *fp = fopen("/path/to/myfile", "r");
```
as a call to dynamically load a “file handle object”, e.g.
```c
FILE *fp = (FILE*) dlopen("/path/to/myfile!r", 0);
```
where we have unified file control block pointers with library handles, and encoded the read-only interface signifier "r" into the object name.
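A minimal sketch of the loader side of this convention might look as follows; the name `adapt_dlopen` is invented, and a real implementation would fall back to the ordinary dynamic loader for names without a '!'.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

void *adapt_dlopen(const char *name, int flags)
{
    (void)flags;
    const char *bang = strrchr(name, '!');
    if (!bang) return NULL;      /* delegate to the real dlopen() here */

    /* Split "path!mode" into its two parts. */
    size_t len = (size_t)(bang - name);
    char *path = malloc(len + 1);
    if (!path) return NULL;
    memcpy(path, name, len);
    path[len] = '\0';

    FILE *fp = fopen(path, bang + 1);   /* mode string follows the '!' */
    free(path);
    return fp;                          /* file handle as object handle */
}
```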
Note that this transfers perfectly well to languages providing safety guarantees. Run-time safety can be achieved by including an extra “type argument”, e.g. in Java
```java
T t = System.dlopen("/path/to/myfile!r", 0, T.class);
```
where `System.dlopen()` is polymorphic in `T` and throws an exception if the object denoted by `/path/to/myfile!r` doesn’t satisfy type `T`. The type argument is an assertion about the type of the object which the `dlopen()` call is expected to return. Static type-safety requires additionally that the type encoded by the assertion can be inferred at compile-time, such as in the above Java example where `T.class`, having type `Class<T>`, allows the compiler to infer that the result will have type `T`.
I will implement the dynamic case by embedding a variant of the adaptation language into Linux’s dynamic loader, and providing bindings for some common languages (including C and Java). Using this, and with help from the library, users will be able to dynamically specify extensions to their running applications by combining generic third-party code with adaptation logic. For example, a user might construct a browser plugin for natural language translation, dynamically, by specifying some adaptation which combines a local HTML parser library with a plain-text natural language translation web service. This, and other case-studies similar to the static case, will be sufficient for basic proof-of-concept.
To demonstrate the unifying power of dynamic loading, I will also reimplement the Unix filesystem and sockets interfaces as thin layers over the dynamic loader. As a result, programs written against only one or other of these interfaces (e.g. a webmail-to-POP gateway, binding to a socket for input) will be made to run just as easily against other targets (e.g. reading input from a log file containing a previous POP session) simply by adapting the socket address (e.g. to use a special address family, whose address structure can embed some adaptation expression denoting the session log file).
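One way the "special address family" idea might look in C is sketched below; the AF_ADAPT constant and the expression syntax are invented for illustration.

```c
/* Hypothetical sketch: a socket address whose payload is an adaptation
   expression, e.g. naming a session log file to replay. */
#include <sys/socket.h>

#define AF_ADAPT 250   /* assumed-unused address family number */

struct sockaddr_adapt {
    sa_family_t saa_family;     /* set to AF_ADAPT */
    char        saa_expr[108];  /* e.g. "replay(\"pop-session.log\")" */
};
```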
(It is possible to unify more than just the socket and filesystem interfaces. Note that `dlopen()` is essentially performing instantiation of `objects`—such objects are traditionally libraries, but might be finer-grained. Language-level object instantiation mechanisms could therefore potentially also be unified with dynamic loading. In general, there are many arguments in favour of bringing the worlds of operating system and language implementation closer together, for example the bad interactions between garbage collection and demand paging. Meanwhile, the arguments for the “revival of dynamic languages” read like a manifesto for this closer integration, since most of the cited features—dynamic structural modification, persistence, namespaces, pluggable type-checking and reflection—are already, in some form or another, features of operating systems. I hope that my findings will add further support to these arguments, although it is unlikely that any substantial exploration will be feasible in the time available.)
5.3.3 Refactoring
The final piece of implementation is a refactoring tool, for human-assisted separation of integration from functionality within existing C code. Semi-automatic refactoring is rapidly maturing, and is found in many popular development environments (notably Eclipse). Although most refactoring techniques are prototyped for Java or similar languages, refactoring C is also feasible with suitable treatment of the preprocessor. The tool would accept single C source files, and (with user assistance) output two files: one, a simpler C source file implementing the “core functionality” in terms of idealised imports and exports; the other, expressed in the configuration language, detailing the adaptations necessary to recover compatibility of this simpler code with the original non-idealised imported and exported interfaces.
To intuit a possible algorithm for this refactoring, note that in any module of source code, there is a finite set of statements or expressions which perform communication with the outside (i.e. across some external interface). For each of these points, the algorithm may use the data dependency graph to search for logic which is a candidate for shifting into the adaptation domain, to achieve the goal of better modularisation and/or lower overall complexity of the combined source and adaptation. Graph complexity measures, or even abstract syntax tree size, might be useful as heuristics for identifying such logic, but the semi-automatic approach allows fall-back onto human judgement.
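A toy sketch of such a heuristic is shown below, scoring a candidate expression by the token count of its data-dependency subtree; the AST representation is invented for illustration, and a real tool would combine several such measures.

```c
/* Hypothetical AST node: token count plus data dependencies. */
struct ast {
    int ntokens;
    struct ast *deps[8];
    int ndeps;
};

/* Score a candidate for migration into the adaptation domain: larger
   dependency subtrees feeding an external call score higher. */
int migration_score(const struct ast *n)
{
    int score = n->ntokens;
    for (int i = 0; i < n->ndeps; i++)
        score += migration_score(n->deps[i]);
    return score;
}
```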
6 Evaluation
I propose to evaluate the implementation work by a combination of two methods: case study, and software measurement.
6.1 Case study
Case study, as already described, will target some existing well-known open-source codebases, including Mozilla Firefox, and investigate the use of adaptation to produce novel combinations of code. I will target a selection of the dimensions described in Appendix A. Some example cases, and their dimensions, might be:
- adapting a web browser plug-in between different binary plug-in interfaces (e.g. Firefox to Konqueror);
- adapting a graphical application between different binary toolkit interfaces (e.g. GTK+ to Qt);
- porting a commonly useful feature from a web browser to a file manager (e.g. history or bookmarking);
- adapting a web browser extension to run as a stand-alone application in a separate process;
- adapting a graphical debugger front-end to use a new back-end with a differing command set or protocol;
- integrating a web-based on-line banking system with a home accounting program;
- any or all of the examples mentioned in the Motivation.
This “first pass” method of evaluation will proceed by demonstrating intuitively the simplicity of performing these tasks using the new configuration language and adaptation library, by comparison to conventional glue code.
6.2 Software measurement
A more rigorous assurance of success may be found by software measurement. For this purpose I will divide the proposed work into two: support for convenience of adaptation (i.e. the configuration language and its implementations), and support for refactoring (to effect the separation of functionality from integration in existing source code). Different measurements will be required for each.
In the case of the configuration language, we would like to show that using the language to perform adaptation and integration is cheaper than traditional methods (i.e. writing glue code in other languages). Direct user observation is possible, e.g. by giving coding assignments to a sample of undergraduates. However, it is difficult to factor out the familiarisation overheads associated with a new language. Instead, I propose to approximate “cheaper” with “less complex”, and use code complexity measures. Many traditional complexity measures are unsuitable because they fail to account for relevant sources of complexity. For example, cyclomatic complexity considers only the control flow graph, so would not account for complexity inherent in a regular expression string-rewriting rule (which is a particularly complex string constant). Harrison’s entropy-based measure [2] is an ordinal measure of average information per token, whose empirical evaluation demonstrates that it correlates negatively with bug density (a reasonable proxy for effective complexity). This work can likely be extended to allow comparison between my configuration language and traditional glue code, hopefully showing that the average information per symbol is greater in the new language and, correspondingly, that less total code entropy is required.
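One standard reading of “average information per token” (my formulation here, not necessarily Harrison’s exact definition) is the empirical token entropy

\[
H = -\sum_{t \in T} p_t \log_2 p_t, \qquad p_t = \frac{n_t}{\sum_{u \in T} n_u},
\]

where \(T\) is the set of distinct tokens appearing in the code and \(n_t\) is the occurrence count of token \(t\); the total code entropy of an \(N\)-token program is then roughly \(N \cdot H\).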
In the case of the refactoring engine, we wish to show that the refactored source is more composable, i.e. less coupled, than the original source. Existing measures of coupling (as first proposed by Stevens et al [2]) use ad-hoc weightings to assess the severity of coupling between each pair of modules. I propose a more principled measure, again based on information entropy—this is outlined in Appendix C. One complication is the fact that after refactoring there are at least three (rather than two) modules: the two being composed, and the adaptation or glue logic. As usual, the coupling of the ensemble can be measured as a weighted sum of the pairwise couplings of each component.
with the adaptation logic, assuming that all communication proceeds through the adapter. (A trivial “identity function” adapter does not reduce coupling; it arguably increases it, since changes to one component might need to be reflected not only in the target component but also again in the adapter. The weighting must be chosen to reflect this.)
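One plausible formalisation of this weighted sum, assuming all inter-component communication passes through the adapter \(a\), is

\[
C_{\mathrm{ensemble}} = \sum_i w_i \, C(m_i, a),
\]

where the \(m_i\) are the composed components, \(C(\cdot, \cdot)\) is the pairwise coupling measure, and the weights \(w_i\) must be chosen to reflect the caveat above.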
Evaluation will proceed by measuring the coupling of a refactored source tree, which makes use of adaptation features, and comparing it with the unrefactored version. Depending on the success of the refactoring engine, these refactored source trees might be produced semi-automatically using the tool, or else manually. Since the refactored interfaces should, intuitively, be simpler, and the adaptation logic also simpler than a traditionally written glue module, the measure should show a clear reduction in local coupling on both edges.
6.3 Non-criteria
One non-criterion regarding evaluation is performance. Clearly, the techniques described will necessitate greater indirection, greater numbers of procedures and module boundaries, run-time code generation, and other techniques which will degrade performance relative to a statically composed, statically optimised version of the same code. I am confident that the performance penalties can be substantially negated by relevant optimisation techniques. One existing example is the flattening technique employed by Knit [?]. However, I will not research such optimisations during this work.
7 Related work
Most relevant work is cited inline, including some in the Appendices. Here I highlight the recent work of most direct relevance.
**Flexible Packaging** DeLine’s 1999 thesis [?] targets almost the same problem as this proposal: how can we compose functionality in the presence of mismatched integration details? However, Flexible Packaging takes a “clean slate” approach, rethinking the entire software development process. The result is a system which places strong constraints on the languages and styles in which code can be written, and cannot be applied to existing code. The approach proposed here, although less clean and offering less dramatic reductions in integration effort, embraces the wealth of existing code and permits greater heterogeneity.
**Rich interfaces and adaptor synthesis** There is considerable work on enriching component interface specifications with additional metadata [? ?], primarily for compatibility checking. Some of these interface specifications can be used as input to adaptor synthesis algorithms [? ? ? ?]. In most cases the capability of these algorithms is limited to finite-state protocol adaptations and argument permutations. This work is essentially complementary to my proposal: it suggests some unifying notions of interface, and provides algorithms which might be incorporated into the library of adaptations.
**Coordination** Coordination languages such as Linda [?] and Reo [?] specify the interactions between concurrent computational processes, whether these be data flow (sends and receives) or control flow (waiting and resuming of processes). The non-invasive “exogenous” coordination [?] provides a separate configuration domain for expressing these interactions. This is consequently a domain convenient for performing adaptation of timing and protocol details. However, these coordination languages cannot express changes to the individual messages sent and received, meaning that they cannot express a large class of useful adaptations.
**Linkage-level flexibility** Some work has explored link- and load-time flexibility of binary code. Knit [? ] introduces flexibility into the linkage graph for ahead-of-time linking. Load and Let Link [? ] provides similar flexibility at run time. Binary Component Adaptation [? ] relaxes composition constraints for Java bytecode by rewriting typing metadata. Like this proposal, all these works make the case for flexibility at the level of linking and loading. However, none of these addresses the problem of mismatched interfaces.
8 Future work and alternatives
For some ideas on future work and alternative avenues, see Appendix D.
9 Provisional structure and timetable
Below is a draft thesis outline, interleaved over a provisional timetable which begins in January 2008 (month 0) and ends in December 2009 (month 23). Please refer back to Sections 5 and 6 for details of all the implementation and evaluation work mentioned.
1. **Introduction**
2. **Technical background**
3. **Combining linkage with adaptation** I will describe the design and implementation of a linking language supporting an extensible set of adaptation primitives, as described in Section 5. I will also detail the experimental work applying the language implementation to a selection of case studies (including the “file manager history”, “calendar sharing” and “debugger back-end” examples). From this experience, I will summarise and justify the language’s underlying linkage model and a basic library of useful adaptation primitives.
• Months 0–3: begin work on Knit; extend to support multiple binary formats; add support for externally-defined adaptation primitives; implement an argument-remapping primitive as a test
• 3–5: Knit-ify the necessary parts of two open-source codebases (provisionally Firefox and Rox Filer); identify and implement useful primitives for mix-and-match of features (e.g. history) between these
• 5–7: develop some useful case-study compositions, some simpler ones to include reference “old style” glue code implementations for later evaluation
4. **Dynamic composition with adaptation** I will describe the design and implementation of a variant of the linking language and its embedding into Linux’s dynamic loader, as described in Section 5. I will describe and explain the deviations from the original language, and detail further case studies demonstrating dynamic composition use-cases (including the “browser natural-language translation plug-in” and “POP from file” examples, along with other mix-and-match of code targeting sockets, filesystem and database interfaces).
• 7–9: embed into dynamic loading interface
• 9–10: develop case studies for dynamic composition
5. **Refactoring** I will detail the implementation of a semi-automatic refactoring engine to separate functionality from integration in existing C code, as described in Section 5.3.3. I will report experiences of applying the engine to several of the case-study compositions already developed, showing that the refactored sources permit simpler adaptation logic.
• 14–17: devise and implement refactoring
• 17–18: apply refactoring to existing case-studies, using coupling measure to evaluate
6. **Entropy-based software measures** I will describe and justify the application of Harrison’s entropy-based complexity measure to show that the total information and average information per symbol are both improved when using our adaptation techniques (as compared to traditional glue code). I will describe an entropy-based coupling measure which accounts for a broad range of sources of coupling. Although all this work logically follows the refactoring case studies, it is timetabled earlier, since limitations of the measure might constrain the choice of refactoring case studies.
• 10–11: adapt entropy-based complexity measure to adaptation code, and evaluate old-versus-new (i.e. glue versus adaptation) using case studies already implemented
• 11–14: devise entropy-based coupling measure and test against existing measures
7. **Experimental evaluation and analysis** I will summarise the experience gained from the case studies made of the configuration language, showing both intuitive and measurable improvements, the latter using the complexity measure. I will also summarise experience from the refactoring case studies, again showing both intuitive and measurable improvements, the latter using the coupling measure.
8. **Conclusions and future work**
• 18–20: leftover or additional experiments and practical work
• 20–23: final-phase dissertation writing
• 23: submission
References

O. Nierstrasz and F. Achermann. Separation of concerns through unification of concepts.
Appendices
A Dimensions of adaptation
When discussing adaptation, it is helpful to describe the concrete ways in which two components may be mismatched. Here I will enumerate some non-orthogonal dimensions of adaptation.
**Data encoding** Data of the same *meaning* may be concretely represented in many different forms. For example, there frequently exist many different file formats, character sets or network protocol messages for what are, at some higher level, the same meanings. The simplest cases of such mismatch may be handled by conversion routines, translation tables or
other mapping constructs. I discuss more specific cases of this mismatch in the following paragraphs.
**Operations** Units of code may implement logically compatible operations but differ in the concrete expression of their interfaces. For example, two traditional procedural or object-oriented interfaces might differ in the names of operations, order and types of arguments, and type of return value. This is a special case of the data encoding mismatch, since arguments and return values may be thought of more generally as structured messages. These mismatches might occur at a higher semantic level than that of conventional type systems – for example, a procedure
\[ \text{substring} :: \text{string} \rightarrow \text{int} \rightarrow \text{int} \rightarrow \text{string} \]
might interpret the two integers either as (start, end) or as (offset, length).
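A trivial adaptor bridging the two interpretations might look like this in C; both function names are hypothetical.

```c
/* Callee interprets its integers as (offset, length). */
extern char *substring_ol(const char *s, int offset, int length);

/* Adaptor exposing the (start, end) interpretation on top of it. */
char *substring_se(const char *s, int start, int end)
{
    return substring_ol(s, start, end - start);
}
```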
**Protocol** A complement of the syntactic operational mismatch is the more semantic issue of protocol mismatch. In any stateful communication, the meaning of any message depends on what messages have been sent previously, and in what order. These messages might be those of network protocols (an obvious candidate for mismatch, especially versioning issues), procedural interfaces (where each call is a pair of messages) or even variable accesses and initialisations (for example in C, where the semantics of global and local variables are subtly different with respect to initialisation protocol). Finite-state automata may be used to capture many protocols [? ?], although some may require more complex automata (e.g. a stack which may not be popped more times than it has been pushed, requiring a deterministic pushdown automaton [? ?]).
**Language** A common difficulty is combining code written in different languages. First, there must be some model of the relevant dynamic constructs of one language expressed using those of the other. Ensuing subtleties include expressivity (that all meaningful operations provided by the foreign-language component can be invoked on its native model), safety (that no undefined operations can be invoked through the model), and efficient implementability. Procedure call and other message-based communication is easily handled by value translation (i.e. marshalling), and tools exist to generate translating wrappers (e.g. Swig [?]). Some language features require new implementation techniques (e.g. spaghetti stacks) which are less straightforward to integrate with other runtimes. Sharing state is also complex, since state management semantics vary greatly (e.g. automatic versus manual) and have far-reaching implementation consequences. A final difficulty is from “closed world” assumptions which break when resources are shared: consider a garbage collector which stops all other threads. Finding all other threads is easy under the assumption that these are only those created by a unique runtime library. The ability to interpose within the language implementation, to adapt this logic, is therefore essential for supporting integration. Nevertheless, interoperation between specific implementations is essentially similar to any other problem of mismatched library composition.
**Targeted API** When programs are written against pre-existing concrete interfaces, such as system call interfaces or library interfaces, they are inevitably coupled to these interfaces. It often proves desirable, but expensive, to port software to target a different piece of supporting software – perhaps a different operating system, a different windowing toolkit, a different mathematical library, and so on. A convenient way of generating a suitable wrapper, with little or no change required to the existing code, would be a very useful form of adaptation.
**Binary interface** Essentially a special case of data encoding mismatch, ABI mismatches are caused by different conventions for data representation and communication. In compilers they are worked around by building in knowledge of several different ABIs, and by annotating source code (for example using extern "<lang>" directives in C++). This works satisfactorily only because of the small number of conventions commonly implemented for any given instruction set architecture. More generally there needs to be a systematic way of specifying ABIs and performing the necessary interposition or rewriting.
**Programming style** For any given functionality, there may be many programming styles, above the level of language, which may be used to implement it. Classic examples are the event- versus thread-based styles, the filter versus the in-place update, data-flow versus control-flow, and so on. A particularly powerful form of adaptation would be capable of mediating between logically compatible code written in different styles, enabling composition of highly heterogeneous codebases.
**“Packaging”** Although there exist innumerably many possible conventions for binary interface, programming style and the like, in practice we observe only a very small subset of these possibilities. For example, when writing some new code, we would choose to “package” it in one of several different ways: as a command-line tool, a graphical tool, a C library, a Python library, a web form, a spreadsheet macro, or various others [?]. Crucially, we would clearly pick some pre-existing set of communication conventions, rather than inventing our own. A particularly pragmatic approach, therefore, is to develop means of adapting between pre-existing (and future) packagings, for some one-time effort in each case, without going so far as to achieve the level of generality required to support any conceivable packaging with equal effort.
**Software architecture** Code frequently makes assumptions about communication topologies, control structures, causality of information flow, and reasoning about uniqueness or completeness conditions over the remainder of the system. Among the hardest and most subtle kinds of mismatch are those arising from such assumptions. Garlan et al [?] provided a case-study detailing several such problems: inability to decompose and minimise run-time code dependencies, inability to extend event loops, inaccessibility of desired interfaces to objects, introduction of unwanted multithreading, and overly constrictive concurrency control. The underlying assumptions causing these problems are usually not stated explicitly in source code, and can therefore be difficult to identify. Adapting the components to overcome these mismatches — by extending event loops, decomposing dependencies, exposing internal interfaces and altering concurrency control logic — is a challenging task.
B Survey of existing practices
**Scripting languages** Scripting languages are programming languages characterised by brevity, support for dynamic code evaluation, and lack of static type-checking. They exploit a trade-off: in return for a loss of some efficiency, elegance and safety, they gain dynamism, convenience and ease of modification. Scripting languages are therefore popular for “glue code”, whose purpose is to interface existing pieces of software. Glue code invariably performs adaptation, and has special support for this in the extensive regular expression-based string matching and rewriting found in mainstream scripting languages such as Perl and Python. Additionally, these languages provide convenient support for interacting with external code using a wide variety of system-level communication mechanisms: invoking other scripts, accessing the file system or network, invoking external programs, manipulating environment variables, and so on. However, lack of static checking can make scripts less reliable than code in conventional languages. Also, adaptation by string rewriting is especially error-prone — but is necessary because of the low-level byte-stream IPC mechanisms by which scripts and other components traditionally communicate.
**Orchestration and service-oriented computing** Related to scripting is the recent trend towards “service-oriented computing”, where large systems are decomposed into a set of passive services (typically web services or other RPC-like abstractions) and a set of proactive control components which use them. These control components might be called orchestrations, workflows, or simply scripts. Separating the proactive from the passive allows new languages to be used to express the active components, which may have convenient support for error-handling, parallelism, and asynchronous or high-latency communication [?]. This support has proved useful for building distributed applications in the wide area, in addition to providing the benefits of scripting languages. However, as with scripts, there may be considerable effort in adapting and combining multiple proactive components.
**Distributed middleware** Middlewares frequently use IDL compilers to generate communication code, supported by run-time libraries [?]. This frees application code from the need to build in particular implementations of communication abstractions (such as remote procedure call). However, since client and server code must share a common interface, and must be written to the conventions demanded by the middleware's communication abstraction (and perhaps also to the syntactic conventions of the particular IDL compiler being used), this often does not help in the case of combining code written independently.
**Component middleware** So-called “component-based” technologies such as JavaBeans, COM+ and the CORBA Component Model have become popular in industry for creating components of richer interface description (and hence greater perceived re-usability), for composing such components (often graphically), and for automatically generating certain kinds of marshalling wrappers (such as COM+’s context proxies [? ]). Note that marshalling is a form of adaptation, at the level of data representation. While useful, these technologies do not support any higher-level forms of adaptation, such as adapting between mismatched interface definitions. Therefore, although their emphasis on interface specification is helpful, they do not solve the problem of direct re-use of independently developed code.
**Programming language advances** Higher-level programming languages, including functional and higher-order languages such as Haskell or ML, provide a more powerful set of basic abstractions than many traditional languages. These include tuples, lists, streams, first-class functions, discriminated unions and pattern-matching. The inclusion of such abstractions cuts down the potential for mismatch which might otherwise be caused by differing conventions or implementations of these common abstractions. Meanwhile, the powerful computational abstractions of lazy evaluation and higher-order functions might enable more convenient expression of script-style adaptation logic [? ]. However, the need to adopt a common language, and the longstanding inconvenience of interfacing these languages with foreign code, mean that for the foreseeable future there will be greater need to perform adaptation to and from these languages rather than within them.
**Configuration languages** “Configuration languages” include roughly any language which expresses wiring, linkage, component topologies, component initialisation data or other specialisations particular to a specific deployment. Examples are linking languages [?], architecture description languages [? ? ?], exogenous coordination languages [?], the configuration languages of “inversion of control” development platforms (such as Spring\(^2\) or Castle\(^3\)), and, strictly speaking, almost all conventional programming languages. (We informally exclude the latter, for convenience of reference to the remainder.) These languages are useful for conveniently separating concerns, enabling static checking [?] and resolving some lower-level mismatches (e.g. of component naming or wiring). However, existing configuration languages do not support any higher-level forms of adaptation. We will return to this idea in Section 4.

\(^2\)http://www.springframework.org/
\(^3\)http://www.castleproject.org/
**Aspect-oriented programming** This technique [?] extends programming languages with a new kind of module called an aspect, which specifies inline insertions or modifications to code within other modules, at certain “join points” specified declaratively by the aspect definition. An aspect can be used to modularise features whose code might otherwise be scattered throughout many modules—such features might include logging, security checks, concurrency control and so on. Aspects can be treated as first-class units of composition (i.e. linkage) alongside traditional modules, and it may be convenient to effect certain adaptations either as new aspects or as changes to existing aspects, instead of making inline changes to a large set of modules. As such, aspect weaving can be seen as a particularly general adaptation primitive. However, the technique has not yet been specialised towards composition of heterogeneous multi-language systems—for example, most aspect toolchains target a particular language, while inter-module linkage is typically specified endogenously by name-matching rather than in a separate configuration domain.
**Unification of programming interfaces** The Unix motto of “everything is a file” is an instance of a general technique: defining a unified programming interface onto a disparate set of objects. The intention is to maximise the composability of application code with respect both to data, such as files, and to other pieces of application code—where communication with this code is itself abstracted by the unified interface (as with pipes). Other examples are the BSD sockets API, and the World-Wide Web with its small set of HTTP “methods”. This approach is appealing, but has drawbacks. Unification comes at the expense of semantic detail, so little static checking can be performed. In practice, most objects implement some ill-defined subset of the unified interface, discoverable only at run-time by query or, worse, only in error-handling. Worst of all, some operations of some objects simply will not be mappable satisfactorily from the unified interface; this forces either an arbitrary local choice among the many unsatisfactory ways, or the use of an escape-hatch such as Unix’s ioctl(). In both cases, the original benefit is lost, since there is now a high likelihood of mismatch with other application code. Adding a layer of indirection (i.e. adaptation) between these pieces of application code is a promising solution.
**Unification of binary interfaces** Similar to API unification, unifying binary representations and linkage models brings immediate interoperability benefits. Testament to this fact is the successful implementation of a large number of languages over Microsoft .NET’s Common Language Runtime. Since byte-code is almost always generated by tools rather than by hand, code changes are not a problem: it is sufficient to implement a compiler from each source-level language to the common byte-code. However, again the need for semantic uniformity can be restrictive: in the .NET case, all languages must use a garbage-collected heap, the .NET threading model, a common data model, a common type system, and so on. Moreover, no standard is ever final, nor adopted everywhere, so there will always be a need for adaptation between different binary-level conventions. This is evidenced by the current market for Java-to-.NET interoperability products, including bytecode translators\(^4\) and trampoline generators\(^5\).

\(^4\)IKVM.NET: http://www.ikvm.net/
\(^5\)JuggerNET: http://codemesh.com/products/juggernet/
**Metadata and annotations** Recent languages and linkage standards, including both Java bytecode and .NET intermediate code, incorporate the ability to annotate sections of code with arbitrary metadata. This is useful for making explicit the semantic distinctions between apparently unified objects, such as methods, variables, and so on. However, clearly it is also necessary for application code to take these annotations into account. Therefore, annotations are a useful set of inputs into the adaptation process, but do not in themselves solve the problems of adaptation.
C Proposed entropy-based coupling measurement
Existing software measurement work defines various measures of coupling, including some specific examples measuring local coupling. This concept was introduced by Stevens et al [2] as “the measure of the strength of association established by a connection from one module to another”. One consequent intuition is that coupling measures the likelihood that changes in one subsystem will require consequent changes in a disjoint subsystem, and therefore low coupling predicts good properties such as extensibility and maintainability. Another intuition is that high coupling correlates negatively with reusability, since the more strongly a module is coupled with its environment, the more changes are necessary in order to re-use that module in a different environment.
We would therefore like to show that compositions making appropriate use of adaptation are less strongly coupled than traditional compositions. Unfortunately, existing measurements for local coupling are ad-hoc and do not capture all sources of coupling. I propose a new measure based on information theory and a channel-based model of communication.
To illustrate, consider an untyped, unstructured communication interface such as Unix pipes or files. In order to communicate structured data, the parties must fix on conventions for coding that structure, typically a combination of whitespace and punctuation characters. The convention chosen must be understood by both parties in order for them to communicate, and is therefore
a source of coupling. Now consider a further intuition: if a component is able to understand multiple alternative structure codings—for example, if it detects whether commas or tabs are being used to delimit input fields, and interprets the input correctly in either case—then it is less coupled to its environment than if it understands only one. This is because the component and the environment have to agree on fewer choices about the code in the former case than in the latter. This “number of choices” idea clearly suggests information entropy, and it is specifically the entropy of the channel’s coding rules which correlates with coupling.
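One way to make this intuition concrete, under a simplifying uniform-choice assumption of my own: if the environment may choose among \(k\) equally likely structure codings and the component tolerates \(j \leq k\) of them, the agreement the two parties must share amounts to roughly

\[
\log_2 k - \log_2 j = \log_2 (k/j) \ \text{bits},
\]

which vanishes as the component approaches universal tolerance (\(j = k\)).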
I propose that the coupling between two modules joined by adaptation can be measured by the complexity (i.e. entropy) of the shared interface definitions on which the modules depend. Suitably advanced notions of interface are required for this to capture all meaningful sources of coupling, certainly beyond the highly syntactic interface definitions used in today’s code. For example, in the Unix pipeline
```
printenv | sed 's/=.*//'
```
which prints out the names of all defined environment variables, there is an implied contract that all lines output by `printenv` follow the pattern `NAME=value`. Temporal, timing- and protocol-based interfaces are still useful for describing these contracts, but must be applied with sufficiently fine grain to capture the complete language of individual data values sent and received, looking further than just the language-defined constructs such as procedure signatures. As an alternative to these, I may explore a new, purely channel-oriented notion of interface, based on the concepts of symbols, symbol content (i.e. the symbol itself), symbol context and context variables.
D Future work and alternatives
**Pluggable checking** So far we have considered only the problem of creating working compositions of software. A separate problem is the ability to reason about these compositions and check arbitrary properties of them (including type-safety, but potentially also quality of service, security and so on). Given our emphasis on dynamism and heterogeneity, a useful approach is that of pluggable checking, where type systems and other reasoning frameworks are composable extras applied to configurations of software. It would be interesting to add support for annotations, both as discovered properties of code and as external assertions within the configuration language, and demonstrate some examples of pluggable checkability over these (and over intervening adaptation primitives).
**Top-to-bottom traceability** Configuration languages span a spectrum from high- to low-level. At the high level, architecture description languages (ADLs) such as Unicon, Wright or xADL describe the structure of large, possibly distributed applications. With the advent of
virtualization technologies such as Xen and VMware, system software now also supports the description of entire distributed applications spanning multiple machines in the local area. (Specifically, in the case of Xen, these descriptions would be the inputs into a domain builder for distributed applications.) This opens up the possibility of describing the intended structural and extra-functional properties, using ADLs, and having the system toolchain and runtime directly support the checking, enforcement and traceability of those properties. These properties might also usefully include security properties and policies [? ].
**Adaptation-oriented programming** If all programmers had access to convenient and expressive features for adaptation, how would this change the way programs are written? Intuitively, the need to target concrete pre-existing interfaces (such as library interfaces) when writing new code seems to cause a “leakage” of complexity—where the highly general-purpose library interface imposes unwanted complexity onto what might otherwise be a much simpler client. In other words, targeting existing APIs increases the complexity of the resultant source code, and reduces its reusability. Writing code which deliberately ignores pre-existing targetable interfaces, with the expectation of using adaptation to perform this integration later, might reduce client complexity and hence improve re-usability of this new code. Investigation of this phenomenon might prove a useful alternative avenue.