Columns: text (string, 17 to 3.36M chars), source (string, 3 to 333 chars), __index_level_0__ (int64, 0 to 518k)
Visual DCT is an EPICS configuration tool written entirely in Java and therefore available on many platforms. It was developed to provide features missing in existing configuration tools such as Capfast and GDCT. Visually, Visual DCT resembles GDCT: records can be created, moved and linked, and fields and links can be easily modified. But Visual DCT offers more: using groups, records can be gathered into logical blocks, which allows a hierarchical design. In addition, arrows indicating the direction of data flow make a design easier to understand. Visual DCT has a powerful DB parser that allows importing existing DB and DBD files. The output is also a DB file; all comments and the record order are preserved, and the visual data are saved as comments, so DBs can still be edited in other tools or by hand. Great effort has been taken, and many tricks used, to optimize performance in order to compensate for the fact that Java is an interpreted language.
Visual DCT - Visual EPICS Database Configuration Tool
4,900
The NIF Integrated Computer Control System (ICCS) application software uses a set of service frameworks that assures uniform behavior spanning the front-end processors (FEPs) and supervisor programs. This uniformity is visible both in the way each program employs shared services and in the flexibility it affords for attaching graphical user interfaces (GUIs). Uniformity of structure across applications is desired for the benefit of programmers who will be maintaining the many programs that constitute the ICCS. In this paper, the framework components that have the greatest impact on the application structure are discussed.
Application Software Structure Enables NIF Operations (Kirby W. Fong)
4,901
The Swiss Light Source (SLS) has on the order of 500 magnet power supplies (PS) installed, ranging from 3 A/20 V four-quadrant PS to a 950 A/1000 V two-quadrant 3 Hz PS. All magnet PS have a local digital controller for a digital regulation loop and a 5 MHz optical point-to-point link to the VME level. The PS controller runs a pulse width/pulse repetition regulation scheme, optionally with multiple slave regulation loops. Many internal regulation parameters and controller diagnostics are readable by the control system. Industry Pack modules on standard VME carrier cards are used as the VME hardware interface, with a high control density of eight links per VME card. The low-level EPICS interface is identical for all 500 magnet PS, including insertion devices. The digital PS have proven to be very stable and reliable during commissioning of the light source, and all specifications were met for all PS. The advanced diagnostics for the magnet PS turned out to be very useful not only for diagnosing the PS but also for identifying problems with the magnets.
Application of digital regulated Power Supplies for Magnet Control at the Swiss Light Source
4,902
The commissioning of the Swiss Light Source (SLS) started in February 2000 with the Linac, continued in May 2000 with the booster synchrotron, and by December 2000 first light in the storage ring was produced. The first four beam lines had to be operational by August 2001. Thorough integration of all subsystems into the control system and a high level of automation were prerequisites for meeting the tight time schedule. A carefully balanced distribution of functionality between high-level and low-level applications allowed an optimization of short development cycles and high reliability of the applications. High-level applications were implemented as CORBA-based client/server applications (tcl/tk and Java based clients, C++ based servers), IDL applications using EZCA, medm/dm2k screens, and tcl/tk applications using CDEV. Low-level applications were mainly built as EPICS process databases, SNL state machines and customized drivers. Functionality of the high-level applications was encapsulated and pushed to lower levels wherever this had proven adequate. That made it possible to reduce machine setups to a handful of physical parameters and allowed the use of standard EPICS tools for display, archiving and processing of complex physical values. High reliability and reproducibility were achieved with this approach.
System Integration of High Level Applications during the Commissioning of the Swiss Light Source
4,903
The Integrated Computer Control System (ICCS) for the National Ignition Facility (NIF) is a layered architecture of 300 front-end processors (FEP) coordinated by supervisor subsystems including automatic beam alignment and wavefront control, laser and target diagnostics, pulse power, and shot control timed to 30 ps. FEP computers incorporate either VxWorks on PowerPC or Solaris on UltraSPARC processors that interface to over 45,000 control points attached to VME-bus or PCI-bus crates, respectively. Typical devices are stepping motors, transient digitizers, calorimeters, and photodiodes. The front-end layer also includes a second segment comprising an additional 14,000 control points for industrial controls including vacuum, argon, synthetic air, and safety interlocks implemented with Allen-Bradley programmable logic controllers (PLCs). The computer network is augmented by asynchronous transfer mode (ATM), which delivers video streams from 500 sensor cameras monitoring the 192 laser beams to operator workstations. Software is based on an object-oriented framework using CORBA distribution that incorporates services for archiving, machine configuration, graphical user interface, monitoring, event logging, scripting, alert management, and access control. Software coding, using a mixed language environment of Ada95 and Java, is one-third complete at over 300 thousand source lines. Control system installation is currently under way for the first 8 beams, with project completion scheduled for 2008.
The Overview of the National Ignition Facility Distributed Computer Control System
4,904
In this paper we outline a software development process for safety-critical systems that aims at combining some of the specific strengths of model-based development with those of programming language based development using safety-critical subsets of Ada. Model-based software development and model-based test case generation techniques are combined with code generation techniques and tools providing a transition from model to code both for a system itself and for its test cases. This allows developers to combine domain-oriented, model-based techniques with source code based validation techniques, as required for conformity with standards for the development of safety-critical software, such as the avionics standard RTCA/DO-178B. We introduce the AutoFocus and Validator modeling and validation toolset and sketch its usage for modeling, test case generation, and code generation in a combined approach, which is further illustrated by a simplified leading edge aerospace model with built-in fault tolerance.
Model-Based Software Engineering and Ada: Synergy for the Development of Safety-Critical Systems
4,905
The validation of modern software systems incorporates both functional and quality requirements. This paper proposes a validation approach for one software quality requirement: power consumption. This approach validates whether the software produces the desired results with a minimum expenditure of energy. We present energy requirements and an approach for their validation using a power consumption model, test-case specification, software traces, and power measurements. Three different approaches for power data gathering are described. The power consumption of mobile phone applications is obtained and matched against the power consumption model.
Software Validation using Power Profiles
4,906
Consistency, defined as the requirement that a series of measurements of the same project carried out by different raters using the same method should produce similar results, is one of the most important aspects to be taken into account in software measurement methods. In spite of this, there is a widespread view that many measurement methods introduce an undesirable amount of subjectivity into the measurement process. This perception has led several organizations to develop revisions of the standard methods whose main aim is to improve consistency by suitably modifying those aspects believed to introduce the greatest degree of subjectivity. Each revision of a method must be empirically evaluated to determine to what extent the aim of improving its consistency is achieved. In this article we define a homogeneous statistic intended to describe the consistency level of a method, and we develop the statistical analysis that should be carried out in order to conclude whether or not one measurement method is more consistent than another.
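To make the notion of rater consistency concrete, the following minimal Python sketch computes one possible indicator: the average coefficient of variation of the sizes reported by several raters for the same projects. The statistic and the measurement data are purely illustrative assumptions; the paper's own statistic is not reproduced here.

from statistics import mean, pstdev

# measurements[project] = sizes reported by each rater for that project (made-up data)
measurements = {
    "project_a": [112, 118, 109, 121],
    "project_b": [340, 355, 349, 338],
    "project_c": [58, 71, 64, 60],
}

def consistency_score(per_project):
    """Average coefficient of variation across projects; smaller means the raters agree more."""
    cvs = [pstdev(vals) / mean(vals) for vals in per_project.values()]
    return mean(cvs)

print(f"consistency score: {consistency_score(measurements):.3f}")

Comparing this score between two revisions of a method, over the same set of projects and raters, is the kind of question the statistical analysis in the article is meant to settle.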
An Assessment of the Consistency for Software Measurement Methods
4,907
A major part of debugging, testing, and analyzing a complex software system is understanding what is happening within the system at run-time. Some developers advocate running within a debugger to better understand the system at this level. Others embed logging statements, even in the form of hard-coded calls to print functions, throughout the code. These techniques are all general, rough forms of what we call system monitoring, and, while they have limited usefulness in simple, sequential systems, they are nearly useless in complex, concurrent ones. We propose a set of new mechanisms, collectively known as a monitoring system, for understanding such complex systems, and we describe an example implementation of such a system, called IDebug, for the Java programming language.
Monitoring and Debugging Concurrent and Distributed Object-Oriented Systems
4,908
Semantic properties are domain-specific specification constructs used to augment an existing language with richer semantics. These properties are taken advantage of in system analysis, design, implementation, testing, and maintenance through the use of documentation and source-code transformation tools. Semantic properties are themselves specified at two levels: loosely with precise natural language, and formally within the problem domain. The refinement relationships between these specification levels, as well as between a semantic property's use and its realization in program code via tools, is specified with a new formal method for reuse called kind theory.
Semantic Properties for Lightweight Specification in Knowledgeable Development Environments
4,909
Building complex software systems necessitates the use of component-based architectures. In theory, of the set of components needed for a design, only some small portion of them are "custom"; the rest are reused or refactored existing pieces of software. Unfortunately, this is an idealized situation. Just because two components should work together does not mean that they will work together. The "glue" that holds components together is not just technology. The contracts that bind complex systems together implicitly define more than their explicit type. These "conceptual contracts" describe essential aspects of extra-system semantics: e.g., object models, type systems, data representation, interface action semantics, legal and contractual obligations, and more. Designers and developers spend inordinate amounts of time technologically duct-taping systems to fulfill these conceptual contracts because system-wide semantics have not been rigorously characterized or codified. This paper describes a formal characterization of the problem and discusses an initial implementation of the resulting theoretical system.
Semantic Component Composition
4,910
J. Albrecht's Function Point Analysis (FPA) is a method to determine the functional size of software products. An organization called the International Function Point Users Group (IFPUG) considers FPA a standard in software functional size measurement. Albrecht's method is followed by the IFPUG method, which includes some modifications intended to improve it. A limitation of the method is that FPA is not sensitive enough to differentiate the functional size of small enhancements. That affects productivity analysis, where the functional size of the software product is required. To give the functional size measurement more power, A. Abran, M. Maya and H. Nguyeckim have proposed some modifications to improve it. The IFPUG v 4.1 method that includes these modifications is named IFPUG v 4.1 extended. In this work we set the conditions for delimiting granular from non-granular functions, and we calculate the static calibration and sensitivity graphs for the measurements of a set of projects with a high percentage of granular functions, all of them measured with both the IFPUG v 4.1 method and IFPUG v 4.1 extended. Finally, we carry out a statistical analysis to determine whether or not significant differences exist between the two methods.
The analysis of the IFPUG method sensitivity
4,911
We present GUPU, a side-effect free environment specialized for programming courses. It seamlessly guides and supports students during all phases of program development, covering specification, implementation, and program debugging. GUPU features several innovations in this area. The specification phase is supported by reference implementations augmented with diagnostic facilities. During implementation, immediate feedback from test cases and from visualization tools helps the programmer understand the program. A set of slicing techniques narrows down programming errors. The whole process is guided by a marking system.
Declarative program development in Prolog with GUPU
4,912
This paper describes the COINS (COnstraint-based INteractive Solving) system: a conflict-based constraint solver. It helps in understanding inconsistencies, simulates constraint additions and/or retractions (without any propagation), determines whether a given constraint belongs to a conflict, and provides diagnosis tools (e.g. why variable v cannot take value val). COINS also uses a user-friendly representation of conflicts and explanations.
COINS: a constraint-based interactive solving system
4,913
Previous work in the area of tracing CLP(FD) programs mainly focuses on providing information about control of execution and domain modification. In this paper, we present a trace structure that provides information about additional important aspects. We incorporate explanations in the trace structure, i.e. reasons for why certain solver actions occur. Furthermore, we come up with a format for describing the execution of the filtering algorithms of global constraints. Some new ideas about the design of the trace are also presented. For example, we have modeled our trace as a nested block structure in order to achieve a hierarchical view. Also, new ways about how to represent and identify different entities such as constraints and domain variables are presented.
Tracing and Explaining Execution of CLP(FD) Programs
4,914
CLPGUI is a graphical user interface for visualizing and interacting with constraint logic programs over finite domains. In CLPGUI, the user can control the execution of a CLP program through several views of constraints, of finite domain variables and of the search tree. CLPGUI is intended to be used both for teaching purposes, and for debugging and improving complex programs of real-world scale. It is based on a client-server architecture for connecting the CLP process to a Java-based GUI process. Communication by message passing provides an open architecture which facilitates the reuse of graphical components and the porting to different constraint programming systems. Arbitrary constraints and goals can be posted incrementally from the GUI. We propose several dynamic 2D and 3D visualizations of the search tree and of the evolution of finite domain variables. We argue that the 3D representation of search trees proposed in this paper provides the most appropriate visualization of large search trees. We describe the current implementation of the annotations and of the interactive execution model in GNU-Prolog, and report some evaluation results.
CLPGUI: a generic graphical user interface for constraint logic programming over finite domains
4,915
Type analyses of logic programs which aim at inferring the types of the program being analyzed are presented in a unified abstract interpretation-based framework. This covers most classical abstract interpretation-based type analyzers for logic programs, built on either top-down or bottom-up interpretation of the program. In this setting, we discuss the widening operator, arguably a crucial one. We present a new widening which is more precise than those previously proposed. Practical results with our analysis domain are also presented, showing that it also allows for efficient analysis.
More Precise Yet Efficient Type Inference for Logic Programs
4,916
Constraint logic programming combines declarativity and efficiency thanks to constraint solvers implemented for specific domains. Value withdrawal explanations have been used efficiently in several constraint programming environments, but no formalization of them exists. This paper is an attempt to fill this gap. Furthermore, we hope that this theoretical tool could help to validate some programming environments. A value withdrawal explanation is a tree describing the withdrawal of a value during a domain reduction by local consistency notions and labeling. Domain reduction is formalized by a search tree using two kinds of operators: operators for local consistency notions and operators for labeling. These operators are defined by sets of rules. Proof trees are built with respect to these rules. For each removed value, there exists such a proof tree, which is the withdrawal explanation of this value.
Value withdrawal explanations: a theoretical tool for programming environments
4,917
In this paper we present a simple source code configuration tool. ExLibris operates on libraries and can be used to extract from local libraries all code relevant to a particular project. Our approach is not designed to address problems arising in code production lines, but rather to support the needs of individual or small teams of researchers who wish to communicate their Prolog programs. In the process, we also wish to accommodate and encourage the writing of reusable code. Moreover, we support and propose ways of dealing with issues arising in the development of code that can be run on a variety of like-minded Prolog systems. With consideration to these aims we have made the following decisions: (i) support file-based source development, (ii) require minimal program transformation, (iii) target simplicity of usage, and (iv) introduce a minimal number of new primitives.
Exporting Prolog source code
4,918
The twelfth Workshop on Logic Programming Environments, WLPE 2002, is one in a series of international workshops held in this topic area. The workshops facilitate the exchange of ideas and results among researchers and system developers on all aspects of environments for logic programming. Relevant topics for these workshops include user interfaces, human engineering, execution visualization, development tools, support for new paradigms, and interfacing to language system tools and external systems. This twelfth workshop was held in Copenhagen, following the successful eleventh Workshop on Logic Programming Environments held in Cyprus in December 2001. WLPE 2002 features ten presentations. The presentations involve, in some way, constraint logic programming, object-oriented programming and abstract interpretation. Topic areas addressed include tools for software development, execution visualization, software maintenance, and instructional aids. This workshop was a post-conference workshop at ICLP 2002. Alexandre Tessier, Program Chair, WLPE 2002, June 2002.
Proceedings of the 12th International Workshop on Logic Programming Environments
4,919
It is next to impossible to develop real-life applications in just pure Prolog. With XPCE we realised a mechanism for integrating Prolog with an external object-oriented system that turns this OO system into a natural extension to Prolog. We describe the design and how it can be applied to other external OO systems.
An Architecture for Making Object-Oriented Systems Available from Prolog
4,920
The Gisela framework for declarative programming was developed with the specific aim of providing a tool that would be useful for knowledge representation and reasoning within real-world applications. To achieve this, a complete integration into an object-oriented application development environment was used. The framework and methodology developed provide two alternative application programming interfaces (APIs): Programming using objects or programming using a traditional equational declarative style. In addition to providing complete integration, Gisela also allows extensions and modifications due to the general computation model and well-defined APIs. We give a brief overview of the declarative model underlying Gisela and we present the methodology proposed for building applications together with some real examples.
Enhancing Usefulness of Declarative Programming Frameworks through Complete Integration
4,921
Together is the recommended software development tool in the Atlas collaboration. The programmatic API, which provides the capability to use and augment Together's internal functionality, comprises three major components: IDE, RWI and SCI. IDE is a read-only interface used to generate custom outputs based on the information contained in a Together model. RWI allows one both to extract information from and to write information to a Together model. SCI is the Source Code Interface; as the name implies, it allows working at the level of the source code. Together is extended by writing modules (Java classes) that make extensive use of the relevant API. We exploited Together's extensibility to add support for the Atlas Dictionary Language. ADL is an extended subset of OMG IDL. The implemented module (ADLModule) makes Together support ADL keywords, enables options, and generates ADL object descriptions directly from UML class diagrams. The module thoroughly accesses a Together reverse-engineered C++ project and/or design-only class diagrams, and it is general enough to allow for possible additional HEP-specific tailoring of the Together tool.
Extending the code generation capabilities of the Together CASE tool to support Data Definition languages
4,922
Power law distributions have been found in many natural and social phenomena, and more recently in the source code and run-time characteristics of Object-Oriented (OO) systems. A power law implies that small values are extremely common, whereas large values are extremely rare. In this paper, we identify twelve new power laws relating to the static graph structures of Java programs. The graph structures analyzed represented different forms of OO coupling, namely inheritance, aggregation, interface, parameter type and return type. Identification of these new laws provides the basis for predicting likely features of classes in future developments. The research in this paper ties together work in object-based coupling and World Wide Web structures.
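As a rough illustration of what such an analysis involves (the corpus and exponents here are invented, not the paper's), the sketch below tallies the fan-in of each class in a small coupling graph and estimates a power-law exponent from the slope of the degree distribution on log-log axes.

import math
from collections import Counter

# edges: (client class, supplier class) pairs, e.g. aggregation or parameter-type coupling
edges = [("A", "B"), ("A", "C"), ("B", "C"), ("D", "C"), ("E", "C"),
         ("F", "C"), ("G", "B"), ("H", "C"), ("I", "B"), ("J", "C")]

in_degree = Counter(supplier for _, supplier in edges)   # fan-in per class
freq = Counter(in_degree.values())                       # number of classes with in-degree k

# A power law P(k) ~ k**(-alpha) is a straight line on log-log axes; estimate the
# slope by least squares over the observed (log k, log count) points.
xs = [math.log(k) for k in freq]
ys = [math.log(n) for n in freq.values()]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
print("degree frequencies:", dict(freq), "estimated exponent:", round(-slope, 2))

On real systems the fit would of course be made over thousands of classes and checked against alternative distributions before claiming a power law.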
Power Law Distributions in Class Relationships
4,923
Athena is the ATLAS off-line software framework, based upon the GAUDI architecture from LHCb. As part of ATLAS' continuing efforts to enhance and customise the architecture to meet our needs, we have developed a data object description tool suite and service for Athena. The aim is to provide a set of tools to describe, manage, integrate and use the Event Data Model at a design level according to the concepts of the Athena framework (use of patterns, relationships, ...). Moreover, to ensure stability and reusability this must be fully independent from the implementation details. After an extensive investigation into the many options, we have developed a language grammar based upon a description language (IDL, ODL) to provide support for object integration in Athena. We have then developed a compiler front end based upon this language grammar, JavaCC, and a Java Reflection API-like interface. We have then used these tools to develop several compiler back ends which meet specific needs in ATLAS such as automatic generation of object converters, and data object scripting interfaces. We present here details of our work and experience to date on the Athena Definition Language and Athena Data Dictionary.
The Athena Data Dictionary and Description Language
4,924
The concept of Virtual Monte Carlo (VMC) has been developed by the ALICE Software Project to allow different Monte Carlo simulation programs to run without changing the user code, such as the geometry definition, the detector response simulation or input and output formats. Recently, the VMC classes have been integrated into the ROOT framework, and the other relevant packages have been separated from the AliRoot framework and can be used individually by any other HEP project. The general concept of the VMC and its set of base classes provided in ROOT will be presented. Existing implementations for Geant3, Geant4 and FLUKA and simple examples of usage will be described.
The Virtual Monte Carlo
4,925
XML is now becoming an industry standard for data description and exchange. Despite this, there are still some questions about how, or whether, this technology can be useful in High Energy Physics software development and data analysis. This paper aims to answer these questions by demonstrating how XML is used in the IceCube software development system, data handling and analysis. It does this by first surveying the concepts and tools that make up the XML technology. It then goes on to discuss concrete examples of how these concepts and tools are used to speed up software development in IceCube and what the benefits of using XML in IceCube's data handling and analysis chain are. The overall aim of this paper is to show that XML has many benefits to bring to High Energy Physics software development and data analysis.
Concrete uses of XML in software development and data analysis
4,926
Oval is a testing tool which helps developers detect unexpected changes in the behavior of their software. It is able to automatically compile test programs, prepare the needed configuration files on the fly, run the tests within a specified Unix environment, and finally analyze the output and check expectations. Oval does not provide utility code to help write the tests; it is therefore largely independent of the programming/scripting language of the software to be tested. It can be seen as a kind of robot which applies the tests and warns about any unexpected change in the output. Oval was developed by the LLR laboratory for the needs of the CMS experiment, and it is now recommended by the CERN LCG project.
OVAL: the CMS Testing Robot
4,927
When the IceCube experiment started serious software development, it needed a development environment in which both its developers and clients could work and that would encourage and support a good software development process. Some of the key features that IceCube wanted in such an environment were: the separation of the configuration and build tools; inclusion of an issue tracking system; support for the Unified Change Model; support for unit testing; and support for continuous building. No single, affordable, off-the-shelf environment offered all these features. However, there are many open source tools that address subsets of these features, so IceCube set about selecting those tools which it could use in developing its own environment and adding its own tools where no suitable ones were found. This paper outlines the tools that were chosen, their responsibilities in the development environment, and how they fit together. The complete environment will be demonstrated with a walk-through of a single cycle of the development process.
IceCube's Development Environment
4,928
FAYE, the Frame AnalYsis Executable, is a Java-based implementation of the Frame/Stream/Stop model for analyzing data. Unlike traditional event-based analysis models, the Frame/Stream/Stop model has no preference as to which part of the data is to be analyzed, and an event gets the same treatment as a change in the high voltage. This makes FAYE a suitable analysis framework for many different types of data analysis, such as studying detector trends or serving as a visualization core. During the design of FAYE the emphasis has been on clearly delineating each of the executable's responsibilities and on keeping their implementations as independent as possible. This leads to the larger part of FAYE being a generic, experiment-independent core, with a smaller section that customizes this core to an experiment's own data structures. This customization can even be done in C++, using JNI, while the executable's control remains in Java. This paper reviews the Frame/Stream/Stop model and then looks at how FAYE has approached its implementation, with an emphasis on which responsibilities are handled by the generic core and which parts an experiment must provide as part of the customization portion of the executable.
FAYE: A Java Implementation of the Frame/Stream/Stop Analysis Model
4,929
In this talk we will review the major additions and improvements made to the ROOT system in the last 18 months and present our plans for future developments. The additions and improvements range from modifications to the I/O sub-system to allow users to save and restore objects of classes that have not been instrumented by special ROOT macros, to the addition of a geometry package designed for building, browsing, tracking and visualizing detector geometries. Other improvements include enhancements to the quick analysis sub-system (TTree::Draw()), the addition of classes that allow inter-file object references (TRef, TRefArray), better support for templated and STL classes, amelioration of the Automatic Script Compiler and the incorporation of new fitting and mathematical tools. Efforts have also been made to increase the modularity of the ROOT system with the introduction of more abstract interfaces and the development of a plug-in manager. In the near future, we intend to continue the development of PROOF and its interfacing with GRID environments. We plan on providing an interface between Geant3, Geant4 and Fluka and the new geometry package. The ROOT GUI classes will finally be available on Windows and we plan to release a GUI inspector and builder. In the last year, ROOT has drawn the endorsement of additional experiments and institutions. It is now officially supported by CERN and used as a key I/O component by the LCG project.
ROOT Status and Future Developments
4,930
Managing large-scale software products is a complex software engineering task. The automation of the software development, release and distribution process is most beneficial in large collaborations, where a large number of developers, multiple platforms and a distributed environment are typical factors. This paper describes the Build and Output Analyzer framework and its components, which have been developed in CMS to facilitate software maintenance and improve software quality. The system allows generating, controlling and analyzing various types of automated software builds and tests, such as regular rebuilds of the development code, software integration for releases, and installation of existing versions.
BOA: Framework for Automated Builds
4,931
Virtual organizations (VOs) are communities of resource providers and users distributed over multiple policy domains. These VOs often wish to define and enforce consistent policies in addition to the policies of their underlying domains. This is challenging, not only because of the problems in distributing the policy to the domains, but also because of the fact that those domains may each have different capabilities for enforcing the policy. The Community Authorization Service (CAS) solves this problem by allowing resource providers to delegate some policy authority to the VO while maintaining ultimate control over their resources. In this paper we describe CAS and our past and current implementations of CAS, and we discuss our plans for CAS-related research.
The Community Authorization Service: Status and Future
4,932
The Athena Startup Kit (ASK) is an interactive front-end to the Atlas software framework (ATHENA). Written in Python, a very effective "glue" language, it is built on top of the, in principle unrelated, code repository, build, configuration, debug, binding, and analysis tools. ASK automates many error-prone tasks that are otherwise left to the end-user, thereby pre-empting a whole category of potential problems. Through the existing tools, which ASK will set up for the user if and as needed, it locates available resources, maintains job coherency, manages the run-time environment, allows for interactivity and debugging, and provides standalone execution scripts. An end-user who wants to run her own analysis algorithms within the standard environment can let ASK generate the appropriate skeleton package, the needed dependencies and run-time, as well as a default job options script. For new and casual users, ASK comes with a graphical user interface; for advanced users, ASK has a scriptable command line interface. Both are built on top of the same set of libraries. ASK does not need to be, and is not, experiment neutral; thus it has built-in workarounds for known gotchas that would otherwise be a major time sink for each and every new user. ASK minimizes the overhead for those physicists in Atlas who just want to write and run their analysis code.
The Athena Startup Kit
4,933
The Gaudi/Athena and Grid Alliance (GANGA) is a front-end for the configuration, submission, monitoring, bookkeeping, output collection, and reporting of computing jobs run on a local batch system or on the grid. In particular, GANGA handles jobs that use applications written for the Gaudi software framework shared by the Atlas and LHCb experiments. GANGA exploits the commonality of Gaudi-based computing jobs, while insulating against grid-, batch- and framework-specific technicalities, to maximize end-user productivity in defining, configuring, and executing jobs. Designed for a Python-based component architecture, GANGA has a modular underpinning and is therefore well placed for contributing to, and benefiting from, work in related projects. Its functionality is accessible both from a scriptable command-line interface, for expert users and automated tasks, and through a graphical interface, which simplifies the interaction with GANGA for beginning and casual users. This paper presents the GANGA design and implementation, the development of the underlying software bus architecture, and the functionality of the first public GANGA release.
GANGA: a user-Grid interface for Atlas and LHCb
4,934
The Atlas collaboration at CERN has adopted the Gaudi software architecture which belongs to the blackboard family: data objects produced by knowledge sources (e.g. reconstruction modules) are posted to a common in-memory data base from where other modules can access them and produce new data objects. The StoreGate has been designed, based on the Atlas requirements and the experience of other HENP systems such as Babar, CDF, CLEO, D0 and LHCB, to identify in a simple and efficient fashion (collections of) data objects based on their type and/or the modules which posted them to the Transient Data Store (the blackboard). The developer also has the freedom to use her preferred key class to uniquely identify a data object according to any other criterion. Besides this core functionality, the StoreGate provides the developers with a powerful interface to handle in a coherent fashion persistable references, object lifetimes, memory management and access control policy for the data objects in the Store. It also provides a Handle/Proxy mechanism to define and hide the cache fault mechanism: upon request, a missing Data Object can be transparently created and added to the Transient Store presumably retrieving it from a persistent data-base, or even reconstructing it on demand.
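The core idea described here, retrieving objects from a shared transient store by their type and an optional key, with a transparent cache-fault mechanism, can be illustrated with a small Python sketch. The class and method names below are illustrative assumptions, not the ATLAS StoreGate C++ API.

from typing import Any, Callable

class TransientStore:
    """Minimal sketch of a (type, key)-addressed blackboard with on-demand loading."""
    def __init__(self):
        self._objects = {}    # (type, key) -> object
        self._loaders = {}    # (type, key) -> callable building the object on demand

    def record(self, obj: Any, key: str = "") -> None:
        self._objects[(type(obj), key)] = obj

    def register_loader(self, cls: type, key: str, loader: Callable[[], Any]) -> None:
        # Loader is invoked only if the object is missing, e.g. read back from persistency.
        self._loaders[(cls, key)] = loader

    def retrieve(self, cls: type, key: str = "") -> Any:
        slot = (cls, key)
        if slot not in self._objects and slot in self._loaders:
            self._objects[slot] = self._loaders[slot]()   # transparent cache-fault handling
        return self._objects[slot]

class TrackCollection(list):
    pass

store = TransientStore()
store.register_loader(TrackCollection, "Reco", lambda: TrackCollection(["t1", "t2"]))
print(store.retrieve(TrackCollection, "Reco"))    # built on demand: ['t1', 't2']

The real system adds persistable references, object lifetime management and access control on top of this basic type-and-key addressing.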
The StoreGate: a Data Model for the Atlas Software Architecture
4,935
Decisions on which classes to refactor are fraught with difficulty. The problem of identifying candidate classes becomes acute when confronted with large systems comprising hundreds or thousands of classes. In this paper, we describe a metric by which key classes, and hence candidates for refactoring, can be identified. Measures quantifying the usage of two forms of coupling, inheritance and aggregation, together with two other class features (number of methods and attributes) were extracted from the source code of three large Java systems. Our research shows that metrics from other research domains can be adapted to the software engineering process. Substantial differences were found between each of the systems in terms of the key classes identified and hence opportunities for refactoring those classes varied between those systems.
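The selection of key classes can be pictured as ranking classes by a score built from coupling and size measures. The weighting and the per-class numbers below are hypothetical; the paper's exact metric is not reproduced, only the general shape of such a ranking.

# Sketch: rank classes as refactoring candidates from per-class measures.
classes = {
    # name: (inheritance coupling, aggregation coupling, methods, attributes)
    "OrderManager": (1, 14, 42, 18),
    "Customer":     (0,  3, 12,  7),
    "ReportWriter": (2,  9, 31, 11),
    "Money":        (0,  1,  8,  3),
}

def normalize(values):
    hi = max(values) or 1
    return [v / hi for v in values]

cols = list(zip(*classes.values()))                  # one tuple per measure
norm = list(zip(*(normalize(c) for c in cols)))      # back to one tuple per class
scores = {name: sum(row) for name, row in zip(classes, norm)}

for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:14s} {score:.2f}")

Classes near the top of such a ranking are the ones whose coupling and size make them the most rewarding, and the most risky, refactoring targets.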
Making refactoring decisions in large-scale Java systems: an empirical stance
4,936
The baseline design and implementation of the DataFlow system, to be documented in the ATLAS DAQ/HLT Technical Design Report in summer 2003, will be presented. Emphasis will be placed on system performance and scalability, based on results from prototyping studies which have maximised the use of commercially available hardware.
The DataFlow System of the ATLAS Trigger and DAQ
4,937
Web Engineering is the application of systematic, disciplined and quantifiable approaches to development, operation, and maintenance of Web-based applications. It is both a pro-active approach and a growing collection of theoretical and empirical research in Web application development. This paper gives an overview of Web Engineering by addressing the questions: a) why is it needed? b) what is its domain of operation? c) how does it help and what should it do to improve Web application development? and d) how should it be incorporated in education and training? The paper discusses the significant differences that exist between Web applications and conventional software, the taxonomy of Web applications, the progress made so far and the research issues and experience of creating a specialisation at the master's level. The paper reaches a conclusion that Web Engineering at this stage is a moving target since Web technologies are constantly evolving, making new types of applications possible, which in turn may require innovations in how they are built, deployed and maintained.
Web Engineering
4,938
Adding small code snippets at key points to existing code fragments is called instrumentation. It is an established technique for debugging certain otherwise hard-to-solve faults, such as memory management issues and data races. Dynamic instrumentation can already be used to analyse code which is loaded or even generated at run time. With the advent of environments such as the Java Virtual Machine with optimizing Just-In-Time compilers, a new obstacle arises: self-modifying code. In order to instrument this kind of code correctly, one must be able to detect modifications and adapt the instrumentation code accordingly, preferably without incurring a high speed penalty. In this paper we propose an innovative technique that uses the hardware page protection mechanism of modern processors to detect such modifications. We also show how an instrumentor can adapt the instrumented version depending on the kind of modifications, and we present an experimental evaluation of these techniques.
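The detect-and-re-instrument loop can be sketched without the hardware details. The paper's technique traps writes with page protection; in the sketch below a checksum over each code region stands in for the write fault, purely to keep the example self-contained, and the region contents and probe bytes are invented.

import hashlib

code_regions = {"region_0": b"\x55\x48\x89\xe5", "region_1": b"\x90\x90\xc3"}
instrumented = {}     # region -> (checksum of original bytes, instrumented copy)

def instrument(region, raw):
    digest = hashlib.sha256(raw).hexdigest()
    instrumented[region] = (digest, b"<probe>" + raw)     # placeholder for real instrumentation
    return instrumented[region][1]

def fetch_for_execution(region):
    raw = code_regions[region]
    digest, cached = instrumented.get(region, (None, None))
    if digest != hashlib.sha256(raw).hexdigest():         # region was generated or modified
        cached = instrument(region, raw)                  # re-instrument before executing
    return cached

print(fetch_for_execution("region_0"))
code_regions["region_0"] = b"\x55\x48\x31\xc0"            # simulate self-modification
print(fetch_for_execution("region_0"))                    # re-instrumented automatically

With page protection the check is free until a write actually occurs, which is the point of the technique: the comparison above happens only when the fault handler fires, not on every fetch.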
Instrumenting self-modifying code
4,939
Many programmers have had to deal with an overwritten variable resulting for example from an aliasing problem. The culprit is obviously the last write-access to that memory location before the manifestation of the bug. The usual technique for removing such bugs starts with the debugger by (1) finding the last write and (2) moving the control point of execution back to that time by re-executing the program from the beginning. We wish to automate this. Step (2) is easy if we can somehow mark the last write found in step (1) and control the execution-point to move it back to this time. In this paper we propose a new concept, position, that is, a point in the program execution trace, as needed for step (2) above. The position enables debuggers to automate the control of program execution to support common debugging activities. We have implemented position in C by modifying GCC and in Java with a bytecode transformer. Measurements show that position can be provided with an acceptable amount of overhead.
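The notion of a position, a numbered point in the execution trace to which the debugger can return, can be shown with a tiny Python sketch. The wrapper class, the write counter and the aliasing scenario are illustrative assumptions, not the GCC or bytecode implementation described in the paper.

class TracedCell:
    """Sketch: number every write so the last write before a symptom can be located."""
    _clock = 0                           # global write counter = execution position

    def __init__(self, value):
        self.value = value
        self.last_write_position = None

    def write(self, value):
        TracedCell._clock += 1
        self.value = value
        self.last_write_position = TracedCell._clock

balance = TracedCell(100)
alias = balance                          # aliasing: both names refer to the same cell
balance.write(150)
alias.write(-999)                        # the write that actually corrupts the value

assert balance.value < 0                 # the symptom observed later
print("re-execute and stop at position", balance.last_write_position)   # -> 2

Once the position of the offending write is known, the debugger can re-execute the program and halt exactly there, which is the step the paper automates.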
Timestamp Based Execution Control for C and Java Programs
4,940
The paper proposes a theoretical approach of the debugging of constraint programs based on a notion of explanation tree. The proposed approach is an attempt to adapt algorithmic debugging to constraint programming. In this theoretical framework for domain reduction, explanations are proof trees explaining value removals. These proof trees are defined by inductive definitions which express the removals of values as consequences of other value removals. Explanations may be considered as the essence of constraint programming. They are a declarative view of the computation trace. The diagnosis consists in locating an error in an explanation rooted by a symptom.
Towards declarative diagnosis of constraint programs over finite domains
4,941
This paper presents a novel technique for the automatic type identification of arbitrary memory objects from a memory dump. Our motivating application is debugging memory corruption problems in optimized, production systems -- a problem domain largely unserved by extant methodologies. We describe our algorithm as applicable to any typed language, and we discuss it with respect to the formidable obstacles posed by C. We describe the heuristics that we have developed to overcome these difficulties and achieve effective type identification on C-based systems. We further describe the implementation of our heuristics on one C-based system -- the Solaris operating system kernel -- and describe the extensions that we have added to the Solaris postmortem debugger to allow for postmortem type identification. We show that our implementation yields a sufficiently high rate of type identification to be useful for debugging memory corruption problems. Finally, we discuss some of the novel automated debugging mechanisms that can be layered upon postmortem type identification.
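The propagation step at the heart of type identification, starting from objects whose types are known and following typed pointer fields outward, can be sketched on a toy heap. The addresses, type table and first-wins heuristic below are invented; the actual implementation works on raw kernel memory inside the Solaris postmortem debugger.

from collections import deque

# "memory dump": address -> addresses stored in the object's pointer slots
dump = {0x100: [0x200, 0x300], 0x200: [0x300], 0x300: [], 0x400: [0x300]}

# static type table: for a given type, the type of each pointer slot
pointer_fields = {"proc_t": ["thread_t", "cred_t"], "thread_t": ["cred_t"], "cred_t": []}

known = {0x100: "proc_t"}                # typed roots, e.g. global symbols
work = deque(known)
while work:
    addr = work.popleft()
    for slot_type, target in zip(pointer_fields[known[addr]], dump[addr]):
        if target not in known:          # first identification wins in this sketch
            known[target] = slot_type
            work.append(target)

print(known)        # 0x400 stays unidentified: unreachable from typed roots

Real memory dumps are far messier, which is why the paper's contribution is largely the heuristics that keep this propagation accurate in the presence of C's weak typing.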
Postmortem Object Type Identification
4,942
Debugging is commonly understood as finding and fixing the cause of a problem. But what does ``cause'' mean? How can we find causes? How can we prove that a cause is a cause--or even ``the'' cause? This paper defines common terms in debugging, highlights the principal techniques, their capabilities and limitations.
Causes and Effects in Computer Programs
4,943
In this paper, we propose a mathematical framework for automated bug localization. This framework can be briefly summarized as follows. A program execution can be represented as a rooted acyclic directed graph. We define an execution snapshot by a cut-set on the graph. A program state can be regarded as a conjunction of labels on edges in a cut-set. Then we argue that a debugging task is a pruning process of the execution graph by using cut-sets. A pruning algorithm, i.e., a debugging task, is also presented.
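To make the cut-set idea concrete, a small Python sketch treats an execution as a rooted DAG of events and prunes everything before a cut, the part of the execution that a snapshot has already ruled out. The event names and the single-node cut are illustrative assumptions, not the paper's formalism.

edges = {                       # node -> successors (events that depend on it)
    "start": ["read", "init"],
    "read":  ["parse"],
    "init":  ["parse"],
    "parse": ["compute"],
    "compute": ["output"],
    "output": [],
}

def prune_before(cut):
    """Keep only the part of the execution graph at or after the cut-set of nodes."""
    kept, stack = set(), list(cut)
    while stack:
        node = stack.pop()
        if node not in kept:
            kept.add(node)
            stack.extend(edges[node])
    return {n: [s for s in succ if s in kept] for n, succ in edges.items() if n in kept}

# The snapshot at the cut {"parse"} says the bug manifests later: drop earlier events.
print(prune_before({"parse"}))

Repeating such pruning with successive cut-sets is, in the paper's terms, exactly what a debugging session does.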
A mathematical framework for automated bug localization
4,944
Due to the increased complexity of parallel and distributed programs, debugging of them is considered to be the most difficult and time consuming part of the software lifecycle. Tool support is hence a crucial necessity to hide complexity from the user. However, most existing tools seem inadequate as soon as the program under consideration exploits more than a few processors over a long execution time. This problem is addressed by the novel debugging tool DeWiz (Debugging Wizard), whose focus lies on scalability. DeWiz has a modular, scalable architecture, and uses the event graph model as a representation of the investigated program. DeWiz provides a set of modules, which can be combined to generate, analyze, and visualize event graph data. Within this processing pipeline the toolset tries to extract useful information, which is presented to the user at an arbitrary level of abstraction. Additionally, DeWiz is a framework, which can be used to easily implement arbitrary user-defined modules.
Event-based Program Analysis with DeWiz
4,945
In message passing programs, once a process terminates with an unexpected error, the terminated process can propagate the error to the rest of the processes through communication dependencies, resulting in a program failure. Therefore, to locate faults, developers must identify the group of processes involved in the original error and the faulty processes that activate faults. This paper presents a novel debugging tool, named MPI-PreDebugger (MPI-PD), for localizing faulty processes in message passing programs. MPI-PD automatically distinguishes the original and the propagated errors by checking communication errors during program execution. If MPI-PD observes any communication errors, it backtraces communication dependencies and points out potential faulty processes in a timeline view. We also introduce three case studies, in which MPI-PD has been shown to play the key role in their debugging. From these studies, we believe that MPI-PD helps developers to locate faults and allows them to concentrate on correcting their programs.
Debugging Tool for Localizing Faulty Processes in Message Passing Programs
4,946
By recording every state change in the run of a program, it is possible to present the programmer every bit of information that might be desired. Essentially, it becomes possible to debug the program by going ``backwards in time,'' vastly simplifying the process of debugging. An implementation of this idea, the ``Omniscient Debugger,'' is used to demonstrate its viability and has been used successfully on a number of large programs. Integration with an event analysis engine for searching and control is presented. Several small-scale user studies provide encouraging results. Finally performance issues and implementation are discussed along with possible optimizations. This paper makes three contributions of interest: the concept and technique of ``going backwards in time,'' the GUI which presents a global view of the program state and has a formal notion of ``navigation through time,'' and the integration with an event analyzer.
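The recording side of the idea can be sketched in a few lines of Python: log the local variables at every executed line of a function, then walk the log backwards. This only illustrates the principle under simplifying assumptions (one function, locals only); a real omniscient debugger records heap writes, calls and threads as well.

import sys

history = []     # (line number, snapshot of locals) in execution order

def recorder(frame, event, arg):
    if event == "line" and frame.f_code.co_name == "buggy":
        history.append((frame.f_lineno, dict(frame.f_locals)))
    return recorder

def buggy(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

sys.settrace(recorder)
buggy(4)
sys.settrace(None)

# "Go backwards in time": inspect the state just before the last few steps.
for lineno, snapshot in reversed(history[-3:]):
    print(f"line {lineno}: {snapshot}")

The engineering challenge the paper addresses is doing this at full scale, keeping the recording overhead and the size of the history manageable while still supporting navigation and search.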
Debugging Backwards in Time
4,947
Cyclic debugging requires repeatable executions. As non-deterministic or real-time systems typically do not have the potential to provide this, special methods are required. One such method is replay, a process that requires monitoring of a running system and logging of the data produced by that monitoring. We shall discuss the process of preparing the replay, a part of the process that has not been very well described before.
Availability Guarantee for Deterministic Replay Starting Points in Real-Time Systems
4,948
Attribute grammars (AGs) are known to be a useful formalism for semantic analysis and translation. However, debugging AGs is complex owing to inherent difficulties of AGs, such as recursive grammar structure and attribute dependency. In this paper, a new systematic method of debugging AGs is proposed. Our approach is, in principle, based on previously proposed algorithmic debugging of AGs, but is more general. This easily enables integration of various query-based systematic debugging methods, including the slice-based method. The proposed method has been implemented in Aki, a debugger for AG description. We evaluated our new approach experimentally using Aki, which demonstrates the usability of our debugging method.
Generalized Systematic Debugging for Attribute Grammars
4,949
We present a general method for fault localization based on abstracting over program traces, and a tool that implements the method using Ernst's notion of potential invariants. Our experiments so far have been unsatisfactory, suggesting that further research is needed before invariants can be used to locate faults.
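The general recipe, abstract over passing-run traces, then flag where the failing run leaves the learned envelope, can be shown with a toy sketch. The range-style invariants and trace data below are stand-ins chosen for brevity; they are not Ernst's invariant grammar or the paper's subject programs.

# Learn simple range invariants per variable from passing runs, then flag violations.
passing_traces = [
    {"index": 0, "length": 10, "sum": 45},
    {"index": 3, "length": 8,  "sum": 28},
    {"index": 7, "length": 12, "sum": 66},
]
failing_trace = {"index": 12, "length": 9, "sum": 36}

invariants = {}                      # variable -> observed (min, max) over passing runs
for trace in passing_traces:
    for var, val in trace.items():
        lo, hi = invariants.get(var, (val, val))
        invariants[var] = (min(lo, val), max(hi, val))

violations = [var for var, (lo, hi) in invariants.items()
              if not lo <= failing_trace[var] <= hi]
print("invariants:", invariants)
print("suspicious variables in failing run:", violations)   # -> ['index']

The paper's negative result is essentially that, on real programs, the variables flagged this way are too often unrelated to the actual fault.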
Automated Fault Localization Using Potential Invariants
4,950
In order to design and implement tracers, one must decide what exactly to trace and how to produce this trace. On the one hand, trace designs are too often guided by implementation concerns and are not as useful as they should be. On the other hand, an interesting trace which cannot be produced efficiently, is not very useful either. In this article we propose a methodology which helps to efficiently produce accurate traces. Firstly, design a formal specification of the trace model. Secondly, derive a prototype tracer from this specification. Thirdly, analyze the produced traces. Fourthly, implement an efficient tracer. Lastly, compare the traces of the two tracers. At each step, problems can be found. In that case one has to iterate the process. We have successfully applied the proposed methodology to the design and implementation of a real tracer for constraint logic programming which is able to efficiently generate information required to build interesting graphical views of executions.
Rigorous design of tracers: an experiment for constraint logic programming
4,951
This paper describes an interactive tool that facilitates following define-use chains in large codes. The motivation for the work is to support relative debugging, where it is necessary to iteratively refine a set of assertions between different versions of a program. DUCT is novel because it exploits the Microsoft Intermediate Language (MSIL) that underpins the .NET Framework. Accordingly, it works on a wide range of programming languages without any modification. The paper describes the design and implementation of DUCT, and then illustrates its use with a small case study.
DUCT: An Interactive Define-Use Chain Navigation Tool for Relative Debugging
4,952
Breast cancer as a medical condition and mammograms as images exhibit many dimensions of variability across the population. Similarly, the way diagnostic systems are used and maintained by clinicians varies between imaging centres and breast screening programmes, and so does the appearance of the mammograms generated. A distributed database that reflects the spread of pathologies across the population is an invaluable tool for the epidemiologist and the understanding of the variation in image acquisition protocols is essential to a radiologist in a screening programme. Exploiting emerging grid technology, the aim of the MammoGrid [1] project is to develop a Europe-wide database of mammograms that will be used to investigate a set of important healthcare applications and to explore the potential of the grid to support effective co-working between healthcare professionals.
MammoGrid: Large-Scale Distributed Mammogram Analysis
4,953
The Function Point Analysis (FPA) of A.J. Albrecht is a method to determine the functional size of software products. The International Function Point Users Group (IFPUG) establishes FPA as a standard in software functional size measurement. The IFPUG [3] [4] method follows Albrecht's method and incorporates, in its successive versions, modifications to the rules and hints with the intention of improving it [7]. The documentation level required to apply the method is the functional specification, which corresponds to level I in Rudolph's classification [8]. This documentation is available only with difficulty for companies that develop software for third parties when they have to prepare the corresponding budget for a development. We therefore face the need for an early method [6] [9] for measuring the functional size of a software product, which we will abbreviate as the Early Method or EFPM (Early Function Point Method). The documentation required to apply EFPM would be the user requirements or analogous documentation. This is part of a research project in progress at Oviedo University. In this article we show only the following results: from the measurements of a set of projects using the IFPUG method v. 4.1 we obtain the linear correlation coefficients between the total number of function points for each project and the counts of the number of ILFs, of ILFs+EIFs, and of EIs+EOs+EQs. Using these preliminary results we compute the regression functions. These results will allow us to determine the factors to be considered in the development of EFPM and to estimate function points.
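The regression step described here can be sketched directly: fit total function points against an early counter (below, the number of ILFs) by ordinary least squares and report the correlation. The project data are hypothetical, invented only to show the calculation.

from statistics import mean

projects = [      # (number of ILFs, total IFPUG 4.1 function points) - made-up data
    (4, 112), (7, 180), (10, 260), (12, 318), (15, 395), (20, 515),
]

xs, ys = zip(*projects)
mx, my = mean(xs), mean(ys)
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = my - b * mx

# Pearson correlation between the early counter and the final size
r = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
     / (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5)

print(f"FP = {a:.1f} + {b:.1f} * ILFs   (r = {r:.3f})")
print("early estimate for 9 ILFs:", round(a + b * 9))

A high correlation for such a simple counter is what would justify using it as the basis of an early estimation method like EFPM.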
A Preliminary Study for the development of an Early Method for the Measurement in Function Points of a Software Product
4,954
Design methods in information systems frequently create software descriptions using formal languages. Nonetheless, most software designers prefer to describe software using natural languages. This distinction is not simply a matter of convenience. Natural languages are not the same as formal languages; in particular, natural languages do not follow the notions of equivalence used by formal languages. In this paper, we show both the existence and coexistence of different notions of equivalence by extending the notion of oracles used in formal languages. This allows distinctions to be made between the trustworthy oracles assumed by formal languages and the untrustworthy oracles used by natural languages. By examining the notion of equivalence, we hope to encourage designers of software to rethink the place of ambiguity in software design.
Notions of Equivalence in Software Design
4,955
We survey existing approaches to the formal verification of statecharts using model checking. Although the semantics and subset of statecharts used in each approach varies considerably, along with the model checkers and their specification languages, most approaches rely on translating the hierarchical structure into the flat representation of the input language of the model checker. This makes model checking difficult to scale to industrial models, as the state space grows exponentially with flattening. We look at current approaches to model checking hierarchical structures and find that their semantics is significantly different from statecharts. We propose to address the problem of state space explosion using a combination of techniques, which are proposed as directions for further research.
Model Checking of Statechart Models: Survey and Research Directions
4,956
Hybrid systems are characterized by the hybrid evolution of their state: A part of the state changes discretely, the other part changes continuously over time. Typically, modern control applications belong to this class of systems, where a digital controller interacts with a physical environment. In this article we illustrate how a combination of the formal method VDM and the computer algebra system Mathematica can be used to model and simulate both aspects: the control logic and the physics involved. A new Mathematica package emulating VDM-SL has been developed that allows the integration of differential equation systems into formal specifications. The SAFER example from Kelly (1997) serves to demonstrate the new simulation capabilities Mathematica adds: After the thruster selection process, the astronaut's actual position and velocity is calculated by numerically solving Euler's and Newton's equations for rotation and translation. Furthermore, interactive validation is supported by a graphical user interface and data animation.
Modeling and Validating Hybrid Systems Using VDM and Mathematica
4,957
Central to the power of open-source software is bug shallowness, the relative ease of finding and fixing bugs. The open-source movement began with Unix software, so many users were also programmers capable of finding and fixing bugs given the source code. But as the open-source movement reaches the Macintosh platform, bugs may not be shallow because few Macintosh users are programmers. Based on reports from open-source developers, I conclude, however, that bugs are as shallow in open-source Macintosh software as in any other open-source software.
Bug shallowness in open-source, Macintosh software
4,958
This research analyzes complex networks in open-source software at the inter-package level, where package dependencies often span across projects and between development groups. We review complex networks identified at ``lower'' levels of abstraction, and then formulate a description of interacting software components at the package level, a relatively ``high'' level of abstraction. By mining open-source software repositories from two sources, we empirically show that the coupling of modules at this granularity creates a small-world and scale-free network in both instances.
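Two of the quantities usually behind a small-world claim, the clustering coefficient and the average shortest path length, can be computed on a toy package graph as below. The package names and edges are invented; a real study would also compare against random graphs of the same size and degree sequence.

from collections import deque
from itertools import combinations

adj = {   # undirected package-dependency graph (toy data)
    "core": {"util", "net", "gui", "db"},
    "util": {"core", "net", "db"},
    "net":  {"core", "util"},
    "gui":  {"core", "db"},
    "db":   {"core", "util", "gui"},
}

def clustering(node):
    nbrs = adj[node]
    if len(nbrs) < 2:
        return 0.0
    links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
    return 2 * links / (len(nbrs) * (len(nbrs) - 1))

def shortest_path(src, dst):
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nxt in adj[node] - seen:
            seen.add(nxt)
            queue.append((nxt, dist + 1))

pairs = list(combinations(adj, 2))
avg_c = sum(clustering(n) for n in adj) / len(adj)
avg_l = sum(shortest_path(a, b) for a, b in pairs) / len(pairs)
print(f"average clustering {avg_c:.2f}, average path length {avg_l:.2f}")

High clustering together with short paths, relative to a comparable random graph, is the small-world signature; a heavy-tailed degree distribution would additionally indicate a scale-free structure.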
Inter-Package Dependency Networks in Open-Source Software
4,959
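A rough sketch of the kind of package-level analysis the preceding abstract describes: build a dependency graph and inspect its in-degree distribution, whose heavy tail would hint at a scale-free structure. The package names and dependency data below are invented for illustration, and this is not the authors' mining pipeline.

```python
from collections import Counter

# Hypothetical package-level dependency data: package -> packages it depends on.
deps = {
    "app-frontend": ["libgui", "libnet"],
    "app-backend": ["libnet", "libdb"],
    "libgui": ["libcore"],
    "libnet": ["libcore"],
    "libdb": ["libcore"],
    "libcore": [],
}

# In-degree (how many packages depend on each package) is the quantity whose
# heavy-tailed distribution would suggest a scale-free network.
in_degree = Counter()
for pkg, requires in deps.items():
    for dep in requires:
        in_degree[dep] += 1

# Degree distribution: number of packages having each in-degree value.
distribution = Counter(in_degree.values())
for degree, count in sorted(distribution.items()):
    print(f"in-degree {degree}: {count} package(s)")
```

On real repository metadata (for example, distribution package control files), the same loop would simply be fed the actual dependency lists.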
The global testing problem studied in this paper is to seek a definite answer to whether a system of concurrent black-boxes has an observable behavior in a given finite (but possibly huge) set "Bad". We introduce a novel approach to solve the problem that does not require integration testing. Instead, in our approach, the global testing problem is reduced to testing individual black-boxes in the system one by one in some given order. Using an automata-theoretic approach, test sequences for each individual black-box are generated from the system's description as well as the test results of black-boxes prior to this black-box in the given order. In contrast to the conventional compositional/modular verification/testing approaches, our approach is essentially decompositional. Also, our technique is complete, sound, and can be carried out automatically. Our experimental results show that the total number of tests needed to solve the global testing problem remains small even for an extremely large "Bad".
Testing Systems of Concurrent Black-boxes--an Automata-Theoretic and Decompositional Approach
4,960
This paper presents a unique approach to connecting requirements engineering (RE) activities into a process framework that can be employed to obtain quality requirements with reduced expenditures of effort and cost. We propose a two-phase model that is novel in that it introduces the concept of verification and validation (V&V) early in the requirements life cycle. In the first phase, we perform V&V immediately following the elicitation of requirements for each individually distinct system function. Because the first phase focuses on capturing smaller sets of related requirements iteratively, each corresponding V&V activity is better focused for detecting and correcting errors in each requirement set. In the second phase, a complementary verification activity is initiated; the corresponding focus is on the quality of linkages between requirements sets rather than on the requirements within the sets. Consequently, this approach reduces the effort in verification and enhances the focus on the verification task. Our approach, unlike other models, has a minimal time delay between the elicitation of requirements and the execution of the V&V activities. Because of this short time gap, the stakeholders have a clearer recollection of the requirements, their context and rationale; this enhances the stakeholder feedback. Furthermore, our model includes activities that closely align with the effective RE processes employed in the software industry. Thus, our approach facilitates a better understanding of the flow of requirements, and provides guidance for the implementation of the RE process.
Local and Global Analysis: Complementary Activities for Increasing the Effectiveness of Requirements Verification and Validation
4,961
This paper presents a framework that guides the requirements engineer in the implementation and execution of an effective requirements generation process. We achieve this goal by providing a well-defined requirements engineering model and a criteria based process for optimizing method selection for attendant activities. Our model, unlike other models, addresses the complete requirements generation process and consists of activities defined at more adequate levels of abstraction. Additionally, activity objectives are identified and explicitly stated - not implied as in the current models. Activity objectives are crucial as they drive the selection of methods for each activity. Our model also incorporates a unique approach to verification and validation that enhances quality and reduces the cost of generating requirements. To assist in the selection of methods, we have mapped commonly used methods to activities based on their objectives. In addition, we have identified method selection criteria and prescribed a reduced set of methods that optimize these criteria for each activity defined by our requirements generation process. Thus, the defined approach assists in the task of selecting methods by using selection criteria to reduce a large collection of potential methods to a smaller, manageable set. The model and the set of methods, taken together, provide the much needed guidance for the effective implementation and execution of the requirements generation process.
An Objectives-Driven Process for Selecting Methods to Support Requirements Engineering Activities
4,962
This paper presents an approach for the implementation and execution of an effective requirements generation process. We achieve this goal by providing a well-defined requirements engineering model that includes verification and validation (V&V), and analysis. In addition, we identify focused activity objectives and map popular methods to lower-level activities, and define a criterion based process for optimizing method selection for attendant activities. Our model, unlike other models, addresses the complete requirements generation process and consists of activities defined at more adequate levels of abstraction. Furthermore, our model also incorporates a unique approach to V&V that enhances quality and reduces the cost of generating requirements. Additionally, activity objectives are identified and explicitly stated - not implied as in the current models. To assist in the selection of an appropriate set of methods, we have mapped commonly used methods to activities based on their objectives. Finally, we have identified method selection criteria and prescribed a reduced set of methods that optimize these criteria for each activity in our model. Thus, our approach assists in the task of selecting methods by using selection criteria to reduce a large collection of potential methods to a smaller, manageable set. The model, clear mapping of methods to activity objectives, and the criteria based process, taken together, provide the much needed guidance for the effective implementation and execution of the requirements generation process.
Effective Requirements Generation: Synchronizing Early Verification & Validation, Methods and Method Selection Criteria
4,963
A systematic, language-independent method of finding a minimal set of paths covering the code of a sequential program is proposed for application in white-box testing. Execution of all paths in the set also ensures statement coverage. An execution fault marks a problematic area of the code. The method starts from a UML activity diagram of a program. The diagram is transformed into a directed graph: the graph's nodes replace decision and action points, and its directed edges replace action arrows. The number of independent paths equals the easy-to-compute cyclomatic complexity of the graph. Associating a vector with each path creates a path vector space. Independence of the paths is equivalent to linear independence of the vectors. It is sufficient to test any basis of the path space to complete the procedure. An effective algorithm for choosing the base paths is presented.
Systematic Method for Path-Complete White Box Testing
4,964
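To make the graph-theoretic step in the preceding abstract concrete, here is a minimal sketch (not the paper's algorithm) that computes the cyclomatic complexity V(G) = E - N + 2P of a small control-flow graph given as an adjacency list, and enumerates the simple entry-to-exit paths from which a basis of V(G) independent paths could be chosen. The example graph and helper names are hypothetical.

```python
# Minimal sketch: cyclomatic complexity of a directed control-flow graph
# given as an adjacency list, plus enumeration of entry-to-exit paths.
graph = {
    "start": ["decision"],
    "decision": ["then", "else"],
    "then": ["end"],
    "else": ["end"],
    "end": [],
}

nodes = len(graph)
edges = sum(len(succ) for succ in graph.values())
components = 1  # one connected program unit
complexity = edges - nodes + 2 * components
print("cyclomatic complexity:", complexity)  # number of independent paths

def simple_paths(node, target, path=()):
    """Yield all simple entry-to-exit paths; a basis of `complexity` many
    linearly independent paths can be chosen from these for testing."""
    path = path + (node,)
    if node == target:
        yield path
        return
    for succ in graph[node]:
        if succ not in path:  # avoid revisiting nodes (cycles)
            yield from simple_paths(succ, target, path)

for p in simple_paths("start", "end"):
    print(" -> ".join(p))
```

For this graph V(G) = 5 - 5 + 2 = 2, matching the two if/else paths printed.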
Reverse engineering has been a standard practice in the hardware community for some time. It has only been within the last ten years that reverse engineering, or "program comprehension", has grown into the current sub-discipline of software engineering. Traditional software engineering is primarily focused on the development and design of new software. However, most programmers work on software that other people have designed and developed. Up to 50% of a software maintainer's time can be spent determining the intent of source code. The growing demand to reevaluate and reimplement legacy software systems, brought on by the proliferation of client-server and World Wide Web technologies, has underscored the need for reverse engineering tools and techniques. This paper introduces the terminology of reverse engineering and describes some of the obstacles that make reverse engineering difficult. Although reverse engineering remains heavily dependent on the human component, a number of automated tools are presented that aid the reverse engineer.
A Survey of Reverse Engineering and Program Comprehension
4,965
This paper describes a comprehensive prototype of large-scale fault adaptive embedded software developed for the proposed Fermilab BTeV high energy physics experiment. Lightweight self-optimizing agents embedded within Level 1 of the prototype are responsible for proactive and reactive monitoring and mitigation based on specified layers of competence. The agents are self-protecting, detecting cascading failures using a distributed approach. Adaptive, reconfigurable, and mobile objects for reliability are designed to be self-configuring to adapt automatically to dynamically changing environments. These objects provide a self-healing layer with the ability to discover, diagnose, and react to discontinuities in real-time processing. A generic modeling environment was developed to facilitate design and implementation of hardware resource specifications, application data flow, and failure mitigation strategies. Level 1 of the planned BTeV trigger system alone will consist of 2500 DSPs, so the number of components and intractable fault scenarios involved make it impossible to design an "expert system" that applies traditional centralized mitigative strategies based on rules capturing every possible system state. Instead, a distributed reactive approach is implemented using the tools and methodologies developed by the Real-Time Embedded Systems group.
Prototype of Fault Adaptive Embedded Software for Large-Scale Real-Time Systems
4,966
Software component reuse is the key to significant gains in productivity. However, a major problem is the difficulty of identifying and developing potentially reusable components. This paper concentrates on our approach to the development of reusable software components. A prototype tool has been developed, known as the Reuse Assessor and Improver System (RAIS), which can interactively identify, analyse, assess, and modify abstractions, attributes and architectures that support reuse. Practical and objective reuse guidelines are used to represent reuse knowledge and to perform domain analysis. The tool takes existing components, provides systematic reuse assessment based on reuse advice and analysis, and produces components that are improved for reuse. Our work on guidelines has been extended to a large-scale industrial application.
Automated Improvement for Component Reuse
4,967
We specify an abstraction layer to be used between an enterprise application and the utilized enterprise framework (like J2EE or .NET). This specification is called the Workshop. It provides an intuitive metaphor that supports the programmer in designing easily understandable code. We present an implementation of this specification. It is based upon the J2EE framework and is called the JWorkshop. As a proof of concept we implement a special certification authority, called the Key Authority, based upon the JWorkshop. This certification authority runs successfully in a variety of real-world projects.
The Workshop - Implementing Well Structured Enterprise Applications
4,968
Debian is possibly the largest free software distribution, with well over 4,500 source packages in the latest stable release (Debian 3.0) and more than 8,000 source packages in the release currently in preparation. However, we wish to know what these numbers mean. In this paper, we use David A. Wheeler's SLOCCount system to determine the number of physical source lines of code (SLOC) of Debian 3.0 (aka woody). We show that Debian 3.0 includes more than 105,000,000 physical SLOC (almost twice as many as Red Hat 9, released about 8 months later), showing that the Debian development model (based on the work of a large group of voluntary developers spread around the world) is at least as capable as other development methods (like the more centralized one, based on the work of employees, used by Red Hat or Microsoft) of managing distributions of this size. We also show that, had Debian 3.0 been developed using traditional proprietary methods, the COCOMO model estimates its development cost at close to $6.1 billion USD. In addition, we offer both an analysis of the programming languages used in the distribution (C accounts for about 65%, C++ for about 12%, Shell for about 8% and LISP for around 4%, with many others to follow) and of the largest packages (the Linux kernel, Mozilla, XFree86, PM3, etc.).
Measuring Woody: The Size of Debian 3.0
4,969
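The measurement above relies on David A. Wheeler's SLOCCount; the snippet below is only a crude, hand-rolled approximation of what counting physical SLOC means (non-blank, non-comment lines), shown for C-like sources, and is not the actual tool.

```python
import os

def physical_sloc(path):
    """Count non-blank, non-comment lines in a C-like source file.
    A crude approximation of a physical-SLOC count; SLOCCount itself
    handles many languages and comment styles far more carefully."""
    count = 0
    in_block_comment = False
    with open(path, errors="replace") as f:
        for line in f:
            stripped = line.strip()
            if in_block_comment:
                if "*/" in stripped:
                    in_block_comment = False
                continue
            if not stripped or stripped.startswith("//"):
                continue
            if stripped.startswith("/*"):
                if "*/" not in stripped:
                    in_block_comment = True
                continue
            count += 1
    return count

def count_tree(root, exts=(".c", ".h")):
    """Sum the approximate physical SLOC of all matching files under root."""
    total = 0
    for dirpath, _, files in os.walk(root):
        for name in files:
            if name.endswith(exts):
                total += physical_sloc(os.path.join(dirpath, name))
    return total

# Example: count_tree("/path/to/unpacked/sources") approximates the C SLOC of a tree.
```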
What is software architecture? The rules, paradigms, and patterns that help to construct, build and test a serious piece of software. It is practical experience boiled down to an abstract level. Software architecture builds on systems engineering and the scientific method as established by Galileo Galilei: measure what you can, and make measurable what you cannot. The experiment (test) is more important than the deduction. Pieces of information about software architecture are scattered all over the internet. This paper uses citation as much as possible; the aim is to bring together an overview, not to rephrase the wording.
Software Architecture Overview
4,970
Programs with constraints are hard to debug. In this paper, we describe a general architecture to help develop new debugging tools for constraint programming. The possible tools are fed by a single general-purpose tracer. A tracer-driver is used to adapt the actual content of the trace, according to the needs of the tool. This enables the tools and the tracer to communicate in a client-server scheme. Each tool describes its needs of execution data thanks to event patterns. The tracer driver scrutinizes the execution according to these event patterns and sends only the data that are relevant to the connected tools. Experimental measures show that this approach leads to good performance in the context of constraint logic programming, where a large variety of tools exists and the trace is potentially huge.
A Tracer Driver for Versatile Dynamic Analyses of Constraint Logic Programs
4,971
We propose some slight additions to O-O languages to implement the features necessary for Deductive Object Programming (DOP). This way of programming, based upon manipulating the Production Tree of the Objects of Interest, makes these Objects persistent and appreciably lowers code complexity.
Deductive Object Programming
4,972
This paper discusses the concept of model-driven software engineering applied to the Grid application domain. As an extension to this concept, the approach described here attempts to combine both formal architecture-centric and model-driven paradigms. It is commonly recognized that Grid systems have seldom been designed using formal techniques, although past experience has shown the advantages of such techniques. This paper advocates a formal engineering approach to Grid system development in an effort to contribute to the rigorous development of Grid software architectures. This approach addresses quality of service and cross-platform development by applying the model-driven paradigm to a formal architecture-centric engineering method. This combination benefits from formal semantic descriptive power in addition to model-based transformations. The result of such a novel combined concept promotes the re-use of design models and facilitates development in Grid computing.
A Formal Architecture-Centric Model-Driven Approach for the Automatic Generation of Grid Applications
4,973
This paper studies the differences and similarities between domain ontologies and conceptual data models and the role that ontologies can play in establishing conceptual data models during the process of information systems development. A mapping algorithm has been proposed and embedded in a special purpose Transformation Engine to generate a conceptual data model from a given domain ontology. Both quantitative and qualitative methods have been adopted to critically evaluate this new approach. In addition, this paper focuses on evaluating the quality of the generated conceptual data model elements using Bunge-Wand-Weber and OntoClean ontologies. The results of this evaluation indicate that the generated conceptual data model provides a high degree of accuracy in identifying the substantial domain entities along with their attributes and relationships being derived from the consensual semantics of domain knowledge. The results are encouraging and support the potential role that this approach can play in the process of information systems development.
Engineering Conceptual Data Models from Domain Ontologies: A Critical Evaluation
4,974
Modern frameworks for developing graphical interfaces use the native controls of the operating system. Consequently, they use the operating system's event model for inter-component communication. We consider a method to increase inter-component communication speed by sending messages from one component to another directly, bypassing the operating system. In addition, message subscription helps components avoid receiving unnecessary messages.
Inter-component communication methods in object-oriented frameworks
4,975
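A minimal sketch of the kind of direct, in-process message bus the preceding abstract hints at: components subscribe to the message types they care about, and publishers deliver straight to the subscribers without routing through the operating system's event queue. All class and method names here are hypothetical, not the framework described in the paper.

```python
from collections import defaultdict

class MessageBus:
    """In-process bus: subscribers register per message type, so components
    receive only the messages they asked for, and delivery never touches the
    operating system's event queue."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, message_type, handler):
        self._subscribers[message_type].append(handler)

    def publish(self, message_type, payload):
        for handler in self._subscribers[message_type]:
            handler(payload)  # direct call, no OS round-trip

bus = MessageBus()
bus.subscribe("selection-changed", lambda data: print("list box got:", data))
bus.publish("selection-changed", {"row": 3})
```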
This paper studies the role that ontologies can play in establishing conceptual data models during the process of information systems development. A mapping algorithm has been proposed and embedded in a special purpose Transformation-Engine to generate a conceptual data model from a given domain ontology. In addition, this paper focuses on applying the proposed approach to a bioinformatics context as the nature of biological data is considered a barrier in representing biological conceptual data models. Both quantitative and qualitative methods have been adopted to critically evaluate this new approach. The results of this evaluation indicate that the quality of the generated conceptual data models can reflect the problem domain entities and the associations between them. The results are encouraging and support the potential role that this approach can play in providing a suitable starting point for conceptual data model development.
Deriving Conceptual Data Models from Domain Ontologies for Bioinformatics
4,976
This paper discusses the problems faced with interoperability between two programming languages, with respect to GNU Octave and the GTK API written in C, in order to provide the GTK API in Octave. Octave-GTK is the fusion of two different APIs: one exported by GNU Octave [a scientific computing tool] and the other by GTK [a GUI toolkit]; this enables one to use GTK primitives within GNU Octave to build graphical front ends, while at the same time using the Octave engine for number-crunching power. This paper illustrates our implementation of the binding logic, and shows results extended to various other libraries using the same base code generator. Also shown are methods of code generation, binding automation, and the niche we plan to fill in the absence of a GUI in Octave. The advantages, feasibility, and problems faced in the process are also discussed.
Octave-GTK: A GTK binding for GNU Octave
4,977
How can one get the advantages of the MVC model without making applications unnecessarily complex? A full-featured MVC implementation sits at the top end of the ladder of complexity. The other end is meant for simple cases that do not call for such complex designs but still need the advantages of MVC patterns, such as the ability to change the look-and-feel. This paper presents patterns of MVC implementation that help to benefit from the paradigm while keeping the right balance between flexibility and implementation complexity.
Applied MVC Patterns. A pattern language
4,978
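To make the trade-off concrete, here is a deliberately small MVC arrangement, roughly the simple end of that ladder: the model knows nothing about views, the view only renders, and the controller wires them together so the look-and-feel can be swapped by replacing the view alone. This is an illustrative sketch, not one of the paper's pattern variants.

```python
class CounterModel:
    """Model: holds state and notifies observers; knows nothing about views."""
    def __init__(self):
        self.value = 0
        self._observers = []

    def add_observer(self, callback):
        self._observers.append(callback)

    def increment(self):
        self.value += 1
        for notify in self._observers:
            notify(self.value)

class TextView:
    """View: pure presentation; swapping this class changes the look-and-feel."""
    def render(self, value):
        print(f"count = {value}")

class CounterController:
    """Controller: translates user actions into model updates."""
    def __init__(self, model, view):
        self.model = model
        model.add_observer(view.render)

    def on_click(self):
        self.model.increment()

controller = CounterController(CounterModel(), TextView())
controller.on_click()  # prints "count = 1"
```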
The well-known problem of state space explosion in model checking is even more critical when applying this technique to programming languages, mainly due to the presence of complex data structures. One recent and promising approach to deal with this problem is the construction of an abstract and correct representation of the global program state allowing to match visited states during program model exploration. In particular, one powerful method to implement abstract matching is to fill the state vector with a minimal amount of relevant variables for each program point. In this paper, we combine the on-the-fly model-checking approach (incremental construction of the program state space) and the static analysis method called influence analysis (extraction of significant variables for each program point) in order to automatically construct an abstract matching function. Firstly, we describe the problem as an alternation-free value-based mu-calculus formula, whose validity can be checked on the program model expressed as a labeled transition system (LTS). Secondly, we translate the analysis into the local resolution of a parameterised boolean equation system (PBES), whose representation enables a more efficient construction of the resulting abstract matching function. Finally, we show how our proposal may be elegantly integrated into CADP, a generic framework for both the design and analysis of distributed systems and the development of verification tools.
Static Analysis using Parameterised Boolean Equation Systems
4,979
Nowadays, enterprises are confronted with growing needs for traceability, product genealogy and product life-cycle management. To meet those needs, the enterprise and the applications in its environment have to manage flows of information that relate to flows of material and that are managed at the shop-floor level. Nevertheless, throughout the product lifecycle, coordination needs to be established between reality in the physical world (the physical view) and the virtual world handled by manufacturing information systems (the informational view). This paper presents the "Holon" modelling concept as a means for synchronising the physical and informational views. Afterwards, we show how the concept of holon can play a major role in ensuring interoperability in the enterprise context.
A Product Oriented Modelling Concept: Holons for systems synchronisation and interoperability
4,980
In the last few years, a lot of work has been done to ensure enterprise application interoperability; however, proposed solutions focus mainly on enterprise processes. Indeed, throughout the product lifecycle, coordination needs to be established between reality in the physical world (the physical view) and the virtual world handled by manufacturing information systems (the informational view). This paper presents a holonic approach that enables synchronisation of both the physical and informational views. A model-driven approach for interoperability is proposed to ensure interoperability of holon-based models with other applications in the enterprise.
Product Centric Holons for Synchronisation and Interoperability in Manufacturing Environments
4,981
Currently, many different modeling languages are used for workflow definitions in BPM systems. The authors of this paper analyze the two most popular graphical languages with the highest potential for wide practical usage - UML Activity Diagrams (AD) and the Business Process Modeling Notation (BPMN). The workflow aspects necessary in practice are briefly discussed, and on this basis a natural AD profile is proposed which covers all of them. A functionally equivalent BPMN subset is also selected. The semantics of both languages in the context of process execution (namely, mapping to BPEL) is also analyzed in the paper. By analyzing the AD and BPMN metamodels, the authors conclude that an exact transformation from AD to BPMN is not trivial even for the selected subset, though these languages are considered to be similar. The authors show how this transformation could be defined in the MOLA transformation language.
Use of UML and Model Transformations for Workflow Process Definitions
4,982
Static software checking tools are useful as an additional automated software inspection step that can easily be integrated in the development cycle and assist in creating secure, reliable and high quality code. However, an often quoted disadvantage of these tools is that they generate an overly large number of warnings, including many false positives due to the approximate analysis techniques. This information overload effectively limits their usefulness. In this paper we present ELAN, a technique that helps the user prioritize the information generated by a software inspection tool, based on a demand-driven computation of the likelihood that execution reaches the locations for which warnings are reported. This analysis is orthogonal to other prioritization techniques known from literature, such as severity levels and statistical analysis to reduce false positives. We evaluate feasibility of our technique using a number of case studies and assess the quality of our predictions by comparing them to actual values obtained by dynamic profiling.
Prioritizing Software Inspection Results using Static Profiling
4,983
Because of constraints imposed by the market, embedded software in consumer electronics is almost inevitably shipped with faults and the goal is just to reduce the inherent unreliability to an acceptable level before a product has to be released. Automatic fault diagnosis is a valuable tool to capture software faults without extra effort spent on testing. Apart from a debugging aid at design and integration time, fault diagnosis can help analyzing problems during operation, which allows for more accurate system recovery. In this paper we discuss perspectives and limitations for applying a particular fault diagnosis technique, namely the analysis of program spectra, in the area of embedded software in consumer electronics devices. We illustrate these by our first experience with a test case from industry.
Program Spectra Analysis in Embedded Software: A Case Study
4,984
Document management software systems have a wide audience at present. However, groupware as a term has a wide variety of possible definitions. An attempt at groupware classification is made in this paper. Possible approaches to groupware are considered, including document management, document control and mailing systems. Lattice theory and concept modelling are presented as a theoretical background for the systems in question. Current technologies in state-of-the-art document management software are discussed. Design and implementation aspects for user-friendly integrated enterprise systems are described. Results for a real system to be implemented are given. Perspectives of the field in question are discussed.
Object-Based Groupware: Theory, Design and Implementation Issues
4,985
We introduce the concept of a morphism between coloured nets. Our definition generalizes Petri's definition for ordinary nets. A morphism of coloured nets maps the topological space of the underlying undirected net as well as the kernel and cokernel of the incidence map. The kernel are flows along the transition-bordered fibres of the morphism, the cokernel are classes of markings of the place-bordered fibres. The attachment of bindings, colours, flows and marking classes to a subnet is formalized by using concepts from sheaf theory. A coloured net is a sheaf-cosheaf pair over a Petri space and a morphism between coloured nets is a morphism between such pairs. Coloured nets and their morphisms form a category. We prove the existence of a product in the subcategory of sort-respecting morphisms. After introducing markings our concepts generalize to coloured Petri nets.
Morphisms of Coloured Petri Nets
4,986
A new breed of web application, dubbed AJAX, is emerging in response to a limited degree of interactivity in large-grain stateless Web interactions. At the heart of this new approach lies a single-page interaction model that facilitates rich interactivity. We have studied and experimented with several AJAX frameworks in an attempt to understand their architectural properties. In this paper, we summarize three of these frameworks, examine their properties, and introduce the SPIAR architectural style. We describe the guiding software engineering principles and the constraints chosen to induce the desired properties. The style emphasizes user interface component development, and intermediary delta-communication between client/server components, to improve user interactivity and ease of development. In addition, we use the concepts and principles to discuss various open issues in AJAX frameworks and application development.
An Architectural Style for Ajax
4,987
Aspect mining is a reverse engineering process that aims at finding crosscutting concerns in existing systems. This paper proposes an aspect mining approach based on determining methods that are called from many different places, and hence have a high fan-in, which can be seen as a symptom of crosscutting functionality. The approach is semi-automatic, and consists of three steps: metric calculation, method filtering, and call site analysis. Carrying out these steps is an interactive process supported by an Eclipse plug-in called FINT. Fan-in analysis has been applied to three open source Java systems, totaling around 200,000 lines of code. The most interesting concerns identified are discussed in detail, which includes several concerns not previously discussed in the aspect-oriented literature. The results show that a significant number of crosscutting concerns can be recognized using fan-in analysis, and each of the three steps can be supported by tools.
Identifying Crosscutting Concerns Using Fan-in Analysis
4,988
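A toy rendering of the metric-calculation and filtering steps described in the preceding abstract: given a call graph, count the distinct callers of each method and keep those above a threshold as aspect candidates. The call graph and threshold here are invented for illustration; FINT works on real Java sources, and the third step (call-site analysis) remains a manual activity.

```python
# Hypothetical call graph: caller method -> methods it calls.
calls = {
    "Order.save": ["Logger.log", "Db.write"],
    "Order.delete": ["Logger.log", "Db.write"],
    "User.login": ["Logger.log", "Auth.check"],
    "Report.render": ["Logger.log"],
}

# Step 1: metric calculation -- fan-in = number of distinct calling methods.
fan_in = {}
for caller, callees in calls.items():
    for callee in set(callees):
        fan_in.setdefault(callee, set()).add(caller)

# Step 2: filtering -- methods whose fan-in meets a threshold are candidate
# crosscutting concerns for the analyst to inspect at their call sites.
THRESHOLD = 3
candidates = {m: len(callers) for m, callers in fan_in.items()
              if len(callers) >= THRESHOLD}
print(candidates)  # {'Logger.log': 4}
```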
Recently, a new web development technique for creating interactive web applications, dubbed AJAX, has emerged. In this new model, the single-page web interface is composed of individual components which can be updated/replaced independently. With the rise of AJAX web applications, classical multi-page web applications are becoming legacy systems. Whereas until a year ago the concern revolved around migrating legacy systems to web-based settings, today we face the new challenge of migrating web applications to single-page AJAX applications. Gaining an understanding of the navigational model and user interface structure of the source application is the first step in the migration process. In this paper, we explore how reverse engineering techniques can help analyze classic web applications for this purpose. Our approach, using a schema-based clustering technique, extracts a navigational model of web applications, and identifies candidate user interface components to be migrated to a single-page AJAX interface. Additionally, results of a case study, conducted to evaluate our tool, are presented.
Migrating Multi-page Web Applications to Single-page AJAX Interfaces
4,989
Program comprehension is the most tedious and time consuming task of software maintenance, an important phase of the software life cycle. This is particularly true while maintaining scientific application programs that have been written in Fortran for decades and that are still vital in various domains even though more modern languages are used to implement their user interfaces. Very often, programs have evolved as their application domains increase continually and have become very complex due to extensive modifications. This generality in programs is implemented by input variables whose value does not vary in the context of a given application. Thus, it is very interesting for the maintainer to propagate such information, that is to obtain a simplified program, which behaves like the initial one when used according to the restriction. We have adapted partial evaluation for program comprehension. Our partial evaluator performs mainly two tasks: constant propagation and statements simplification. It includes an interprocedural alias analysis. As our aim is program comprehension rather than optimization, there are two main differences with classical partial evaluation. We do not change the original
Partial Evaluation for Program Comprehension
4,990
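To give a flavour of the two tasks named above, the sketch below propagates known input values and prunes branches that become unreachable, on a toy statement list rather than real Fortran; the representation and function names are hypothetical and this is not the paper's partial evaluator.

```python
def specialize(statements, known):
    """Toy partial evaluator: `statements` is a list of
    ('assign', var, expr) and ('if', cond_var, then_stmts, else_stmts) tuples.
    Variables whose value is fixed by the application context are propagated,
    and statically decided branches are simplified away, mirroring constant
    propagation and statement simplification."""
    env = dict(known)
    out = []
    for stmt in statements:
        if stmt[0] == "assign":
            _, var, expr = stmt
            if isinstance(expr, str) and expr in env:
                env[var] = env[expr]                 # propagate the constant
                out.append(("assign", var, env[expr]))
            else:
                out.append(stmt)
        elif stmt[0] == "if":
            _, cond, then_stmts, else_stmts = stmt
            if cond in env:                          # condition decided statically
                out.extend(specialize(then_stmts if env[cond] else else_stmts, env))
            else:
                out.append(stmt)
    return out

program = [
    ("assign", "mode", "input_mode"),
    ("if", "mode", [("assign", "result", 1)], [("assign", "result", 2)]),
]
# With input_mode fixed to True, the else branch disappears from the output.
print(specialize(program, {"input_mode": True}))
```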
This paper describes an approach for reusing specification patterns. Specification patterns are design patterns that are expressed in a formal specification language. Reusing a specification pattern means instantiating it or composing it with other specification patterns. Three levels of composition are defined: juxtaposition, composition with inter-patterns links and unification. This paper shows through examples how to define specification patterns in B, how to reuse them directly in B, and also how to reuse the proofs associated with specification patterns.
Reuse of Specification Patterns with the B Method
4,991
We propose a Capabilities-based approach for building long-lived, complex systems that have lengthy development cycles. User needs and technology evolve during these extended development periods, and thereby, inhibit a fixed requirements-oriented solution specification. In effect, for complex emergent systems, the traditional approach of baselining requirements results in an unsatisfactory system. Therefore, we present an alternative approach, Capabilities Engineering, which mathematically exploits the structural semantics of the Function Decomposition graph - a representation of user needs - to formulate Capabilities. For any given software system, the set of derived Capabilities embodies change-tolerant characteristics. More specifically, each individual Capability is a functional abstraction constructed to be highly cohesive and to be minimally coupled with its neighbors. Moreover, the Capability set is chosen to accommodate an incremental development approach, and to reflect the constraints of technology feasibility and implementation schedules. We discuss our validation activities to empirically prove that the Capabilities-based approach results in change-tolerant systems.
Capabilities Engineering: Constructing Change-Tolerant Systems
4,992
Stakeholders' expectations and technology constantly evolve during the lengthy development cycles of a large-scale computer based system. Consequently, the traditional approach of baselining requirements results in an unsatisfactory system because it is ill-equipped to accommodate such change. In contrast, systems constructed on the basis of Capabilities are more change-tolerant; Capabilities are functional abstractions that are neither as amorphous as user needs nor as rigid as system requirements. Alternatively, Capabilities are aggregates that capture desired functionality from the users' needs, and are designed to exhibit desirable software engineering characteristics of high cohesion, low coupling and optimum abstraction levels. To formulate these functional abstractions we develop and investigate two algorithms for Capability identification: Synthesis and Decomposition. The synthesis algorithm aggregates detailed rudimentary elements of the system to form Capabilities. In contrast, the decomposition algorithm determines Capabilities by recursively partitioning the overall mission of the system into more detailed entities. Empirical analysis on a small computer based library system reveals that neither approach is sufficient by itself. However, a composite algorithm based on a complementary approach reconciling the two polar perspectives results in a more feasible set of Capabilities. In particular, the composite algorithm formulates Capabilities using the cohesion and coupling measures as defined by the decomposition algorithm and the abstraction level as determined by the synthesis algorithm.
Reconciling Synthesis and Decomposition: A Composite Approach to Capability Identification
4,993
The purpose of this paper is to evaluate the impact of emerging network-centric software systems on the field of software architecture. We first develop an insight concerning the term "network-centric" by presenting its origin and its implications within the context of software architecture. On the basis of this insight, we present our definition of a network-centric framework and its distinguishing characteristics. We then enumerate the challenges that face the field of software architecture as software development shifts from a platform-centric to a network-centric model. In order to face these challenges, we propose a formal approach embodied in a new architectural style that supports overcoming these challenges at the architectural level. Finally, we conclude by presenting an illustrative example to demonstrate the usefulness of the concepts of network centricity, summarizing our contributions, and linking our approach to future work that needs to be done in this area.
The Implications of Network-Centric Software Systems on Software Architecture: A Critical Evaluation
4,994
This document describes a couple of tools that help to quickly design and develop computer (formalized) languages. The first one uses Flex to perform lexical analysis and the second is an extension of Prolog DCGs to perform syntactical analysis. Initially designed as a new component for the Centaur system, these tools are now available independently and can be used to construct efficient Prolog parsers that can be integrated in Prolog or heterogeneous systems. This is the initial version of the CLF documentation. Updated versions will be made available online when necessary.
Developing efficient parsers in Prolog: the CLF manual (v1.0)
4,995
Today many organizations aspire to adopt agile processes in the hope of overcoming some of the difficulties they are facing with their current software development process. There is no structured framework for the agile adoption process. This paper presents a three-stage process framework that assists and guides organizations through their agile adoption efforts. The process framework has received significantly positive feedback from experts and leaders in the agile adoption industry.
Agile Adoption Process Framework
4,996
With the advent of potent network technology, software development has evolved from traditional platform-centric construction to network-centric evolution. This change involves largely the way we reason about systems as evidenced in the introduction of Network- Centric Operations (NCO). Unfortunately, it has resulted in conflicting interpretations of how to map NCO concepts to the field of software architecture. In this paper, we capture the core concepts and goals of NCO, investigate the implications of these concepts and goals on software architecture, and identify the operational characteristics that distinguish network-centric software systems from other systems. More importantly, we use architectural design principles to propose an outline for a network-centric architectural style that helps in characterizing network-centric software systems and that provides a means by which their distinguishing operational characteristics can be realized.
Architecting Network-Centric Software Systems: A Style-Based Beginning
4,997
Adopting agile practices brings about many benefits and improvements to the system being developed. However, in mission and life-critical systems, adopting an inappropriate agile practice has detrimental impacts on the system in various phases of its lifecycle as well as precludes desired qualities from being actualized. This paper presents a three-stage process that provides guidance to organizations on how to identify the agile practices they can benefit from without causing any impact to the mission and life critical system being developed.
Determining the Applicability of Agile Practices to Mission and Life-critical Systems
4,998
Software system certification presents itself with many challenges, including the necessity to certify the system at the level of functional requirements, code and binary levels, the need to chase down run-time errors, and the need for proving timing properties of the eventual, compiled system. This paper illustrates possible approaches for certifying code that arises from control systems requirements as far as stability properties are concerned. The relative simplicity of the certification process should encourage the development of systematic procedures for certifying control system codes for more complex environments.
Certifying controls and systems software
4,999