text | source | __index_level_0__ |
|---|---|---|
In this paper we apply the social network concept of core-periphery structure to the socio-technical structure of a software development team. We propose a socio-technical pattern that can be used to locate emerging coordination problems in Open Source projects. With the help of our tool and method, TESNA, we demonstrate how to monitor socio-technical core-periphery movement in Open Source projects. We then study the impact of different core-periphery movements on Open Source projects. We conclude that a steady shift towards the core is beneficial to the project, whereas shifts away from the core are clearly detrimental. Furthermore, oscillatory shifts towards and away from the core can be considered an indication of the instability of the project. Such an analysis can provide developers with good insight into the health of an Open Source project. Researchers can gain from the pattern theory, and from the method we use to study core-periphery movements. | Exploring the Impact of Socio-Technical Core-Periphery Structures in
Open Source Software Development | 5,200 |
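The abstract above does not spell out how core and periphery are computed; one common proxy is degree centrality in the developer collaboration network. A minimal sketch under that assumption (the `core_periphery` function, the edge list and the threshold are hypothetical illustrations, not TESNA's actual analysis):

```python
from collections import defaultdict

def core_periphery(edges, core_fraction=0.2):
    """Classify nodes as core or periphery by degree centrality.

    A node is 'core' if its degree ranks in the top `core_fraction`
    of all nodes; everyone else is 'periphery'. This is a crude
    proxy, not TESNA's socio-technical analysis.
    """
    degree = defaultdict(int)
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    ranked = sorted(degree, key=degree.get, reverse=True)
    n_core = max(1, int(len(ranked) * core_fraction))
    core = set(ranked[:n_core])
    return {node: ("core" if node in core else "periphery") for node in ranked}

# Hypothetical collaboration edges (developer pairs who touched the same file)
edges = [("ann", "bob"), ("ann", "cy"), ("ann", "dee"), ("bob", "cy"), ("eve", "dee")]
roles = core_periphery(edges, core_fraction=0.25)
```

Re-running this classification on successive snapshots of the collaboration graph is one way to observe the core-periphery movement the paper studies.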
The Unified Modelling Language (UML) 2.0, introduced in 2002, has been developing and influencing object-oriented software engineering and has become a standard and reference for information system analysis and design modelling. There are many concepts and theories for modelling an information system or software application with UML 2.0, which can create ambiguities and inconsistencies for a novice learning how to model a system with UML, especially UML 2.0. This article discusses how to model a simple software application using some of the UML 2.0 diagrams rather than the whole set, as suggested by agile methodology. Agile methodology is considered convenient for novices because it can deliver the information technology environment to the end-user quickly and adaptively, with minimal documentation. It also has the ability to deliver the best-performing software application according to the customer's needs. Agile methodology produces a simple model with simple documentation, a small team and simple tools. | Object-oriented modelling with unified modelling language 2.0 for simple
software application based on agile methodology | 5,201 |
We present a novel approach to detect refactoring opportunities by measuring the participation of references between types in instances of patterns representing design flaws. This technique is validated in an experiment where we analyse a set of 95 open-source Java programs for instances of four patterns representing modularisation problems. It turns out that our algorithm can detect high-impact refactoring opportunities - a small number of references such that the removal of those references removes the majority of pattern instances from the program. | On the Detection of High-Impact Refactoring Opportunities in Programs | 5,202 |
Distribution of software development is becoming more and more common as a way to save production costs and reduce time to market. Large geographical distances, different time zones and cultural differences in distributed software development (DSD) lead to weak communication, which adversely affects projects. Using agile practices for distributed development is also gaining momentum in various organizations as a way to increase the quality and performance of projects. This paper explores the intersection of these two significant trends in software development, i.e. DSD and agile. We discuss the challenges faced by geographically distributed agile teams and proven practices to address these issues, which will help in building a successful distributed team. | Distributed Agile Software Development: A Review | 5,203 |
This paper describes a set of tools for automating and controlling the development and maintenance of software systems. The mental model is a software assembly line. Program design and construction take place at individual programmer workstations. Integration of individual software components takes place at subsequent stations on the assembly line. Software is moved automatically along the assembly line toward final packaging. Software under construction or maintenance is divided into packages. Each package of software is composed of a recipe and ingredients. Some new terms are introduced to describe the ingredients. The recipe specifies how ingredients are transformed into products. The benefits of the Software Assembly Line for development, maintenance, and management of large-scale computer systems are explained. | Software Must Move! A Description of the Software Assembly Line | 5,204 |
Retrograde software analysis is a method based on executing a program backwards - instead of taking input data and following the execution path, we start from the output data and, by executing the program backwards, command by command, analyze the data that could lead to the current output. The changed perspective forces a developer to think about the program in a new way. It can be applied as a thorough procedure or as a casual method. This method offers many advantages in testing and in algorithm and system analysis. | The Application and Extension of Retrograde Software Analysis | 5,205 |
The key factor in component-based software development is component composition technology. A component interaction graph is used to describe the interrelation of components. Drawing a complete component interaction graph (CIG) provides an objective basis and technical means for preparing the testing outline. Although much research has focused on this subject, the quality of systems composed of components has not been guaranteed. In this paper, a CIG is constructed from a state chart diagram and new test cases are generated to test the component composition. | Component Interaction Graph: A new approach to test component
composition | 5,206 |
Research shows that the major issue in the development of quality software is precise estimation. This estimation depends upon the degree of intricacy inherent in the software, i.e. its complexity. This paper attempts to empirically demonstrate the proposed complexity measure, which is based on the IEEE Requirement Engineering document. It is said that a high-quality SRS is a prerequisite for high-quality software. The Requirement Engineering document (SRS) is a specification for a particular software product, program or set of programs that performs certain functions for a specific environment. The various complexity measures given so far are based on code and cognitive metric values of the software, i.e. they are code based, so these metrics provide no leverage to the developer before the code exists. Considering the shortcomings of code-based approaches, the proposed approach identifies the complexity of software immediately after the requirements are frozen in the SDLC process. The proposed complexity measure compares well with established complexity measures, and the trend can be validated against the results of the proposed measure. Ultimately, the requirement-based complexity measure can be used to understand the complexity of proposed software well before the actual implementation of the design, thus saving cost and avoiding manpower wastage. | A Complexity measure based on Requirement Engineering Document | 5,207 |
This index covers the final course project reports for COMP5541 Winter 2010 at Concordia University, Montreal, Canada, Tools and Techniques for Software Engineering by 4 teams trying to capture the requirements, provide the design specification, configuration management, testing and quality assurance of their partial implementation of the Unified University Inventory System (UUIS) of an Imaginary University of Arctica (IUfA). Their results are posted here for comparative studies and analysis. | Contents of COMP5541 Winter 2010 Final UUIS SRS and SDD Reports | 5,208 |
Web information resources are growing explosively in number and volume, and retrieving relevant data from the web has become very difficult and time-consuming. The Semantic Web envisions that these web resources should be developed in a machine-processable way in order to handle irrelevancy and manual processing problems. The Semantic Web is an extension of the current web in which web resources are equipped with formal semantics about their interpretation by machines. These web resources are usually contained in web applications and systems, and their formal semantics are normally represented in the form of web ontologies. In this research paper, an object-oriented design methodology (OODM) is extended for developing semantic web applications. OODM was developed for designing web applications for the current web. This methodology is good enough to develop web applications and provides a systematic approach to web application development, but it does not help in generating machine-processable content during development. Therefore, this methodology needs to be extended. In this paper, we propose that extension to OODM. The new extended version is referred to as the semantic web object-oriented design methodology (SW-OODM). | Engineering Semantic Web Applications by Using Object-Oriented Paradigm | 5,209 |
XML stands for the Extensible Markup Language. It is a markup language for documents. Nowadays XML is a widely used tool for developing, sharing and storing data, and is likely to become even more common. XML can communicate structured information to other users. In other words, if a group of users agree to implement the same kinds of tags to describe a certain kind of information, XML applications can assist these users in communicating their information in a more robust and efficient manner. XML can make it easier to exchange information between cooperating entities. In this paper we present XML through four aspects: the strengths of XML, XML parsers, XML goals and the types of XML parsers. | An Overview: Extensible Markup Language Technology | 5,210 |
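As a concrete illustration of the parser types such an overview typically contrasts, Python's standard library ships both a tree-based parser (`xml.etree.ElementTree`, DOM-style) and an event-based one (`xml.sax`, SAX-style); a minimal sketch with a made-up document:

```python
import xml.etree.ElementTree as ET  # tree-based parsing (DOM-style)
import xml.sax                      # event-based parsing (SAX-style)

doc = "<order><item qty='2'>widget</item><item qty='1'>gear</item></order>"

# Tree-based: the whole document is loaded into memory as element objects.
root = ET.fromstring(doc)
items = [(e.text, int(e.get("qty"))) for e in root.iter("item")]

# Event-based: the parser streams the document and fires callbacks,
# so memory use stays constant regardless of document size.
class ItemCounter(xml.sax.ContentHandler):
    def __init__(self):
        super().__init__()
        self.count = 0
    def startElement(self, name, attrs):
        if name == "item":
            self.count += 1

handler = ItemCounter()
xml.sax.parseString(doc.encode(), handler)
```

The tree parser is convenient for random access; the event parser trades convenience for scalability on large documents.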
Software that cannot evolve is condemned to atrophy: it cannot accommodate the constant revision and re-negotiation of its business goals nor intercept the potential of new technology. To accommodate change in software systems we have defined an active software architecture to be: dynamic in that the structure and cardinality of the components and interactions are changeable during execution; updatable in that components can be replaced; decomposable in that an executing system may be (partially) stopped and split up into its components and interactions; and reflective in that the specification of components and interactions may be evolved during execution. Here we describe the facilities of the ArchWare architecture description language (ADL) for specifying active architectures. The contribution of the work is the unique combination of concepts including: a {\pi}-calculus based communication and expression language for specifying executable architectures; hyper-code as an underlying representation of system execution that can be used for introspection; a decomposition operator to incrementally break up executing systems; and structural reflection for creating new components and binding them into running systems. | Support for Evolving Software Architectures in the ArchWare ADL | 5,211 |
Software that cannot change is condemned to atrophy: it cannot accommodate the constant revision and re-negotiation of its business goals nor intercept the potential of new technology. To accommodate change in such systems we have defined an active software architecture to be: dynamic in that the structure and cardinality of the components and interactions are not statically known; updatable in that components can be replaced dynamically; and evolvable in that it permits its executing specification to be changed. Here we describe the facilities of the ArchWare architecture description language (ADL) for specifying active architectures. The contribution of the work is the unique combination of concepts including: a {\pi}-calculus based communication and expression language for specifying executable architectures; hyper-code as an underlying representation of system execution; a decomposition operator to break up and introspect on executing systems; and structural reflection for creating new components and binding them into running systems. | Constructing Active Architectures in the ArchWare ADL | 5,212 |
Behavior-Driven Development (BDD) is a specification technique that automatically certifies that all functional requirements are treated properly by source code, through the connection of the textual description of these requirements to automated tests. Given that in some areas, especially Enterprise Information Systems, requirements are identified through Business Process Modeling - which uses graphical notations of the underlying business processes - this paper aims to provide a mapping from the basic constructs that form the most common BPM languages to Behavior Driven Development constructs. | Mapping Business Process Modeling constructs to Behavior Driven
Development Ubiquitous Language | 5,213 |
The Eclipse Graphical Modeling Framework (GMF) provides the major approach for implementing visual languages on top of the Eclipse platform. GMF relies on a family of modeling languages to describe different aspects of the visual language and its implementation in an editor, and uses a model-driven approach to map the different GMF models to Java code. The framework, as it stands, provides very little support for evolution. In particular, there is no support for propagating changes from, say, the domain model (i.e., the abstract syntax of the visual language) to other models. We analyze the resulting co-evolution challenge, and we provide a transformation-based solution - GMF model adapters - that serves the propagation of abstract-syntax changes based on the interpretation of difference models. | Automated co-evolution of GMF editor models | 5,214 |
The ever-increasing complexity of software systems makes them hard to comprehend, predict and tune due to emergent properties and non-deterministic behaviour. Complexity arises from the size of software systems and the wide variety of possible operating environments: the increasing choice of platforms and communication policies leads to ever more complex performance characteristics. In addition, software systems exhibit different behaviour under different workloads. Many software systems are designed to be configurable so that policies can be chosen to meet the needs of various stakeholders. For complex software systems it can be difficult to accurately predict the effects of a change and to know which configuration is most appropriate. This thesis demonstrates that it is useful to run automated experiments that measure a selection of system configurations. Experiments can find configurations that meet the stakeholders' needs, find interesting behavioural characteristics, and help produce predictive models of the system's behaviour. The design and use of ACT (Automated Configuration Tool) for running such experiments is described, in combination with a number of search strategies for deciding on the configurations to measure. Design Of Experiments (DOE) is discussed, with emphasis on Taguchi Methods. These statistical methods have been used extensively in manufacturing, but have not previously been used for configuring software systems. The novel contribution here is an industrial case study, applying the combination of ACT and Taguchi Methods to DC-Directory, a product from Data Connection Ltd (DCL). The case study investigated the applicability of Taguchi Methods for configuring complex software systems. Taguchi Methods were found to be useful for modelling and configuring DC-Directory, making them a valuable addition to the techniques available to system administrators and developers. | Observation-Driven Configuration of Complex Software Systems | 5,215 |
This paper introduces CONFIGEN, a tool that helps modularize software. CONFIGEN allows the developer to select a set of elementary components for their software through an interactive interface. Configuration files for use by C/assembly code and Makefiles are then generated automatically, and we have successfully used CONFIGEN as a helper tool for complex system software refactoring. CONFIGEN is based on propositional logic, and its implementation faces hard theoretical problems. | CONFIGEN: A tool for managing configuration options | 5,216 |
This paper presents a tool stack for the implementation, specification and testing of software following the practices of Behavior-Driven Development (BDD) in the Python language. The usage of this stack highlights the specification and validation of the software's expected behavior, reducing the error rate and improving documentation. It thus becomes possible to produce code with far fewer defects at both the functional and unit levels, in addition to better serving stakeholders' expectations. | A tool stack for implementing Behaviour-Driven Development in Python
Language | 5,217 |
Registries play a key role in service-oriented applications. Originally, they were neutral players between service providers and clients. The UDDI Business Registry (UBR) was meant to foster these concepts and provide a common reference for companies interested in Web services. The more Web services were used, the more companies started creating their own local registries: more efficient discovery processes, better control over the quality of published information, and more sophisticated publication policies motivated the creation of private repositories. The number and heterogeneity of the different registries - besides the decision to close the UBR - are pushing for new and sophisticated means of making different registries cooperate. This paper proposes DIRE (DIstributed REgistry), a novel approach based on a publish and subscribe (P/S) infrastructure to federate different heterogeneous registries and make them exchange information about published services. The paper discusses the main motivations for the P/S-based infrastructure, proposes an integrated service model, introduces the main components of the framework, and exemplifies them on a simple case study. | On the Cooperation of Independent Registries | 5,218 |
Dependency analysis is a technique to identify and determine data dependencies between service protocols. Protocols evolving concurrently in the service composition need to impose an order in their execution if there exist data dependencies. In this work, we describe a model to formalise context-aware service protocols. We also present a composition language to handle dynamically the concurrent execution of protocols. This language addresses data dependency issues among several protocols concurrently executed on the same user device, using mechanisms based on data semantic matching. Our approach aims at assisting the user in establishing priorities between these dependencies, avoiding the occurrence of deadlock situations. Nevertheless, this process is error-prone, since it requires human intervention. Therefore, we also propose verification techniques to automatically detect possible inconsistencies specified by the user while building the data dependency set. Our approach is supported by a prototype tool we have implemented. | Handling Data-Based Concurrency in Context-Aware Service Protocols | 5,219 |
A collaborative object represents a data type (such as a text document) designed to be shared by a group of dispersed users. The Operational Transformation (OT) is a coordination approach used for supporting optimistic replication for these objects. It allows the users to concurrently update the shared data and exchange their updates in any order since the convergence of all replicas, i.e. the fact that all users view the same data, is ensured in all cases. However, designing algorithms for achieving convergence with the OT approach is a critical and challenging issue. In this paper, we propose a formal compositional method for specifying complex collaborative objects. The most important feature of our method is that designing an OT algorithm for the composed collaborative object can be done by reusing the OT algorithms of component collaborative objects. By using our method, we can start from correct small collaborative objects which are relatively easy to handle and incrementally combine them to build more complex collaborative objects. | On Coordinating Collaborative Objects | 5,220 |
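The convergence property described above can be illustrated with the textbook transformation function for concurrent character insertions (a deliberately simplified sketch: real OT algorithms also handle deletions and break position ties by site identifier, not by character order as done here):

```python
def apply_insert(text, pos, ch):
    """Apply an insert operation (position, character) to a string replica."""
    return text[:pos] + ch + text[pos:]

def transform(op_a, op_b):
    """Shift op_a's position so it can be applied after concurrent op_b.

    The tie on equal positions is broken by character order purely to
    keep this sketch deterministic; real OT uses site identifiers.
    """
    pos_a, ch_a = op_a
    pos_b, ch_b = op_b
    if pos_a < pos_b or (pos_a == pos_b and ch_a < ch_b):
        return (pos_a, ch_a)
    return (pos_a + 1, ch_a)

# Two users concurrently insert into the same replica of "abc".
op1, op2 = (1, "X"), (1, "Y")

# Site 1 applies its own op1 first, then the transformed remote op2.
site1 = apply_insert("abc", *op1)
site1 = apply_insert(site1, *transform(op2, op1))

# Site 2 applies its own op2 first, then the transformed remote op1.
site2 = apply_insert("abc", *op2)
site2 = apply_insert(site2, *transform(op1, op2))
```

Both replicas end up identical even though the operations were applied in different orders, which is exactly the convergence guarantee the paper's compositional method must preserve.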
A Bayesian Network based mathematical model has been used for modelling the Extreme Programming software development process. The model is capable of predicting the expected finish time and the expected defect rate for each XP release, and can therefore be used to determine the success or failure of an XP project. The model takes into account the effect of three XP practices, namely Pair Programming, Test-Driven Development and Onsite Customer. The model's predictions were validated against two case studies. Results show the precision of our model, especially in predicting the project finish time. | Bayesian Network Based XP Process Modelling | 5,221 |
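The paper's actual network structure and conditional probability tables are not reproduced in the abstract; a toy sketch of the general idea - marginalising a made-up on-time-probability table over two uncertain practice variables - might look like this (all numbers and names are invented for illustration):

```python
# Hypothetical conditional probability table:
# (pair_programming, tdd) -> P(release finishes on time)
P_ON_TIME = {
    (True, True): 0.85,
    (True, False): 0.70,
    (False, True): 0.65,
    (False, False): 0.45,
}

def p_on_time(p_pair, p_tdd):
    """Marginalise over the two practice variables.

    p_pair / p_tdd are the probabilities that each practice is
    actually followed during the release.
    """
    total = 0.0
    for pair in (True, False):
        for tdd in (True, False):
            weight = (p_pair if pair else 1 - p_pair) * (p_tdd if tdd else 1 - p_tdd)
            total += weight * P_ON_TIME[(pair, tdd)]
    return total

estimate = p_on_time(p_pair=0.9, p_tdd=0.5)
```

A real XP process model would learn such tables from project data and add nodes for defect rate and the Onsite Customer practice, but the inference step is the same weighted sum over the network's conditional tables.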
This work proposes a methodology for source code quality and static behaviour evaluation of a software system, based on the standard ISO/IEC-9126. It uses elements automatically derived from source code enhanced with expert knowledge in the form of quality characteristic rankings, allowing software engineers to assign weights to source code attributes. It is flexible in terms of the set of metrics and source code attributes employed, even in terms of the ISO/IEC-9126 characteristics to be assessed. We applied the methodology to two case studies, involving five open source and one proprietary system. Results demonstrated that the methodology can capture software quality trends and express expert perceptions concerning system quality in a quantitative and systematic manner. | Code Quality Evaluation Methodology Using The ISO/IEC 9126 Standard | 5,222 |
Software component reuse is the software engineering practice of developing new software products from existing components. A reuse library or component reuse repository organizes, stores and manages reusable components. This paper describes how a reusable component is created, how its functions are reused, and how to check whether optimized code is being used in building programs and applications. Finally, it provides coding guidelines, standards and best practices for creating reusable components, and for making them configurable and easy to use. | Building Reusable Software Component For Optimization Check in ABAP
Coding | 5,223 |
Although software managers are generally good at new project estimation, their experience of scheduling rework tends to be poor. Inconsistent or incorrect effort estimation can increase the risk that the completion time for a project will be problematic. Continually altering software maintenance schedules during software maintenance is a daunting task. Our proposed framework, validated in a case study, confirms that the variables resulting from requirements changes suffer from a number of problems, e.g., the coding used, end-user involvement and user documentation. Our results clearly show a significant impact on rework effort as a result of unexpected errors that correlate with 1) weak characteristics and attributes as described in the program's source lines of code, especially in data declarations and data statements, 2) a lack of communication between developers and users on the effects of a change, and 3) the unavailability of user documentation. To keep rework effort under control, new criteria in change request forms are proposed. These criteria are shown in the proposed framework; the more case studies that are validated, the more reliable the result will be in determining the outcome of rework effort estimation. | Examining Requirements Change Rework Effort: A Study | 5,224 |
This article describes a measurement-analysis based approach to help software practitioners manage the additional levels of complexity and variability in software product line applications. The architecture of the proposed approach, ZAC, is designed and implemented to perform preprocessed source code analysis, calculate traditional and product line metrics, and visualize results in two- and three-dimensional diagrams. Experiments using real data sets were performed, concluding that ZAC can be very helpful for software practitioners in understanding the overall structure and complexity of product line applications. Moreover, the obtained results show a strong positive correlation between the calculated traditional and product line measures. | Towards Performance Measurement And Metrics Based Analysis of PLA
Applications | 5,225 |
Because of the importance of object-oriented methodologies, research into developing new measures for object-oriented system development is getting increased focus. Most metrics need to find the interactions between objects and modules in order to develop the necessary metric, and such an influential software measure attracts software developers, designers and researchers. In this paper new interactions are defined for object-oriented systems. Using these interactions, a parser is developed to analyze the existing architecture of the software. Within the design model, it is necessary for design classes to collaborate with one another. However, collaboration should be kept to an acceptable minimum, i.e. better design practice will introduce low coupling. If a design model is highly coupled, the system is difficult to implement, to test and to maintain over time. When enhancing software, we need to introduce or remove modules, and in that case coupling is the most important factor to consider, because unnecessary coupling may make the system unstable and may reduce the system's performance. Low coupling is therefore thought to be a desirable goal in software construction, leading to better values for external software qualities such as maintainability and reusability. To test this hypothesis, a good measure of class coupling is needed. In this paper, based on the developed tool called Design Analyzer, we propose a methodology to reuse an existing system with the objective of enhancing an existing object-oriented system while keeping the coupling as low as possible. | A Parsing Scheme for Finding the Design Pattern and Reducing the
Development Cost of Reusable Object Oriented Software | 5,226 |
A number of companies are trying to migrate large monolithic software systems to Service Oriented Architectures. A common approach to do this is to first identify and describe desired services (i.e., create a model), and then to locate portions of code within the existing system that implement the described services. In this paper we describe a detailed case study we undertook to match a model to an open-source business application. We describe the systematic methodology we used, the results of the exercise, as well as several observations that throw light on the nature of this problem. We also suggest and validate heuristics that are likely to be useful in partially automating the process of matching service descriptions to implementations. | A Case Study in Matching Service Descriptions to Implementations in an
Existing System | 5,227 |
This document presents the integration of design patterns and mobile applications in the development of the plate management software (SIGEP), which supports solutions to the problems that appear in the process of maintaining the copper cathode plates of a mining company, in our case Quebrada Blanca S.A. (CMQB S.A.). These problems mainly relate to the limited control over the tasks carried out in maintaining the cathodic plates and to the lack of information about this practice, which results in deficient management, prevents timely decisions regarding these elements, makes it difficult to project and administer the useful life of the cathode plates, and generates high costs associated with this process. Since the maintenance of cathode plates is a constantly changing process with respect to maintenance strategies, the design of the SIGEP system emphasizes flexibility and reuse of system components, achieved through the design patterns used. The implementation of the SIGEP system and the incorporation of a mobile application allowed CMQB S.A. to increase control over the tasks carried out on cathode plates, giving the company detailed information on the maintenance of these elements and allowing it, among other things, to identify which cathode plates are the most expensive and therefore which must be replaced. | Integration of Design Patterns and Mobile Applications in a Management
System for Monitoring Maintenance Cathode Plates of Mining Company Quebrada
Blanca SA | 5,228 |
This note concerns a search for publications in which the pragmatic concept of a test as conducted in the practice of software testing is formalized, in which a theory about software testing based on such a formalization is presented, or in which it is demonstrated on the basis of such a theory that there are solid grounds to test software in cases where in principle other forms of analysis could be used. This note reports on the way in which the search was carried out and on its main outcomes. The message of the note is that the fundamentals of software testing are not yet complete in some respects. | Searching publications on software testing | 5,229 |
Improving the software process to achieve high quality in a software development organization is a key factor for success. Bangladeshi software firms have not gained much experience in this particular area in comparison to those of other countries. The ISO 9001 and CMM standards have become a basic part of software development. The main objectives of our study are: 1) to understand the software development processes used by software development firms in Bangladesh; 2) to identify development practices based on established quality standards; and 3) to establish a standardized and coherent process for the development of software for a specific project. This research reveals that the software industries of Bangladesh are lacking in targets set for software process improvement, in the involvement of quality control activities, and in standardized business expertise practices. This paper investigates the Bangladeshi software industry in the light of the above challenges. | Software Development Standard and Software Engineering Practice: A Case
Study of Bangladesh | 5,230 |
Many large financial planning models are written in a spreadsheet programming language (usually Microsoft Excel) and deployed as spreadsheet applications. Three groups - FAST Alliance, Operis Group, and BPM Analytics (under the name "Spreadsheet Standards Review Board") - have independently promulgated standardized processes for efficiently building such models. These spreadsheet engineering methodologies provide detailed guidance on design, construction process, and quality control. We summarize and compare these methodologies. They share many design practices and standardized, mechanistic procedures for constructing spreadsheets. We learned that a written book or standards document is by itself insufficient for understanding a methodology. These methodologies represent a professionalization of spreadsheet programming, and can provide a means to debug a spreadsheet that contains errors. We find credible the assertion that these spreadsheet engineering methodologies provide enhanced productivity, accuracy and maintainability for large financial planning models. | Spreadsheets Grow Up: Three Spreadsheet Engineering Methodologies for
Large Financial Planning Models | 5,231 |
Three controlled experiments testing the benefits that Java programmers gain from using the Two-Tier Programming Toolkit have recently been concluded. The first experiment offers statistically significant evidence (p-value: 0.02) that programmers who undertook only minimal (1-hour) training in using the current prototype exhibit 76% productivity gains in key tasks in software development and maintenance. The second experiment shows that the use of the TTP Toolkit is likely (p-value: 0.10) to almost triple the accuracy of programmers performing tasks associated with software quality. The third experiment shows that the TTP Toolkit does not offer significant productivity gains in performing very short (under 10 min.) tasks. | Three Controlled Experiments in Software Engineering with the Two-Tier
Programming Toolkit: Final Report | 5,232 |
Refactoring is a change made to the internal structure of software to make it easier to understand and cheaper to modify without changing its observable behaviour. A database refactoring is a small change to the database schema which improves its design without changing its semantics. This paper presents example 'spreadsheet refactorings', derived from the above and taking into account the unique characteristics of spreadsheet formulas and VBA code. The techniques are constrained by the tightly coupled data and code in spreadsheets. | Spreadsheet Refactoring | 5,233 |
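A minimal sketch of what one such 'spreadsheet refactoring' could look like, expressed over formula strings in Python. The refactoring name and helper below are illustrative assumptions, not items from the paper's catalogue; like all refactorings, the transformation must preserve semantics, changing only the representation.

```python
def extract_named_constant(formula: str, literal: str, name: str) -> str:
    """Replace every occurrence of a numeric literal with a defined name.

    A real tool would parse the formula; plain substitution suffices here
    as long as the literal cannot be confused with other formula text.
    """
    return formula.replace(literal, name)

# The formula computes the same value before and after; only the
# magic number 0.175 is replaced by a readable named range.
before = "=A1*0.175+B1*0.175"
after = extract_named_constant(before, "0.175", "VAT_RATE")
print(after)  # =A1*VAT_RATE+B1*VAT_RATE
```

The point of the sketch is the behaviour-preservation discipline: the post-refactoring spreadsheet must evaluate identically, which is what distinguishes refactoring from arbitrary editing.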
Although experts diverge on how best to improve spreadsheet quality, it is generally agreed that more time needs to be spent testing spreadsheets. Ideally, experienced and trained spreadsheet engineers would carry this out, but quite often this is neither practical nor possible. Many spreadsheets are a legacy, developed by staff who have since moved on, or indeed modified by many staff no longer employed by the organisation. When such spreadsheets fall into the hands of inexperienced non-experts, any features that reduce error visibility may become a risk. Range names are one such feature, and this paper, building on previous research, investigates in a more structured and controlled manner the effect they have on the debugging performance of novice spreadsheet users. | How do Range Names Hinder Novice Spreadsheet Debugging Performance? | 5,234
The paper examines, in the context of financial reporting, the controls that organisations have in place to manage spreadsheet risk and errors. There has been widespread research conducted in this area, both in Ireland and internationally. This paper describes a study involving 19 participants (2 case studies and 17 by survey) from Ireland. Three areas are examined: firstly, the extent of spreadsheet usage; secondly, the level of complexity employed in spreadsheets; and finally, the controls in place regarding spreadsheets. The findings support previous findings of Panko (1998) that errors occur frequently in spreadsheets and that controls are few or unenforced; however, this research finds that attitudes are changing with regard to spreadsheet risk, and that one organisation is implementing a comprehensive project regarding policies on the development and control of spreadsheets. Further research could be undertaken in the future to examine the development of a "best practice model" both for the reduction of errors and to minimise the risk in spreadsheet usage. | Spreadsheet Risk Management in Organisations | 5,235
Previous spreadsheet inspection experiments have had human subjects look for seeded errors in spreadsheets. In this study, subjects attempted to find errors in human-developed spreadsheets to avoid the potential artifacts created by error seeding. Human subject success rates were compared to the success rates for error-flagging by spreadsheet static analysis tools (SSATs) applied to the same spreadsheets. The human error detection results were comparable to those of studies using error seeding. However, Excel Error Check and Spreadsheet Professional were almost useless for correctly flagging natural (human) errors in this study. | The Detection of Human Spreadsheet Errors by Humans versus Inspection
(Auditing) Software | 5,236 |
EuSpRIG's concerns direct researchers to revisit spreadsheet education, taking into account error-auditing tools, checklists, and good practices. This paper aims at elaborating principles for designing a spreadsheet curriculum. It mainly focuses on two important issues. Firstly, it is necessary to establish the spreadsheet invariants to be taught, especially those concerning errors and good practices. Secondly, it is important to take into account the learners' ICT experience, and to encourage attitudes that foster self-learning. We suggest key principles for spreadsheet teaching, and we illustrate them with teaching guidelines. | Teaching Spreadsheets: Curriculum Design Principles | 5,237
In previous work we have studied how an explicit representation of background knowledge associated with a specific spreadsheet can be exploited to alleviate usability problems with spreadsheet-based applications. We have implemented this approach in the SACHS system to provide a semantic help system for spreadsheets applications. In this paper, we evaluate the (comprehension) coverage of SACHS on an Excel-based financial controlling system via a "Wizard-of-Oz" experiment. This shows that SACHS adds significant value, but systematically misses important classes of explanations. For judgements about the information contained in spreadsheets, we provide a first approach for an "assessment module" in SACHS. | What we understand is what we get: Assessment in Spreadsheets | 5,238 |
General definitions as well as rules of reasoning regarding control code production, distribution, deployment, and usage are described. The role of testing, trust, confidence and risk analysis is considered. A rationale for control code testing is sought and found for the case of safety critical embedded control code. | Informal Control code logic | 5,239 |
This paper motivates the need for a formalism for the modelling and analysis of dynamic reconfiguration of dependable real-time systems. We present requirements that the formalism must meet, and use these to evaluate well established formalisms and two process algebras that we have been developing, namely, Webpi and CCSdp. A simple case study is developed to illustrate the modelling power of these two formalisms. The paper shows how Webpi and CCSdp represent a significant step forward in modelling adaptive and dependable real-time systems. | On Modelling and Analysis of Dynamic Reconfiguration of Dependable
Real-Time Systems | 5,240 |
Web service applications are distributed processes that are composed of dynamically bound services. In our previous work [15], we have described a framework for performing runtime monitoring of web services against behavioural correctness properties (described using property patterns and converted into finite state automata). These specify forbidden behaviour (safety properties) and desired behaviour (bounded liveness properties). Finite execution traces of web services described in BPEL are checked for conformance at runtime. When violations are discovered, our framework automatically proposes and ranks recovery plans which users can then select for execution. Such plans for safety violations essentially involve "going back" - compensating the executed actions until an alternative behaviour of the application is possible. For bounded liveness violations, recovery plans include both "going back" and "re-planning" - guiding the application towards a desired behaviour. Our experience, reported in [16], identified a drawback in this approach: we compute too many plans due to (a) overapproximating the number of program points where an alternative behaviour is possible and (b) generating recovery plans for bounded liveness properties which can potentially violate safety properties. In this paper, we describe improvements to our framework that remedy these problems and describe their effectiveness on a case study. | Optimizing Computation of Recovery Plans for BPEL Applications | 5,241
Although web applications evolved to mature solutions providing sophisticated user experience, they also became complex for the same reason. Complexity primarily affects the server-side generation of dynamic pages as they are aggregated from multiple sources and as there are lots of possible processing paths depending on parameters. Browser-based tests are an adequate instrument to detect errors within generated web pages considering the server-side process and path complexity a black box. However, these tests do not detect the cause of an error which has to be located manually instead. This paper proposes to generate metadata on the paths and parts involved during server-side processing to facilitate backtracking origins of detected errors at development time. While there are several possible points of interest to observe for backtracking, this paper focuses user interface components of web frameworks. | Browser-based Analysis of Web Framework Applications | 5,242 |
In recent years, there has been an explosive growth in the popularity of online social networks such as Facebook. In a new twist, third party developers are now able to create their own web applications which plug into Facebook and work with Facebook's "social" data, enabling the entire Facebook user base of more than 400 million active users to use such applications. These client applications can contain subtle errors that can be hard to debug if they misuse the Facebook API. In this paper we present an experience report on applying Microsoft's new code contract system for the .NET framework to the Facebook API. We wrote contracts for several classes in the Facebook API wrapper which allows Microsoft .NET developers to implement Facebook applications. We evaluated the usefulness of these contracts during implementation of a new Facebook application. Our experience indicates that having code contracts provides a better and quicker software development experience. | Contracting the Facebook API | 5,243
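The paper's contracts are written with Microsoft Code Contracts for .NET; as a hedged, language-neutral illustration of the same idea, a precondition can be sketched as a Python decorator. The function name and check below are hypothetical, not part of the actual Facebook API wrapper.

```python
import functools

def requires(predicate, message="precondition violated"):
    """Minimal precondition decorator, loosely analogous to
    Contract.Requires in Microsoft's Code Contracts."""
    def deco(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if not predicate(*args, **kwargs):
                raise ValueError(message)
            return fn(*args, **kwargs)
        return wrapper
    return deco

# Hypothetical wrapper call for illustration only.
@requires(lambda user_id: isinstance(user_id, int) and user_id > 0,
          "user_id must be a positive integer")
def fetch_profile(user_id):
    return {"id": user_id}  # stand-in for a real API call

print(fetch_profile(42))  # {'id': 42}
# fetch_profile(-1) would raise ValueError before any network call.
```

The debugging benefit the paper reports comes from exactly this effect: a misuse of the API fails loudly at the call boundary instead of producing a subtle error deep inside the wrapper.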
This paper proposes a method for deriving formal specifications of systems. To accomplish this task we pass through a non-trivial number of steps, concepts and tools, where the first one, the most important, is the concept of method itself, since we realized that computer science has a proliferation of languages but very few methods. We also propose the idea of Layered Fault Tolerant Specification (LFTS) to make the method extensible to dependable systems. The principle is layering the specification, for the sake of clarity, in (at least) two different levels, the first one for the normal behavior and the others (if more than one) for the abnormal. The abnormal behavior is described in terms of an Error Injector (EI) which represents a model of the erroneous interference coming from the environment. This structure has been inspired by the notion of idealized fault tolerant component, but the combination of LFTS and EI using rely/guarantee thinking to describe interference can be considered one of the main contributions of this work. The progress toward this method and the way to layer specifications has been made experimenting on the Transportation and the Automotive Case Studies of the DEPLOY project. | Deriving Specifications of Dependable Systems: toward a Method | 5,244
Pervasive user-centric applications are systems which are meant to sense the presence, mood, and intentions of users in order to optimize user comfort and performance. Building such applications requires not only state-of-the art techniques from artificial intelligence but also sound software engineering methods for facilitating modular design, runtime adaptation and verification of critical system requirements. In this paper we focus on high-level design and analysis, and use the algebraic rewriting language Real-Time Maude for specifying applications in a real-time setting. We propose a generic component-based approach for modeling pervasive user-centric systems and we show how to analyze and prove crucial properties of the system architecture through model checking and simulation. For proving time-dependent properties we use Metric Temporal Logic (MTL) and present analysis algorithms for model checking two subclasses of MTL formulas: time-bounded response and time-bounded safety MTL formulas. The underlying idea is to extend the Real-Time Maude model with suitable clocks, to transform the MTL formulas into LTL formulas over the extended specification, and then to use the LTL model checker of Maude. It is shown that these analyses are sound and complete for maximal time sampling. The approach is illustrated by a simple adaptive advertising scenario in which an adaptive advertisement display can react to actions of the users in front of the display. | Modeling and Analyzing Adaptive User-Centric Systems in Real-Time Maude | 5,245 |
Message Sequence Charts (MSCs) are an appealing visual formalism mainly used in the early stages of system design to capture the system requirements. However, if we move towards an implementation, an executable specification related in some fashion to the MSC-based requirements must be obtained. MSCs can be used effectively to specify a bus protocol, where high-level transition systems capture the control flow of the protocol's system components and MSCs describe the non-atomic component interactions. This style of specification is amenable to formal verification. In this paper, we present how bus protocols can be specified using MSCs and how these specifications can be translated into the program of a verification tool (we have used the Symbolic Model Verifier (SMV)) for formal verification. We have contributed to the following tasks in this respect. Firstly, the way to specify the protocol using MSCs has been presented. Secondly, a translator that translates the specifications (described in a textual input file) into SMV programs has been constructed. Finally, we have presented the verification results for the AMBA bus protocol using the SMV program found through the translation process. The SMV program found through the translation process can be used to automatically verify various properties of any bus protocol specified. | Bus Protocols: MSC-Based Specifications and Translation into Program of
Verification Tool for Formal Verification | 5,246 |
In this paper, an approach to facilitate the treatment of variability in system families is presented by explicitly modelling variants. The proposed method of managing variability consists of a variant part, which models variants, and a decision table to depict the customisation decision regarding each variant. We have found that it is easy to implement and has advantages over other methods. We present this model as an integral part of modelling system families. | Modelling Variability for System Families | 5,247
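A toy sketch of how a decision table could drive variant selection in such a model; the table contents and names below are invented for illustration, not taken from the paper.

```python
# Hypothetical decision table: each variation point maps a customisation
# decision value to the concrete variant to include in the product.
decision_table = {
    "persistence": {"embedded": "SQLiteStore", "server": "PostgresStore"},
    "ui": {"cli": "TextFrontend", "gui": "DesktopFrontend"},
}

def resolve(decisions):
    """Resolve customisation decisions to concrete variants;
    an unknown decision value is a customisation error."""
    product = {}
    for point, choice in decisions.items():
        variants = decision_table[point]
        if choice not in variants:
            raise KeyError(f"no variant for {point}={choice}")
        product[point] = variants[choice]
    return product

print(resolve({"persistence": "embedded", "ui": "gui"}))
# {'persistence': 'SQLiteStore', 'ui': 'DesktopFrontend'}
```

Keeping the variant part separate from the decision table, as the paper proposes, means new customisation decisions only touch the table, not the family model itself.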
This paper describes the rationale, curriculum and subject matter of a new MSc module being taught on an MSc Finance and Information Management course at the University of Wales Institute in Cardiff. Academic research on spreadsheet risks now has some penetration in academic literature and there is a growing body of knowledge on the subjects of spreadsheet error, human factors, spreadsheet engineering, "best practice", spreadsheet risk management and various techniques used to mitigate spreadsheet errors. This new MSc module in End User Computing Risk Management is an attempt to pull all of this research and practitioner experience together to arm the next generation of finance spreadsheet champions with the relevant knowledge, techniques and critical perspective on an emerging discipline. | Defending the future: An MSc module in End User Computing Risk
Management | 5,248 |
Spreadsheets are ubiquitous, heavily relied on throughout vast swathes of finance, commerce, industry, academia and Government. They are also acknowledged to be extraordinarily and unacceptably prone to error. If these two points are accepted, it has to follow that their uncontrolled use has the potential to inflict considerable damage. One approach to controlling such error should be to define as "good practice" a set of characteristics that a spreadsheet must possess and as "bad practice" another set that it must avoid. Defining such characteristics should, in principle, be perfectly do-able. However, being able to say with authority at a definite moment that any particular spreadsheet complies with these characteristics is very much more difficult. The author asserts that the use of automated spreadsheet development could markedly help in ensuring and demonstrating such compliance. | Spreadsheets - the Good, the Bad and the Downright Ugly | 5,249
This volume contains the proceedings of WCSI 2010, the International Workshop on Component and Service Interoperability. WCSI 2010 was held in Malaga (Spain) on June 29th, 2010 as a satellite event of the TOOLS 2010 Federated Conferences. The papers published in this volume tackle different issues that are currently central to our community, namely definition of expressive interface languages, formal models and approaches to software composition and adaptation, interface-based compatibility and substitutability, and verification techniques for distributed software. | Proceedings International Workshop on Component and Service
Interoperability | 5,250 |
This paper studies the problem of predicting the coding effort for a subsequent year of development by analysing metrics extracted from project repositories, with an emphasis on projects containing XML code. The study considers thirteen open source projects and applies machine learning algorithms to generate models to predict one-year coding effort, measured in terms of lines of code added, modified and deleted. Both organisational and code metrics associated with revisions are taken into account. The results show that coding effort is highly determined by the expertise of developers, while source code metrics have little effect on improving the accuracy of estimations of coding effort. The study also shows that models trained on one project are unreliable at estimating effort in other projects. | Predicting Coding Effort in Projects Containing XML Code | 5,251
The notion of contract aware components has been published roughly ten years ago and is now becoming mainstream in several fields where the usage of software components is seen as critical. The goal of this paper is to survey domains such as Embedded Systems or Service Oriented Architecture where the notion of contract aware components has been influential. For each of these domains we briefly describe what has been done with this idea and we discuss the remaining challenges. | Contract Aware Components, 10 years after | 5,252 |
A key objective for ubiquitous environments is to enable system interoperability between system's components that are highly heterogeneous. In particular, the challenge is to embed in the system architecture the necessary support to cope with behavioral diversity in order to allow components to coordinate and communicate. The continuously evolving environment further asks for an automated and on-the-fly approach. In this paper we present the design building blocks for the dynamic and on-the-fly interoperability between heterogeneous components. Specifically, we describe an Architectural Pattern called Mediating Connector, that is the key enabler for communication. In addition, we present a set of Basic Mediator Patterns, that describe the basic mismatches which can occur when components try to interact, and their corresponding solutions. | Components Interoperability through Mediating Connector Patterns | 5,253 |
This article contributes to the design and the verification of trusted components and services. The contracts are expressed at several levels to cover different facets, such as component consistency, compatibility or correctness. The article introduces multilevel contracts and a design+verification process for handling and analysing these contracts in component models. The approach is implemented with the COSTO platform that supports the Kmelia component model. A case study illustrates the overall approach. | Multilevel Contracts for Trusted Components | 5,254
We define a notion of social machine and envisage an algebra that can describe networks of such. To start with, social machines are defined as tuples of input, output, processes, constraints, state, requests and responses; apart from defining the machines themselves, the algebra defines a set of connectors and conditionals that can be used to describe the interactions between any number of machines in a multitude of ways, as a means to represent real machines interacting in the real web, such as Twitter, Twitter running on top of Amazon AWS, mashups built using Twitter and, obviously, other social machines. This work is not a theoretical paper as yet; but, in more than one sense, we think we have found a way to describe web based information systems and are starting to work on what could be a practical way of dealing with the complexity of this emerging web of social machines that is all around us. This version should be read as work in progress and comments, observations, bugs... are most welcome and should be sent to the email of the first, corresponding author. | The Emerging Web of Social Machines | 5,255 |
Although object-orientation has been around for several decades, its key concept abstraction has not been exploited for proper application of object-orientation in other phases of software development than the implementation phase. We mention some issues that lead to a lot of confusion and obscurity with object-orientation and its application in software development. We describe object-orientation as abstract as possible such that it can be applied to all phases of software development. | On Object-Orientation | 5,256 |
The number of bug reports in complex software increases dramatically. Since bugs are currently triaged manually, bug triage or assignment is a labor-intensive and time-consuming task. Without knowledge about the structure of the software, testers often specify the component of a new bug wrongly. Meanwhile, it is difficult for triagers to determine the component of the bug only from its description. We found that the components of 28,829 bugs in the Eclipse bug project had been specified wrongly and modified at least once. As a result, these bugs had to be reassigned, which delays the process of bug fixing. The average time to fix wrongly-specified bugs is longer than that for correctly-specified ones. In order to solve the problem automatically, we use historical fixed bug reports as a training corpus and build classifiers based on support vector machines and Naïve Bayes to predict the component of a new bug. The best prediction accuracy reaches 81.21% on our validation corpus of the Eclipse project. On average, our predictive model can save about 54.3 days for triagers and developers to repair a bug. Keywords: bug reports; bug triage; text classification; predictive model | Predicting Bugs' Components via Mining Bug Reports | 5,257
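The paper trains SVM and Naïve Bayes classifiers on historical reports; a self-contained toy version of the Naïve Bayes part might look like the sketch below. The two-component corpus is invented for illustration, unlike the paper's 28,829 Eclipse reports.

```python
import math
from collections import Counter, defaultdict

class NaiveBayesTriager:
    """Toy multinomial Naive Bayes over bug-report words."""
    def __init__(self):
        self.word_counts = defaultdict(Counter)
        self.class_counts = Counter()
        self.vocab = set()

    def train(self, reports):
        for text, component in reports:
            words = text.lower().split()
            self.class_counts[component] += 1
            self.word_counts[component].update(words)
            self.vocab.update(words)

    def predict(self, text):
        total = sum(self.class_counts.values())
        best, best_lp = None, -math.inf
        for comp, n in self.class_counts.items():
            lp = math.log(n / total)  # class prior
            denom = sum(self.word_counts[comp].values()) + len(self.vocab)
            for w in text.lower().split():
                # Laplace smoothing handles words unseen for this class
                lp += math.log((self.word_counts[comp][w] + 1) / denom)
            if lp > best_lp:
                best, best_lp = comp, lp
        return best

triager = NaiveBayesTriager()
triager.train([("editor crashes on save", "UI"),
               ("toolbar icon missing", "UI"),
               ("compiler reports wrong error line", "JDT"),
               ("incremental build fails", "JDT")])
print(triager.predict("build error after save"))  # JDT
```

A production triager would, as in the paper, be trained on thousands of fixed reports and compared against an SVM baseline; the probabilistic scoring above is only the core idea.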
Mayavi is an open-source, general-purpose, 3D scientific visualization package. It seeks to provide easy and interactive tools for data visualization that fit with the scientific user's workflow. For this purpose, Mayavi provides several entry points: a full-blown interactive application; a Python library with both a MATLAB-like interface focused on easy scripting and a feature-rich object hierarchy; widgets associated with these objects for assembling in a domain-specific application, and plugins that work with a general purpose application-building framework. In this article, we present an overview of the various features of Mayavi, we then provide insight on the design and engineering decisions made in implementing Mayavi, and finally discuss a few novel applications. | Mayavi: a package for 3D visualization of scientific data | 5,258 |
Starting from version 2.0, UML introduced hierarchical composite structures, which are an expressive way of defining complex software architectures, but which have a very loosely defined semantics in the standard. In this paper we propose a set of consistency rules that disambiguate the meaning of UML composite structures. Our primary goal was to have an operational model of composite structures for the OMEGA UML profile, an executable profile dedicated to the formal specification and validation of real-time systems, developed in a past project to which we contributed. However, the rules and principles stated here are applicable to other hierarchical component models based on the same concepts, such as SysML. The presented ruleset is supported by an OCL formalization which is described in this report. This formalization was applied on different complex models for the evaluation and validation of the proposed principles. | Well-formedness and typing rules for UML Composite Structures | 5,259 |
In the system development life cycle (SDLC), a system model can be developed using Data Flow Diagrams (DFDs). A DFD is a graphical diagram for specifying, constructing and visualizing the model of a system. DFDs are used to define requirements in a graphical view. In this paper, we focus on DFDs and the rules for drawing and defining the diagrams. We then formalize these rules and develop a tool based on them. The formalized rules for consistency checks between the diagrams are used in developing the tool. This is to ensure that the syntax for drawing the diagrams is correct and strictly followed. The tool automates the process of manual consistency checking between data flow diagrams. | Formalization of the data flow diagram rules for consistency check | 5,260
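One classic consistency rule that such a tool automates is balancing between a decomposed process and its child diagram. A minimal sketch of the check, with the rule encoding and flow names assumed for illustration rather than taken from the paper's formalization:

```python
def check_balancing(parent_flows, child_boundary_flows):
    """Balancing rule: the data flows entering/leaving a decomposed
    process must reappear as the flows crossing the boundary of its
    child diagram -- none lost, none invented. Flows internal to the
    child diagram are not passed in, since they need not balance."""
    missing = set(parent_flows) - set(child_boundary_flows)
    extra = set(child_boundary_flows) - set(parent_flows)
    return missing, extra

missing, extra = check_balancing({"order", "invoice"},
                                 {"order", "invoice", "payment"})
print(missing, extra)  # 'payment' was invented in the child diagram
```

An empty pair of sets means the decomposition is balanced; anything else is reported to the modeller, which is exactly the manual check the tool replaces.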
In information systems, a system is analyzed using a modeling tool. Analysis is an important phase prior to implementation in order to obtain the correct requirements of the system. During the requirements phase, the software requirements specification (SRS) is used to specify the system requirements. Then, this requirements specification is used to implement the system. The requirements specification can be represented using either a structured approach or an object-oriented approach. A UML (Unified Modeling Language) specification is a well-known representation of a requirements specification in an object-oriented approach. In this paper, we present one case study and discuss how mapping from a UML specification into an implementation is done. The case study does not require advanced programming skills. However, it does require familiarity with creating and instantiating classes, object-oriented programming with inheritance, data structures, file processing and control loops. For the case study, a UML specification is used in the requirements phase and Borland C++ is used in the implementation phase. The case study shows that the proposed approach improves the understanding of mapping from a UML specification into an implementation. | From UML Specification into Implementation using Object Mapping | 5,261
Normally, program execution spends most of its time in loops. Automated test data generation devotes special attention to loops for better coverage. Automated test data generation for programs having loops with a variable number of iterations and variable-length arrays is a challenging problem, because the number of paths may increase exponentially with the increase of array size for some programming constructs, like merge sort. We propose a method that finds heuristics for different types of programming constructs with loops and arrays. Linear search, bubble sort, merge sort, and matrix multiplication programs are included in an attempt to highlight the difference in execution between a single loop, a variable-length array, and nested loops with one- and two-dimensional arrays. We have used two parameters/heuristics to predict the minimum number of iterations required for generating automated test data: the longest path level (kL) and the saturation level (kS). Our work proceeds by instrumenting the source code at the elementary level, followed by applying random inputs until all feasible paths, or all paths having the longest paths, are collected. Duplicate paths are avoided by using a filter. Our test data is the set of random numbers that covers each feasible path. | Heuristic Approach of Automated Test Data Generation for Program having
Array of Different Dimensions and Loops with Variable Number of Iteration | 5,262 |
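The random-inputs-until-coverage idea can be sketched for the linear search case from the paper's benchmark set. The instrumentation and the fixed-budget stopping criterion below are simplified assumptions, not the paper's kL/kS machinery:

```python
import random

def linear_search(arr, key):
    """Instrumented linear search: returns the result plus the branch
    outcomes taken, mimicking elementary source-code instrumentation."""
    path = []
    for i, x in enumerate(arr):
        if x == key:
            path.append("hit")
            return i, tuple(path)
        path.append("miss")
    return -1, tuple(path)

def generate_test_data(array_len=3, trials=1000, seed=0):
    """Apply random inputs over a fixed budget, keeping one test datum
    per newly covered path (a crude stand-in for stopping once the
    saturation level kS is reached)."""
    rng = random.Random(seed)
    covered = {}
    for _ in range(trials):
        arr = [rng.randint(0, 2) for _ in range(array_len)]
        key = rng.randint(0, 2)
        _, path = linear_search(arr, key)
        covered.setdefault(path, (arr, key))  # filter out duplicate paths
    return covered

paths = generate_test_data()
print(len(paths))  # distinct feasible paths found for a length-3 array
```

For a length-3 array linear search has four feasible paths (hit at index 0, 1, or 2, or all misses), so the covered-path count saturates quickly; for constructs like merge sort the same budget covers an exponentially smaller fraction, which is the problem the paper's heuristics address.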
Many software development projects fail due to quality problems. Software testing enables the creation of high-quality software products. Since it is a cumbersome and expensive task, and often hard to manage, both its technical background and its organizational implementation have to be well founded. We worked with regional companies that develop software in order to learn about their distinct weaknesses and strengths with regard to testing. Analyzing and comparing the strengths, we derived best practices. In this paper we explain the project's background and sketch the design science research methodology used. We then introduce a graphical categorization framework that helps companies in judging the applicability of recommendations. Eventually, we present details on five recommendations for technical aspects of testing. For each recommendation we give implementation advice based on the categorization framework. | Improving the Technical Aspects of Software Testing in Enterprises | 5,263
This book consists of the chapters describing novel approaches to integrating fault tolerance into software development process. They cover a wide range of topics focusing on fault tolerance during the different phases of the software development, software engineering techniques for verification and validation of fault tolerance means, and languages for supporting fault tolerance specification and implementation. Accordingly, the book is structured into the following three parts: Part A: Fault tolerance engineering: from requirements to code; Part B: Verification and validation of fault tolerant systems; Part C: Languages and Tools for engineering fault tolerant systems. | An Introduction to Software Engineering and Fault Tolerance | 5,264 |
The component-based approach was introduced in core engineering disciplines long ago, but the component-based concept from a software perspective was developed only recently by the Object Management Group. Its benefits from the re-usability point of view are enormous. The intertwining relationship of domain engineering with component-based software engineering is analyzed. The object-oriented approach and its basic difference from the component approach is of great concern. The present study highlights the life-cycle, cost effectiveness and the basic study of component-based software from an application perspective. | Component Based Development | 5,265
A classical problem in Software Engineering is how to certify that every system requirement is correctly implemented by source code. This problem, albeit well studied, can still be considered an open one, given the problems faced by software development organizations. Trying to solve this problem, Behavior-Driven Development (BDD) is a specification technique that automatically certifies that all functional requirements are treated properly by source code, through the connection of the textual description of these requirements to automated tests. However, in some areas, such as Enterprise Information Systems, requirements are identified by Business Process Modeling - which uses graphical notations of the underlying business processes. Therefore, the aim of this paper is to present Business Language Driven Development (BLDD), a method that aims to extend BDD, by connecting business process models directly to source code, while keeping the expressiveness of text descriptions when they are better fitted than graphical artifacts. | Introducing Business Language Driven Development | 5,266 |
The growing complexity of software systems as well as changing conditions in their operating environment demand systems that are more flexible, adaptive and dependable. The service-oriented computing paradigm is in widespread use to support such adaptive systems, and, in many domains, adaptations may occur dynamically and in real time. In addition, services from heterogeneous, possibly unknown sources may be used. This motivates a need to ensure the correct behaviour of the adapted systems, and its continuing compliance to time bounds and other QoS properties. The complexity of dynamic adaptation (DA) is significant, but currently not well understood or formally specified. This paper elaborates a well-founded model and theory of DA, introducing formalisms written using COWS. The model is evaluated for reliability and responsiveness properties with the model checker CMC. | A Formal Model for Dynamically Adaptable Services | 5,267 |
The article proposes a model for the configuration management of open systems. The model aims at the validation of configurations against given specifications. An extension of decision graphs is proposed to express specifications. The proposed model can be used by software developers to validate their own configurations across different versions of the components, or to validate configurations that include components by third parties. The model can also be used by end-users to validate compatibility among different configurations of the same application. The proposed model is first discussed in some application scenarios and then formally defined. Moreover, a type discipline is given to formally define validation of a configuration against a system specification. | A Model for Configuration Management of Open Software Systems | 5,268
Interface adaptation allows code written for one interface to be used with a software component with another interface. When multiple adapters are chained together to make certain adaptations possible, we need a way to analyze how well the adaptation is done in case there are more than one chains that can be used. We introduce an approach to precisely analyzing the loss in an interface adapter chain using a simple form of abstract interpretation. | Precisely Analyzing Loss in Interface Adapter Chains | 5,269 |
Testing is one of the most indispensable tasks in software engineering. The role of testing in software development has grown significantly because testing can reveal defects in the code at an early stage of development. Many unit test frameworks compatible with C/C++ code exist, but a standard one is missing. Unfortunately, the existing methods have many unsolved problems; for example, external tools are usually necessary for testing C++ programs. In this paper we present a new approach for testing C++ programs. Our solution is based on C++ template metaprogramming facilities, so it works with standard-compliant compilers. The metaprogramming approach ensures that the runtime overhead of testing is minimal. Among other advantages, this approach also allows the specification language to be customized. Nevertheless, the only necessary tool is the compiler itself. | Testing by C++ template metaprograms | 5,270
Classic project management and its tools usually deal with the management of three variables and their relationships with each other: time, resources (cost) and quality. If one of these variables is to be improved, this always has negative effects on the other two. However, these factors describe the reality of project management only partially. What current project management tools often consider only implicitly is the location of an activity. In this paper, the implications of using location data for project management are clarified, and a system is presented that supports project managers in planning and implementing projects, in both mobile and stationary settings. | Mobiles ortsbezogenes Projektmanagement | 5,271
Methods for the automatic composition of services into executable workflows need detailed knowledge about the application domain, in particular about the available services and their behavior in terms of input/output data descriptions. In this paper we discuss how the EMBRACE data and methods ontology (EDAM) can be used as background knowledge for the composition of bioinformatics workflows. We show by means of a small example domain that the EDAM knowledge facilitates finding possible workflows, but that additional knowledge is required to guide the search towards actually adequate solutions. We illustrate how the ability to flexibly formulate domain-specific and problem-specific constraints supports the workflow development process. | Constraint-Guided Workflow Composition Based on the EDAM Ontology | 5,272
The Use Case Maps (UCM) scenario notation is applicable to many requirements engineering activities. However, other scenario notations, such as Message Sequence Charts (MSC) and UML Sequence Diagrams (SD), have been shown to be better suited for detailed design. In order to use the notation best suited to each phase in an efficient manner, a mechanism has to be devised to automatically transfer the knowledge acquired during the requirements analysis phase (using UCM) to the design phase (using MSC or SD). This paper introduces UCMEXPORTER, a new tool that implements such a mechanism and reduces the gap between high-level requirements and detailed design. UCMEXPORTER automatically transforms individual UCM scenarios into UML Sequence Diagrams, MSC scenarios, and even TTCN-3 test skeletons. We highlight the current capabilities of the tool as well as architectural solutions addressing the main challenges faced during such transformations, including the handling of concurrent scenario paths, the generation of customized messages, and tool interoperability. | UCMExporter: Supporting Scenario Transformations from Use Case Maps | 5,273
Outsourcing, the passing on of tasks by organizations to other organizations, often including the personnel and means to perform these tasks, has become an important IT-business strategy over the past decades. We investigate imaginative definitions for outsourcing relations and outsourcing transformations. Abstract models of an extreme and unrealistic simplicity are considered in order to investigate possible definitions of outsourcing. Rather than covering all relevant practical cases, an imaginative definition of a concept provides obvious cases of its instantiation from which more refined or liberal definitions may be derived. A definition of outsourcing induces a complementary definition of insourcing. Outsourcing and insourcing have more complex variations in which multiple parties are involved. All of these terms refer both to state transformations and to state descriptions pertaining to the state obtained after such transformations. We make an attempt to disambiguate the terminology in that respect and to characterize the general concept of sourcing, which captures some representative cases. Because mereology is the most general theory of parthood relations, we coin business mereology as the general theory in business studies which concerns the full variety of sourcing relations and transformations. | Business Mereology: Imaginative Definitions of Insourcing and Outsourcing Transformations | 5,274
This document presents a lab demo that exemplifies the manual derivation of an OO Method conceptual model, taking as input a Communication Analysis requirements model. In addition, it describes how the conceptual model is created in the OLIVANOVA Modeler tool. The lab demo corresponds to part of the business processes of a fictional small and medium enterprise named SuperStationery Co. This company provides stationery and office material to its clients. The company acts as an intermediary: it has a catalogue of products that are bought from suppliers and sold to clients. Besides illustrating the derivation technique, this lab demo demonstrates that the technique is feasible in practice. The results of the lab demo also provide valuable feedback for improving the derivation technique. | Integration of Communication Analysis and the OO Method: Manual derivation of the Conceptual Model. The SuperStationery Co. lab demo | 5,275
In real-time systems, a utilization-based schedulability test is a common approach to determine whether or not tasks can be admitted without violating deadline requirements. The exact problem has previously been proven intractable even upon single processors; sufficient conditions are presented here for determining whether a given periodic task system will meet all deadlines if scheduled non-preemptively upon a multiprocessor platform using the earliest-deadline-first scheduling algorithm. Many real-time scheduling algorithms have been developed recently to reduce affinity in portable devices that use processors. Extensive power-aware scheduling techniques have been published for energy reduction, but most of them have focused solely on reducing processor affinity. The non-preemptive scheduling of periodic task systems upon processing platforms comprised of several identical processors is considered. | Precise Schedulability Analysis for unfeasible to notify separately for comprehensive - EDF Scheduling of interrupted Hard Real-Time Tasks on the similar Multiprocessors | 5,276
Interface specifications play an important role in component-based software development. An interface theory is a formal framework supporting composition, refinement and compatibility of interface specifications. We present different interface theories which use modal I/O-transition systems as their underlying domain for interface specifications: synchronous interface theories, which employ a synchronous communication schema, as well as a novel interface theory for asynchronous communication where components communicate via FIFO-buffers. | Interface Theories for (A)synchronously Communicating Modal I/O-Transition Systems | 5,277
Despite the increasing maturity of model-driven software development (MDD), some research challenges remain open in the field of information systems (IS). For instance, there is a need to improve modelling techniques so that they cover several development stages in an integrated way and facilitate the transition from analysis to design. This paper presents Message Structures, a technique for the specification of communicative interactions between the IS and organisational actors. This technique can be used both in the analysis stage and in the design stage. During analysis, it allows abstracting from the technology that will support the IS, and complementing business process diagramming techniques with the specification of the communicational needs of the organisation. During design, Message Structures serves two purposes: (i) it allows a specification of the IS memory (e.g. a UML class diagram) to be derived systematically, and (ii) it allows reasoning about the user interface design using abstract patterns. This technique is part of Communication Analysis, a communication-oriented requirements engineering method, but it can be adopted to extend widely-used business process and functional requirements modelling techniques (e.g. BPMN, Use Cases). Moreover, the paper presents two tools that support Message Structures: one uses the Xtext technology, and the other uses the Eclipse Modelling Framework. Industrial experience has shown us that the technique can be adopted and applied in complex projects. | A practical guide to Message Structures: a modelling technique for information systems analysis and design | 5,278
Since the birth of software engineering, it has always been regarded as a purely engineering subject; consequently, its foundational scientific problems have not received much attention. This paper proposes that Requirements Analysis, the kernel process of software engineering, can be modeled based on the concept of "common knowledge". Such a model helps us understand the nature of this process. This paper utilizes formal language as the tool to characterize the "common knowledge"-based Requirements Analysis model, and theoretically proves that: 1) if the participants do not understand the meaning of the other participants, the precondition for the success of a software project, regardless of cost, is that the participants have fully known the requirement specification; 2) if the participants can always understand the meaning of the other participants, the precondition for the success of a software project, regardless of cost, is that the union of the participants' knowledge of basic facts can fully cover the requirement specification. These two theorems may have potential implications for proposing new software engineering methodologies. | A Kind of Representation of Common Knowledge and its Application in Requirements Analysis | 5,279
Fiji National University is encountering many difficulties with its current administrative systems, including accessibility, scalability, performance, flexibility and integration. We propose a new campus information system, FNU-CIS, to address these difficulties. FNU-CIS has the potential to provide a wide range of services for students and staff at the university. In order to assist in the design and implementation of the proposed FNU-CIS, we present an overview, the software architecture and a prototype implementation of our proposed system. We discuss the key properties of our system, compare it with other similar systems available, and outline our future plans for research on the FNU-CIS implementation. | Thin Client Web-Based Campus Information Systems for Fiji National University | 5,280
Many programmers, when they encounter an error, would like to have the benefit of automatic fix suggestions---as long as they are, most of the time, adequate. Initial research in this direction has generally limited itself to specific areas, such as data structure classes with carefully designed interfaces, and relied on simple approaches. To provide high-quality fix suggestions in a broad area of applicability, the present work relies on the presence of contracts in the code, and on the availability of dynamic analysis to gather evidence on the values taken by expressions derived from the program text. The ideas have been built into the AutoFix-E2 automatic fix generator. Applications of AutoFix-E2 to general-purpose software, such as a library to manipulate documents, show that the approach provides an improvement over previous techniques, in particular purely model-based approaches. | Code-based Automated Program Fixing | 5,281 |
In this report we describe a tool framework for certifying properties of PLCs: CERTPLC. CERTPLC can handle PLC descriptions provided in the Sequential Function Chart (SFC) language of the IEC 61131-3 standard. It provides routines to certify properties of systems by delivering an independently checkable formal system description and proof (called a certificate) for the desired properties. We focus on properties that can be described as inductive invariants. System descriptions and certificates are generated and handled using the COQ proof assistant. Our tool framework is used to provide supporting evidence for the safety of embedded systems in the industrial automation domain to third-party authorities. In this document we describe the tool framework: usage scenarios, the architecture, the semantics of PLCs and their realization in COQ, proof generation and the construction of certificates. | A Tool for the Certification of PLCs based on a Coq Semantics for Sequential Function Charts | 5,282
Most legacy systems in use nowadays were modeled and documented using the structured approach. Expanding these systems in terms of functionality and maintainability requires a shift towards object-oriented documentation and design, which has been widely accepted by the industry. In this paper, we present a survey of the existing Data Flow Diagram (DFD) to Unified Modeling Language (UML) transformation techniques. We analyze the transformation techniques using a set of parameters identified in the survey. Based on the identified parameters, we present an analysis matrix which describes the strengths and weaknesses of the transformation techniques. It is observed that most of the transformation approaches are rule based, incomplete, and defined at an abstract level that does not cover in-depth transformation and automation issues. The transformation approaches are data centric, focusing on data stores for class diagram generation. Very few of the transformation techniques have been applied to a case study as a proof of concept; these are not comprehensive, and the majority of them are only partially automated. | Comparative Study on DFD to UML Diagrams Transformations | 5,283
The requirements roadmap concept is introduced as a solution to the problem of the requirements engineering of adaptive systems. The concept requires a new general definition of the requirements problem which allows for quantitative (numeric) variables, together with qualitative (binary boolean) propositional variables, and distinguishes monitored from controlled variables for use in control loops. We study the consequences of these changes, and argue that the requirements roadmap concept bridges the gap between current general definitions of the requirements problem and its notion of solution, and the research into the relaxation of requirements, the evaluation of their partial satisfaction, and the monitoring and control of requirements, all topics of particular interest in the engineering of requirements for adaptive systems [Cheng et al. 2009]. From the theoretical perspective, we show clearly and formally the fundamental differences between more traditional conceptions of requirements engineering (e.g., Zave & Jackson [1997]) and the requirements engineering of adaptive systems (from Fickas & Feather [1995], over Letier & van Lamsweerde [2004], and up to Whittle et al. [2010] and the most recent research). From the engineering perspective, we define a proto-framework for early requirements engineering of adaptive systems, which illustrates the features needed in future requirements frameworks for this class of systems. | Mixed-Variable Requirements Roadmaps and their Role in the Requirements Engineering of Adaptive Systems | 5,284
Two of the common features of business and the web are diversity and dynamism. Diversity results in users having different preferences for the quality requirements of a system. Diversity also makes possible alternative implementations for functional requirements, called variants, each of them providing different quality. The quality provided by the system may vary due to different variant components and changes in the environment. The challenge is to dynamically adapt to quality variations and to find the variant that best fulfills the multi-criteria quality requirements driven by user preferences and current runtime conditions. For service-oriented systems this challenge is augmented by their distributed nature and lack of control over the constituent services and their provided quality of service (QoS). We propose a novel approach to runtime adaptability that detects QoS changes, updates the system model with runtime information, and uses the model to select the variant to execute at runtime. We introduce negotiable maintenance goals to express user quality preferences in the requirements model and automatically interpret them quantitatively for system execution. Our lightweight selection strategy selects the variant that best fulfills the user required multi-criteria QoS based on updated QoS values. | Runtime Adaptability driven by Negotiable Quality Requirements | 5,285 |
Variability in business process modeling has been addressed by various authors in the literature. Depending on the context in which each author approaches the modeling problem, we find different approaches (C-EPC, C-YAWL, FEATURE-EPC, PESOA, PROVOP, or WORKLETS). In this report we present four of the most representative approaches (C-EPC, PESOA, PROVOP and WORKLETS) by means of the different case studies found in the literature. | BP Variability Case Studies Development using different Modeling Approaches | 5,286
This paper studies and proposes a technique of function point counting for items classified as non-measurable. The main objective is to expand the conventional counting technique to ensure that it consistently covers the tasks involved in building portals and web sites in general. In addition, it can also be applied to measure the cost of continued activities related to these web applications. The extended technique is potentially useful for measuring several products associated with information systems, including periodicals publishable on intranets. | Theoretical Count of Function Points for Non-Measurable Items | 5,287
Pervasive computing appears as a new computing era based on networks of objects and devices evolving in the real world, radically different from distributed computing, which is based on networks of computers and data storage. Contrary to most context-aware approaches, we work on the assumption that pervasive software must be able to deal with a dynamic software environment before processing contextual data. After demonstrating that SOA (Service Oriented Architecture) and its numerous principles are well adapted to pervasive computing, we present our extended SOA model for pervasive computing, called Service Lightweight Component Architecture (SLCA). SLCA adds various principles to fully meet the constraints of pervasive software: a software infrastructure based on services for devices, local orchestrations based on a lightweight component architecture and, finally, the encapsulation of those orchestrations into composite services to address distributed composition of services. We present a sample application of the overall approach as well as some relevant measurements of SLCA performance. | Lightweight Service Oriented Architecture for Pervasive Computing | 5,288
Search-based software engineering has been utilized for a number of software engineering activities. One area where it has seen much application is test data generation. Evolutionary testing designates the use of metaheuristic search methods for test case generation. The search space is the input domain of the test object, with each individual, or potential solution, being an encoded set of inputs to that test object. The fitness function is tailored to find test data for the type of test that is being undertaken. Evolutionary Testing (ET) uses optimizing search techniques such as evolutionary algorithms to generate test data. The effectiveness of a GA-based testing system is compared with a random testing system. For simple programs both testing systems work fine, but as the complexity of the program or of its input domain grows, the GA-based testing system significantly outperforms random testing. | Search-based software test data generation using evolutionary computation | 5,289
The fundamental unit of large-scale software construction is the component. A component is also the fundamental user interface object in Java: everything you see on the display in a Java application is a component. The ability to let users drag a component from the interface and drop it into your application is almost a requirement of a modern, commercial user interface. The CBD approach brings high component reusability and easy maintainability, and reduces time-to-market. This paper describes a component repository which provides functionality for the component reuse process through the drag-and-drop mechanism, and its influence on reusable components. | Drag and Drop: Influences on the Design of Reusable Software Components | 5,290
Open source projects often maintain open bug repositories during development and maintenance, and reporters often point out directly or implicitly the reasons why bugs occur when they submit them. The comments about a bug are very valuable for developers to locate and fix the bug. Meanwhile, it is very common in large software for programmers to override or overload some methods according to the same logic. If one method causes a bug, other overridden or overloaded methods may well cause related or similar bugs. In this paper, we propose and implement a tool, Rebug-Detector, which detects related bugs using bug information and code features. Firstly, it extracts bug features from bug information in bug repositories; secondly, it locates bug methods in the source code and extracts code features of these bug methods; thirdly, it calculates similarities between each overridden or overloaded method and the bug methods; lastly, it determines which methods may cause potential related or similar bugs. We evaluate Rebug-Detector on an open source project, Apache Lucene-Java. Our tool detects 61 related bugs in total, including 21 real bugs and 10 suspected bugs, and takes about 15.5 minutes. The results show that the bug features and code features extracted by our tool are useful for finding real bugs in existing projects. | Detect Related Bugs from Source Code Using Bug Information | 5,291
Enterprise information systems can be developed following a model-driven paradigm. This way, models that represent the organisational work practice are used to produce models that represent the information system. Current software development methods are starting to provide guidelines for the construction of conceptual models, taking requirements models as input. This paper proposes the integration of two methods: Communication Analysis (a communication-oriented requirements engineering method [Espa\~na, Gonz\'alez et al. 2009]) and the OO-Method (a model-driven object-oriented software development method [Pastor and Molina 2007]). For this purpose, a systematic technique for deriving OO-Method Conceptual Models from business process and requirements models is proposed. The business process specifications (which include message structures) are processed in order to obtain static and dynamic views of the computerised information system. Then, using the OLIVANOVA framework, software source code can be generated automatically [CARE Technologies]. | Integration of Communication Analysis and the OO-Method: Rules for the manual derivation of the Conceptual Model | 5,292
Validation is one of the software engineering disciplines that help build quality into software. The major objective of the software validation process is to determine that the software performs its intended functions correctly and to provide information about its quality and reliability. This paper identifies general measures for the specific goals and specific practices of the Validation Process Area (PA) in Capability Maturity Model Integration (CMMI). CMMI, developed by the Software Engineering Institute (SEI), is a framework for the improvement and assessment of a software development process, and it needs a measurement program that is practical. The method we used to define the measures is to apply the Goal Question Metric (GQM) paradigm to the specific goals and specific practices of the Validation Process Area in CMMI. | Validation Measures in CMMI | 5,293
The paper describes a framework for multi-function system testing. Multi-function system testing is considered as the fusion (or revelation) of clique-like structures. The following sets are considered: (i) subsystems (system parts or units/components/modules), (ii) system functions and a subset of system components for each system function, and (iii) function clusters (groups of system functions which are used jointly). Test procedures (as unit testing) are used for each subsystem. The procedures lead to an ordinal result (states, colors) for each component, e.g., [1,2,3,4] (where 1 corresponds to 'out of service', 2 corresponds to 'major faults', 3 corresponds to 'minor faults', 4 corresponds to 'trouble-free service'). Thus, for each system function a graph over the corresponding system components is examined while taking into account the ordinal estimates/colors of the components. Further, an integrated (i.e., colored) graph for each function cluster is considered (this graph integrates the graphs for the corresponding system functions). For the integrated graph (for each function cluster), structure revelation problems are examined (revelation of subgraphs which can lead to system faults): (1) revelation of cliques and quasi-cliques (by vertices at level 1, 2, etc.; by edge/interconnection existence), and (2) dynamical problems (when vertex colors are functions of time) are studied as well: the existence of a time interval when a clique or quasi-clique can exist. Numerical examples illustrate the approach and the problems. | Framework for Clique-based Fusion of Graph Streams in Multi-function System Testing | 5,294
The paper addresses modular hierarchical design (composition) of a management system for smart homes. The management system consists of security subsystem (access control, alarm control), comfort subsystem (temperature, etc.), intelligence subsystem (multimedia, houseware). The design solving process is based on Hierarchical Morphological Multicriteria Design (HMMD) approach: (1) design of a tree-like system model, (2) generation of design alternatives for leaf nodes of the system model, (3) Bottom-Up process: (i) multicriteria selection of design alternatives for system parts/components and (ii) composing the selected alternatives into a resultant combination (while taking into account ordinal quality of the alternatives above and their compatibility). A realistic numerical example illustrates the design process of a management system for smart homes. | Composition of Management System for Smart Homes | 5,295 |
The article describes a course on system design (structural approach) which covers the following: issues of systems engineering; structural models; basic technological problems (structural system modeling, modular design, evaluation/comparison, revelation of bottlenecks, improvement/upgrade, multistage design, modeling of system evolution); solving methods (optimization, combinatorial optimization, multicriteria decision making); design frameworks; and applications. The course contains lectures and a set of special laboratory works. The laboratory works consist of designing and implementing a set of programs to solve multicriteria problems (ranking/selection, the multiple choice problem, clustering, assignment). These programs are used to solve some standard problems (e.g., hierarchical design of a student plan, design of a marketing strategy). Concurrently, each student can examine a unique applied problem from his/her applied domain(s) (e.g., telemetric system, GSM network, integrated security system, testing of microprocessor systems, wireless sensors, corporate communication network, network topology). The course is mainly targeted at developing students' skills in the modular analysis and design of various multidisciplinary composite systems (e.g., software, electronic devices, information, computers, communications). The course was implemented at Moscow Institute of Physics and Technology (State University). | Course on System Design (structural approach) | 5,296
For a software system, its architecture is typically defined as the fundamental organization of the system incorporated by its components, their relationships to one another and their environment, and the principles governing their design. If contributed to by the artifacts corresponding to the engineering processes that govern the system's evolution, the definition gets naturally extended into the architecture of software and software process. Obviously, as long as there were no software systems, managing their architecture was no problem at all; when there were only small systems, managing their architecture became a mild problem; and now we have gigantic software systems, and managing their architecture has become an equally gigantic problem (to paraphrase Edsger Dijkstra). In this paper we propose a simple, yet we believe effective, model for organizing the architecture of software systems. First of all we postulate that only a holistic approach that supports continuous integration and verification for all software and software process architectural artifacts is the one worth taking. Next we indicate a graph-based model that not only allows collecting and maintaining the architectural knowledge with respect to both software and software process, but also allows one to conveniently create various quantitative metrics to assess their respective quality or maturity. Such a model is actually independent of the development methodologies currently in use, that is, it could well be applied to projects managed in an adaptive as well as in a formal approach. Eventually we argue that the model could actually be implemented by already existing tools; in particular, graph databases are a convenient implementation of an architectural repository. | Software is a directed multigraph (and so is software process) | 5,297
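The idea of a graph-based architectural model with derived quantitative metrics can be sketched as below: artifacts become nodes, labeled relations become edges of a directed multigraph (parallel edges between the same pair of nodes are allowed), and a simple metric such as fan-out falls out of the edge list. The artifact names, relations, and the fan-out metric are illustrative assumptions, not the paper's actual repository schema or metrics.

```python
# Illustrative sketch: architecture artifacts as a directed multigraph,
# with one simple derived metric (fan-out). All names are assumed.
from collections import defaultdict

# (source artifact, relation label, target artifact) — a directed multigraph:
# note OrderService -> * appears with several relations (parallel edges OK).
edges = [
    ("OrderService", "calls", "PaymentService"),
    ("OrderService", "calls", "InventoryService"),
    ("OrderService", "reads", "OrderSchema"),
    ("PaymentService", "calls", "AuditLog"),
]

# Fan-out: number of outgoing edges per artifact.
fan_out = defaultdict(int)
for src, _relation, _dst in edges:
    fan_out[src] += 1

avg_fan_out = sum(fan_out.values()) / len(fan_out)
print(dict(fan_out), avg_fan_out)
```

A graph database would store the same triples natively and let such metrics be expressed as queries, which is in line with the paper's suggestion that existing tools suffice to implement the repository.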
More information is now being published in machine-processable form on the web and, as de facto distributed knowledge bases are materializing, partly encouraged by the vision of the Semantic Web, the focus is shifting from the publication of this information to its consumption. Platforms for data integration, visualization and analysis that are based on a graph representation of information appear to be the first candidates to consume web-based information that is readily expressible as graphs. The question is whether adapting these platforms to information available on the Semantic Web requires some adaptation of their data structures and semantics. Ondex is a network-based data integration, analysis and visualization platform which has been developed in a Life Sciences context. A number of features, including semantic annotation via ontologies and an attention to provenance and evidence, make this an ideal candidate to consume Semantic Web information, as well as a prototype for the application of network analysis tools in this context. By analyzing the Ondex data structure and its usage, we have found a set of discrepancies and errors arising from the semantic mismatch between a procedural approach to network analysis and the implications of a web-based representation of information. We report in the paper on the simple methodology that we have adopted to conduct such analysis, and on the issues that we have found, which may be relevant to a range of similar platforms. | Lost in translation: data integration tools meet the Semantic Web (experiences from the Ondex project) | 5,298
Running parallel applications requires special and expensive processing resources to obtain the required results within a reasonable time. Before parallelizing serial applications, some analysis is recommended to decide whether they will benefit from parallelization or not. In this paper we discuss the speedup gained from parallelization using the Message Passing Interface (MPI), balancing the overhead of parallelization against the gained parallel speedup. We also propose an experimental method to predict the speedup of MPI applications. | To Parallelize or Not to Parallelize, Speed Up Issue | 5,299
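A back-of-the-envelope version of this speedup question can be written down with Amdahl's law, extended with a crude communication-overhead term as a stand-in for MPI message-passing cost. This is a generic textbook estimate, not the paper's experimental prediction method; the overhead model and numbers are assumed for illustration.

```python
# Amdahl-style speedup estimate with an assumed communication-overhead term.
# This is NOT the paper's method — just a standard first-order sanity check
# for the "to parallelize or not" decision.

def predicted_speedup(serial_fraction, n_procs, comm_overhead=0.0):
    """Estimate parallel speedup.

    serial_fraction: fraction of runtime that cannot be parallelized.
    n_procs:         number of MPI processes.
    comm_overhead:   extra cost as a fraction of serial runtime
                     (a crude stand-in for MPI communication cost).
    """
    parallel_fraction = 1.0 - serial_fraction
    normalized_time = serial_fraction + parallel_fraction / n_procs + comm_overhead
    return 1.0 / normalized_time

# With 10% serial code and no overhead, 8 processes give roughly 4.7x —
# already well short of the ideal 8x, before any communication cost.
print(round(predicted_speedup(0.1, 8), 2))
```

If the estimate comes out close to 1 (or below, once overhead is included), parallelizing the serial application is unlikely to pay off, which is exactly the trade-off the paper examines.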