The RODIN and DEPLOY projects laid solid foundations for further theoretical and practical (methodological and tooling) advances with Event-B. Our current interest is the co-simulation of cyber-physical systems using Event-B. Using this approach, we aim to simulate various features of the environment separately, in order to exercise deployable code. This paper makes two contributions. The first is an extension of the code generation work of DEPLOY, adding the ability to generate code from Event-B state-machine diagrams. The second describes how we may use code generated from state-machines to simulate the environment, and to simulate concurrently executing state-machines, in a single task. We show how we can instrument the code to guide the simulation by controlling the relative rates at which non-deterministic transitions are traversed, as sketched below.
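As a rough illustration of that last point (a minimal sketch, assuming hypothetical transition names and weights; not the generated Event-B code itself), biasing a non-deterministic choice amounts to a weighted random selection among the enabled transitions:

```python
import random

def pick_transition(enabled, weights):
    """Choose one enabled transition, biased by its configured weight."""
    total = sum(weights[t] for t in enabled)
    r = random.uniform(0, total)
    acc = 0.0
    for t in enabled:
        acc += weights[t]
        if r <= acc:
            return t
    return enabled[-1]  # guard against floating-point rounding

# Hypothetical weights: make the failure transition fire ~10% of the time.
weights = {"sensor_ok": 9.0, "sensor_fail": 1.0}
print(pick_transition(["sensor_ok", "sensor_fail"], weights))
```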
Building on the DEPLOY Legacy: Code Generation and Simulation
5,600
Designing fault tolerance mechanisms for multi-agent systems is a notoriously difficult task. In this paper we present an approach to the formal development of a fault-tolerant multi-agent system by refinement in Event-B. We demonstrate how to formally specify cooperative error recovery and dynamic reconfiguration in Event-B. Moreover, we discuss how to express and verify essential properties of a fault-tolerant multi-agent system while refining it. The approach is illustrated by a case study: a multi-robotic system.
Development of Fault Tolerant MAS with Cooperative Error Recovery by Refinement in Event-B
5,601
Event-B is a formal approach oriented to system modeling and analysis. It supports a refinement mechanism that enables stepwise modeling and verification of a system. By using refinement, the complexity of verification can be spread out and mitigated. In typical development with Event-B, a specification written in a natural language is examined before modeling in order to plan the modeling and refinement strategy. After that, starting from a simple abstract model, concrete models at several different abstraction levels are constructed by gradually introducing complex structures and concepts. Although users of Event-B have to plan how to abstract the specification for the construction of each model, guidelines for such planning have not been proposed. Specifically, some elements in a model often require that other elements be included in the model because of the semantic constraints of Event-B. As such requirements introduce many elements at once, non-experts in Event-B often make refinements coarse, even though coarse refinement does not mitigate the complexity of verification well. In response to this problem, a method is proposed to plan which models are constructed at each abstraction level. The method calculates plans that mitigate the complexity well, considering the semantic constraints of Event-B and the relationships between elements in a system.
Towards Refinement Strategy Planning for Event-B
5,602
This article presents a verification and validation activity performed in an industrial context to validate the configuration data of a metro CBTC system, by creating a formal B model of these configuration data and of their properties. A double tool chain is used to safely check whether a given input of configuration data fulfills its properties. One tool is based on Rodin and open-source plug-ins; the other is based on ProB.
Formal Data Validation with Event-B
5,603
Software re-modularization is an old preoccupation of reverse engineering research. The advantages of a well-structured or modularized system are well known. Yet after so much time and effort, the field seems unable to come up with solutions that make a clear difference in practice. Recently, some researchers started to question whether some basic assumptions of the field were overrated. The main one consists in evaluating the high-cohesion/low-coupling dogma with metrics of unknown relevance. In this paper, we study a real restructuring case (on the Eclipse platform) to try to better understand whether (some) existing metrics would have helped the software engineers in the task. Results show that the cohesion and coupling metrics used in the experiment did not behave as expected and would probably not have helped the maintainers reach their goal. We also measured another possible restructuring objective, decreasing the number of cyclic dependencies between modules. Again, the results did not meet expectations.
Legacy Software Restructuring: Analyzing a Concrete Case
5,604
Integrating formal methods into industrial practice is a challenging task. Often, different kinds of expertise are required within the same development. On the one hand, there are domain engineers who have specific knowledge of the system under development. On the other hand, there are formal methods experts who have experience in rigorously specifying and reasoning about formal systems. Coordination between these groups is important for taking advantage of their expertise. In this paper, we describe our approach of using generic instantiation to facilitate this coordination. In particular, generic instantiation enables a separation of concerns between the different parties involved in developing formal systems.
Abstract Data Types in Event-B - An Application of Generic Instantiation
5,605
Cloud-based development is a challenging task for several software engineering projects, especially those that require development with reusability. Cloud computing today enables new professional models for software development. Cloud computing is expected to be the next major trend in computing because of its speed of application deployment, shorter time to market, and lower cost of operation. Until reusability is treated as a fundamental capability, the speed of developing cloud services remains slow. This paper extends cloud computing with component-based development in the Cloud Computing Reusability (CCR) Model, which enables reusability in cloud computing. The model has been validated with CloudSim, and experimental results show that the reusability-based cloud computing approach is effective in minimizing cost and time to market.
Reusability Framework for Cloud Computing
5,606
Web services are the building blocks of interoperable systems. Composing Web services makes processes capable of performing complex tasks. Composite services may fail during execution; such failures can be diagnosed by a mediator. The mediator adapts the structure so that the failure is recovered from. Moreover, future executions should avoid the situation or organize a strategy to repair the structure with minimum delay. In this paper, the causes of composite service failure are reviewed. Furthermore, the requirements of a solution for recovering a system from a failure are investigated.
Requirements of a Recovery Solution for Failure of Composite Web Services
5,607
The transition from user requirements to UML diagrams is a difficult task for the designer, especially when handling large texts expressing these needs. Class diagram modeling must be performed frequently, even during the development of a simple application. This paper proposes an approach to facilitate class diagram extraction from textual requirements using NLP techniques and a domain ontology.
From user requirements to UML class diagram
5,608
One of the significant objectives of the software engineering community is to use effective and useful models for the precise calculation of effort in software cost estimation. Existing techniques, including the commonly used analogy method, cannot efficiently handle datasets containing categorical variables. Also, the project attributes of cost estimation are measured in terms of linguistic values whose imprecision leads to confusion and ambiguity when explaining the process. There is no definite set of models which can efficiently handle datasets containing categorical variables and endure the major hindrances, such as imprecision and uncertainty, without resorting to classical intervals and numeric-value approaches. In this paper, a new approach based on fuzzy logic, linguistic quantifiers, and analogy-based reasoning is proposed to enhance the performance of effort estimation in software projects dealing with numerical and categorical data. The performance of the proposed method shows a realistic validation of the results on a historical heterogeneous dataset. The results were analyzed using the Mean Magnitude of Relative Error (MMRE) and indicate that the proposed method can produce more explicable results than the methods currently in vogue.
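For reference, MMRE averages the magnitude of relative error, |actual - estimate| / actual, over the evaluated projects; a minimal sketch with hypothetical effort values in person-months:

```python
def mmre(actuals, estimates):
    """Mean Magnitude of Relative Error: mean of |actual - estimate| / actual."""
    assert len(actuals) == len(estimates) and actuals
    return sum(abs(a - e) / a for a, e in zip(actuals, estimates)) / len(actuals)

# Hypothetical actual vs. estimated effort for three projects.
print(mmre([100.0, 250.0, 40.0], [90.0, 300.0, 44.0]))  # ~0.133
```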
Estimation of Effort in Software Cost Analysis for Heterogenous Dataset using Fuzzy Analogy
5,609
In Requirements Engineering, requirements elicitation aims at the acquisition of information from the stakeholders of a system-to-be. An important task during elicitation is to identify and render explicit the stakeholders' implicit assumptions about the system-to-be and its environment. The purpose of doing so is to identify omissions in, and conflicts between, requirements. This paper offers a conceptual framework for the identification and documentation of default requirements that stakeholders may be using. The framework is relevant for practice, as it forms a check-list of types of questions to use during elicitation. An empirical validation is described, and guidelines for elicitation are drawn.
Context-Driven Elicitation of Default Requirements: an Empirical Validation
5,610
Testing is a validation activity used to check a system's correctness with respect to its specification. In this context, testing based on refusals has been studied in theory, and tools have been effectively constructed. This paper addresses formal testing based on stochastic refusal graphs (SRG) in order to test stochastic systems represented by maximality-based labeled stochastic transition systems (MLSTS). First, we propose a framework to generate SRGs from MLSTSs. Second, we present a new technique to automatically generate a canonical tester from a stochastic refusal graph and the conformance relation confSRG. Finally, an implementation is proposed, and the application of our approach is shown by an example.
Extending Refusal Testing by Stochastic Refusals for Testing Non-deterministic Systems
5,611
Maintenance is a dominant component of software cost, and localizing reported defects is a significant component of maintenance. We propose a scalable approach that leverages the natural language present in both defect reports and source code to identify files that are potentially related to the defect in question. Our technique is language-independent and does not require test cases. The approach represents reports and code as separate structured documents and ranks source files based on a document similarity metric that leverages inter-document relationships. We evaluate the fault-localization accuracy of our method against both lightweight baseline techniques and reported results from state-of-the-art tools. In an empirical evaluation of 5345 historical defects from programs totaling 6.5 million lines of code, our approach reduced the number of files inspected per defect by over 91%. Additionally, we qualitatively and quantitatively examine the utility of the textual and surface features used by our approach.
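As a rough illustration of ranking files by textual similarity (a baseline sketch, not the paper's structured-document metric; the file contents and report are invented):

```python
import math
import re
from collections import Counter

def tokens(text):
    return re.findall(r"[a-zA-Z_]\w*", text.lower())

def tfidf_vectors(docs):
    """Build a TF-IDF vector for each token list in docs."""
    df = Counter(t for d in docs for t in set(d))  # document frequency
    n = len(docs)
    return [{t: c * math.log(n / df[t]) for t, c in Counter(d).items()}
            for d in docs]

def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical data: rank source files by similarity to a defect report.
files = {"parser.c": "parse token stream syntax error recovery",
         "net.c": "socket connect timeout retry"}
report = "crash on syntax error while parsing tokens"
docs = [tokens(t) for t in files.values()] + [tokens(report)]
vecs = tfidf_vectors(docs)
ranking = sorted(zip(files, vecs[:-1]),
                 key=lambda kv: cosine(vecs[-1], kv[1]), reverse=True)
print([name for name, _ in ranking])  # parser.c should rank first
```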
Fault Localization Using Textual Similarities
5,612
Today, service-oriented systems need to be enhanced to sense and react to users' context in order to provide a better user experience. To meet this requirement, Context-Aware Services (CAS) have emerged as an underlying design and development paradigm for the development of context-aware systems. The fundamental challenges for the development of such systems are context-awareness management and service adaptation to the user's context. To cope with such requirements, we propose a well-designed architecture, named ACAS, to support the development of Context-Aware Service Oriented Systems (CASOS). This architecture relies on a set of context-awareness and CAS specifications and metamodels to enhance a core service, in service-oriented systems, to be context-aware. This enhancement is fulfilled by the Aspect Adaptations Weaver (A2W), which, based on Aspect Paradigm (AP) concepts, treats service adaptations as aspects.
Context-Awareness for Service Oriented Systems
5,613
Can one estimate the number of remaining faults in a software system? A credible estimation technique would be immensely useful to project managers as well as customers. It would also be of theoretical interest, as a general law of software engineering. We investigate possible answers in the context of automated random testing, a method that is increasingly accepted as an effective way to discover faults. Our experimental results, derived from best-fit analysis of a variety of mathematical functions, based on a large number of automated tests of library code equipped with automated oracles in the form of contracts, suggest a poly-logarithmic law. Although further confirmation remains necessary on different code bases and testing techniques, we argue that understanding the laws of testing may bring significant benefits for estimating the number of detectable faults and comparing different projects and practices.
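To illustrate what fitting such a law involves (a minimal sketch with invented data, not the paper's best-fit analysis), a poly-logarithmic model faults(t) = a * ln(t)^b can be fitted by linearizing and applying ordinary least squares:

```python
import numpy as np

# Hypothetical data: cumulative faults found after t generated test cases.
t = np.array([1e3, 1e4, 1e5, 1e6, 1e7])
faults = np.array([12, 23, 38, 55, 76])

# Linearize: ln(faults) = ln(a) + b * ln(ln(t)), then fit a line.
x = np.log(np.log(t))
y = np.log(faults)
b, log_a = np.polyfit(x, y, 1)
a = np.exp(log_a)
print(f"faults(t) ~ {a:.2f} * ln(t)^{b:.2f}")
```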
The Search for the Laws of Automatic Random Testing
5,614
Estimates of the number of certificates issued during the 10 years of existence of the professional certification program in the area of software engineering implemented by one of the leading professional associations are presented. The estimates have been obtained by processing certificate records openly accessible on the certification program's web site. Comparison of these estimates with the known facts about the evolution of the certification program indicates that, as of the present day, this evolution has not led to large-scale issuance of these certificates. But the same estimates possibly indicate that the meaning of these certificates differs from what is usually highlighted, and that their real value is much greater. These estimates can also be viewed, among other things, as reflecting the outcome of a decade-long experimental verification of the known idea of "software engineering as a mature engineering profession," and they possibly show that this idea deserves partial revision.
How many software engineering professionals hold this certificate?
5,615
Software verification has emerged as a key concern for ensuring the continued progress of information technology. Full verification generally requires, as a crucial step, equipping each loop with a "loop invariant". Beyond their role in verification, loop invariants help program understanding by providing fundamental insights into the nature of algorithms. In practice, finding sound and useful invariants remains a challenge. Fortunately, many invariants seem intuitively to exhibit a common flavor. Understanding these fundamental invariant patterns could therefore provide help for understanding and verifying a large variety of programs. We performed a systematic identification, validation, and classification of loop invariants over a range of fundamental algorithms from diverse areas of computer science. This article analyzes the patterns, as uncovered in this study, governing how invariants are derived from postconditions; it proposes a taxonomy of invariants according to these patterns, and presents its application to the algorithms reviewed. The discussion also shows the need for high-level specifications based on "domain theory". It describes how the invariants and the corresponding algorithms have been mechanically verified using an automated program prover; the proof source files are available. The contributions also include suggestions for invariant inference and for model-based specification.
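As a small illustration of the flavor of invariant studied (an invented example, not one taken from the article's catalog), consider computing the maximum of an array: one common pattern, often called constant relaxation, obtains the invariant from the postcondition by relaxing the constant len(a) into the loop variable i:

```python
def array_max(a):
    """Postcondition: result == max(a[0..len(a)-1]).
    Invariant (postcondition with the constant len(a) relaxed to i):
        result == max(a[0..i-1])
    """
    assert a  # precondition: non-empty array
    result = a[0]
    i = 1
    while i < len(a):
        assert result == max(a[:i])  # loop invariant, checked at run time
        if a[i] > result:
            result = a[i]
        i += 1
    assert result == max(a)  # postcondition: invariant plus i == len(a)
    return result

print(array_max([3, 7, 2, 9, 4]))  # 9
```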
Loop invariants: analysis, classification, and examples
5,616
Contracts are a form of lightweight formal specification embedded in the program text. Being executable parts of the code, they encourage programmers to devote proper attention to specifications, and help maintain consistency between specification and implementation as the program evolves. The present study investigates how contracts are used in the practice of software development. Based on an extensive empirical analysis of 21 contract-equipped Eiffel, C#, and Java projects totaling more than 260 million lines of code over 7700 revisions, it explores, among other questions: 1) which kinds of contract elements (preconditions, postconditions, class invariants) are used more often; 2) how contracts evolve over time; 3) the relationship between implementation changes and contract changes; and 4) the role of inheritance in the process. It has found, among other results, that: the percentage of program elements that include contracts is above 33% for most projects and tends to be stable over time; there is no strong preference for a certain type of contract element; contracts are quite stable compared to implementations; and inheritance does not significantly affect qualitative trends of contract usage.
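For readers unfamiliar with the terminology, here is a minimal sketch of the three contract element kinds the study counts, written with plain Python assertions rather than the Eiffel, C#, or Java contract mechanisms the studied projects actually use:

```python
class Account:
    """Class invariant: balance >= 0 (re-checked after each operation)."""

    def __init__(self):
        self.balance = 0
        assert self._invariant()

    def _invariant(self):
        return self.balance >= 0

    def deposit(self, amount):
        assert amount > 0                             # precondition
        old_balance = self.balance
        self.balance += amount
        assert self.balance == old_balance + amount   # postcondition
        assert self._invariant()                      # invariant preserved

acc = Account()
acc.deposit(50)
print(acc.balance)  # 50
```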
Contracts in Practice
5,617
The emergence of Web services in the information space, as well as the advanced technology of SOA, gives tremendous opportunities to users, whether nearby in an ambient space or distant, and to organizations in various fields of application, such as geolocation, e-learning, healthcare, and digital government. In fact, Web services are a solution for the integration of distributed information systems that are autonomous, heterogeneous, and self-adaptable to the context. Web services can evolve in a dynamic environment, in a well-defined context, and automatically according to events such as time, temperature, location, and authentication. We are interested in improving their SOA to empower Web services to be self-adaptive to the context. In this paper, we propose a new approach to the self-adaptability of Web services to the context. We then apply these requirements in the architecture of the context-adaptation platform WComp by integrating workflow. Our work is illustrated by a case study of authentication.
Adaptation of Web services to the context based on workflow: Approach for self-adaptation of service-oriented architectures to the context
5,618
Software Product Lines (SPLs) are families of products whose commonalities and variability can be captured by Feature Models (FMs). T-wise testing aims at finding errors triggered by all interactions amongst t features, thus drastically reducing the number of products to test. T-wise testing approaches for SPLs are either limited to small values of t, which misses faulty interactions, or limited by the size of the FM. Furthermore, they neither prioritize the products to test nor provide means to finely control the generation process. This paper offers (a) a search-based approach capable of generating products for large SPLs, forming a scalable and flexible alternative to current techniques, and (b) prioritization algorithms for any set of products. Experiments conducted on 124 FMs (including large FMs such as the Linux kernel) demonstrate the feasibility and the practicality of our approach.
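A minimal sketch of similarity-driven prioritization (the greedy max-min criterion and the tiny products below are illustrative assumptions, not the paper's exact algorithms): order products so each pick is maximally dissimilar, by Jaccard distance on feature sets, from those already chosen:

```python
def jaccard_distance(p, q):
    """Distance between two products, each a frozenset of selected features."""
    return 1.0 - len(p & q) / len(p | q) if (p | q) else 0.0

def prioritize(products):
    """Greedily order products so each pick maximizes its minimum
    distance to all previously picked products."""
    remaining = list(products)
    ordered = [remaining.pop(0)]
    while remaining:
        best = max(remaining,
                   key=lambda p: min(jaccard_distance(p, q) for q in ordered))
        remaining.remove(best)
        ordered.append(best)
    return ordered

# Hypothetical products derived from a tiny feature model.
ps = [frozenset({"a", "b"}), frozenset({"a", "b", "c"}), frozenset({"d"})]
print(prioritize(ps))  # the dissimilar {'d'} is tested second
```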
Bypassing the Combinatorial Explosion: Using Similarity to Generate and Prioritize T-wise Test Suites for Large Software Product Lines
5,619
We present a new unit test generator for C code, CTGEN. It generates test data for C1 structural coverage and functional coverage based on pre-/post-condition specifications or internal assertions. The generator supports automated stub generation, and the data to be returned by a stub to the unit under test (UUT) may be specified by means of constraints. The typical application field for CTGEN is embedded systems testing; therefore the tool can cope with the aliasing problems typical of low-level C, including pointer arithmetic, structures, and unions. CTGEN creates complete test procedures which are ready to be compiled and run against the UUT. In this paper we describe the main features of CTGEN and their technical realisation, and we elaborate on its performance in comparison to a list of competing test generation tools. Since 2011, CTGEN has been used in industrial-scale test campaigns for embedded systems code in the automotive domain.
CTGEN - a Unit Test Generator for C
5,620
This paper builds on existing Goal Oriented Requirements Engineering (GORE) research by presenting a methodology with a supporting tool for analysing and demonstrating the alignment between software requirements and business objectives. Current GORE methodologies can be used to relate business goals to software goals through goal abstraction in goal graphs. However, we argue that unless the extent of goal-goal contribution is quantified with verifiable metrics and confidence levels, goal graphs are not sufficient for demonstrating the strategic alignment of software requirements. We introduce our methodology using an example software project from Rolls-Royce. We conclude that our methodology can improve requirements by making the relationships to business problems explicit, thereby disambiguating a requirement's underlying purpose and value.
Modelling the Strategic Alignment of Software Requirements using Goal Graphs
5,621
Modelling and thus metamodelling have become increasingly important in Software Engineering through the use of Model Driven Engineering. In this paper we present a systematic literature review of instance generation techniques for metamodels, i.e. the process of automatically generating models from a given metamodel. We start by presenting a set of research questions that our review is intended to answer. We then identify the main topics that are related to metamodel instance generation techniques, and use these to initiate our literature search. This search resulted in the identification of 34 key papers in the area, and each of these is reviewed here and discussed in detail. The outcome is that we are able to identify a knowledge gap in this field, and we offer suggestions as to some potential directions for future research.
Metamodel Instance Generation: A systematic literature review
5,622
Web processes are made up of services as their units of functionality. The services are represented as a graph and composed into a synergy of services. A composite service is prone to failure due to various causes; however, the end-user should experience smooth and uninterrupted execution. Atomic replacement of a failed Web service to recover the system is a straightforward approach. Nevertheless, finding a single similar service is not always reliable. In order to increase the probability of recovery of a failed composite service, a set of services is replaced with another, similar set.
Increasing the failure recovery probability of atomic replacement approaches
5,623
Creating user-defined functions (UDFs) is a powerful method to improve the quality of computer applications, in particular spreadsheets. However, the only direct way to use UDFs in spreadsheets is to switch from the functional and declarative style of spreadsheet formulas to the imperative VBA, which creates a high entry barrier even for proficient spreadsheet users. It has been proposed to extend Excel with UDFs declared by a spreadsheet: user-defined spreadsheet functions (UDSFs). In this paper we present a method to create a limited form of UDSFs in Excel without any use of VBA. Calls to those UDSFs utilize what-if data tables to execute the same part of a worksheet several times, thus turning it into a reusable function definition.
User Defined Spreadsheet Functions in Excel
5,624
We present a pragmatic method for management of risks that arise due to spreadsheet use in large organizations. We combine peer-review, tool-assisted evaluation and other pre-existing approaches into a single organization-wide approach that reduces spreadsheet risk without overly restricting spreadsheet use. The method was developed in the course of several spreadsheet evaluation assignments for a corporate customer. Our method addresses a number of issues pertinent to spreadsheet risks that were raised by the Sarbanes-Oxley act.
Governance of Spreadsheets through Spreadsheet Change Reviews
5,625
Spreadsheets are software programs which are typically created by end-users and often used for business-critical tasks. Many studies indicate that errors in spreadsheets are very common. Thus, a number of vendors offer auditing tools which promise to detect errors by checking spreadsheets against so-called Best Practices such as "Don't put constants in formulae". Unfortunately, it is largely unknown which Best Practices have which actual effects on which spreadsheet quality aspects in which settings. We conducted a controlled experiment with 42 subjects to investigate whether observance of three commonly suggested Best Practices is correlated with desired positive effects regarding correctness and maintainability: "do not put constants in formulae", "keep formula complexity low", and "refer to the left and above". The experiment was carried out in two phases which covered the creation of new and the modification of existing spreadsheets. It was evaluated using a novel construction kit for spreadsheet auditing tools called the Spreadsheet Inspection Framework. The experiment produced a small sample of directly comparable spreadsheets which all try to solve the same task. Our analysis of the obtained spreadsheets indicates that the correctness of bottom-line results is not affected by the observance of the three Best Practices. However, initially correct spreadsheets with high observance of these Best Practices tend to be the ones whose later modifications yield the most correct results.
Investigating Effects of Common Spreadsheet Design Practices on Correctness and Maintainability
5,626
Changing functional and non-functional software implementations at runtime is useful and sometimes even critical, both in development and in production environments. JooFlux is a JVM agent that allows both the dynamic replacement of method implementations and the application of aspect advices. It works by performing bytecode transformations that take advantage of the new invokedynamic instruction added in Java SE 7 to help implement dynamic languages for the JVM. JooFlux can be managed using a JMX agent so as to operate dynamic modifications at runtime, without resorting to a dedicated domain-specific language. We compared JooFlux with existing AOP platforms and dynamic languages. Results demonstrate that JooFlux's performance is close to that of plain Java (most of the time a marginal overhead, and sometimes a gain), whereas AOP platforms and dynamic languages present significant overheads. This paves the way for interesting future evolutions and applications of JooFlux.
JooFlux: Hot Code Modification and Aspect Injection Directly in a JVM 7
5,627
Software reuse is a subfield of software engineering in which existing software is adapted for similar purposes. Reuse metrics determine the extent to which an existing software component is reused in new software, with the objective of minimizing the errors and cost of the new project. In this paper, a medical database related to cardiology is considered. The Pearson Type I distribution is used to calculate the probability density function (pdf), which is then utilized for clustering the data. Further, a coupling methodology is used to bring out the similarity of new patient data by comparing it with the existing data. By this, the treatment to be followed for the new patient is deduced by comparison with previous patients' case histories. The metrics proposed by Chidamber and Kemerer are utilized for this purpose. This model will be useful to the medical field through software, particularly in remote areas.
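For reference (a standard characterization, not a formula quoted from the paper), the Pearson Type I distribution is the beta distribution generalized to a bounded support [a, b] with shape parameters p, q > 0:

```latex
f(x) = \frac{(x-a)^{p-1}\,(b-x)^{q-1}}{B(p,q)\,(b-a)^{p+q-1}},
\qquad a \le x \le b
```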
Software Reuse in Medical Database for Cardiac Patients using Pearson Family Equations
5,628
Distributed collaborative software development tends to make artifacts and decisions inconsistent and uncertain. We try to solve this problem by providing an information repository that reflects the state of work precisely, by managing the states of artifacts/products made through collaborative work and the states of decisions made through communication. In this paper, we propose models and a tool to construct the artifact-related part of the information repository, and explain how to use the repository to resolve inconsistencies caused by concurrent changes to artifacts. We first show the model and the tool that generate the dependency relationships among UML model elements as content of the information repository. Next, we present the model and the method that generate change support workflows from the information repository. These workflows give us a way to efficiently modify the change-related artifacts for each change request. Finally, we define inconsistency patterns that make us aware of possible inconsistency occurrences. By combining this mechanism with version control systems, we can make changes safely. Our models and tool are useful in the maintenance phase for performing changes safely and efficiently.
A Change Support Model for Distributed Collaborative Work
5,629
Software engineering methodologies propose that developers should capture their efforts to ensure that programs run correctly in repeatable and automated artifacts, such as unit tests. However, when looking at developer activities on a spectrum from exploratory testing to scripted testing, we find that many engineering activities include bursts of exploratory testing. In this paper we propose to leverage these exploratory testing bursts by automatically extracting scripted tests from a recording of these sessions. In order to do so, we wiretap the development environment so we can record all program input, all user-issued function calls, and all program output of an exploratory testing session. We propose to then use machine learning (i.e., clustering) to extract scripted test cases from these recordings in real time. We outline two early-stage prototypes, one for a static and one for a dynamic language, and we outline how this idea fits into the bigger research direction of programming by example.
On Extracting Unit Tests from Interactive Programming Sessions
5,630
Use case scenarios are created during the analysis phase to specify software system requirements, and they can also be used for creating system-level test cases. Using use cases to obtain system tests has several benefits, including test design at early stages of the software development life cycle, which reduces the overall development cost of the system. Current approaches to system testing using use cases involve functional details and do not include guards as passing criteria; i.e., they rely on class diagrams, which are difficult to use at a very early stage. This motivates the need for specification-based testing that does not involve functional details. In this paper, we propose a technique for system testing derived directly from the specification without involving functional details. We utilize initial and post conditions applied as guards at each level of the use cases, which enables the generation of formalized test cases and makes it possible to generate test cases for each flow of the system. We use use case scenarios to generate system-level test cases, whereas the system sequence diagram is used to bridge the gap between the test objectives and the test cases derived from the specification of the system. A state chart derived from the combination of sequence diagrams can model the entire behavior of the system; the generated test cases can be executed against the state chart in order to capture the behavior of the system as its state changes. All these steps enable us to systematically refine the specification to achieve the goals of system testing at early development stages.
A use case driven approach for system level testing
5,631
This paper presents a case study on the application of cause-effect graphs, using the college placement process as the example. It begins by giving a brief overview of the college placement process, which serves as the basis for systematically developing the cause-effect graph and the corresponding decision table. Finally, it concludes with the design of test cases, thus giving a complete and clear picture of the application of cause-effect graphs in the software testing domain.
The application of cause effect graph for the college placement process
5,632
Service-oriented architecture (SOA) is one of the latest software architectures. This architecture is driven by business requirements and closes the gap between software and business. Software testing is an activity of rising cost in software development. SOA has different specifications and features compared to other software architectures. This paper first reviews SOA testing challenges and the existing solutions for those challenges. It then reports a survey of recent research on testing SOA systems, covering both functional and non-functional testing. Approaches are presented for different levels of functional testing, including unit, integration, and regression testing.
A survey of service oriented architecture systems testing
5,633
The work presented in this paper is part of a proposed framework, as complete and rigorous as possible, for the design of complex systems. The methodological framework used is Systems Engineering, a methodological approach to controlling the design of complex systems. The practices of this approach are transcribed in standards, realized by methods, and supported by tools. In our case, the EIA-632 standard was adopted. Specifically, to deal with the dependability of these complex systems and to improve the processes dealing with dependability, we have defined a global approach. This approach incorporates the consideration of dependability into system engineering processes. The work presented in this paper supports and complements the overall approach: it proposes an information model based on the SysML language, allowing requirements management, including safety requirements.
Sysml Knowledge base for Designing Dependable Complex System
5,634
This paper presents a novel approach to the design verification of Software Product Lines (SPL). The proposed approach assumes that the requirements and designs are modeled as finite state machines with variability information. The variability information at the requirement and design levels is expressed differently and at different levels of abstraction. The proposed approach also supports verification of SPLs in which new features and variability may be added incrementally. Given the design and requirements of an SPL, the proposed design verification method ensures that every product at the design level behaviorally conforms to a product at the requirement level. The conformance procedure is compositional in the sense that the verification of an entire SPL consisting of multiple features is reduced to the verification of the individual features. The method has been implemented and demonstrated in a prototype tool, SPLEnD (SPL Engine for Design Verification), on a couple of fairly large case studies.
Compositional Verification of Evolving Software Product Lines
5,635
To overcome the limitations of both the classical and formal approaches for the development of complex software, we propose a hybrid approach combining the formal approach (Event-B) and the classical approach (UML/OCL). The upstream phases of our approach include: rewriting the requirements document, the refinement strategy, the abstract specification, and horizontal refinement. We have shown the feasibility of our approach on a case study: an electronic hotel key system (SCEH). The problem of the transition from the formal (Event-B) to the semi-formal (UML/OCL) is handled through our extension to OCL (EM-OCL).
From Event-B to UML/OCL via UML/EM-OCL
5,636
This paper presents a new approach for optimizing multithreaded programs with pointer constructs. The approach has applications in the area of certified code (proof-carrying code), where a justification or proof of the correctness of each optimization is required. The optimization meant here is dead code elimination. Towards optimizing multithreaded programs, the paper presents a new operational semantics for parallel constructs like join-fork constructs, parallel loops, and conditionally spawned threads. The paper also presents a novel type system for flow-sensitive pointer analysis of multithreaded programs. This type system is extended to obtain a new type system for live-variables analysis of multithreaded programs. The live-variables type system is in turn extended to build the third novel type system proposed in this paper, which carries out the optimization of dead code elimination. The justification mentioned above takes the form of a type derivation in our approach.
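To make the target optimization concrete, here is a minimal sketch of liveness-based dead code elimination for straight-line code only; the paper's type systems additionally handle pointers and thread constructs such as fork-join:

```python
def eliminate_dead_code(instrs, live_out):
    """Backward liveness pass over straight-line three-address code.
    Each instruction is (target, operands). An assignment is dead if its
    target is not live after it; otherwise its operands become live."""
    live = set(live_out)
    kept = []
    for target, operands in reversed(instrs):
        if target in live:
            kept.append((target, operands))
            live.discard(target)
            live.update(operands)
        # else: dead assignment, dropped
    return list(reversed(kept))

# Hypothetical block: y is never used afterwards, so y = x * 2 is dead.
prog = [("x", ["a", "b"]),   # x = a + b
        ("y", ["x"]),        # y = x * 2
        ("z", ["x"])]        # z = x + 1
print(eliminate_dead_code(prog, live_out={"z"}))
# [('x', ['a', 'b']), ('z', ['x'])]
```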
Dead code elimination based pointer analysis for multithreaded programs
5,637
Reversible debuggers have been developed at least since 1970. Such a feature is useful when the cause of a bug is close in time to the bug's manifestation. When the cause is far back in time, one resorts to setting appropriate breakpoints in the debugger and beginning a new debugging session. In those cases, where the cause of a bug is far in time from its manifestation, bug diagnosis requires a series of debugging sessions with which to narrow down the cause. For such "difficult" bugs, this work presents an automated tool to search through the process lifetime and locate the cause. As an example, the bug could be related to a program invariant failing. A binary search through the process lifetime suffices, since the invariant expression is true at the beginning of the program execution and false when the bug is encountered. An algorithm for such a binary search is presented within the FReD (Fast Reversible Debugger) software. It is based on the ability to checkpoint, restart, and deterministically replay the multiple processes of a debugging session, building on GDB (a debugger), DMTCP (for checkpoint-restart), and a custom deterministic record-replay plugin for DMTCP. FReD supports complex, real-world multithreaded programs such as MySQL and Firefox. Further, the binary search is robust: it operates on multi-threaded programs and takes advantage of multi-core architectures during replay.
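The core search is straightforward to picture (a schematic sketch, not FReD's actual interface; the checkpoint ids and invariant predicate stand in for FReD's restart, replay, and expression-evaluation steps):

```python
def find_fault_interval(checkpoints, invariant_holds):
    """Binary search over a process lifetime, given as an ordered list of
    checkpoint ids. The invariant holds at the first checkpoint and fails
    at the last; return the adjacent (good, bad) pair bracketing the cause."""
    lo, hi = 0, len(checkpoints) - 1
    assert invariant_holds(checkpoints[lo])
    assert not invariant_holds(checkpoints[hi])
    while hi - lo > 1:
        mid = (lo + hi) // 2
        # In FReD this step would restart from the checkpoint, replay
        # deterministically, and evaluate the expression in GDB.
        if invariant_holds(checkpoints[mid]):
            lo = mid
        else:
            hi = mid
    return checkpoints[lo], checkpoints[hi]

# Hypothetical run: the invariant breaks between checkpoints 5 and 6.
print(find_fault_interval(list(range(10)), lambda c: c <= 5))  # (5, 6)
```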
FReD: Automated Debugging via Binary Search through a Process Lifetime
5,638
The development of concurrent applications is challenging because of the complexity of concurrent designs and the hazards of concurrent programming. Architectural modeling using the Unified Modeling Language (UML) can support the development process, but the problem of mapping the model to a concurrent implementation remains. This paper addresses this problem by defining a scheme to map concurrent UML designs to a concurrent object-oriented program. Using the COMET method for the architectural design of concurrent object-oriented systems, each component and connector is annotated with a stereotype indicating its behavioral design pattern. For each of these patterns, a reference implementation is provided using SCOOP, a concurrent object-oriented programming model. We evaluate this development process using a case study of an ATM system, obtaining a fully functional implementation based on the systematic mapping of the individual patterns. Given the strong execution guarantees of the SCOOP model, which is free of data races by construction, this development method eliminates a source of intricate concurrent programming errors.
Concurrent object-oriented development with behavioral design patterns
5,639
The purpose of this paper is to implement software that can save time and effort and facilitate XML and XSL programming. The XML parser helps the programmer determine whether an XML document is well-formed or not, by specifying the positions of any errors.
XML parser GUI using .NET Technology
5,640
How to apply automated verification technology such as model checking and static program analysis to millions of lines of embedded C/C++ code? How to package this technology in a way that it can be used by software developers and engineers who might have no background in formal verification? And how to convince business managers to actually pay for such software? This work addresses a number of those questions. Based on our own experience of developing and distributing the Goanna source code analyzer for detecting software bugs and security vulnerabilities in C/C++ code, we explain the underlying technology of model checking, static analysis, and SMT solving, and the steps involved in creating industrial-strength tools.
Formal Verification, Engineering and Business Value
5,641
Testing is the de-facto verification technique in industry, but it is insufficient for identifying subtle issues due to its optimistic incompleteness. On the other hand, model checking is a powerful technique that supports comprehensiveness and is thus suitable for the verification of safety-critical systems. However, it generally requires more knowledge and costs more than testing. This work attempts to take advantage of both techniques to achieve integrated and efficient verification of OSEK/VDX-based automotive operating systems. We propose property-based environment generation and model extraction techniques using static code analysis, which can be applied to both model checking and testing. The technique is automated and applied to an OSEK/VDX-based automotive operating system, Trampoline. Comparative experiments using random testing and model checking for the verification of assertions in the Trampoline kernel code show how our environment generation and abstraction approach can be utilized for efficient fault detection.
Property-based Code Slicing for Efficient Verification of OSEK/VDX Operating Systems
5,642
The demand for transparency of clinical research results, the need to accelerate the transfer of innovation into daily medical practice, and the need to assure patient safety and product efficacy make it necessary to extend the functionality of traditional trial registries. These new systems should combine different functionalities to track information exchange, support collaborative work, manage regulatory documents, and monitor the entire clinical investigation (CIV) lifecycle. This is the approach used to develop MEDIS, a Medical Device Information System, described in this paper from the perspective of the business process and the underlying architecture. Moreover, MEDIS was designed on the basis of Health Level 7 (HL7) v.3 standards and methodology, both to make it interoperable with similar registries and to facilitate information exchange between different health information systems.
An Information System to Support and Monitor Clinical Trial Process
5,643
Cloned code is one of the most important obstacles to consistent software maintenance and evolution. Although today's clone detection tools find a variety of clones, they do not offer any advice on how to remove them. We explain the problems involved in finding a sequence of changes for clone removal and suggest viewing this problem as a process of stepwise unification of the clone instances; the problem can then be solved by backtracking over the possible unification steps, as sketched below.
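A toy illustration of a single unification step (an invented sketch, not the authors' algorithm): two clone instances unify when their differing tokens map consistently, one-to-one, onto shared parameters; an inconsistency is exactly the point where a backtracking unifier would revisit earlier steps:

```python
def unify(a, b):
    """Attempt to unify two clone instances (equal-length token sequences)
    by mapping differing tokens to shared parameters. Returns the mapping,
    or None when the mapping would be inconsistent (the backtracking case)."""
    if len(a) != len(b):
        return None
    mapping = {}
    for x, y in zip(a, b):
        if x == y:
            continue
        consistent = (mapping.get(x, y) == y and
                      not any(v == y and k != x for k, v in mapping.items()))
        if not consistent:
            return None  # conflicting parameterization: backtrack
        mapping[x] = y
    return mapping

# Two clone instances differing only in the variable they accumulate into.
clone1 = "total = total + item".split()
clone2 = "count = count + item".split()
print(unify(clone1, clone2))  # {'total': 'count'}: one parameter suffices
```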
Clone Removal in Java Programs as a Process of Stepwise Unification
5,644
Distributed information systems need to be autonomous, heterogeneous, and adaptable to the context. This is why they resort to Web services based on the advanced technology of SOA. These technologies can evolve in a dynamic environment, in a well-defined context, and automatically according to events such as time, temperature, location, and authentication. This is what we call self-adaptability to the context. In this paper, we are interested in addressing the different needs of this criterion: we propose a new approach towards the self-adaptability of SOA to the context based on workflow, show the feasibility of this approach by integrating workflow into a platform, and test this integration with a case study.
A new approach towards the self-adaptability of Service-Oriented Architectures to the context based on workflow
5,645
Requirements traceability can in principle support stakeholders in coping with rising development complexity. However, studies have shown that practitioners rarely use available traceability information after its initial creation. In the position paper for Dagstuhl seminar 1242, we argued that a more integrated approach, allowing interactive traceability queries and context-specific traceability visualizations, is needed to let practitioners access and use valuable traceability information. The information retrieved via traceability can be very specific to a stakeholder's current task, abstracting from everything that is not required to solve the task.
Interactive Traceability Querying and Visualization for Coping With Development Complexity
5,646
Despite several scientific achievements in recent years, there are still many IT projects that fail. Researchers found that one out of five IT projects runs out of time, budget, or value. Major reasons for this failure are unexpected economic risk factors that emerge during the runtime of projects. In order to identify emerging risks early and counteract them reasonably, financial methods for continuous IT project steering are necessary, which, as of today and to the best of our knowledge, are missing from the scientific literature.
The Importance of Continuous Value Based Project Management in the Context of Requirements Engineering
5,647
Agile software development (ASD) methods were introduced as a reaction to traditional software development methods. The principles of these methods differ from those of traditional methods, and so agile methods involve processes and activities that differ from traditional ones. Thus, ASD methods require different measurement practices compared to traditional methods. Agile teams often carry out their projects in the simplest and most effective way, so measurement practices in agile methods are even more important than in traditional methods, because a lack of appropriate and effective measurement practices will increase project risk. The aims of this paper are to investigate current measurement practices in ASD methods, to collect them together in one study, and to review the agile version of the Common Software Measurement International Consortium (COSMIC) publication.
On the Current Measurement Practices in Agile Software Development
5,648
An issue limiting the adoption of model checking technologies in industry is the difficulty, for non-experts, of expressing their requirements in the property languages supported by verification tools. This has motivated the definition of dedicated assertion languages for expressing temporal properties at a higher level. However, only a limited number of these formalisms support the definition of timing constraints. In this paper, we propose a set of specification patterns that can be used to express real-time requirements commonly found in the design of reactive systems. We also provide an integrated model checking tool chain for the verification of timed requirements on TTS, an extension of Timed Petri Nets with data variables and priorities.
Real-Time Specification Patterns and Tools
5,649
Nowadays, most software development projects in Mexico are short-term projects (micro and small projects); for this reason, in this paper we present a research proposal with the goal of identifying the elements contributing to their success or failure. With this research, we aim to identify and propose techniques and tools that would contribute to the successful outcome of these projects.
On the need for optimization of the software development processes in short-term projects
5,650
Software interfaces are meant to describe the contracts governing interactions between logic modules. Interfaces, if well designed, significantly reduce software complexity and ease maintainability. However, as software evolves, the organization and quality of software interfaces gradually deteriorate, which often leads to increased development cost, lower code quality, and reduced reusability. Code clones are one of the best-known bad smells in source code. This design defect may occur in interfaces through the duplication of method/API declarations across several interfaces. Such interfaces are similar from the point of view of the public services/APIs they specify, and thus indicate a bad organization of application services. In this paper, we characterize the interface clone design defect and illustrate it via examples taken from real-world open source software applications. We conduct an empirical study covering nine real-world open source software applications to quantify the presence of interface clones and evaluate their impact on interface design quality. The results of the empirical study show that interface clones are widely present in software interfaces. They also show that the presence of interface clones may cause a degradation of interface cohesion and indicates a considerable presence of code clones at the implementation level.
Characterizing and Evaluating The Impact of Software Interface Clones
5,651
The present work lies at the intersection of two scientific themes: component-based reuse engineering and ontology alignment. The integration of Business Components (BC) is a research problem that has been identified in the field of reuse engineering. Our proposal aims to provide assistance to designers of information systems in the integration phase. It is a process guided by a domain ontology to provide semantic integration of BC. This process allows the detection and resolution of naming-type semantic conflicts encountered in the integration of BC.
Semantic integration process of business components to support information system designers
5,652
Program build information, such as the compilers and libraries used, is vitally important in an auditing and benchmarking framework for HPC systems. We have developed a tool to automatically extract this information using signature-based detection, a common strategy employed by anti-virus software to search for known patterns of data within program binaries. We formulate the patterns from various "features" embedded in the program binaries, and our experiments show that the tool can successfully identify many different compilers, libraries, and their versions.
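In spirit (a minimal sketch; the signature table below is invented, not the tool's actual database), signature-based detection scans the raw bytes of a binary for known compiler and library fingerprints:

```python
import re

# Hypothetical signature table: byte patterns that compilers and libraries
# leave behind in program binaries (e.g. version strings in .comment sections).
SIGNATURES = {
    rb"GCC: \(GNU\) (\d+\.\d+\.\d+)": "GNU GCC",
    rb"Intel\(R\) C\+\+ Compiler": "Intel C++",
    rb"Open MPI v(\d+\.\d+)": "Open MPI",
}

def identify(data):
    """Scan raw binary contents for known signatures, anti-virus style."""
    hits = []
    for pattern, name in SIGNATURES.items():
        m = re.search(pattern, data)
        if m:
            version = m.group(1).decode() if m.groups() else "unknown"
            hits.append((name, version))
    return hits

# A fake "binary" with an embedded compiler comment string.
blob = b"\x7fELF...\x00GCC: (GNU) 12.2.0\x00..."
print(identify(blob))  # [('GNU GCC', '12.2.0')]
```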
Automatically Mining Program Build Information via Signature Matching
5,653
Evaluation of service-oriented systems has been a challenge; although a large number of evaluation metrics exist, none of them evaluates these systems effectively. This paper discusses the different testing tools and evaluation methods available for SOA and summarizes their limitations and their support in the context of service-oriented architectures.
Testing and Evaluation of Service Oriented Systems
5,654
Expert judgment for software effort estimation is oriented toward direct evidence, which refers to the actual effort of similar projects or activities known through experts' experience. However, the availability of direct evidence implies the requirement of suitable experts together with past data. The circumstantial-evidence-based judgment proposed in this paper focuses on the development experience deposited in human knowledge, which can then be used to qualitatively estimate, by rational inference, the implementation effort of different proposals for a new project. To demonstrate the process of circumstantial-evidence-based judgment, this paper adopts diagnostic reasoning based on propositional learning theory to infer and compare effort estimates for implementing a Web service composition project with different techniques and contexts. The exemplar shows that our proposed work can help determine effort trade-offs before project implementation. Overall, circumstantial-evidence-based judgment is not an alternative to but a complement to expert judgment, facilitating and improving software effort estimation.
Circumstantial-Evidence-Based Judgment for Software Effort Estimation
5,655
Recent studies have largely investigated the detection of class design anomalies. They proposed a large set of metrics that help in detecting those anomalies and in predicting the quality of class design. While those studies and the proposed metrics are valuable, they do not address the particularities of software interfaces. Interfaces define the contracts that spell out how software modules and logic units interact with each other. This paper proposes a list of design defects related to interfaces: shared similarity between interfaces, interface clones, and redundancy in interface hierarchies. We identify and describe those design defects through real examples taken from well-known Java applications. We then define three metrics that help in automatically estimating interface design quality with respect to the proposed design anomalies, and in identifying refactoring candidates. We investigate our metrics and show their usefulness through an empirical study conducted on three large Java applications.
Metrics for Assessing The Design of Software Interfaces
5,656
After the introduction of the agile approach in 2001, several agile methods were founded over the following decade. Agile values such as customer collaboration, embracing change, iteration and frequent delivery, continuous integration, etc. motivate software stakeholders to use these methods in their projects. The main issue is that, in order to use these methods instead of traditional methods in software development, companies should change their approach from traditional to agile. This change is a fundamental and critical mutation. Several studies have investigated barriers, challenges, and issues in the agile transformation process, and also how to use agile methods in companies. The main issue is changing attitude from the traditional to the agile approach. We believe that before managing the agile transformation process, its related factors should be studied in depth. This study focuses on the different dimensions of changing to the agile approach from a change management perspective. These factors are how to become agile, method selection, and awareness of challenges and issues. These fundamental factors encompass many items of the agile movement and adoption process. Although these factors may vary across organizations, they should be studied in depth before any action plan for designing a change strategy. The main contribution of this paper is to introduce these factors and discuss them in depth.
Effective factors in agile transformation process from change management perspective
5,657
Agile software development methods (ASD) and open source software development methods (OSSD) are two different approaches which were introduced in the last decade, and both have their fervent advocates. Yet the relation and interface between ASD and OSSD seem to be a fertile area, and few rigorous studies have been done on this matter. The major goal of this study was to assess the relation and integration of ASD and OSSD. Analysis of the collected data shows that ASD and OSSD are able to support each other: some practices in one of them are useful in the other. Another finding is that although there are some case studies using ASD and OSSD simultaneously, there is not enough evidence of their comprehensive integration.
A Systematic Literature Review on relationship between agile methods and Open Source Software Development methodology
5,658
Understanding software design practice is critical to understanding modern information systems development. New developments in empirical software engineering, information systems design science and the interdisciplinary design literature combined with recent advances in process theory and testability have created a situation ripe for innovation. Consequently, this paper utilizes these breakthroughs to formulate a process theory of software design practice: Sensemaking-Coevolution-Implementation Theory explains how complex software systems are created by collocated software development teams in organizations. It posits that an independent agent (design team) creates a software system by alternating between three activities: organizing their perceptions about the context, mutually refining their understandings of the context and design space, and manifesting their understanding of the design space in a technological artifact. This theory development paper defines and illustrates Sensemaking-Coevolution-Implementation Theory, grounds its concepts and relationships in existing literature, conceptually evaluates the theory and situates it in the broader context of information systems development.
The Sensemaking-Coevolution-Implementation Theory of Software Design
5,659
These are the proceedings of the 10th International Workshop on Formal Engineering approaches to Software Components and Architectures (FESCA). The workshop was held on March 23, 2013 in Rome (Italy) as a satellite event to the European Joint Conference on Theory and Practice of Software (ETAPS'13). The aim of the FESCA workshop is to bring together both young and senior researchers from formal methods, software engineering, and industry interested in the development and application of formal modelling approaches as well as associated analysis and reasoning techniques with practical benefits for component-based software engineering. FESCA aims to address the open question of how formal methods can be applied effectively to these new contexts and challenges. FESCA is interested in both the development and application of formal methods in component-based development and tries to cross-fertilize their research and application.
Proceedings 10th International Workshop on Formal Engineering Approaches to Software Components and Architectures
5,660
With numerous specialised technologies available to industry, it has become increasingly frequent for computer systems to be composed of heterogeneous components built over, and using, different technologies and languages. While this enables developers to use the appropriate technologies for specific contexts, it becomes more challenging to ensure the correctness of the overall system. In this paper we propose a framework to enable extensible, technology-agnostic runtime verification, and we present an extension of polyLarva, a runtime-verification tool, that makes it able to handle the monitoring of heterogeneous-component systems. The approach is then applied to a case study of a component-based artefact using different technologies, namely C and Java.
Extensible Technology-Agnostic Runtime Verification
5,661
In this paper, we formally define Test Case Sequence Diagrams (TCSD) as an easy-to-use means of specifying test cases, including timing constraints, for components. These test cases are modeled using UML2 syntax and can be specified with standard UML modeling tools. In a component-based design, early identification of errors can be achieved by a virtual integration of components before the actual system is built. We define such a procedure, which integrates the individual test cases of the components according to the interconnections of a given architecture and checks whether all specified communication sequences are consistent. To this end, we formally define the transformation of TCSD into timed-arc Petri nets and a process for the combination of these nets. The applicability of our approach is demonstrated on an avionic use case from the ARP4761 standard.
Sequence Diagram Test Case Specification and Virtual Integration Analysis using Timed-Arc Petri Nets
5,662
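The transformation itself is not detailed in the abstract; as a rough, hypothetical illustration of the target formalism only, the sketch below models a timed-arc Petri net in which tokens carry ages and input arcs carry acceptance intervals (all names and firing-rule details are simplifications, not the paper's definitions).

```python
# Minimal timed-arc Petri net sketch: tokens carry ages, input arcs
# carry acceptance intervals. Hypothetical illustration only.
from dataclasses import dataclass, field

@dataclass
class Transition:
    inputs: dict    # input place -> (min_age, max_age) for consumed tokens
    outputs: list   # output places that receive fresh tokens (age 0)

@dataclass
class Net:
    marking: dict = field(default_factory=dict)  # place -> list of token ages

    def enabled(self, t: Transition) -> bool:
        return all(any(lo <= age <= hi for age in self.marking.get(p, []))
                   for p, (lo, hi) in t.inputs.items())

    def fire(self, t: Transition) -> None:
        assert self.enabled(t)
        for p, (lo, hi) in t.inputs.items():
            ages = self.marking[p]
            ages.remove(next(a for a in ages if lo <= a <= hi))
        for p in t.outputs:
            self.marking.setdefault(p, []).append(0.0)

    def elapse(self, d: float) -> None:
        for p in self.marking:
            self.marking[p] = [a + d for a in self.marking[p]]

n = Net({"p": [0.0]})
t = Transition(inputs={"p": (1.0, 2.0)}, outputs=["q"])
n.elapse(1.5)                 # token in p ages into the interval [1.0, 2.0]
if n.enabled(t):
    n.fire(t)
print(n.marking)              # {'p': [], 'q': [0.0]}
```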
Object-oriented (OO) design is becoming more popular in software development, and OO design metrics are an essential part of the software environment. The main goal of this paper is to predict values of the MOOD metric factors for OO designs using a statistical approach. A linear regression model is therefore used to find the relationship between the MOOD factors and their influence on OO software measurements. Through this process, predictions can be made for lines of code (LOC), number of classes (NOC), number of methods (NOM), and number of attributes (NOA). These measurements permit designers to assess the software early in the process, making changes that will reduce complexity and improve the continuing capability of the design.
Statistical Approach for Predicting Factors of MOOD Method for Object Oriented
5,663
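The paper's data set is not reproduced here; as a minimal sketch of the statistical machinery it describes, fitting a linear model from MOOD factor values to a size measure such as LOC might look as follows (all factor values below are hypothetical placeholders, not the paper's data).

```python
# Sketch: fit a linear model relating MOOD metric factors to LOC.
# The rows are hypothetical projects; columns are the six MOOD factors
# (MHF, AHF, MIF, AIF, PF, CF). Values are placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([
    [0.42, 0.71, 0.30, 0.25, 0.10, 0.05],
    [0.55, 0.64, 0.41, 0.33, 0.12, 0.08],
    [0.38, 0.80, 0.22, 0.19, 0.09, 0.04],
    [0.61, 0.58, 0.47, 0.40, 0.15, 0.11],
])
loc = np.array([12_400, 18_900, 9_700, 23_500])   # observed lines of code

model = LinearRegression().fit(X, loc)
print(model.coef_, model.intercept_)               # fitted relationship
print(model.predict([[0.50, 0.70, 0.35, 0.30, 0.11, 0.07]]))  # predicted LOC
```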
With market competition intensifying, it becomes necessary for market players to adopt a business model that can accommodate dynamic business changes. An enterprise can win in the competition only when it forms strategic alliances with upstream and downstream enterprises. This paper articulates a way of using the Unified Modeling Language (UML) to model business value chain activities, so that an enterprise can develop a dynamic, ad hoc, and agile business model. The results show that UML is useful in the development of information systems and is independent of any programming language.
Unified Modeling Language for Describing Business Value Chain Activities
5,664
This volume contains the proceedings of the Eighth Workshop on Model-Based Testing (MBT 2013), which was held on March 17, 2013 in Rome, Italy, as a satellite event of the European Joint Conferences on Theory and Practice of Software, ETAPS 2013. The workshop is devoted to model-based testing of both software and hardware. Model-based testing uses models describing the required behavior of the system under consideration to guide such efforts as test selection and test results evaluation. Testing validates the real system behavior against models and checks that the implementation conforms to them, but it is also capable of finding errors in the models themselves. The first MBT workshop was held in 2004 in Barcelona. At that time MBT had already become a hot topic, but the MBT workshop was the first event devoted mostly to this domain. Since then the area has generated enormous scientific interest, and today there are several specialized workshops and broader conferences on software and hardware design and quality assurance covering model-based testing. MBT has become one of the most powerful system analysis tools, and one of the latest cutting-edge related topics is the application of MBT to security analysis and testing. The MBT workshop tries to keep up with current trends.
Proceedings Eighth Workshop on Model-Based Testing
5,665
Around the globe, the number of older people relative to the rest of the population is constantly growing. As a result, medical and care facilities cannot handle the growing number of patients, and elderly in-home assistance is receiving more attention and importance. Due to issues with memory, physical strength, and reduced self-assessment, old people face many challenges in accomplishing their activities of daily living. This thesis addresses these problems by analysing the required infrastructure of a home-care facility as well as the arising issues regarding the components used, especially wireless sensors. After the analysis, a prototype of a home-care system is designed and implemented. Furthermore, the energy consumption of the wireless sensor node is addressed by modifying the intelligence of the sensor. The design and components of the prototype used for the energy consumption analysis are then explained, together with the programming structure of the sensor nodes used in this thesis. Thereupon, the results of the simulations are discussed and compared with the authors' expectations. Finally, the overall outcomes of the thesis are analysed and summed up, followed by a short outlook on further possible improvements and developments.
ICT System Design & Implementation Using Wireless Sensors to Support Elderly In-home Assistance
5,666
As of today, model-based testing (MBT) is considered a leading-edge technology in industry. We sketch the different MBT variants that, according to our experience, are currently applied in practice, with special emphasis on the avionic, railway, and automotive domains. The key factors for successful industrial-scale application of MBT are described, both from a scientific and a managerial point of view. With respect to the former, we describe the techniques for automated test case, test data, and test procedure generation for concurrent reactive real-time systems, which are considered the most important enablers for MBT in practice. With respect to the latter, our experience with introducing MBT approaches in testing teams is sketched. Finally, the most challenging open scientific problems whose solutions are bound to improve the acceptance and effectiveness of MBT in industry are discussed.
Industrial-Strength Model-Based Testing - State of the Art and Current Challenges
5,667
In 2012, the Specialist Task Force (STF) 442, appointed by the European Telecommunications Standards Institute (ETSI), explored the possibilities of using Model Based Testing (MBT) for test development in standardization. STF 442 performed two case studies and developed an MBT methodology for ETSI. The case studies were based on the ETSI standards for the GeoNetworking protocol (ETSI TS 102 636) and the Diameter-based Rx protocol (ETSI TS 129 214). Models were developed for parts of both standards, and four different MBT tools were employed for generating test cases from the models. The case studies were successful in the sense that all the tools were able to produce test suites having the same test adequacy as the corresponding manually developed conformance test suites. The MBT methodology developed by STF 442 is based on the experiences from the case studies. It focuses on integrating MBT into the sophisticated standardization process at ETSI. This paper summarizes the results of the STF 442 work.
Towards the Usage of MBT at ETSI
5,668
In this paper we focus on exploiting a specification, and the structures that satisfy it, to obtain a means of comparing implemented and expected behaviours and of finding the origin of faults in implementations. We present an approach to the creation of tests based on those specification-compliant structures, and to the interpretation of the tests' results, leading to the discovery of the method responsible for a test failure. Results of comparative experiments with a tool implementing this approach are presented.
Testing Java implementations of algebraic specifications
5,669
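The paper's tool and specification language are not reproduced in the abstract; the following Python sketch conveys only the general idea of deriving test oracles from algebraic axioms (here the classic stack laws), with randomly generated ground terms standing in for the specification-compliant structures.

```python
# Sketch: checking an implementation against algebraic axioms. The
# axioms below are the standard stack laws; any implementation exposing
# push/pop/top/is_empty can be plugged in.
import random

class ListStack:                      # candidate implementation under test
    def __init__(self, items=()):
        self.items = list(items)
    def push(self, x):
        return ListStack(self.items + [x])
    def pop(self):
        return ListStack(self.items[:-1])
    def top(self):
        return self.items[-1]
    def is_empty(self):
        return not self.items

def check_axioms(make_stack, trials=1000):
    for _ in range(trials):
        s = make_stack(random.choices(range(100), k=random.randrange(5)))
        x = random.randrange(100)
        assert s.push(x).top() == x              # top(push(s, x)) = x
        assert s.push(x).pop().items == s.items  # pop(push(s, x)) = s
        assert not s.push(x).is_empty()          # push never yields empty
    print("all axioms held on", trials, "random cases")

check_axioms(ListStack)
```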
Runtime verification is checking whether a system execution satisfies or violates a given correctness property. A procedure that automatically, and typically on the fly, verifies conformance of the system's behavior to the specified property is called a monitor. Nowadays, a variety of formalisms are used to express properties of the observed behavior of computer systems, and many methods have been proposed to construct monitors. However, it is common that advanced formalisms and methods are not needed because an executable model of the system is available. The original purpose and structure of the model are unimportant; what is required is that the system and its model have similar sets of interfaces. In this case, monitoring is carried out as follows. Two "black boxes", the system and its reference model, are executed in parallel and stimulated with the same input sequences; the monitor dynamically captures their output traces and tries to match them. The main problem is that a model is usually more abstract than the real system, both in terms of functionality and timing. Therefore, trace-to-trace matching is not straightforward and should allow the system to produce events in a different order or even miss some of them. The paper studies on-the-fly conformance relations for timed systems (i.e., systems whose inputs and outputs are distributed along the time axis). It also suggests a practice-oriented methodology for creating and configuring monitors for timed systems based on executable models. The methodology has been successfully applied to a number of industrial projects in simulation-based hardware verification.
Runtime Verification Based on Executable Models: On-the-Fly Matching of Timed Traces
5,670
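The concrete conformance relations are not given in the abstract; the following is a simplified, hypothetical sketch of the matching loop described above, tolerating bounded timing skew and treating some reference events as optional. The tolerance parameter and event format are assumptions, not the paper's definitions.

```python
# Sketch: match a concrete output trace against a more abstract
# reference trace on the fly. Reordering within TIME_SLACK is tolerated,
# and reference events declared optional may be missing altogether.
from collections import deque

TIME_SLACK = 0.5    # hypothetical tolerance, in seconds

def matches(system_trace, reference_trace, optional=frozenset()):
    pending = deque(reference_trace)        # (name, expected_time) pairs
    for name, t in system_trace:
        for ref in list(pending):           # find a reference event whose
            ref_name, ref_t = ref           # window covers this event
            if ref_name == name and abs(t - ref_t) <= TIME_SLACK:
                pending.remove(ref)         # matched, consume it
                break
        else:
            return False                    # unexpected system event
    # leftover reference events are fine only if declared optional
    return all(name in optional for name, _ in pending)

sys_trace = [("ack", 0.1), ("data", 0.9)]
ref_trace = [("data", 1.0), ("ack", 0.0), ("log", 1.2)]
print(matches(sys_trace, ref_trace, optional={"log"}))   # True
```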
Systems tend to become more and more complex. This has a direct impact on system engineering processes. Two of the most important phases in these processes are requirements engineering and quality assurance. Two significant complexity drivers located in these phases are the growing number of product variants that have to be integrated into requirements engineering and the ever-growing effort for manual test design. There are modeling techniques to deal with both complexity drivers, e.g., feature modeling and model-based test design. Their combination, however, has seldom been the focus of investigation. In this paper, we present two approaches that combine feature modeling and model-based testing as an efficient quality assurance technique for product lines. We present the corresponding difficulties and approaches to overcome them. All explanations are supported by an example of an online-shop product line.
Top-Down and Bottom-Up Approach for Model-Based Testing of Product Lines
5,671
The agile approach uses continuous delivery, instead of distinct phases, to work more closely with customers and to respond faster to requirement changes; all of this contrasts with the traditional plan-driven approach. Due to the agile method's characteristics and its success in real-world practice, a number of discussions regarding the differences between agile and traditional approaches have emerged recently, and many studies have attempted to integrate both methods to synthesize the benefits of the two sides. However, this type of research often draws conclusions from observations of a development activity or from surveys after a project. To provide more objective supporting evidence for comparing these two approaches, our research analyzes source code, logs, and notes. We argue that the agile and traditional approaches share common characteristics, which can be considered the glue for integrating both methods. In our study, we collect all the submissions from the version control repository, along with meeting notes and discussions. By applying our suggested analysis method, we show the shared properties of the agile and traditional approaches; thus, different development phases, such as implementation and test, can still be identified in an agile development history. This result not only provides a positive answer to our hypothesis but also offers a suggestion for a better integration.
Toward the Integration of Traditional and Agile Approaches
5,672
While organizations want to develop software products with reduced cost and flexible scope, accounts of the applicability of agile practices to improve project development and performance in the software industry are scarce and focused on specific methodologies such as Scrum and XP. Given these facts, this paper investigates, through practitioners' perceptions of value, which agile practices are being used to improve two performance criteria for software projects: cost and scope. Using a multivariate statistical technique known as Exploratory Factor Analysis (EFA), the results suggest that the use of agile practices can be represented by factors which describe different applications in the software development process to improve cost and scope. We also conclude that some agile practices should be used together in order to achieve better efficiency on cost and scope in four development aspects: improving (a) team abilities, (b) management of requirements, (c) quality of the code developed, and (d) delivery of software on-budget and on-time.
Improving the management of cost and scope in software projects using agile practices
5,673
Artifact-centric modeling is a promising approach for modeling business processes based on the so-called business artifacts - key entities driving the company's operations and whose lifecycles define the overall business process. While artifact-centric modeling shows significant advantages, the overwhelming majority of existing process mining methods cannot be applied (directly) as they are tailored to discover monolithic process models. This paper addresses the problem by proposing a chain of methods that can be applied to discover artifact lifecycle models in Guard-Stage-Milestone notation. We decompose the problem in such a way that a wide range of existing (non-artifact-centric) process discovery and analysis methods can be reused in a flexible manner. The methods presented in this paper are implemented as software plug-ins for ProM, a generic open-source framework and architecture for implementing process mining tools.
Artifact Lifecycle Discovery
5,674
Using data from a web-based survey of software developers, the author attempts to determine root causes of "death march" projects and excessive work hours in the software industry in relation to company practices and management. Special emphasis is placed on the factor of business versus technical supervisor background. An analysis of variance revealed significant differences between these supervisor groups with regard to a "Pointy-Haired Boss" (PHB) sentiment index. This difference, combined with correlations between the PHB index and the endpoints of project failure and use of software engineering practices, indicates some disparity in the suitability of business-background supervisors to manage software development projects compared with their technical-background counterparts. Other survey data point to improved project management skills as the biggest necessity for supervisors in the business-background group.
Work Issues in Software Engineering
5,675
Mutation analysis evaluates test suites and testing techniques by measuring how well they detect seeded defects (mutants). Even though well established in research, mutation analysis is rarely used in practice due to scalability problems: there are multiple mutations per code statement, leading to a large number of mutants, and hence executions of the test suite. In addition, the use of mutation to improve test suites is futile for mutants that are equivalent, which means that there exists no test case that distinguishes them from the original program. This paper introduces two optimizations based on state infection conditions, i.e., conditions that determine for a test execution whether the same execution on a mutant would lead to a different state. First, redundant test executions can be avoided by monitoring state infection conditions, leading to an overall performance improvement. Second, state infection conditions can aid in identifying equivalent mutants, thus guiding efforts to improve test suites.
Using State Infection Conditions to Detect Equivalent Mutants and Speed up Mutation Analysis
5,676
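As a rough sketch of the first optimization, the original program can be instrumented so that each test execution records whether a mutant's infection condition ever fires; mutant executions whose condition never fired can be skipped. The mutant and its condition below are hypothetical, not taken from the paper.

```python
# Sketch: avoid redundant mutant executions by monitoring state
# infection conditions on the ORIGINAL program. Hypothetical mutant m1
# replaces "a + b" with "a - b"; its infection condition is b != 0
# (only then do the two expressions produce different states).
fired = {"m1": False}

def add(a, b):
    if b != 0:                    # instrumentation: infection condition of m1
        fired["m1"] = True
    return a + b

def run_test_suite():
    fired["m1"] = False
    assert add(2, 0) == 2         # these tests never set b != 0,
    assert add(5, 0) == 5         # so they can never kill m1

run_test_suite()
if not fired["m1"]:
    print("skip executing mutant m1: no test infected the state")
```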
Computer-based control systems have grown in size, complexity, distribution, and criticality. In this paper, a methodology is presented to perform abstract testing of such large control systems in an efficient way: an abstract test is specified directly from the system functional requirements and has to be instantiated into multiple test runs to cover a specific configuration, comprising any number of control entities (sensors, actuators, and logic processes). Such a process is usually performed by hand for each installation of the control system, requiring considerable time and being an error-prone verification activity. To automate a safe passage from abstract tests, related to the so-called generic software application, to any specific installation, an algorithm is provided, starting from a reference architecture and a state-based behavioural model of the control software. The presented approach has been applied to a railway interlocking system, demonstrating its feasibility and effectiveness over several years of testing experience.
Automatic instantiation of abstract tests on specific configurations for large critical control systems
5,677
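The paper's instantiation algorithm is not given in the abstract; the hypothetical sketch below conveys the idea only: an abstract test written over roles (sensor, logic, actuator) is expanded into one concrete test run per matching tuple of entities in a specific configuration. All names and steps are invented for illustration.

```python
# Sketch: instantiate an abstract test (written over roles) on every
# matching combination of entities in a concrete configuration.
from itertools import product

configuration = {
    "sensor":   ["S1", "S2", "S3"],
    "logic":    ["L1"],
    "actuator": ["A1", "A2"],
}

abstract_test = {
    "name": "failsafe-on-sensor-loss",
    "roles": ["sensor", "logic", "actuator"],
    "steps": ["disconnect {sensor}",
              "check {logic} raises alarm",
              "check {actuator} moves to safe position"],
}

def instantiate(test, config):
    pools = [config[r] for r in test["roles"]]
    for binding in product(*pools):                 # one run per tuple
        env = dict(zip(test["roles"], binding))
        yield [step.format(**env) for step in test["steps"]]

for run in instantiate(abstract_test, configuration):
    print(run)                                      # 3 * 1 * 2 = 6 runs
```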
The development of a Cooperative Information System (CIS) is becoming more and more complex, and new challenges arise in managing this complexity. The aspect paradigm is regarded as a promising software development technique that can reduce the complexity and cost of developing large software systems. This opportunity can be used to develop a CIS able to support the interconnection of organizations' information systems, in order to ensure a common global service and to keep up with the tempo of change in the business world, which is increasing exponentially. We previously proposed an approach named AspeCiS (an Aspect-oriented Approach to Develop a Cooperative Information System) to develop a CIS from existing information systems by using their artifacts, such as existing requirements and designs. In that work we studied how to elicit CIS requirements, called Cooperative Requirements in AspeCiS. In this paper, we propose a weaving process that defines these Cooperative Requirements by reusing existing requirements together with new aspectual requirements that modify them for reuse.
A weaving process to define requirements for Cooperative Information System
5,678
IEC 61131 has been widely accepted in the industrial automation domain. However, it is claimed that the standard does not address today the new requirements of complex industrial systems, which include, among others, portability, interoperability, increased reusability, and distribution. To address these restrictions, IEC has initiated the task of developing IEC 61499, which is presented as a mature technology to enable intelligent automation in various domains. This standard has not been accepted by industry even though it is highly promoted by the academic community. In this paper, a comparison between the two standards is presented. We argue that IEC 61499 has been promoted by academia based on unsubstantiated claims about its main features, i.e., reusability, portability, interoperability, and event-driven execution. A number of misperceptions are presented and discussed. Based on this, it is claimed that IEC 61499 does not provide a solid framework for the next generation of industrial automation systems.
IEC 61499 vs. 61131: A Comparison Based on Misperceptions
5,679
Dedicated software search engines that index open source software repositories or in-house software assets significantly enhance the chance of finding software components suitable for reuse. However, they still leave the work of evaluating and testing components to the developer. To significantly change the risk-cost-benefit tradeoff involved in software reuse, search engines need to be supported by user friendly environments that deliver code search functionality non-intrusively right to developers' fingertips.
Lowering the Barrier to Reuse through Test-Driven Search
5,680
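A minimal sketch of the test-driven search idea described above: a unit test doubles as the query, and each candidate retrieved from a search backend is executed against it, so only passing components are surfaced. The backend here is a stubbed placeholder, not a real engine API.

```python
# Sketch: test-driven search. A unit test acts as the query; candidate
# components fetched from some search backend are filtered by actually
# running the test against them. fetch_candidates() is a placeholder.
import codecs

def fetch_candidates(signature):
    # placeholder: a real engine would return code matching the desired
    # interface, e.g. "rot13(text: str) -> str"
    def candidate_a(text):
        return text[::-1]                         # plausible but wrong
    def candidate_b(text):
        return codecs.encode(text, "rot13")       # behaves as required
    return [candidate_a, candidate_b]

def acceptance_test(component):
    return component("hello") == "uryyb"

suitable = [c for c in fetch_candidates("rot13(text) -> str")
            if acceptance_test(c)]
print([c.__name__ for c in suitable])             # ['candidate_b']
```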
In this paper, we present BIMS (Biomedical Information Management System). BIMS is a software architecture designed to provide a flexible computational framework to manage the information needs of a wide range of biomedical research projects. The main goal is to facilitate clinicians' work in data entry and researchers' tasks in data management in high-data-quality biomedical research projects. The BIMS architecture has been designed following the two-level modeling paradigm, a promising methodology for modeling rich and dynamic information environments. In addition, a functional implementation of the BIMS architecture has been developed as a web-based application. The result is a highly flexible web application which allows modeling and managing large amounts of heterogeneous biomedical data sets, covering both textual and visual (medical image) information.
BIMS: Biomedical Information Management System
5,681
Service discovery is one of the key problems that has been widely researched in the area of Service Oriented Architecture (SOA) based systems. Service category learning is a technique for efficiently facilitating service discovery. Most approaches for service category learning are based on suitable similarity distance measures using thresholds. Threshold selection is essentially difficult and often leads to unsatisfactory accuracy. In this paper, we have proposed a self-organizing based clustering algorithm called Semantic Taxonomical Clustering (STC) for taxonomically organizing services with self-organizing information and knowledge. We have tested the STC algorithm on both randomly generated data and the standard OWL-S TC dataset. We have observed promising results both in terms of classification accuracy and runtime performance compared to existing approaches.
STC: Semantic Taxonomical Clustering for Service Category Learning
5,682
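The STC algorithm itself is not reproduced in the abstract; as a loose illustration of self-organizing category learning only, the sketch below trains prototype vectors over hypothetical service embeddings, so that categories emerge from competitive updates rather than from a hand-tuned similarity threshold. This is a generic winner-take-all scheme, not the paper's method.

```python
# Illustrative sketch only (NOT the STC algorithm from the paper):
# self-organizing prototype learning for service categories. Service
# descriptions are assumed to be embedded as vectors; each observation
# pulls its best-matching prototype toward it.
import numpy as np

rng = np.random.default_rng(0)

def learn_categories(vectors, n_categories=3, lr=0.3, epochs=20):
    protos = rng.normal(size=(n_categories, vectors.shape[1]))
    for _ in range(epochs):
        for v in vectors:
            best = np.argmin(np.linalg.norm(protos - v, axis=1))
            protos[best] += lr * (v - protos[best])   # pull winner toward v
    return protos

# hypothetical 2-D embeddings of service descriptions
services = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9],
                     [0.2, 0.8], [0.5, 0.5]])
protos = learn_categories(services)
# category index assigned to each service
print(np.argmin(np.linalg.norm(protos[:, None] - services, axis=2), axis=0))
```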
The dominant view of design in information systems and software engineering, the Rational Design Paradigm, views software development as a methodical, plan-centered, approximately rational process of optimizing a design candidate for known constraints and objectives. This paper synthesizes an Alternative Design Paradigm, which views software development as an amethodical, improvisational, emotional process of simultaneously framing the problem and building artifacts to address it. These conflicting paradigms are manifestations of a deeper philosophical conflict between rationalism and empiricism. The paper clarifies the nature, components and assumptions of each paradigm and explores the implications of the paradigmatic conflict for research, practice and education.
The Two Paradigms of Software Design
5,683
Although there have been many developments aimed at improving Quality of Service (QoS) and requirements engineering in web services, there is still a significant lack of related standardization in day-to-day practice, leaving broad needs in this area. Moreover, in the web service environment, raising the standard of QoS in requirements engineering analysis has always been a major challenge.
High Quality Requirement Engineering and Applying Priority Based Tools for QoS Standardization in Web Service Architecture
5,684
A comprehensive verification of parallel software imposes three crucial requirements on the procedure that implements it. Apart from accepting real code as program input and temporal formulae as specification input, the verification should be exhaustive, with respect to both control and data flows. This paper is concerned with the third requirement, proposing to combine explicit model checking to handle the control with symbolic set representations to handle the data. The combination of explicit and symbolic approaches is first investigated theoretically and we report the requirements on the symbolic representation and the changes to the model checking process the combination entails. The feasibility and efficiency of the combination is demonstrated on a case study using the DVE modelling language and we report a marked improvement in scalability compared to previous solutions. The results described in this paper show the potential to meet all three requirements for automatic verification in a single procedure combining explicit model checking with symbolic set representations.
Control Explicit---Data Symbolic Model Checking: An Introduction
5,685
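A toy sketch of the combination: control locations are explored explicitly while the data part of each state is a set of valuations handled en bloc (a Python frozenset stands in for a proper symbolic representation). The program and property below are hypothetical, and the DVE setting of the paper is not reproduced.

```python
# Toy sketch of control-explicit / data-symbolic reachability: explicit
# control locations paired with SETS of data valuations.
from collections import deque

# program over one variable x: edges are (src, guard, update, dst)
edges = [
    ("L0", lambda x: x < 5,  lambda x: x + 1, "L0"),     # loop body: x++
    ("L0", lambda x: x >= 5, lambda x: x,     "L1"),     # loop exit
    ("L1", lambda x: x == 7, lambda x: x,     "ERROR"),  # error guard
]

def reachable(init_loc="L0", init_vals=frozenset(range(4))):
    seen = {}
    queue = deque([(init_loc, init_vals)])
    while queue:
        loc, vals = queue.popleft()
        if vals <= seen.get(loc, frozenset()):
            continue                        # nothing new at this location
        seen[loc] = seen.get(loc, frozenset()) | vals
        for src, guard, update, dst in edges:
            if src != loc:
                continue
            succ = frozenset(update(x) for x in vals if guard(x))
            if succ:
                queue.append((dst, succ))
    return seen

# ERROR stays unreachable: the loop can only exit with x == 5, never 7
print(reachable())
```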
It is widely accepted that understanding system requirements is important for software development project success. However, this paper presents two novel challenges to the requirements concept. First, where many plausible approaches to achieving a goal are evident, there may be insufficient overlap between approaches to form requirements. Second, while all plausible approaches may have sufficient overlap to state requirements, we cannot know that unless all approaches are identified and we are sure that none have been missed. This suggests that many, if not most, software projects may have too few requirements to drive the design process, and that analysts may misrepresent design decisions as requirements to compensate.
The Illusion of Requirements in Software Development
5,686
Cognitive complexity measures quantify the human difficulty of understanding source code, based on foundations from cognitive informatics. The discipline derives cognitive complexity from fundamental software factors, i.e., inputs, outputs, and internal processing architecture. We propose an approach that integrates granular computing into a new measure called the Structured Cognitive Information Measure (SCIM). The proposed measure unifies and re-organizes complexity factors analogously to the human cognitive process. However, depending on the software methodology and the scope of variables, the Information Complexity Number (ICN) of a variable depends on changes in its value, and cognitive complexity can be measured in several ways. In this paper, we define the Scope Information Complexity Number (SICN) and present a cognitive complexity measure based on the functional decomposition of software, including theoretical validation against the nine Weyuker properties.
Software Cognitive Information Measure based on Relation Between Structures
5,687
Context: Software Engineering research makes use of collections of software artifacts (corpora) to derive empirical evidence from. Goal: To improve quality and reproducibility of research, we need to understand the characteristics of used corpora. Method: For that, we perform a literature survey using grounded theory. We analyze the latest proceedings of seven relevant conferences. Results: While almost all papers use corpora of some kind with the common case of collections of source code of open-source Java projects, there are no frequently used projects or corpora across all the papers. For some conferences we can detect recurrences. We discover several forms of requirements and applied tunings for corpora which indicate more specific needs of research efforts. Conclusion: Our survey feeds into a quantitative basis for discussing the current state of empirical research in software engineering, thereby enabling ultimately improvement of research quality specifically in terms of use (and reuse) of empirical evidence.
A Literature Survey on Empirical Evidence in Software Engineering
5,688
Nowadays, with the emergence and evolution of new technologies such as e-business, a large number of companies are connected to the Internet and offer web services for trade. Web services, as currently conceived, are components limited to relatively simple functionalities. Generally, a single service does not satisfy users' needs, which are increasingly complex. Therefore, services must be composable so as to offer added-value services. In this paper, a web services composition approach modelled by object-oriented Petri nets is presented. In this context, an expressive algebra which addresses the complex web service composition problem is proposed. A Java tool that automates this approach, based on the defined algebra and a G-nets meta-model that we propose, has been developed.
Web Services Modeling and Composition Approach using Object-Oriented Petri Nets
5,689
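Neither the algebra nor the G-nets meta-model is detailed in the abstract; purely as a hypothetical flavour of what a composition algebra offers, services might be combined with sequence, choice, and parallel operators along these lines (operator names and semantics are invented for illustration).

```python
# Hypothetical flavour of a service composition algebra (not the
# paper's operators): sequence (>>), choice (|), and a fork-join
# parallel combinator over callable services.
from concurrent.futures import ThreadPoolExecutor

class Service:
    def __init__(self, fn):
        self.fn = fn
    def __call__(self, x):
        return self.fn(x)
    def __rshift__(self, other):            # sequence: self then other
        return Service(lambda x: other(self(x)))
    def __or__(self, other):                # choice: first that succeeds
        def run(x):
            try:
                return self(x)
            except Exception:
                return other(x)
        return Service(run)

def parallel(*services):                    # fork-join composition
    def run(x):
        with ThreadPoolExecutor() as pool:
            return tuple(f.result()
                         for f in [pool.submit(s, x) for s in services])
    return Service(run)

geocode = Service(lambda city: {"city": city, "lat": 48.8, "lon": 2.3})
weather = Service(lambda loc: f"forecast for {loc['city']}")
traffic = Service(lambda loc: f"traffic around {loc['city']}")

composite = geocode >> parallel(weather, traffic)
print(composite("Paris"))
```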
Motivated by context-aware computing, and more particularly by the data-driven process adaptation approach, we propose the Semantic Context Space (SCS) Engine, which aims to facilitate the provision of adaptable business processes. The SCS Engine provides a space which stores semantically annotated data and is open to other processes, systems, and external sources for information exchange. The implementation is inspired by the Semantic TupleSpace and uses the JavaSpace service of the Jini framework (lately renamed Apache River) as an underlying basis. The SCS Engine supplies an interface where a client can execute the following operations: (i) write, which inserts into the space available information along with its respective meta-information; (ii) read, which retrieves from the space information that meets specific meta-information constraints; and (iii) take, which retrieves and simultaneously deletes from the space information that meets specific meta-information constraints. In this thesis, the available types of meta-information are based on ontologies described in RDFS or WSML. The applicability of the SCS Engine implementation in the context of data-driven process adaptation has been validated by an experimental evaluation of the provided operations. Finally, we discuss open issues which could be addressed to enrich the proposed engine with additional features.
Sharing of Semantically Enhanced Information for the Adaptive Execution of Business Processes
5,690
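A minimal in-memory sketch of the three operations the SCS Engine exposes, with meta-information reduced to flat key-value constraints; the real engine matches against RDFS/WSML ontologies and builds on the JavaSpace service, none of which is modelled here.

```python
# Minimal in-memory sketch of the SCS Engine interface: write/read/take
# with meta-information reduced to key-value constraints.
import threading

class SemanticSpace:
    def __init__(self):
        self._entries = []                 # (data, meta) pairs
        self._lock = threading.Lock()

    def write(self, data, meta):
        with self._lock:
            self._entries.append((data, dict(meta)))

    def _match(self, meta, constraints):
        return all(meta.get(k) == v for k, v in constraints.items())

    def read(self, **constraints):         # non-destructive retrieval
        with self._lock:
            return [d for d, m in self._entries
                    if self._match(m, constraints)]

    def take(self, **constraints):         # destructive retrieval
        with self._lock:
            hits = [(d, m) for d, m in self._entries
                    if self._match(m, constraints)]
            self._entries = [e for e in self._entries if e not in hits]
            return [d for d, _ in hits]

space = SemanticSpace()
space.write({"order": 42}, {"type": "Order", "status": "open"})
print(space.read(type="Order"))            # [{'order': 42}]
print(space.take(status="open"))           # removes and returns it
```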
This paper presents deductive programming for scheduling scenario generation. Modeling towards a solution is achieved through program transformations. First, a declarative model of the scheduling problem domain is introduced. The model is then interpreted both as a scheduling domain language and as a predicate transition Petri net. The generated reachability tree represents the search space containing the solutions. Finally, the results are discussed and analyzed.
From Declarative Model to Solution: Scheduling Scenario Synthesis
5,691
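The declarative model and its Petri-net interpretation are not given in the abstract; the toy sketch below merely enumerates a reachability tree over scheduling states, where each node is a partial schedule and complete, conflict-free leaves are the solutions. Jobs, durations, and the single-machine setting are invented for illustration.

```python
# Toy sketch: enumerate the reachability tree of scheduling states.
# A state is a tuple of (job, start) assignments on one machine;
# complete, non-overlapping leaves are candidate schedules.
jobs = {"A": 3, "B": 2, "C": 1}            # job -> duration
HORIZON = 6

def successors(state):
    placed = dict(state)
    busy = {t for j, s in state for t in range(s, s + jobs[j])}
    for j in jobs:
        if j in placed:
            continue
        for s in range(HORIZON - jobs[j] + 1):
            if not busy & set(range(s, s + jobs[j])):
                yield state + ((j, s),)

def reachability_tree(state=()):
    kids = list(successors(state))
    if not kids and len(state) == len(jobs):
        yield state                         # complete, feasible schedule
    for k in kids:
        yield from reachability_tree(k)

# deduplicate schedules reached via different insertion orders
solutions = set(frozenset(s) for s in reachability_tree())
print(len(solutions), "distinct feasible schedules")
```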
The dissertation provides a comparative analysis of a number of variability tools currently in use and serves as a catalogue for practitioners interested in the topic. We compare a range of modelling, configuration, and management tools for product line engineering. The surveyed tools are compared against the following criteria: functional aspects, non-functional aspects, governance issues, and technical aspects. The outcome of the analysis is provided in tabular format.
Comparative Study and Analysis of Variability Tools
5,692
Software reliability analysis is performed at various stages during the process of engineering software as an attempt to evaluate if the software reliability requirements have been (or might be) met. In this report, I present a summary of some fundamental black-box and white-box software reliability models. I also present some general shortcomings of these models and suggest avenues for further research.
A Survey of Software Reliability Models
5,693
Products with new features need to be introduced to the market at a rapid pace, and organizations need to speed up their development processes. The ordinary way of developing products, one at a time, is not time-efficient enough and is costly. Reuse has been suggested as a solution, but to achieve effective reuse within an organization, a planned and proactive effort is required. Product lines are the most promising technique: they increase productivity and software quality and decrease time-to-market. This paper describes the architecture of the product line engineering process, addresses the design issues of a product line architecture, and shows what a UML profile for a product line looks like, by referring to the basic aspects of a case study: CelsiusTech's Naval Product Line, SS2000.
Product line Development Architectural Model
5,694
What factors impact the comprehensibility of code? Previous research suggests that expectation-congruent programs should take less time to understand and be less prone to errors. We present an experiment in which participants with programming experience predict the exact output of ten small Python programs. We use subtle differences between program versions to demonstrate that seemingly insignificant notational changes can have profound effects on correctness and response times. Our results show that experience increases performance in most cases, but may hurt performance significantly when underlying assumptions about related code statements are violated.
What Makes Code Hard to Understand?
5,695
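The paper's actual stimuli are not reproduced here; the hypothetical pair below is in the same spirit, showing how a one-character notational change (aliasing versus copying a list) flips the correct answer when predicting a program's exact output.

```python
# Hypothetical stimuli in the spirit of the experiment: two versions
# differing in one character, yet with different outputs.

# Version 1: "added" aliases "base"; the append mutates both names.
base = [1, 2]
added = base
added.append(3)
print(base)      # [1, 2, 3]

# Version 2: slicing creates a copy, so "base" is untouched.
base = [1, 2]
added = base[:]
added.append(3)
print(base)      # [1, 2]
```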
Automated testing improves the efficiency of testing practice at various levels of projects in an organization. Unfortunately, we do not have a common architecture or common standards for designing frameworks across different test levels, projects, and test tools that can assist developers, testers, and business analysts. To address this problem, in this paper I first propose a reference model and then a design architecture, based on the proposed model, for designing data-driven automation frameworks. The reference model is the K model, which can be used for modeling any data-driven automation framework. The design architecture based on this model is Snow Leopard.
K model for designing Data Driven Test Automation Frameworks and its Design Architecture Snow Leopard
5,696
High-level scripting languages are in many ways polar opposites to GPUs. GPUs are highly parallel, subject to hardware subtleties, and designed for maximum throughput, and they offer a tremendous advance in the performance achievable for a significant number of computational problems. On the other hand, scripting languages such as Python favor ease of use over computational speed and do not generally emphasize parallelism. PyCUDA is a package that attempts to join the two together. This chapter argues that in doing so, a programming environment is created that is greater than just the sum of its two parts. We would like to note that nearly all of this chapter applies in unmodified form to PyOpenCL, a sister project of PyCUDA, whose goal it is to realize the same concepts as PyCUDA for OpenCL.
GPU Scripting and Code Generation with PyCUDA
5,697
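The pattern the chapter builds on can be seen in the canonical introductory PyCUDA example (adapted from the multiply_them example in the PyCUDA documentation): a CUDA C kernel is compiled at runtime from a Python string and launched directly on NumPy arrays.

```python
# Compile a CUDA C kernel at runtime from Python and launch it on
# NumPy arrays (the classic introductory PyCUDA pattern).
import numpy as np
import pycuda.autoinit                 # creates a context on the default GPU
import pycuda.driver as drv
from pycuda.compiler import SourceModule

mod = SourceModule("""
__global__ void multiply_them(float *dest, float *a, float *b)
{
    const int i = threadIdx.x;
    dest[i] = a[i] * b[i];
}
""")
multiply_them = mod.get_function("multiply_them")

a = np.random.randn(400).astype(np.float32)
b = np.random.randn(400).astype(np.float32)
dest = np.zeros_like(a)

# drv.Out / drv.In handle the host<->device transfers automatically
multiply_them(drv.Out(dest), drv.In(a), drv.In(b),
              block=(400, 1, 1), grid=(1, 1))

assert np.allclose(dest, a * b)
```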
SESAR is intended to boost the development of new operational procedures together with the supporting systems in order to modernize pan-European air traffic management (ATM). One consequence of this development is that more and more information is presented to, and has to be processed by, air traffic control officers (ATCOs). Thus, there is a strong need for a software design concept that fosters the development of an advanced (tower) controller working position (A-CWP) that comprehensively integrates the ever-growing amount of information while reducing the data management workload of ATCOs. We report on our first hands-on experiences obtained during the development of an A-CWP prototype that was used in two SESAR validation sessions.
Software Design Principles of a DFS Tower A-CWP Prototype
5,698
This document gathers high-level user requirements and describes the system features. It provides a detailed explanation of the main functionalities of the system, with particular emphasis on stakeholders' needs and wants. The document also covers design constraints that may restrict various aspects of the design and implementation.
Toward Recovering Complete SRS for Softbody Simulation System and a Sample Application - a Team 4 SOEN6481-W13 Project Report
5,699