| text | source | __index_level_0__ |
|---|---|---|
Service-oriented High Level Architecture (SOHLA) refers to the High Level Architecture (HLA) enabled by techniques such as Service-Oriented Architecture (SOA) and Web Services, which support distributed interoperating services. Detailed comparisons between HLA and SOA are made to illustrate the importance of their combination. Several key enhancements and changes of the HLA Evolved Web Service API are then introduced in comparison with the native APIs, covering the Federation Development and Execution Process, communication mechanisms, data encoding, session handling, the testing environment and performance analysis. Approaches to Web-enabling HLA at the communication layer, the HLA interface specification layer, the federate interface layer and the application layer are summarized. Finally, the problems of current research are discussed and future directions are pointed out. | Service-oriented high level architecture | 5,100 |
This paper describes the use of the Levels of Conceptual Interoperability Model (LCIM) as a framework for conceptual modeling and its descriptive and prescriptive uses. LCIM is applied to show the potential and shortcomings of current simulation interoperability approaches, in particular the High Level Architecture (HLA) and Base Object Models (BOM). It emphasizes the need to apply rigorous engineering methods and principles and to replace ad-hoc approaches. | The Levels of Conceptual Interoperability Model: Applying Systems Engineering Principles to M&S | 5,101 |
In previous work we described how the process algebra based language PSF can be used in software engineering, using the ToolBus, a coordination architecture also based on process algebra, as the implementation model. We also described this software development process more formally by presenting the tools we use in this process in a CASE setting, leading to the PSF-ToolBus software engineering environment. In this article we summarize that work and describe a similar software development process for the implementation of software systems using a client/server model, presenting it in a CASE setting as well. | Software Engineering with Process Algebra: Modelling Client / Server Architectures | 5,102 |
Since the preliminary stage of software engineering, the selection and enforcement of appropriate standards has remained a challenge for stakeholders throughout the software development cycle, yet it can reduce the effort required in the maintenance phase. Corrective maintenance is the reactive modification of a software product performed after delivery to correct discovered faults. Studies conducted by different researchers reveal that approximately 50 to 75 percent of the effort is spent on maintenance, out of which about 17 to 21 percent goes to corrective maintenance. In this paper, the authors propose an RCM (Reduce Corrective Maintenance) model which represents the implementation of a set of checklists to guide the stakeholders of all phases of software development. These checklists are filled in by the corresponding stakeholder of each phase before it starts. Precise usage of the checklist in the relevant phase ensures successful enforcement of analysis, design, coding and testing standards, reducing errors in the operation stage. Moreover, the authors present the step-by-step integration of the checklists into the software development life cycle through the RCM model. | A Step towards Software Corrective Maintenance Using RCM model | 5,103 |
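To put the quoted figures in perspective, combining the two ranges above bounds the share of *total* project effort consumed by corrective maintenance (a back-of-the-envelope calculation using the abstract's own numbers):

$$0.50 \times 0.17 \;\le\; \text{corrective share of total effort} \;\le\; 0.75 \times 0.21, \quad \text{i.e. roughly } 8.5\% \text{ to } 15.75\%.$$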
This paper is the longer version of the extended abstract with the same name published in FM'06. We describe in detail the algorithm that generates verification conditions from statechart structures, as implemented in the iState tool. This approach also suggests a novel method for defining a version of predicate semantics for statecharts, analogous to how predicate semantics is assigned to programming languages. | Statechart Verification with iState | 5,104 |
To improve the agility, dynamics, composability, reusability, and development efficiency restricted by the monolithic Federation Object Model (FOM), a modular FOM was proposed by the High Level Architecture (HLA) Evolved product development group. This paper reviews the state of the art of the HLA Evolved modular FOM. In particular, related concepts, the overall impact on the HLA standards, extension principles, and merging processes are discussed. Permitted and restricted combinations and merging rules are also provided, and the influence on the HLA interface specification is given. A comparison between the modular FOM and the Base Object Model (BOM) is performed to illustrate the importance of their combination. The applications of the modular FOM are summarized. Finally, the significance of the modular FOM for facilitating composable simulation in both academia and practice is presented, and future directions are pointed out. | High level architecture evolved modular federation object model | 5,105 |
The requirements of current applications (mostly component based) cannot be expressed without a ubiquitous and mobile part, for end-users as well as for M2M (Machine to Machine) applications. Such an evolution implies context management in order to evaluate the consequences of mobility, and corresponding mechanisms to adapt or be adapted to the new environment. Applications are then qualified as context-aware applications. The first part of this paper presents an overview of context and its management through application adaptation. It starts with a definition and proposes a model of context. It also presents various techniques for adapting applications to the context, from self-adaptation to supervised approaches. The second part is an overview of architectures for adaptable applications. It focuses on platform-based solutions and shows the information flows between application, platform and context. Finally, it synthesizes these ideas in a platform for adaptable context-aware applications called Kalimucho. We then present implementation tools for software components and a dataflow model used to implement the Kalimucho platform. | Context Aware Adaptable Applications - A global approach | 5,106 |
The advent of the Java Card standard has been a major turning point in smart card technology. With the growing acceptance of this standard, understanding the performance behavior of these platforms is becoming crucial. To meet this need, we present in this paper a novel benchmarking framework to test and evaluate the performance of Java Card platforms. The MESURE tool is the first framework whose accuracy and effectiveness are independent of the particular Java Card platform tested and the CAD (Card Acceptance Device) used. | MESURE Tool to benchmark Java Card platforms | 5,107 |
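As an illustration of the kind of measurement such a benchmark must perform (a generic sketch, not MESURE's actual API), a common technique is to time a repeated operation and subtract an empty-loop baseline to isolate the operation's own cost:

```java
// Minimal benchmarking sketch: measures the average cost of an operation
// by subtracting an empty-loop baseline from a measured loop.
public final class MicroBench {
    // 'op' stands for any card command to be measured (hypothetical hook).
    public static double averageNanos(Runnable op, int iterations) {
        long baseline = time(() -> { }, iterations);   // loop overhead only
        long measured = time(op, iterations);          // loop + operation
        return (double) (measured - baseline) / iterations;
    }

    private static long time(Runnable body, int iterations) {
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            body.run();
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        double avg = averageNanos(() -> Math.sqrt(42.0), 1_000_000);
        System.out.printf("~%.1f ns per operation%n", avg);
    }
}
```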
This paper presents a general XML-based distributed software architecture with the aim of accessing and sharing resources in an open client/server environment. The paper is organized as follows: first, we introduce the idea of a "General Distributed Software Architecture". Second, we describe the general framework in which this architecture is used. Third, we describe the process of information exchange and we introduce some technical issues involved in the implementation of the proposed architecture. Finally, we present some projects which are currently using, or should use, the proposed architecture. | A general XML-based distributed software architecture for accessing and sharing ressources | 5,108 |
The development of pervasive computing has highlighted a challenging problem: how to dynamically compose services in heterogeneous and highly changing environments? We propose a survey that defines service composition as a sequence of four steps: translation, generation, evaluation, and finally execution. With this powerful yet simple model we describe the major service composition middleware. Then, a classification of these service composition middleware according to pervasive requirements - interoperability, discoverability, adaptability, context awareness, QoS management, security, spontaneous management, and autonomous management - is given. The classification highlights what has been done and what remains to be done to develop service composition in pervasive environments. | A Survey on Service Composition Middleware in Pervasive Environments | 5,109 |
In this letter, we propose a novel three-dimensional conceptual model for an emerging service-oriented simulation paradigm. The model can be used as a guideline or an analytic means to find the potential and possible future directions of the current simulation frameworks. In particular, the model inspects the crossover between the disciplines of modeling and simulation, service-orientation, and software/systems engineering. Finally, two specific simulation frameworks are studied as examples. | Three-dimensional conceptual model for service-oriented simulation | 5,110 |
Free and Open Source Software (FOSS) distributions are complex software systems, made of thousands of packages that evolve rapidly, independently, and without centralized coordination. During package upgrades, corner-case failures can be encountered and are hard to deal with, especially when they are due to misbehaving maintainer scripts: executable code snippets used to finalize package configuration. In this paper we report on a software modernization experience, i.e., the process of representing existing legacy systems in terms of models, applied to FOSS distributions. We present a process to define meta-models that enable dealing with upgrade failures and help rolling back from them, taking maintainer scripts into account. The process has been applied to widely used FOSS distributions and we report on these experiences. | Towards maintainer script modernization in FOSS distributions | 5,111 |
State-of-the-art component-based software collections - such as FOSS distributions - are made of up to tens of thousands of components, with complex inter-dependencies and conflicts. Given a particular installation of such a system, each request to alter the set of installed components has potentially (too) many satisfying answers. We present an architecture that allows users to express advanced preferences about package selection in FOSS distributions. The architecture is composed of a distribution-independent format for describing available and installed packages called CUDF (Common Upgradeability Description Format), and a foundational language called MooML for specifying optimization criteria. We present the syntax and semantics of CUDF and MooML, and discuss the partial evaluation mechanism of MooML, which improves the efficiency of package dependency solvers. | Expressing advanced user preferences in component installation | 5,112 |
When engineering complex and distributed software and hardware systems (increasingly used in many sectors, such as manufacturing, aerospace, transportation, communication, energy, and health-care), quality has become a big issue, since failures can have economic consequences and can also endanger human life. Model-based specifications of a component-based system permit explicit modelling of the structure and behaviour of components and their integration. In particular, Software Architectures (SA) have been advocated as an effective means to produce quality systems. In this chapter, by combining different technologies and tools for analysis and development, we propose an architecture-centric model-driven approach to validate required properties and to generate the system code. Functional requirements are elicited and used for identifying expected properties the architecture shall express. The architectural compliance to the properties is formally demonstrated, and the produced architectural model is used to automatically generate the Java code. Suitable transformations assure that the code conforms to both structural and behavioural SA constraints. This chapter describes the process and discusses how some existing tools and languages can be exploited to support the approach. | From Requirements to code: an Architecture-centric Approach for producing Quality Systems | 5,113 |
Tool-assisted analysis of software systems and convenient guides to practising formal methods are still motivating challenges. This article addresses these challenges. We experiment with analysing a formal specification from multiple aspects. The B method and the Atelier-B tool are used for formal specifications, for safety property analysis and for refinements. The ProB tool is used to supplement the study with model checking; it helps to discover errors and therefore to improve the former specifications. | Tool-Assisted Multi-Facet Analysis of Formal Specifications (Using Atelier-B and ProB) | 5,114 |
Component-oriented and service-oriented approaches have gained strong enthusiasm in industry and academia, with a particular interest in service-oriented approaches. A component is a software entity with given functionalities, made available by a provider, and used to build other applications within which it is integrated. The service concept and its use in web-based application development have a huge impact on reuse practices. Accordingly, a considerable part of software architectures is influenced; these architectures are moving towards service-oriented architectures. Therefore applications (re)use services that are available elsewhere, and many applications interact, without knowing each other, using services available via service servers and their published interfaces and functionalities. Industry proposes, through various consortia, languages, technologies and standards. More academic work is also under way concerning the semantics and formalisation of component- and service-based systems. We consider here both streams of work in order to raise research concerns that will help in building quality software. Are there new challenging problems with respect to service-based software construction? Besides, what are the links and the advances compared to distributed systems? | Can Component/Service-Based Systems Be Proved Correct? | 5,115 |
In Method Engineering (ME) science, the key issue is the consideration of information system development methods as fragments. Numerous ME approaches have produced several definitions of method parts. Although different in nature, these fragments have some common disadvantages: lack of implementation tools, insufficient standardization effort, and so on. On the whole, the observed drawbacks are related to the shortage of usage orientation. We have carried out an in-depth analysis of existing method fragments within a comparison framework in order to identify their drawbacks. We suggest overcoming them by an improvement of the "method service" concept. In this paper, the method service is defined through the service paradigm applied to a specific method fragment, the chunk. A discussion of the possibility of developing a unique representation of method fragments completes our contribution. | From Method Fragments to Method Services | 5,116 |
The MAP model was introduced in information system engineering in order to model processes in a flexible way. The intentional level of this model helps an engineer to execute a process with a strong relationship to the situation of the project at hand. In the literature, attempts at a practical use of maps are not numerous. Our aim is to enhance the guidance mechanisms of process execution by reusing graph algorithms. After clarifying the relationship between graphs and maps, we improve the MAP model by adding qualitative criteria. We then offer a way to express maps as graphs and propose using graph-theory algorithms to provide automatic guidance through the map. We illustrate our proposal with an example and discuss its limitations. | Enhancing the Guidance of the Intentional Model "MAP": Graph Theory Application | 5,117 |
The work presented in this paper is related to the area of Situational Method Engineering (SME), which focuses on project-specific method construction. We propose a faceted framework to understand and classify issues in SME for system development. The framework identifies four different but complementary viewpoints. Each view allows us to capture a particular aspect of situational methods. Inter-relationships between these views show how they influence each other. In order to study, understand and classify a particular view of SME in all its diversity, we associate a set of facets with each view. As a facet allows an in-depth description of one specific aspect of SME, the views show the variety and diversity of these aspects. | Situational Method Engineering: Fundamentals and Experiences | 5,118 |
The work presented in this paper is related to the area of situational method engineering (SME). In this domain, approaches are developed according to specific project specifications. We propose to adapt an existing method construction process, namely the assembly-based one. One of the particular features of the assembly-based SME approach is the selection of method chunks. Our proposal is to offer better guidance in the retrieval of chunks by introducing multicriteria techniques. To use them efficiently, we defined a typology of project characteristics in order to identify all their critical aspects, which offers a prioritisation to help the method engineer choose between similar chunks. | Method Chunks Selection by Multicriteria Techniques: an Extension of the Assembly-based Approach | 5,119 |
All software development processes include steps where several alternatives require a choice, a decision. Sometimes, methodologies offer a way to make decisions. However, in many cases the arguments supporting the decision are very poor and the choice is made in an intuitive and hazardous way. The aim of our work is to offer a scientifically founded way to guide the engineer through tactical choices, with the application of multicriteria methods in software development processes. This approach is illustrated with three cases: risks, use cases and tools within the Rational Unified Process. | Improving Software Development Processes with Multicriteria Methods | 5,120 |
Based on the old but famous distinction between "in the small" and "in the large" software development, at Nancy Université, UHP Nancy 1, we have for some time practised software engineering education through actual project engineering. This education method has the merit of enabling students to discover and overcome real problems when faced with a large project, which may be conducted by a large development team. The mode of education is a simulation of an actual software engineering project as encountered in "real life" activities. | Software Engineering Education by Example | 5,121 |
The complexity of software in embedded systems has increased significantly over the last years so that software verification now plays an important role in ensuring the overall product quality. In this context, SAT-based bounded model checking has been successfully applied to discover subtle errors, but for larger applications, it often suffers from the state space explosion problem. This paper describes a new approach called continuous verification to detect design errors as quickly as possible by looking at the Software Configuration Management (SCM) system and by combining dynamic and static verification to reduce the state space to be explored. We also give a set of encodings that provide accurate support for program verification and use different background theories in order to improve scalability and precision in a completely automatic way. A case study from the telecommunications domain shows that the proposed approach improves the error-detection capability and reduces the overall verification time by up to 50%. | Continuous Verification of Large Embedded Software using SMT-Based Bounded Model Checking | 5,122 |
Existing economic models support the estimation of the costs and benefits of developing and evolving a Software Product Line (SPL) as compared to undertaking traditional software development approaches. In addition, Feature Diagrams (FDs) are a valuable tool to scope the domain of an SPL. This paper proposes an algorithm to calculate, from an FD, the following information for economic models: the total number of products of an SPL, the SPL homogeneity and the commonality of the SPL requirements. The algorithm running time belongs to the complexity class $O(f^4 2^c)$. In contrast to related work, the algorithm is free of dependencies on off-the-shelf tools and is specified generally for an abstract FD notation, which works as a pivot language for most of the available notations for feature modeling. | Inferring Information from Feature Diagrams to Product Line Economic Models | 5,123 |
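To make the counting idea concrete, here is a minimal sketch of how the number of products can be computed recursively over a feature tree (my own illustration for a constraint-free diagram with mandatory, optional, alternative and or-groups; the paper's actual algorithm, its notation, and its handling of cross-tree constraints are not reproduced here):

```java
import java.util.List;

// Counts the products of a simple feature tree without cross-tree constraints.
public final class FeatureCount {
    enum Group { AND, ALTERNATIVE, OR }          // how children combine

    record Feature(String name, boolean optional, Group group, List<Feature> children) {
        // Number of valid configurations of the subtree rooted here,
        // assuming this feature itself is selected.
        long count() {
            if (children.isEmpty()) return 1;
            long result;
            switch (group) {
                case AND -> {                     // product over children
                    result = 1;
                    for (Feature c : children) {
                        long k = c.count();
                        result *= c.optional() ? k + 1 : k;  // +1 = "deselected"
                    }
                }
                case ALTERNATIVE -> {             // exactly one child selected
                    result = 0;
                    for (Feature c : children) result += c.count();
                }
                case OR -> {                      // at least one child selected
                    result = 1;
                    for (Feature c : children) result *= c.count() + 1;
                    result -= 1;                  // remove "none selected"
                }
                default -> result = 1;
            }
            return result;
        }
    }

    public static void main(String[] args) {
        Feature gps  = new Feature("GPS", true, Group.AND, List.of());
        Feature lcd  = new Feature("LCD", false, Group.AND, List.of());
        Feature led  = new Feature("LED", false, Group.AND, List.of());
        Feature scr  = new Feature("Screen", false, Group.ALTERNATIVE, List.of(lcd, led));
        Feature root = new Feature("Phone", false, Group.AND, List.of(gps, scr));
        System.out.println(root.count());         // 2 screens x (GPS or not) = 4
    }
}
```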
The technical skill and ability of individuals differ from person to person in software development projects. It is therefore necessary to identify the talent and attitude of each individual so that contributions can be distributed uniformly across the different phases of the software development cycle. Line-of-code analysis metrics help in understanding the various skills of the programmers during code development. The inclusion set theory quantity n(A∪B) refers to the strength of risk-free code developed by the union of software professionals, and the system must comprise achievement of the system goal, effective memory utilization and in-time delivery of the product. | Ethics Understanding of Software Professional In Risk Reducing Reusability Coding Using Inclusion Set Theory | 5,124 |
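For reference, the set-theoretic identity invoked above is the standard inclusion-exclusion formula, shown with a small numeric example:

$$n(A \cup B) = n(A) + n(B) - n(A \cap B); \qquad n(A)=12,\; n(B)=9,\; n(A \cap B)=4 \;\Rightarrow\; n(A \cup B)=17.$$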
The critical part of building any software system is its architecture. Architectural design is design at a higher level of abstraction. A good architecture ensures that the software will satisfy its requirements. This paper defines the most important activities of architectural design used in building any software, and applies these activities to one type of Electronic Commerce (EC) application, a Job Agency System (JAS), to show how these activities work for this class of applications. | Architectural Design Activities for JAS | 5,125 |
Dynamic software development organizations optimize the usage of resources to deliver products on time with the requirements fulfilled. This requires preventing or repairing faults as quickly as possible. In this paper an approach for predicting run-time errors in Java is introduced. The paper is concerned with faults due to inheritance and violation of Java constraints. The proposed fault prediction model is designed to separate out faulty classes in the field of software testing. Separated faulty classes are classified according to the fault occurring in the specific class. The results are presented by clustering the faults in the class. This model can be used for predicting software reliability. | Fault Predictions in Object Oriented Software | 5,126 |
Software project management is an interplay of project planning, project monitoring and project termination. The fundamental goals of planning are to look ahead, to identify the activities that are essential for the successful completion of the project, to drive the scheduling, and to allocate resources to the activities. Software cost estimation plays a vital role in key software project decisions such as resource allocation and bidding. This paper gives an overview of the conventional software cost estimation methods available. The cost and effort estimates of software projects done by various companies are collected, the results are compared with the existing cost models, and the MRE (Mean Relative Error) is calculated. We applied the historical data to the COCOMO 81 and COCOMO II models and identified that the main problem is that no cost model gives an exact estimate of a software project. | Identifying the Importance of Software Reuse in COCOMO81, COCOMOII | 5,127 |
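The error measure mentioned above is conventionally defined per project as the magnitude of relative error, averaged over $n$ projects (standard definitions, not specific to this paper):

$$\mathrm{MRE}_i = \frac{\left|\,\text{actual effort}_i - \text{estimated effort}_i\,\right|}{\text{actual effort}_i}, \qquad \mathrm{MMRE} = \frac{1}{n}\sum_{i=1}^{n} \mathrm{MRE}_i$$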
To learn how to introduce automated regression testing to existing medium-scale Open Source projects, a long-term field experiment was performed with the Open Source project FreeCol. Results indicate that (1) introducing testing is both beneficial for the project and feasible for an outside innovator, (2) testing can enhance communication between developers, and (3) signaling is important for engaging the project participants to fill a newly vacant position left by the withdrawal of the innovator. Five prescriptive strategies are extracted for the innovator, and two conjectures are offered about the ability of an Open Source project to learn about innovations. | Introducing Automated Regression Testing in Open Source Projects | 5,128 |
A large number of metrics have been proposed for the quality of object-oriented software. Many of these metrics have not been properly validated, due to poor validation methods and non-acceptance of metrics on scientific grounds. In the literature, two types of validation, internal (theoretical) and external (empirical), are recommended. In this study, the authors use both theoretical and empirical validation to validate an already proposed set of metrics for five quality factors. These metrics were proposed by Kumar and Soni. | A Framework for Validation of Object Oriented Design Metrics | 5,129 |
Software is intangible and knowledge about software systems is typically tacit. The mental model of software developers is thus an important factor in software engineering. It is our vision that developers should be able to refer to code as being "up in the north", "over in the west", or "down-under in the south". We want to provide developers, and everyone else involved in software development, with a *shared*, spatial and stable mental model of their software project. We aim to reinforce this by embedding a cartographic visualization in the IDE (Integrated Development Environment). The visualization is always visible in the bottom-left, similar to the GPS navigation device for car drivers. For each development task, related information is displayed on the map. In this paper we present CODEMAP, an eclipse plug-in, and report on preliminary results from an ongoing user study with professional developers and students. | Towards Improving the Mental Model of Software Developers through Cartographic Visualization | 5,130 |
Software engineering in industry has come a long way, with various improvements brought to the different stages of the software development life cycle. The complexity of modern software, commercial constraints and the expectation of high quality products demand accurate fault prediction based on OO design metrics at the class level in the early stages of software development. Object-oriented class metrics are used as quality predictors throughout the entire OO software development life cycle, even when a highly iterative, incremental model or agile software process is employed. Recent research has shown that some of the OO design metrics are useful for predicting the fault-proneness of classes. In this paper an empirical validation of a set of metrics proposed by Chidamber and Kemerer is performed to assess their ability to predict software quality in terms of fault proneness and degradation. We also propose expressing the design complexity of object-oriented software with the Weighted Methods per Class metric (WMC-CK metric) in terms of Shannon entropy and error proneness. | Software Metrics Evaluation Based on Entropy | 5,131 |
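A minimal sketch of the entropy computation alluded to above (assuming, for illustration, that the distribution is taken over the methods' complexity values within a class; the paper's exact formulation may differ):

```java
import java.util.HashMap;
import java.util.Map;

// Shannon entropy of the distribution of method complexity values in a class:
// H = -sum(p_i * log2(p_i)), where p_i is the fraction of methods sharing
// the i-th distinct complexity value.
public final class WmcEntropy {
    public static double entropy(int[] methodComplexities) {
        Map<Integer, Integer> freq = new HashMap<>();
        for (int c : methodComplexities) freq.merge(c, 1, Integer::sum);
        double n = methodComplexities.length, h = 0.0;
        for (int count : freq.values()) {
            double p = count / n;
            h -= p * (Math.log(p) / Math.log(2));
        }
        return h;
    }

    public static void main(String[] args) {
        // A class whose methods have cyclomatic complexities 1, 1, 2, 3:
        System.out.printf("H = %.3f bits%n", entropy(new int[]{1, 1, 2, 3}));
        // p = {1: 0.5, 2: 0.25, 3: 0.25}  =>  H = 1.5 bits
    }
}
```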
Mobile applications are subject to additional requirements relative to applications for desktop computers. These requirements primarily concern support for the different platforms on which such applications run, as well as the requirement to provide more modalities of input/output interaction. These requirements influence the user interface, and it is therefore necessary to consider the usability of the MVC (Model-View-Controller) and PAC (Presentation-Abstraction-Control) design patterns for the separation of user interface tasks from the business logic, specifically in mobile applications. One of the questions is how to choose design patterns for certain classes of mobile applications. When using these patterns the possibilities of automatic user interface transformation should be kept in mind. Although the MVC design pattern is widely used in mobile applications, it is not universal, especially in cases where there are requirements for heterogeneous multi-modal input/output interactions. | Applying MVC and PAC patterns in mobile applications | 5,132 |
SOA (Service Oriented Architecture) is a new trend towards increasing the profit margins of an organization by incorporating business services into business practices. The Rational Unified Process (RUP) is a unified method framework for large business applications that provides a language for describing method content and processes. A well-defined mapping between SOA and RUP leads to the successful completion of RUP software projects that provide services to their users. DOA (Digital Office Assistant) is a multi-user SOA-type application that provides an appropriate view for each user to assist him through services. In this paper the authors propose a mapping strategy between SOA and RUP, using DOA as a case study. | Mapping of SOA and RUP: DOA as Case Study | 5,133 |
Microprocessor roadmaps clearly show a trend towards multiple-core CPUs. Modern operating systems already make use of these CPU architectures by distributing tasks between processing cores, thereby increasing system performance. This review article gives a brief introduction to what a multicore system is, the various methods adopted to program these systems, and the industrial applications of these high-speed systems. | Multicore Applications in Real Time Systems | 5,134 |
Defect prevention is the most vital but habitually neglected facet of software quality assurance in any project. If applied at all stages of software development, it can reduce the time, overheads and resources required to engineer a high quality product. The key challenge of an IT industry is to engineer a software product with a minimum of post-deployment defects. This effort is an analysis based on data obtained for five selected projects from leading software companies of varying software production competence. The main aim of this paper is to provide information on various methods and practices supporting defect detection and prevention, leading to successful software generation. The defect prevention techniques unearth 99% of defects. Inspection is found to be an essential technique for generating ideal software in factories through enhanced methodologies of aided and unaided inspection schedules. On average, 13% to 15% of inspection and 25% to 30% of testing out of the whole project effort time is required for 99% to 99.75% defect elimination. A comparison of the end results for the five selected projects between the companies is also presented, throwing light on the possibility for a particular company to position itself with an appropriate complementary ratio of inspection to testing. | Effective Defect Prevention Approach in Software Process for Achieving Better Quality Levels | 5,135 |
Software engineering continuously faces the challenges of the growing complexity of software packages and the increasing volume of data on defects and drawbacks from the software production process. This calls for inventions and methods which can enable more reusable, reliable, easily maintainable and high quality software systems with deeper control over the software generation process. Quality and productivity are indeed the two most important parameters for controlling any industrial process. Implementation of a successful control system requires some means of measurement. Software metrics play an important role in the management aspects of the software development process, such as better planning, assessment of improvements, resource allocation and reduction of unpredictability. Processes involving early detection of potential problems, productivity evaluation and the evaluation of external quality factors such as reusability, maintainability, defect proneness and complexity are of the utmost importance. Here we discuss the application of CK metrics and an estimation model to predict the external quality parameters, optimizing the design and production processes for desired levels of quality. Estimation of defect proneness in object-oriented systems at the design level is developed using a novel methodology in which models of the relationship between CK metrics and a defect-proneness index are obtained. A multifunctional estimation approach captures the correlation between CK metrics and the defect proneness level of software modules. | Estimation of Defect proneness Using Design complexity Measurements in Object- Oriented Software | 5,136 |
Defect prevention is the most critical but most neglected component of software quality assurance in any project. If applied at all stages of software development, it can reduce the time, cost and resources required to engineer a high quality product. Software inspection has proved to be the most effective and efficient technique enabling defect detection and prevention. Inspections carried out at all phases of the software life cycle have proved to be most beneficial and to add value to the attributes of the software. This work is an analysis based on the data collected for three different projects from a leading product-based company. The purpose of the paper is to show that 55% to 65% of the total number of defects occur at the design phase. This paper also emphasizes the importance of inspections at all phases of the product development life cycle in order to achieve minimal post-deployment defects. | Effectiveness Of Defect Prevention In I.T. For Product Development | 5,137 |
The software industry is successful if it can draw the complete attention of customers towards it. This is achievable if the organization can produce a high quality product. For a product to be identified as high quality, it should be free of defects and capable of producing the expected results. It should be delivered at the estimated cost and time, and be maintainable with minimum effort. Defect prevention is the most critical but often neglected component of software quality assurance in any project. If applied at all stages of software development, it can reduce the time, cost and resources required to engineer a high quality product. | Defect Prevention Approaches in Medium Scale it Enterprises | 5,138 |
The Function Point Analysis (FPA) method is the preferred estimation scheme for project managers to determine the size, effort, schedule, resource loading and other such parameters. The FPA method of the International Function Point Users Group (IFPUG) captures the critical implementation features of an application through fourteen general system characteristics. However, non-functional requirements (NFRs) such as functionality, reliability, efficiency, usability, maintainability, portability, etc. have not been included in the FPA estimation method. This paper discusses some of the NFRs and tries to determine a degree of influence for each of them. An attempt to factor the NFRs into estimation has been made. This approach needs to be validated with data collection and analysis. | Mapping General System Characteristics to Non- Functional Requirements | 5,139 |
The UML allows us to specify models in a precise, complete and unambiguous manner. In particular, the UML addresses the specification of all important decisions regarding analysis, design and implementation. Although UML is not a visual programming language, its models can be directly connected to a vast variety of programming languages. This enables a dual approach to software development: the developer has a choice as to the means of input. UML can be used directly, from which code can be generated; or, on the other hand, that which is best expressed as text can be entered into the program as code. In an ideal world, the UML tool would be able to reverse-engineer any direct changes to code, and the UML representations would be kept in sync with the code. However, without human intervention this is not always possible. Certain elements of information are lost when moving from models to code. Even then, certain aspects of programming language code do seem to preserve more of their semantics and therefore permit automatic reverse-engineering of code back to a subset of the UML models. | Review and Analysis of The Issues of Unified Modeling Language for Visualizing, Specifying, Constructing and Documenting the Artifacts of a Software-Intensive System | 5,141 |
Software testing is a critical element of software quality assurance and represents the ultimate review of specification, design and coding. Software testing is the process of testing the functionality and correctness of software by running it. Software testing is usually performed for one of two reasons: defect detection and reliability estimation. The problem with applying software testing to defect detection is that testing can only suggest the presence of flaws, not their absence (unless the testing is exhaustive). The problem with applying software testing to reliability estimation is that the input distribution used for selecting test cases may be flawed. The key to software testing is trying to find the modes of failure - something that requires exhaustively testing the code on all possible inputs. Software testing, depending on the testing method employed, can be implemented at any time in the development process. | Studying the Feasibility and Importance of Software Testing: An Analysis | 5,142 |
In this paper, an approach to facilitate the treatment of variabilities in system families is presented, based on explicitly modelling variants. The proposed method of managing variability consists of a variant part, which models variants, and a decision table to depict the customisation decision regarding each variant. We have found that it is easy to implement and has advantages over other methods. We present this model as an integral part of modelling system families. | Modelling Variability for System Families | 5,143 |
The primary focus of Monte Carlo simulation is to identify and quantify risk related to uncertainty and variability in spreadsheet model inputs. The stress of Monte Carlo simulation often reveals logical errors in the underlying spreadsheet model that might be overlooked during day-to-day use or traditional "what-if" testing. This secondary benefit of simulation requires a trained eye to recognize warning signs of poor model construction. | Identification of Logical Errors through Monte-Carlo Simulation | 5,144 |
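A minimal sketch of the idea behind the entry above (illustrative only; a Java stand-in for a spreadsheet formula rather than an actual spreadsheet): sample the uncertain inputs, evaluate the model, and flag outputs that violate a known invariant - a non-trivial violation rate often signals a logic error in the model rather than genuine input risk.

```java
import java.util.Random;

// Monte Carlo stress test: sample inputs, evaluate a model, and flag
// logically impossible outputs (e.g., negative revenue) that point to
// errors in the model itself rather than to input uncertainty.
public final class MonteCarloCheck {
    public static void main(String[] args) {
        Random rng = new Random(42);
        int violations = 0, trials = 100_000;
        for (int i = 0; i < trials; i++) {
            double units = 1000 + 300 * rng.nextGaussian();   // uncertain demand
            double price = 5.0 + rng.nextGaussian();          // uncertain price
            double revenue = model(units, price);
            if (revenue < 0) violations++;                    // invariant: revenue >= 0
        }
        System.out.printf("%d of %d trials violated the invariant%n",
                          violations, trials);
        // A non-trivial violation rate is a warning sign: either the model
        // lacks a guard (e.g., clamping negative demand) or contains a bug.
    }

    // Stand-in for a spreadsheet formula; the missing max(0, ...) guard is
    // exactly the kind of logic error the simulation surfaces.
    static double model(double units, double price) {
        return units * price;
    }
}
```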
FAVO2009 was the second workshop on Formal Aspects of Virtual Organisations. The purpose of the FAVO workshops is to encourage an active community of researchers and practitioners using formal methods in the research and development of Virtual Organisations. | Proceedings Second Workshop on Formal Aspects of Virtual Organisations | 5,145 |
Although a lot of research has taken place in the object-oriented design of software for real-time systems and the mapping of design models to implementation models, these methodologies are applicable to systems which are less complex and small in source code size. In practice, however, the size of the software for real-time applications is growing, and the run-time architecture of real-time applications is becoming increasingly complex. In this paper, we present a generic approach for mapping design models to run-time architectures, resulting in a combination of processes and threads. This method was applied in the development of a communication subsystem of a C4I complex and is presented as a case study. | Design of Run time Architectures for Real time UML Models an Actor Centric Approach | 5,146 |
This paper considers the contribution of economic intelligence to exploiting individual and collective images of change in ICT design decision-making. The encounter of technical devices with real use situations often allows mental images to emerge that an innovation process, through its unprecedented nature, cannot anticipate. Although methodologies exist for quality and design project management, the survey we conducted among small ICT publishers shows that they are not very suitable for small firms. Taking these elements into account, we try to build a proposal for an exploration-analysis-synthesis process adapted to the decisional process of this type of actor. | Usages et conception des TIC : Proposition d'un modèle d'aide à la représentation de problème de conception | 5,147 |
This draft paper defines a system which is capable of maintaining bases of test cases for logical specifications. The specifications subject to this system are transformed from their original shape in first-order logic into form-based expressions as originally introduced in the logic of George Spencer-Brown. The innovation comes from the operations the system provides when injecting faults - so-called mutations - into the specifications. The system presented here applies to logical specifications from areas as different as programming, ontologies and hardware specifications. | FORMT: Form-based Mutation Testing of Logical Specifications | 5,148 |
Agile development processes and component-based software architectures are two software engineering approaches that contribute to enabling the rapid building and evolution of applications. Nevertheless, few approaches have proposed a framework combining agile and component-based development that allows an application to be tested throughout the entire development cycle. To address this problem, we have built CALICO, a model-based framework that allows applications to be safely developed in an iterative and incremental manner. The CALICO approach relies on the synchronization of a model view, which specifies the application properties, and a runtime view, which contains the application in its execution context. Tests on the application specifications that require values known only at runtime are automatically integrated by CALICO into the running application, and the captured values are reified at execution time to resume the tests and inform the architect of potential problems. Any modification at the model level that does not introduce new errors is automatically propagated to the running system, allowing the safe evolution of the application. In this paper, we illustrate the CALICO development process with a concrete example and provide information on the current implementation of our framework. | A Framework for Agile Development of Component-Based Applications | 5,149 |
Software visualization encompasses the development and evaluation of methods for graphically representing different aspects of software, including its structure, execution and evolution. Creating visualizations helps the user to better understand complex phenomena, and the software engineering community recognizes that visualization is essential and important. In order to visualize the evolution of models in model-driven software evolution, the authors have proposed a framework which consists of 7 key areas (views) and 22 key features for the assessment of the model-driven software evolution process, addressing a number of stakeholder concerns. The framework is derived by application of the Goal Question Metric paradigm. This paper describes an application of the framework, considering the different visualization/CASE tools used to visualize the models in the different views and to capture information about the models during their evolution. Comparison of such tools is also possible using the framework. | Framework for Visualizing Model-Driven Software Evolution and its Application | 5,150 |
Writing requirements is a two-way process. In this paper we classify Functional Requirements (FR) and Non-Functional Requirements (NFR) statements from Software Requirements Specification (SRS) documents. These are systematically transformed into statecharts considering all relevant information. The paper outlines how test cases can be automatically generated from these statecharts. The application of the states yields the different test cases as solutions to a planning problem. The test cases can be used for automated or manual software testing at the system level. The paper also presents a method for reducing the test suite using mining methods, thereby facilitating mining and knowledge extraction from test cases. | Reliable Mining of Automatically Generated Test Cases from Software Requirements Specification (SRS) | 5,151 |
The UCM (Use Case Maps) model describes functional requirements and high-level designs with causal paths superimposed on a structure of components. It can provide useful resources for software acceptance testing. However, until now, statistical testing technologies for large-scale software have not been considered in the UCM model. Thus if one applies the UCM model to large-scale software using traditional coverage-based exhaustive testing, quality assurance costs too much. Therefore this paper proposes an importance analysis of the UCM model with Markov chains. With this approach not only highly frequently used usage scenarios but also important objects such as components, responsibilities, stubs and plugins can be identified from UCM specifications. Careful analysis, design, implementation and efficient testing thus become possible, guided by the importance of scenarios and objects during the full software life cycle. Consequently, product reliability can be obtained at low cost. This paper includes an importance analysis method that identifies important scenarios and objects and a case study to illustrate the applicability of the proposed approach. | The Importance Analysis of Use Case Map with Markov Chains | 5,152 |
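To illustrate the Markov-chain machinery such an importance analysis rests on (a generic sketch: the transition matrix below is invented, and the paper's mapping from UCM elements to states is not reproduced), the long-run visit frequency of each state - the stationary distribution - can serve as an importance weight:

```java
// Power iteration for the stationary distribution pi of a Markov chain:
// pi = pi * P. States could stand for UCM scenarios or components.
public final class MarkovImportance {
    public static double[] stationary(double[][] P, int iterations) {
        int n = P.length;
        double[] pi = new double[n];
        java.util.Arrays.fill(pi, 1.0 / n);          // uniform start
        for (int it = 0; it < iterations; it++) {
            double[] next = new double[n];
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    next[j] += pi[i] * P[i][j];      // one step of the chain
            pi = next;
        }
        return pi;
    }

    public static void main(String[] args) {
        // Invented 3-state usage model; each row sums to 1.
        double[][] P = {
            {0.1, 0.6, 0.3},
            {0.4, 0.4, 0.2},
            {0.5, 0.3, 0.2},
        };
        double[] pi = stationary(P, 1000);
        for (int s = 0; s < pi.length; s++)
            System.out.printf("state %d importance = %.3f%n", s, pi[s]);
    }
}
```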
Software testing is an important phase of the software development process, but it can easily be shortchanged by software developers because of their limited time to complete the project. Since software developers finish their software close to the delivery time, they don't get enough time to test their program by creating effective test cases. One of the major difficulties in software testing is the generation of test cases that satisfy the given adequacy criterion. Moreover, creating manual test cases is tedious work for software developers in the final rush hours. An approach which generates test cases can help software developers create test cases from software specifications in the early stages of software development (before coding) as well as from program execution traces after software development (after coding). Heuristic techniques can be applied for creating quality test cases. Mutation testing is a technique for testing software units that has great potential for improving the quality of testing and assuring the high reliability of software. In this paper, a mutation-testing-based test case generation technique is proposed to generate test cases from program execution traces, so that test cases can be generated after coding. The paper details the mutation testing implementation used to generate test cases. The proposed algorithm is demonstrated on an example. | Test Case Generation using Mutation Operators and Fault Classification | 5,153 |
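A small illustration of the core mutation-testing idea used above (generic; not the paper's operators or fault classification): apply a mutation operator such as relational-operator replacement, then keep a candidate input as a test case only if it "kills" the mutant, i.e., makes it observably differ from the original.

```java
// Relational-operator-replacement mutation: the original predicate uses "<=",
// the mutant uses "<". A test input kills the mutant if the two disagree.
public final class MutationDemo {
    static boolean original(int x, int limit) { return x <= limit; }
    static boolean mutant(int x, int limit)   { return x <  limit; }

    public static void main(String[] args) {
        int[][] candidates = { {3, 10}, {10, 10}, {11, 10} };
        for (int[] c : candidates) {
            boolean killed = original(c[0], c[1]) != mutant(c[0], c[1]);
            System.out.printf("input (%d,%d): %s%n",
                    c[0], c[1], killed ? "KILLS the mutant -> keep as test case"
                                       : "does not kill the mutant");
        }
        // Only the boundary input (10,10) kills this mutant, which is why
        // mutation analysis tends to select strong, boundary-revealing tests.
    }
}
```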
A data warehouse stores integrated information as materialized views over data from one or more remote sources. These materialized views must be maintained in response to actual relation updates in the remote sources. Data warehouse view maintenance techniques are classified into four major categories: self-maintainable recomputation, not self-maintainable recomputation, self-maintainable incremental maintenance, and not self-maintainable incremental maintenance. This paper provides a comprehensive comparison of the techniques in these four categories in terms of data warehouse space usage and the number of rows accessed in order to propagate an update from a remote data source to a target materialized view in the data warehouse. | Performance Analysis of View Maintenance Techniques for DW | 5,154 |
Software design in software engineering is a critical and dynamic cognitive process. Accurate and flawless system design leads to fast coding and early completion of a software project. Bloom's taxonomy classifies the cognitive domain into six dynamic levels, from Knowledge at the base level through Comprehension, Application, Analysis, Synthesis and Evaluation at the highest level, in order of increasing complexity. The case study in this paper is the gira system, a GPRS-based Intranet Remote Administration that monitors and controls an intranet from a mobile device. This paper shows from this case study that the system design stage in software engineering uses all six levels of Bloom's taxonomy. The application of the highest levels of Bloom's taxonomy, Synthesis and Evaluation, in the design of gira indicates that software design in the software development life cycle is a complex and critical cognitive process. | Dynamic Cognitive Process Application of Blooms Taxonomy for Complex Software Design in the Cognitive Domain | 5,155 |
Requirement analysis is an important phase in software development which deals with understanding the customer's requirements. It includes the collection of information from the customer regarding the customer's requirements and what he expects from the software to be developed. By doing so, one can have a better understanding of what the customer actually needs and hence deliver output that matches the customer's requirements. Studies are being carried out to bring about improvements in the process of requirement analysis so that errors in software development can be minimized and improved, reliable products delivered. | Cognitive Process of Comprehension in Requirement Analysis in IT Applications | 5,156 |
In a world of increasing mobility, there is a growing need for people to communicate with each other and have timely access to information regardless of the location of the individuals or the information. With the advent of mobile technology, the way of communication has changed. The gira system is basically a mobile phone technology service. In this paper we discuss a novel local area network control system called GPRS-based Intranet Remote Administration (gira). This system runs on a mobile handset. With this system, a network administrator has effective remote control over the network. The gira system is developed using GPRS, the GCF (Generic Connection Framework) of J2ME, sockets and RMI technologies. | GPRS Based Intranet Remote Administration GIRA | 5,157 |
The design structure of OO software has a decisive impact on its quality. The design must be strongly correlated with quality characteristics like analyzability, changeability, stability and testability, which are important for maintaining the system. But due to the diversity and complexity of the design properties of OO systems, e.g. polymorphism, encapsulation and coupling, this becomes cumbersome. | Quantifying the Design Quality of Object Oriented System The metric based rules and heuristic | 5,158 |
Regulatory compliance is increasingly being addressed in the practice of requirements engineering as a main stream concern. This paper points out a gap in the theoretical foundations of regulatory compliance, and presents a theory that states (i) what it means for requirements to be compliant, (ii) the compliance problem, i.e., the problem that the engineer should resolve in order to verify whether requirements are compliant, and (iii) testable hypotheses (predictions) about how compliance of requirements is verified. The theory is instantiated by presenting a requirements engineering framework that implements its principles, and is exemplified on a real-world case study. | Theory of Regulatory Compliance for Requirements Engineering | 5,159 |
Management of project planning, monitoring, scheduling, estimation and risk management are critical issues faced by a project manager during the development life cycle of software. In RUP, project management is considered a core discipline whose activities are carried out in all phases of the development of software products. On the other side, service monitoring is considered a best practice of SOA, supporting availability, auditing, debugging and tracing. In this paper, the authors define a strategy to incorporate the service monitoring of SOA into RUP to improve the artifacts of project management activities. Moreover, the authors define rules to implement the features of service monitoring, which help the project manager carry out activities in a well-defined manner. The proposed framework is implemented on an RB (Resuming Bank) application, with improved results for the PM (Project Management) work. | Improvement in RUP Project Management via Service Monitoring: Best Practice of SOA | 5,160 |
The direct measurement of quality is difficult because there is no way to measure quality factors directly. For measuring these factors, we have to express them in terms of metrics or models. Researchers have developed quality models that attempt to measure quality in terms of attributes, characteristics and metrics. In this work we propose a methodology of controlled experimentation coupled with the power of Logical Scoring of Preferences to evaluate the global quality of four object-oriented designs. | A Methodology for Empirical Quality Assessment of Object-Oriented Design | 5,161 |
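For context, Logical Scoring of Preferences aggregates elementary preference scores $e_i \in [0,1]$ with weights $w_i$ through a weighted power mean (the standard LSP aggregation form; the exponent $r$ tunes the operator between conjunctive and disjunctive behaviour):

$$E = \left( \sum_{i=1}^{k} w_i\, e_i^{\,r} \right)^{1/r}, \qquad \sum_{i=1}^{k} w_i = 1, \quad 0 \le e_i \le 1.$$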
Parallel programming models exist as an abstraction above hardware and memory architectures. Several parallel programming models are in common use: the shared memory model, thread model, message passing model, data parallel model, hybrid model, Flynn's models, the embarrassingly parallel computations model, and the pipelined computations model. These models are not specific to a particular type of machine or memory architecture. This paper focuses on a concurrent approach to Flynn's SPMD classification in a single-processing environment through a Java program. | Concurrent Approach to Flynn's SPMD Classification through Java | 5,162 |
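A minimal SPMD (Single Program, Multiple Data) sketch in Java in the spirit of the entry above (my own illustration, not the paper's program): every thread runs the same code, and the thread index selects the slice of data it operates on.

```java
// SPMD in Java: all workers execute the same program body; the worker id
// determines which partition of the data each one processes.
public final class SpmdSum {
    public static void main(String[] args) throws InterruptedException {
        int[] data = new int[1_000_000];
        java.util.Arrays.fill(data, 1);

        int workers = Runtime.getRuntime().availableProcessors();
        long[] partial = new long[workers];
        Thread[] threads = new Thread[workers];

        for (int id = 0; id < workers; id++) {
            final int w = id;
            int chunk = data.length / workers;
            final int lo = w * chunk;
            final int hi = (w == workers - 1) ? data.length : lo + chunk;
            threads[w] = new Thread(() -> {        // same program...
                long sum = 0;
                for (int i = lo; i < hi; i++) sum += data[i];
                partial[w] = sum;                  // ...different data slice
            });
            threads[w].start();
        }
        for (Thread t : threads) t.join();

        long total = 0;
        for (long p : partial) total += p;
        System.out.println("total = " + total);    // prints 1000000
    }
}
```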
Formal specification languages have long languished, due to the grave scalability problems faced by complete verification methods. Runtime verification promises to use formal specifications to automate part of the more scalable art of testing, but has not been widely applied to real systems, and often falters due to the cost and complexity of instrumentation for online monitoring. In this paper we discuss work in progress to apply an event-based specification system to the logging mechanism of the Mars Science Laboratory mission at JPL. By focusing on log analysis, we exploit the "instrumentation" already implemented and required for communicating with the spacecraft. We argue that this work both shows a practical method for using formal specifications in testing and opens interesting research avenues, including a challenging specification learning problem. | An Entry Point for Formal Methods: Specification and Analysis of Event Logs | 5,163 |
The validation of requirements is a fundamental step in the development process of safety-critical systems. In safety critical applications such as aerospace, avionics and railways, the use of formal methods is of paramount importance both for requirements and for design validation. Nevertheless, while for the verification of the design, many formal techniques have been conceived and applied, the research on formal methods for requirements validation is not yet mature. The main obstacles are that, on the one hand, the correctness of requirements is not formally defined; on the other hand that the formalization and the validation of the requirements usually demands a strong involvement of domain experts. We report on a methodology and a series of techniques that we developed for the formalization and validation of high-level requirements for safety-critical applications. The main ingredients are a very expressive formal language and automatic satisfiability procedures. The language combines first-order, temporal, and hybrid logic. The satisfiability procedures are based on model checking and satisfiability modulo theory. We applied this technology within an industrial project to the validation of railways requirements. | Formalization and Validation of Safety-Critical Requirements | 5,164 |
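To give a flavour of what such formalized requirements look like (an invented toy example in temporal-logic style, not taken from the railway project above): the requirement "whenever a train is approaching, the gate shall eventually be closed and remain closed until the train has passed" might be rendered as

$$\mathbf{G}\big(\mathit{approaching} \rightarrow \mathbf{F}\,(\mathit{gate\_closed} \wedge (\mathit{gate\_closed}\ \mathbf{U}\ \mathit{passed}))\big)$$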
In this technical report we describe the Domain Specific Language (DSL) of the Workflow Execution Engine (WEE). Instead of interpreting an XML-based workflow description language like BPEL, the WEE uses a minimized but expressive set of statements that runs directly on top of a virtual machine that supports the Ruby language. Frameworks/virtual machines supporting this language include Java and .NET, and a standalone virtual machine also exists. Using a DSL gives us the advantage of maintaining a very compact code base of under 400 lines of code, as the host programming language implements concepts like parallelism, threads, and checking for syntactic correctness. The implementation just hooks into existing statements to keep track of the workflow and deliver information about currently existing context variables and state to the environment that embeds the WEE. | Cloud Process Execution Engine - Evaluation of the Core Concepts | 5,165
This document reports on the use of an algebraic, visual, formal approach to the specification of patterns for the formalization of the GoF design patterns. The approach is based on graphs, morphisms and operations from category theory and exploits triple graphs to annotate model elements with pattern roles. Being based on category theory, the approach can be applied to formalize patterns in different domains. Novel in our proposal is the possibility of describing (nested) variable submodels, inter-pattern synchronization across several diagrams (e.g. class and sequence diagrams for UML design patterns), pattern composition, and conflict analysis. | An Algebraic Formalization of the GoF Design Patterns | 5,166 |
This paper argues that IT failures diagnosed as errors at the technical or project management level are often mistakenly pointing to symptoms of failure rather than a project's underlying socio-complexity (complexity resulting from the interactions of people and groups) which is usually the actual source of failure. We propose a novel method, Stakeholder Impact Analysis, that can be used to identify risks associated with socio-complexity as it is grounded in insights from the social sciences, psychology and management science. This paper demonstrates the effectiveness of Stakeholder Impact Analysis by using the 1992 London Ambulance Service Computer Aided Dispatch project as a case study, and shows that had our method been used to identify the risks and had they been mitigated, it would have reduced the risk of project failure. This paper's original contribution comprises expanding upon existing accounts of failure by examining failures at a level of granularity not seen elsewhere that enables the underlying socio-complexity sources of risk to be identified. | Lessons from the Failure and Subsequent Success of a Complex Healthcare
Sector IT Project | 5,167 |
Software engineering is one of the most recent additions to the various disciplines of system engineering, having emerged as a key discipline of system engineering in a short span of time. Various software engineering approaches are followed in order to produce comprehensive software solutions at affordable cost, within a reasonable delivery timeframe, and with less uncertainty. All these objectives are satisfied only when the project's status is properly monitored and controlled. eXtreme Programming (XP) uses the best practices of the Agile methodology and supports the rapid development of small-sized software. In this paper, the authors propose that via XP, high-quality software can be developed with less uncertainty and within the estimated cost, owing to proper monitoring and controlling of the project. Moreover, the authors give guidelines on how project management activities can be embedded into the XP development life cycle to enhance the quality of software products and reduce uncertainty. | Mapping The Best Practices of XP and Project Management: Well defined
approach for Project Manager | 5,168 |
Reusable software components need expressive specifications. This paper outlines a rigorous foundation to model-based contracts, a method to equip classes with strong contracts that support accurate design, implementation, and formal verification of reusable components. Model-based contracts conservatively extend the classic Design by Contract with a notion of model, which underpins the precise definitions of such concepts as abstract equivalence and specification completeness. Experiments applying model-based contracts to libraries of data structures suggest that the method enables accurate specification of practical software. | Specifying Reusable Components | 5,169 |
Computer system administration and network administration are a few areas, apart from bioinformatics, where the Practical Extraction and Report Language (Perl) is robustly utilized these days. A key role of a system/network administrator is to monitor log files, which are updated every day. To scan the summary of large log files and quickly determine whether anything is wrong with the server or network, we developed a Firewall Log Status Reporter (SRr). SRr helps to generate reports based on the parameters of interest, and gives the admin the facility to generate an individual firewall report or all reports in one go. By scrutinizing the results of the reports, the admin can trace how many times a particular request has been made from which source to which destination, and can track errors easily. Perl scripts may come to be seen as a replacement for UNIX shell scripts, and SRr is one development built with that same hope. SRr is a generalized and customizable utility written completely in Perl and may also be used for text mining and data mining applications in bioinformatics research and development. | On Generation of Firewall Log Status Reporter (SRr) Using Perl | 5,170
Reliable effort estimation remains an ongoing challenge for software engineers. Accurate effort estimation is the state of the art of software engineering; effort estimation is the preliminary phase between the client and the business enterprise. The relationship between the client and the business enterprise begins with the estimation of the software, and the credibility of the client to the business enterprise increases with accurate estimation. Effort estimation often requires generalizing from a small number of historical projects, and generalization from such limited experience is an inherently underconstrained problem. Accurate estimation is a complex process because it is essentially software effort prediction, and as the term indicates, a prediction never becomes an actual value. This work follows the basics of the empirical software effort estimation models, and the goal of this paper is to study them. The primary conclusion is that no single technique is best for all situations, and that a careful comparison of the results of several approaches is most likely to produce realistic estimates. | Analysis of Empirical Software Effort Estimation Models | 5,171
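As background for the empirical models discussed above, the most widely cited of them, basic COCOMO, estimates effort as a power function of program size; this is standard published material, quoted here for reference rather than taken from the abstract:

$$ \mathrm{Effort} = a \cdot (\mathrm{KLOC})^{b} \ \ \text{person-months}, $$

where $a$ and $b$ are constants calibrated per development mode (e.g., $a = 2.4$ and $b = 1.05$ for the organic mode), and intermediate COCOMO multiplies this nominal effort by an effort adjustment factor derived from fifteen cost drivers.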
Mathematics has many useful properties for the development of complex software systems. One is that it can exactly describe the physical situation of an object or the outcome of an action. Mathematics supports abstraction, and since it is an exact medium, it is an excellent medium for modeling with little possibility of ambiguity. This paper demonstrates that mathematics provides a high level of validation when it is used as a software medium. It also outlines distinguishing characteristics of structural testing, which is based on the source code of the program under test. Structural testing methods are very amenable to rigorous definition, mathematical analysis and precise measurement. Finally, it discusses the functional versus structural testing debate in the pursuit of complete testing. Any program can be considered a function in the sense that program inputs form its domain and program outputs form its range. In general, discrete mathematics is more applicable to functional testing, while graph theory pertains more to structural testing. | Mathematical Principles in Software Quality Engineering | 5,172
Cloud computing is an emerging platform of service computing designed for the swift and dynamic delivery of assured computing resources. Cloud computing provides Service-Level Agreements (SLAs) that guarantee uptime availability, enabling convenient, on-demand network access to distributed and shared computing resources. Though the cloud computing paradigm holds real potential in the field of distributed computing, cloud platforms have not yet come to the attention of the majority of researchers and practitioners. More specifically, the researcher and practitioner community still has fragmented and imperfect knowledge of cloud computing principles and techniques. In this context, one of the primary motivations of the work presented in this paper is to reveal the versatile merits of the cloud computing paradigm; hence the objective of this work is to bring out the remarkable significance of the cloud computing paradigm through an application environment. In this work, a cloud computing model for software testing is developed. | A Model of Cloud Based Application Environment for Software Testing | 5,173
The most expensive source of errors in a formal development, and the most difficult to detect, are errors made during specification. Hence, the first step in a formal development usually consists in exhibiting the set of all behaviors of the specification, for instance with an automaton. Starting from this observation, much research addresses the generation of a B machine from a behavioral specification such as UML. However, no backward verification is done. This is why we propose the GeneSyst tool, which aims at generating an automaton describing at least all behaviors of the specification. The refinement step is considered and appears as sub-automata in the produced SLTS. | GénéSyst: Generation of a Labelled Transition System from an Event-B Specification | 5,174
To fork a project is to copy the existing code base and move in a direction different than that of the erstwhile project leadership. Forking provides a rapid way to address new requirements by adapting an existing solution. However, it can also create a plethora of similar tools, and fragment the developer community. Hence, it is not always clear whether forking is the right strategy. In this paper, we describe a mixed-methods exploratory case study that investigated the process of forking a project. The study concerned the forking of an open-source tool for managing software projects, Trac. Trac was forked to address differing requirements in an academic setting. The paper makes two contributions to our understanding of code forking. First, our exploratory study generated several theories about code forking in open source projects, for further research. Second, we investigated one of these theories in depth, via a quantitative study. We conjectured that the features of the OSS forking process would allow new requirements to be addressed. We show that the forking process in this case was successful at fulfilling the new project's requirements. | Code forking in open-source software: a requirements perspective | 5,175
Real-life engineering optimization problems need Multiobjective Optimization (MOO) tools, and these problems are highly nonlinear. As the practice of Multiple Criteria Decision-Making (MCDM) has expanded greatly, most MOO problems in different disciplines can be classified on its basis; MCDM methods have thus gained wide popularity in different sciences and applications. Meanwhile, the increasing number of components, variables, parameters, constraints and objectives involved in the process has made it very complicated. Although the new generation of MOO tools has made the optimization process more automated, initializing the process, setting the initial values of the simulation tools, and identifying the effective input variables and objectives in order to reach a smaller design space remain complicated tasks. In this situation, adding a preprocessing step to the MCDM procedure can make a huge difference by organizing the input variables according to their effects on the optimization objectives of the system. The aim of this paper is to introduce the classification task of data mining as an effective option for identifying the most effective variables of MCDM systems. To evaluate the effectiveness of the proposed method, an example is given for 3D wing design. | Multiple Criteria Decision-Making Preprocessing Using Data Mining Tools | 5,176
In this paper, we present Digital Rights Management Systems (DRMS), which are becoming more and more complex due to the technology revolution in telecommunication networks, multimedia applications and reading equipment (mobile phones, iPhones, PDAs, DVD players, ...). The complexity of DRMS calls for new tools and methodologies that support the coupled design of software components and hardware components. The traditional systems design approach has been somewhat hardware-first, in that the software components are designed after the hardware has been designed and prototyped; this leaves little flexibility in evaluating different design options and hardware-software mappings. The key to codesign is to avoid such isolation by letting hardware and software designs proceed in parallel, with feedback and interaction between the two as the design progresses, in order to achieve high-quality designs with a reduced design time. In this paper, we present F4MS (Framework for Mixed Systems), a unified framework for software and hardware design, simulation and aided execution of mixed systems. To illustrate this work we propose an implementation of a DRMS business model based on the F4MS framework. | DRMS Co-design by F4MS | 5,177
Software effort estimation at the early stages of project development holds great significance for the industry in meeting today's competitive demands. Accuracy, reliability and precision in effort estimates are quite desirable. The inherent imprecision present in the inputs of algorithmic models like the Constructive Cost Model (COCOMO) yields imprecision in the output, resulting in erroneous effort estimates. Fuzzy logic based cost estimation models are inherently suited to addressing the vagueness and imprecision in the inputs and to making reliable and accurate estimates of effort. In this paper, we present an optimized fuzzy logic based framework for software development effort prediction. The framework tolerates imprecision, incorporates expert knowledge, explains its prediction rationale through rules, offers transparency in the prediction system, and can adapt to changing environments as new data become available. The traditional cost estimation model COCOMO is extended in the proposed study by incorporating the concept of fuzziness into the measurements of size, the mode of development for projects, and the cost drivers contributing to the overall development effort. | Optimized Fuzzy Logic Based Framework for Effort Estimation in Software
Development | 5,178 |
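A common way to fuzzify inputs such as size or cost-driver ratings — given here as a generic sketch of the technique, not necessarily the exact membership functions the paper uses — is the triangular membership function with lower bound $l$, peak $m$ and upper bound $u$:

$$ \mu(x;\, l, m, u) = \begin{cases} 0, & x \le l \ \text{or}\ x \ge u,\\[2pt] \dfrac{x-l}{m-l}, & l < x \le m,\\[2pt] \dfrac{u-x}{u-m}, & m < x < u, \end{cases} $$

so each crisp COCOMO input is replaced by degrees of membership in overlapping linguistic sets (e.g., low/nominal/high), and a defuzzified output yields the effort estimate.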
Chidamber and Kemerer first defined a cohesion measure for object-oriented software: the Lack of Cohesion in Methods (LCOM) metric. This paper presents a pedagogic evaluation and discussion of the LCOM metric using field data from three industrial systems. System 1 has 34 classes, System 2 has 383 classes and System 3 has 1055 classes. The main objectives of the study were to determine whether the LCOM metric was appropriate for measuring class cohesion and for identifying properly and improperly designed classes in the studied systems. Chidamber and Kemerer's suite of metrics was used as the metric tool, and descriptive statistics were used to analyze the results. The result of the study showed that in System 1, 78.8% (26 classes) were cohesive; in System 2, 54% (207 classes) were cohesive; and in System 3, 30% (317 classes) were cohesive. We suggest that the LCOM metric measures class cohesiveness and was appropriate for identifying properly and improperly designed classes in the studied systems. | A Pedagogical Evaluation and Discussion about the Lack of Cohesion in
Method (LCOM) Metric Using Field Experiment | 5,179 |
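For reference, Chidamber and Kemerer's LCOM compares method pairs by the instance variables they use: with $P$ the set of method pairs whose sets of used attributes are disjoint and $Q$ the set of pairs sharing at least one attribute,

$$ \mathrm{LCOM} = \begin{cases} |P| - |Q|, & \text{if } |P| > |Q|,\\ 0, & \text{otherwise,} \end{cases} $$

so a higher value indicates lower cohesion; presumably the "cohesive" classes counted above are those whose LCOM equals zero.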
A UML based metamodel for Bunge-Wand-Weber (BWW) ontology is presented. BWW ontology is a generic framework for analysis and conceptualization of real world objects. It includes categories that can be applied to analyze and classify objects found in an information system. In the context of BWW ontology, the metamodel is a representation of the ontological categories and relationships among them. An objective behind developing an object-oriented metamodel has been to model BWW ontology in terms of widely used notions in software development. The main contributions of this paper are a classification for ontological categories, a description template, and representations through UML and typed based models. | An Object-Oriented Metamodel for Bunge-Wand-Weber Ontology | 5,180 |
Measuring software maintainability early in the development life cycle, especially at the design phase, may help designers to incorporate the enhancements and corrections required to improve the maintainability of the final software. This paper develops a multivariate linear model, the 'Maintainability Estimation Model for Object-Oriented software in Design phase' (MEMOOD), which estimates the maintainability of class diagrams in terms of their understandability and modifiability. In order to quantify a class diagram's understandability and modifiability, the paper further develops two more multivariate models, which use design-level object-oriented metrics. Such early quantification of maintainability provides an opportunity to improve the maintainability of the class diagram and consequently the maintainability of the final software. All three models have been validated through appropriate statistical measures and contextual interpretations have been drawn. | Maintainability Estimation Model for Object-Oriented Software in Design
Phase (MEMOOD) | 5,181 |
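In symbolic form — the coefficients are estimated from data in the paper and are not reproduced here — the layered structure described above is:

$$ \mathrm{Maintainability} = \beta_0 + \beta_1 \cdot \mathrm{Understandability} + \beta_2 \cdot \mathrm{Modifiability}, $$

with Understandability and Modifiability in turn expressed as linear combinations of design-level class-diagram metrics.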
Software developers and maintainers need to read and understand source programs and other software artifacts. The increase in the size and complexity of software drastically affects several quality attributes, especially understandability and maintainability. False interpretation often leads to ambiguities, misunderstanding and hence to faulty development results. Despite the fact that software understandability is vital and one of the most significant components of the software development process, it is poorly managed and controlled. The paper highlights the importance of understandability in general and as a factor of software testability. Two major contributions are made in the paper. As the first contribution, a relation between testability factors and object-oriented characteristics is established. As the second contribution, a model is proposed for estimating the understandability of object-oriented software using design metrics. In addition, the proposed model has been validated through an experimental tryout. | A Metrics Based Model for Understandability Quantification | 5,182
This document presents the business requirements of the Unified University Inventory System (UUIS) in a technology-independent manner. All attempts have been made to use mostly business terminology and business language while describing the requirements, with only minimal and commonly understood technical terminology. A use-case approach is used to model the business requirements in this document. | Software Requirements Specification of the IUfA's UUIS -- a Team 4
COMP5541-W10 Project Approach | 5,183 |
This document provides a description of the technical design for the Unified University Inventory System - Web Portal. Its primary purpose is to describe the technical vision for how the business requirements will be realized. The document provides an architectural overview depicting different aspects of the system and also functions as a foundational reference point for developers. | Software Design Document, Testing, Deployment and Configuration
Management, and User Manual of the UUIS -- a Team 4 COMP5541-W10 Project
Approach | 5,184 |
The Unified University Inventory System (UUIS) is an inventory system created for the Imaginary University of Arctica (IUfA) to facilitate the inventory management of all the faculties in one system. Team 1 elucidates the functions of the system and the characteristics of the users who have access to these functions. It shows the access restrictions on different functionalities of the system for its users, who are the staff and students of the University. Team 1 also emphasises the steps necessary to protect the security of the system and its data. | Software Requirements Specification of the IUfA's UUIS -- a Team 1
COMP5541-W10 Project Approach | 5,185 |
The document presents a detailed description of the designs for the implementation of the Unified University Inventory System for the Imaginary University of Arctica. The document, through numerous diagrams and UI samples, gives the structure of the system and the functions of its modules. It also gives test cases and reports that support the system's architecture and design. | Software Design Document, Testing, and Deployment and Configuration
Management of the UUIS - a Team 1 COMP5541-W10 Project Approach | 5,186 |
The purpose of this document is to specify the requirements of the Unified University Inventory System of the IUfA. The team of analysts used a feedback waterfall approach to collect the requirements. UML diagrams, such as use case diagrams, block diagrams, domain models, and interface prototypes, are some of the tools employed to develop the present document. | Software Requirements Specification of the IUfA's UUIS -- a Team 3
COMP5541-W10 Project Approach | 5,187 |
The Software Design Document of the UUIS describes the prototype design details of the system architecture, the database layer, and deployment and configuration details, as well as the test cases produced while working on the design and implementation of the prototype. The requirements specification of the UUIS is detailed in arXiv:1005.0783. | Software Design Document, Testing, Deployment and Configuration
Management of the UUIS--a Team 2 COMP5541-W10 Project Approach | 5,188 |
In the 52-page document, we describe our approach to the Software Requirements Specification of the IUfA's UUIS prototype. This includes the overall system description, functional requirements, non-functional requirements, use cases, the corresponding data dictionary for all entities involved, mock user interface (UI) design, and the overall projected cost estimate. The design specification of UUIS can be found in arXiv:1005.0665. | Software Requirements Specification of the IUfA's UUIS -- a Team 2
COMP5541-W10 Project Approach | 5,189 |
The purpose of this document is to provide the technical specifications concerning the design of the Unified University Inventory System - Web Portal of the IUfA. The team of developers used a feedback waterfall approach to build the system under an object-oriented paradigm. The architectural model followed was Model-View-Controller, mixed with a Mapper layer between the database and the Model. Some of the patterns utilized in the development of the system were the Observer pattern, the Command pattern, and the Mapper pattern. | Software Design Document, Testing, Deployment and Configuration
Management of the IUfA's UUIS -- a Team 3 COMP5541-W10 Project Approach | 5,190 |
A complex pervasive system is typically composed of many cooperating \emph{nodes}, running on machines with different capabilities, and pervasively distributed across the environment. These systems pose several new challenges, such as the need for the nodes to manage themselves autonomously and dynamically in order to adapt to changes detected in the environment. To address the above issue, a number of autonomic frameworks have been proposed. These usually offer either predefined self-management policies or programmatic mechanisms for creating new policies at design time. From a more theoretical perspective, some works propose the adoption of prediction models as a way to anticipate the evolution of the system and to make timely decisions. In this context, our aim is to experiment with the integration of prediction models within a specific autonomic framework in order to assess the feasibility of such integration in a setting where the characteristics of dynamicity, decentralization, and cooperation among nodes are important. We extend an existing infrastructure called \emph{SelfLets} in order to make it ready to host various prediction models that can be dynamically plugged and unplugged in the various component nodes, thus enabling a wide range of predictions to be performed. Also, we show in a simple example how the system works when adopting a specific prediction model from the literature. | Incorporating prediction models in the SelfLet framework: a plugin
approach | 5,191 |
The success of several constraint-based modeling languages such as OPL, ZINC, or COMET calls for better software engineering practices, particularly in the testing phase. This paper introduces a testing framework enabling automated test case generation for constraint programming. We propose a general framework of constraint program development which supposes that a first declarative and simple constraint model is available from the analysis of the problem specifications. This model is then refined using classical techniques such as constraint reformulation, surrogate and global constraint addition, or symmetry-breaking, to form an improved constraint model that must be thoroughly tested before being used to address real-sized problems. We think that most of the faults are introduced in this refinement step and propose a process which takes the first declarative model as an oracle for detecting non-conformities. We derive practical test purposes from this process to automatically generate test data that exhibit non-conformities. We implemented this approach in a new tool called CPTEST that was used to automatically detect non-conformities in two classical benchmark programs, namely the Golomb rulers and the car-sequencing problem. | On Testing Constraint Programs | 5,192
Software development effort estimation is one of the major activities in software project management. A number of models have been proposed to construct a relationship between software size and effort; however, we still face problems in effort estimation. This is because project data available in the initial stages of a project are often incomplete, inconsistent, uncertain and unclear. The need for accurate effort estimation in the software industry is still a challenge. Artificial neural network models are more suitable in such situations. The present paper is concerned with developing software effort estimation models based on artificial neural networks, designed to improve performance and to suit the COCOMO model. The neural network models are created using radial basis and generalized regression networks. A case study based on the COCOMO81 database compares the proposed neural network models with intermediate COCOMO. The results were analyzed using five different criteria: MMRE, MARE, VARE, mean BRE and prediction. It is observed that the radial basis neural network provided better results. | Software Effort Estimation using Radial Basis and Generalized Regression
Neural Networks | 5,193 |
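As background on the two network types named in the title, a radial basis function network predicts effort for a project feature vector $x$ as a weighted sum of Gaussian kernels centred on prototypes $c_i$:

$$ \hat{y}(x) = \sum_{i=1}^{N} w_i \exp\!\left( -\frac{\lVert x - c_i \rVert^2}{2\sigma_i^2} \right), $$

while a generalized regression network normalizes the kernel activations by their sum, weighting the training targets in Nadaraya-Watson fashion; the concrete choices of centres and widths are the paper's design decisions and are not reproduced here.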
Agile Methods are designed for customization; they offer an organization or a team the flexibility to adopt a set of principles and practices based on their culture and values. While that flexibility is consistent with the agile philosophy, it can lead to the adoption of principles and practices that can be sub-optimal relative to the desired objectives. We question then, how can one determine if adopted practices are "in sync" with the identified principles, and to what extent those principles support organizational objectives? In this research, we focus on assessing the "goodness" of an agile method adopted by an organization based on (1) its adequacy, (2) the capability of the organization to provide the supporting environment to competently implement the method, and (3) its effectiveness. To guide our assessment, we propose the Objectives, Principles and Practices (OPP) framework. The design of the OPP framework revolves around the identification of the agile objectives, principles that support the achievement of those objectives, and practices that reflect the "spirit" of those principles. Well-defined linkages between the objectives and principles, and between the principles and practices are also established to support the assessment process. We traverse these linkages in a top-down fashion to assess adequacy and a bottom-up fashion to assess capability and effectiveness. This is a work-in-progress paper, outlining our proposed research, preliminary results and future directions. | A Structured Framework for Assessing the "Goodness" of Agile Methods | 5,194 |
As software systems continue to grow in size and complexity, system design becomes extremely important for software production, and software architecture consequently plays a significant role in software development. It serves as an evaluation and implementation plan for software development and software evaluation. Choosing the correct architecture is therefore a critical issue in the software engineering domain. Moreover, software architecture selection is a multicriteria decision-making problem in which different goals and objectives must be taken into consideration. In this paper, more precise and suitable decisions in the selection of architecture styles are supported by using Analytic Network Process (ANP) inference to assist the decisions of software architects, in order to exploit the properties of styles in the best way and optimize the design of the software architecture. | Selection of Architecture Styles using Analytic Network Process for the
Optimization of Software Architecture | 5,195 |
Today, reusable components are available in several repositories. These are certainly conceived for reuse; however, this reuse is not immediate. It requires, in fact, passing through some essential conceptual operations, in particular search, integration, adaptation, and composition. In the present work we are interested in the problem of semantic integration of heterogeneous business components. This problem is often put in syntactic terms, while the real stake is semantic. Our contribution concerns a model proposal for business component integration as well as a resolution method for the semantic naming conflicts met during the integration of business components. | A model for semantic integration of business components | 5,196
Today, reusable components are available in several repositories. These are certainly conceived for reuse; however, this reuse is not immediate. It requires, in effect, passing through some essential conceptual operations, in particular search, integration, adaptation, and composition. In the present work we are interested in the problem of semantic integration of heterogeneous business components. This problem is often put in syntactic terms, while the real stake is semantic. Our contribution concerns an architecture proposal for business component integration and a resolution method for the semantic naming conflicts met during the integration of business components. | Towards an architecture for semantic integration of business components | 5,197
Behavior Driven Development (NORTH, 2006) is a specification technique that is growing in acceptance in the Agile methods communities. BDD allows one to reliably verify that all functional requirements were treated properly by the source code, by connecting the textual description of these requirements to tests. On the other side, Enterprise Information Systems (EIS) researchers and practitioners defend the use of Business Process Modeling (BPM) to model the system's underlying business process before defining any part of the system. Therefore, it can be stated that, in the case of EIS, functional requirements are obtained by identifying use cases from the business process models. The aim of this paper is, in a narrower perspective, to propose the use of Finite State Machines (FSM) to model business processes and then connect them to the BDD machinery, thus driving better quality for EIS. In a broader perspective, this article aims to provoke a discussion on the mapping of the various BPM notations to BDD, since there is no real standard for business process modeling (Moller et al., 2007). Firstly, a historical perspective on the evolution of the previous proposals from which this one emerged is presented, followed by the reasons for changing from Model Driven Development (MDD) to BDD, also in a historical perspective. Finally, the proposal of using FSM, specifically UML Statechart diagrams, is presented, followed by some conclusions. | Filling the Gap between Business Process Modeling and Behavior Driven
Development | 5,198 |
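To make the FSM-to-BDD connection above concrete, here is a minimal Java sketch of a business process modeled as a finite state machine; the process, state and event names are invented for illustration and are not taken from the paper:

```java
// Hypothetical order process as a finite state machine driven by events.
import java.util.Map;

public class OrderFsm {
    enum State { CREATED, PAID, SHIPPED, CLOSED }
    enum Event { PAY, SHIP, DELIVER }

    // Transition table: (current state, event) -> next state.
    private static final Map<State, Map<Event, State>> TRANSITIONS = Map.of(
        State.CREATED, Map.of(Event.PAY, State.PAID),
        State.PAID,    Map.of(Event.SHIP, State.SHIPPED),
        State.SHIPPED, Map.of(Event.DELIVER, State.CLOSED)
    );

    private State current = State.CREATED;

    // A BDD "When" step (e.g. "When the customer pays") fires an event;
    // illegal events surface immediately as failing scenarios.
    public void fire(Event e) {
        State next = TRANSITIONS.getOrDefault(current, Map.of()).get(e);
        if (next == null)
            throw new IllegalStateException(e + " not allowed in " + current);
        current = next;
    }

    public State getState() { return current; }
}
```

A Given/When/Then scenario maps naturally onto this: Given constructs the machine in its start state, When fires events, and Then asserts on getState().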
Because of the importance of object-oriented methodologies, research into developing new measures for object-oriented system development is receiving increased focus. Most metrics need to identify the interactions between objects and modules, which makes coupling an influential software measure attracting software developers, designers and researchers. In this paper new interactions are defined for object-oriented systems, and using these interactions a parser is developed to analyze the existing architecture of the software. Within the design model, it is necessary for design classes to collaborate with one another. However, collaboration should be kept to an acceptable minimum, i.e., better design practice will introduce low coupling. If a design model is highly coupled, the system is difficult to implement, to test and to maintain over time. When enhancing software, we need to introduce or remove modules, and in that case coupling is the most important factor to consider, because unnecessary coupling may make the system unstable and may reduce its performance. Low coupling is thus thought to be a desirable goal in software construction, leading to better values for external software qualities such as maintainability, reusability and so on. To test this hypothesis, a good measure of class coupling is needed. In this paper, based on the developed tool called Design Analyzer, we propose a methodology to reuse an existing system, with the objective of enhancing an existing object-oriented system while keeping the coupling as low as possible. | A Parsing Scheme for Finding the Design Pattern and Reducing the
Development Cost of Reusable Object Oriented Software | 5,199 |