text | source | __index_level_0__
|---|---|---|
This paper presents an overview of S2AEA (v2) (Strategic Alignment Assessment based on Enterprise Architecture, version 2), a platform for modelling enterprise architecture and for assessing strategic alignment based on internal enterprise architecture metrics. The platform builds on the fact that enterprise architecture provides a structure for business processes and the information systems that support them; this structure can be used to measure the degree of consistency between business strategies and information systems. In that sense, this paper presents a platform illustrating the role of enterprise architecture in strategic alignment assessment, which can in turn be used in auditing information systems. The platform is applied to assess an e-government process. | Platform for Assessing Strategic Alignment Using Enterprise
Architecture: Application to E-Government Process Assessment | 5,300 |
Over two decades, we and other research groups have found that ethnographic and social analyses of work settings can provide insights useful to the process of system analysis and design. Despite this, such analyses have not been widely assimilated into industry practice. Practitioners tend to address sociotechnical factors in an ad-hoc manner, often post-implementation, once system use or outcome has become problematic. In response, we have developed a lightweight qualitative approach that provides insights to ameliorate problematic system deployments. Unlike typical ethnographies and social analyses of work activity that inform systems analysis and design, we argue that analysis of intentional and structural factors to inform system deployment and integration can have a shorter duration and yet provide actionable insights. We evaluate our approach using a case study of a problematic enterprise document management system within a multinational systems engineering organization. Our findings are of academic and practical significance, as our approach demonstrates that structural-intentional analysis scales to enable the timely analysis of large-scale system deployments. | Expectations and Reality: Why an enterprise software system didn't work
as planned | 5,301 |
Society is challenging systems engineers by demanding ever more complex and integrated systems. With the rise of cloud computing and systems-of-systems (including cyber-physical systems) we are entering an era where mission critical services and applications will be dependent upon 'coalitions-of-systems'. Coalitions-of-systems (CoS) are a class of system similar to systems-of-systems but they differ in that they interact to further overlapping self-interests rather than an overarching mission. Assessing the sociotechnical risks associated with CoS is an open research question of societal importance as existing risk analysis techniques typically focus on the technical aspects of systems and ignore risks associated with coalition partners reneging on responsibilities or leaving the coalition. We demonstrate that a responsibility modeling based risk analysis approach enables the identification of sociotechnical risks associated with CoS. The approach identifies hazards and associated risks that may arise when relying upon a coalition of human/organizational/technical agents to provision a service or application. Through a case study of a proposed cloud IT infrastructure migration we show how the technique identifies vulnerabilities that may arise because of human, organizational or technical agents failing to discharge responsibilities. | Responsibility Modeling for the Sociotechnical Risk Analysis of
Coalitions of Systems | 5,302 |
Interface adapters allow applications written for one interface to be reused with another interface without having to rewrite application code, and chaining interface adapters can significantly reduce the development effort required to create them. However, interface adapters will often be unable to convert interfaces perfectly, so there must be a way to analyze the loss in interface adapter chains in order to improve the quality of interface adaptation. This paper describes a probabilistic approach to analyzing loss in interface adapter chains, which models not only whether a method can be adapted but also how well it can be adapted. We also show that probabilistic optimal adapter chaining is an NP-complete problem, and we describe a greedy algorithm that constructs an optimal interface adapter chain, albeit in exponential time in the worst case. | Probabilistic Analysis of Loss in Interface Adapter Chaining | 5,303 |
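The loss analysis described in the abstract above can be illustrated with a small sketch. This is a hypothetical model, not the paper's actual formalism: each adapter is assumed to convert each method with an independent success probability, so a method's survival probability over a chain is the product of the per-adapter probabilities.

```python
# A sketch of probabilistic loss analysis for interface adapter chains.
# The adapter model and method names below are illustrative assumptions.

def chain_adaptation_prob(chain, method):
    """Probability that `method` survives every adapter in the chain.

    Each adapter is a dict mapping method name -> probability that the
    adapter converts it correctly; methods an adapter cannot convert are
    lost (probability 0). Adapters are assumed to fail independently,
    so the probabilities multiply along the chain.
    """
    p = 1.0
    for adapter in chain:
        p *= adapter.get(method, 0.0)
    return p

def expected_methods_adapted(chain, methods):
    """Expected number of target-interface methods usable after adaptation."""
    return sum(chain_adaptation_prob(chain, m) for m in methods)

a1 = {"play": 1.0, "stop": 0.9}                # adapter A -> B
a2 = {"play": 0.8, "stop": 1.0, "seek": 0.5}   # adapter B -> C

print(chain_adaptation_prob([a1, a2], "play"))                        # 0.8
print(round(expected_methods_adapted([a1, a2], ["play", "stop", "seek"]), 2))  # 1.7
```

An optimal chain would then be one maximizing this expected count over all candidate adapter sequences; the greedy variant picks the locally best next adapter at each step.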
The use of embedded software is growing very rapidly. Internet access is a necessary service with a large range of applications in many fields, and the Internet is built on the TCP/IP stack. Although TCP/IP is so important, there is no software engineering model describing it: the common way of describing TCP/IP is through RFCs, which is not sufficient for software engineers and developers. There is therefore a need for a software engineering approach to help engineers and developers customize their own web-based applications for embedded systems. This research presents a model-based systems engineering approach to a lightweight TCP/IP stack. The model contains the necessary phases for developing a lightweight TCP/IP for embedded systems and is based on SysML as a model-based systems engineering language. | Model based system engineering approach of a lightweight embedded TCP/IP | 5,304 |
It is known that IT projects are high-risk. To achieve project success, strategies to avoid and reduce risks must be designed meticulously and implemented accordingly. This paper presents methods for avoiding and reducing risks throughout the development of an information system, specifically an electronic payment system that handles tuition fees at universities in Indonesia. The university policies, regulations, and system models are designed in such a way as to address the project's key success factors. By implementing the proposed methods, the system has been successfully developed and is currently in operation. The research was conducted at Parahyangan Catholic University, Bandung, Indonesia. | The Development of Electronic Payment System for Universities in
Indonesia: On Resolving Key Success Factors | 5,305 |
Multi-criteria decision support systems are used in various fields of human activity. Every multi-criteria decision-making problem can be represented by a set of properties or constraints, which can be qualitative or quantitative. These properties are measured in different units, and different optimization techniques apply to them; depending on the desired goal, normalization aims at obtaining reference scales for the values of these properties. This paper deals with a new additive ratio assessment (ARAS) method. In order to make an appropriate decision and a proper comparison among the available alternatives, the Analytic Hierarchy Process (AHP) and ARAS are used together: AHP is used to analyse the structure of the project selection problem and to assign weights to the properties, while the ARAS method is used to obtain the final ranking and select the best project. To illustrate the methods, survey data on the expansion of optical fibre in the telecommunication sector is used. The decision maker can also use different weight combinations in the decision-making process, according to the demands of the system. | MCA Based Performance Evaluation of Project Selection | 5,306 |
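The ARAS ranking step described above can be sketched in a few lines. This is an illustrative implementation under simplifying assumptions (benefit-type criteria only, weights already derived, e.g. via AHP); the project data and function names are invented, not taken from the survey.

```python
# A minimal sketch of the ARAS (Additive Ratio Assessment) ranking step,
# assuming all criteria are benefit-type and weights come from AHP.

def aras_rank(matrix, weights):
    """matrix: rows = alternatives, cols = criteria (benefit criteria only).

    Returns utility degrees K_i = S_i / S_0, where S_0 is the weighted
    normalized score of the ideal alternative built from the best (max)
    value of each criterion.
    """
    ideal = [max(col) for col in zip(*matrix)]
    rows = [ideal] + matrix
    col_sums = [sum(col) for col in zip(*rows)]       # normalization divisors
    scores = [sum(w * v / s for v, s, w in zip(row, col_sums, weights))
              for row in rows]                        # weighted normalized sums
    s0 = scores[0]
    return [s / s0 for s in scores[1:]]

projects = [[7, 0.6, 9], [9, 0.8, 6], [8, 0.9, 8]]    # alternatives x criteria
weights = [0.5, 0.3, 0.2]                             # from AHP; sum to 1
ks = aras_rank(projects, weights)
best = max(range(len(ks)), key=ks.__getitem__)        # third project ranks best here
```

Each K_i lies in (0, 1]; the alternative with the highest utility degree is selected.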
Software development environments (IDEs) have not followed the IT industry's inexorable trend towards distribution. They do too little to address the problems raised by today's increasingly distributed projects; neither do they facilitate collaborative and interactive development practices. A consequence is the continued reliance of today's IDEs on paradigms such as traditional configuration management, which were developed for earlier modes of operation and hamper collaborative projects. This contribution describes a new paradigm: cloud-based development, which caters to the specific needs of distributed and collaborative projects. The CloudStudio IDE embodies this paradigm by enabling developers to work on a shared project repository. Configuration management becomes unobtrusive; it replaces the explicit update-modify-commit cycle by interactive editing and real-time conflict tracking and management. A case study involving three teams of pairs demonstrates the usability of CloudStudio and its advantages for collaborative software development over traditional configuration management practices. | Collaborative Software Development on the Web | 5,307 |
Software architecture defines an overview of a system in terms of its components and the relationships among them. Architectural design is very important in the development of large-scale software solutions and plays a very active role in achieving business goals and a quality, reusable solution. It is often difficult to choose the best software architecture for a system from the several candidate types available. In this paper we look at the several architectural types, compare them based on the key requirements of our system, and select the most appropriate architecture for the implementation of campus information systems at Fiji National University. Finally, we provide details of the proposed architecture and outline future plans for the implementation of our system. | Software Architecture for Fiji National University Campus Information
Systems | 5,308 |
Computing accurate WCET on modern complex architectures is a challenging task, and although the problem has received a lot of attention in the last decade, some issues remain open. First, the control flow graph (CFG) of a binary program is needed to compute the WCET, and this CFG is built using internal knowledge of the compiler that generated the binary code; moreover, once constructed, the CFG has to be manually annotated with loop bounds. Second, the algorithms used to compute the WCET (combining Abstract Interpretation and Integer Linear Programming) are tailored to specific architectures: changing the architecture (e.g. replacing an ARM7 with an ARM9) requires the design of a new ad hoc algorithm. Third, the tightness of the results computed by the available tools is not compared with actual execution times measured on the real hardware. In this paper we address these problems. We first describe a fully automatic method for computing a CFG based solely on the binary program to analyse. Second, we describe a model of the hardware as a product of timed automata, independent of the program description; the model of a program running on given hardware is obtained by synchronizing the automaton of the program with the timed automata of the hardware. Computing the WCET is thereby reduced to a reachability problem on the synchronized model, solved using the model checker UPPAAL. Finally, we present a rigorous methodology that enables us to compare our computed results with actual execution times measured on a real platform, the ARM920T. | Computation of WCET using Program Slicing and Real-Time Model-Checking | 5,309 |
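For intuition about the reduction described above, consider the degenerate case where per-block worst-case costs and loop bounds have already been resolved: WCET computation then becomes a longest-path search over an acyclic CFG. This is a deliberate simplification (the paper's actual approach synchronizes timed automata and queries UPPAAL); the block names and cycle costs below are hypothetical.

```python
import functools

# Simplified WCET as a longest-path problem on a DAG-shaped control flow
# graph, once each basic block has a fixed worst-case cost.

def wcet(cfg, cost, entry, exit_):
    """Longest-path cost from entry to exit_ in an acyclic CFG."""
    @functools.lru_cache(maxsize=None)
    def longest(node):
        if node == exit_:
            return cost[node]
        return cost[node] + max(longest(s) for s in cfg[node])
    return longest(entry)

# if/else diamond: the 'then' branch dominates the worst case
cfg = {"entry": ["then", "else"], "then": ["join"],
       "else": ["join"], "join": ["exit"]}
cost = {"entry": 2, "then": 10, "else": 4, "join": 3, "exit": 1}
print(wcet(cfg, cost, "entry", "exit"))  # 2 + 10 + 3 + 1 = 16
```

In the timed-automata formulation, the same bound is obtained as the supremum of a clock over all runs reaching the exit location, which is what the reachability query checks.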
Software is among the most complex endeavors of the human mind; large-scale systems can have tens of millions of lines of source code. However, complexity is seldom measured above the lowest level, that of the code itself, or sometimes of source code files and low-level modules. In this paper a hierarchical approach is explored in order to find a set of metrics that can measure higher levels of organization. These metrics are then applied to a few popular free software packages (totaling more than 25 million lines of code) to check their efficiency and coherency. | Hierarchical Complexity: Measures of High Level Modularity | 5,310 |
This paper examines the current system development processes of three major Turkish banks in terms of compliance to internationally accepted system development and software engineering standards to determine the common process problems of banks. After an in-depth investigation into system development and software engineering standards, related process-based standards were selected. Questions were then prepared covering the whole system development process by applying the classical Waterfall life cycle model. Each question is made up of guidance and suggestions from the international system development standards. To collect data, people from the information technology departments of three major banks in Turkey were interviewed. Results have been aggregated by examining the current process status of the three banks together. Problematic issues were identified using the international system development standards. | Standardization of information systems development processes and banking
industry adaptations | 5,311 |
If a modeling task is distributed, it will frequently be necessary to integrate models developed by different team members. Problems occur in the model integration step, particularly in the comparison phase of the integration. This issue has been discussed in several domains and for various kinds of models; however, previous approaches have not handled semantic comparison correctly. In this paper, we provide an MDA-based approach for comparing UML models. We develop a hybrid approach that takes syntactic, semantic, and structural comparison aspects into account, using a domain ontology as well as other resources such as dictionaries. We propose a decision support system that lets the user validate (or reject) the correspondences extracted in the comparison phase. For the implementation, we propose an extension of the generic correspondence metamodel AMW in order to transform UML models into the correspondence model. | MDA based-approach for UML Models Complete Comparison | 5,312 |
Software testing is an important and valuable part of the software development life cycle. Due to time, cost, and other circumstances, exhaustive testing is not feasible, which is why there is a need to automate the software testing process. Testing effectiveness can be achieved through State Transition Testing (STT), which is commonly used for real-time, embedded, and web-based software systems. The aim of this paper is to present an algorithm, based on an ant colony optimization technique, for the generation of optimal and minimal test sequences from a behaviour specification of the software. The approach generates test sequences that achieve complete software coverage. The paper also compares two metaheuristic techniques (Genetic Algorithm and Ant Colony Optimization) for transition-based testing. | Automated Software Testing Using Metaheuristic Technique Based on An Ant
Colony Optimization | 5,313 |
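An ant-colony search for transition-covering test sequences, as described in the abstract above, might be sketched as follows. This is a toy illustration, not the paper's algorithm: the state machine, parameters, and reinforcement rule are invented, and standard ACO details (heuristic visibility, elitism) are simplified away.

```python
import random

# Toy ant-colony search for a short walk covering every transition of a
# state machine. All names and parameters below are illustrative.

def aco_test_sequence(graph, start, n_ants=30, n_iter=40, evap=0.5, seed=1):
    rng = random.Random(seed)
    edges = [(s, t) for s, ts in graph.items() for t in ts]
    tau = {e: 1.0 for e in edges}          # pheromone per transition
    best = None
    for _ in range(n_iter):
        for _ in range(n_ants):
            node, walk = start, []
            for _ in range(3 * len(edges)):            # bounded walk length
                succs = graph.get(node, [])
                if not succs:
                    break
                w = [tau[(node, t)] for t in succs]    # pheromone-weighted choice
                nxt = rng.choices(succs, weights=w)[0]
                walk.append((node, nxt))
                node = nxt
                if len(set(walk)) == len(edges):       # full transition coverage
                    break
            if len(set(walk)) == len(edges) and (best is None or len(walk) < len(best)):
                best = walk
        for e in tau:                                  # evaporation...
            tau[e] *= (1 - evap)
        if best:
            for e in set(best):                        # ...then reinforcement
                tau[e] += 1.0 / len(best)
    return best

fsm = {"idle": ["run"], "run": ["pause", "idle"], "pause": ["run"]}
seq = aco_test_sequence(fsm, "idle")   # a covering walk such as idle->run->pause->run->idle
```

Pheromone on edges of short covering walks grows over iterations, biasing later ants toward minimal covering sequences.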
This paper offers a natural stochastic semantics of Networks of Priced Timed Automata (NPTA) based on races between components. The semantics provides the basis for satisfaction of probabilistic Weighted CTL properties (PWCTL), conservatively extending the classical satisfaction of timed automata with respect to TCTL. In particular the extension allows for hard real-time properties of timed automata expressible in TCTL to be refined by performance properties, e.g. in terms of probabilistic guarantees of time- and cost-bounded properties. A second contribution of the paper is the application of Statistical Model Checking (SMC) to efficiently estimate the correctness of non-nested PWCTL model checking problems with a desired level of confidence, based on a number of independent runs of the NPTA. In addition to applying classical SMC algorithms, we also offer an extension that allows to efficiently compare performance properties of NPTAs in a parametric setting. The third contribution is an efficient tool implementation of our result and applications to several case studies. | Stochastic Semantics and Statistical Model Checking for Networks of
Priced Timed Automata | 5,314 |
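The statistical model checking idea above, estimating the probability of a property from independent runs with a prescribed confidence, can be sketched in miniature. The "model" here is a trivial stand-in (a race between two exponential delays), not an NPTA; the Chernoff-Hoeffding bound is used, as in classical SMC, to fix the number of runs for a desired precision.

```python
import math
import random

# Monte Carlo estimation of P(property) with a Chernoff-Hoeffding run count.
# The simulated "system" below is an illustrative stand-in, not an NPTA.

def smc_estimate(run_satisfies, eps=0.02, delta=0.01, seed=7):
    """Return p_hat such that P(|p_hat - p| > eps) < delta."""
    n = math.ceil(math.log(2 / delta) / (2 * eps ** 2))   # required runs
    rng = random.Random(seed)
    return sum(run_satisfies(rng) for _ in range(n)) / n

def run(rng, bound=1.0):
    # race between two components with exponential delays;
    # property: component A wins the race within time `bound`
    a, b = rng.expovariate(2.0), rng.expovariate(1.0)
    return a < b and a <= bound

p_hat = smc_estimate(run)   # close to the analytic value (2/3)(1 - e^-3) ~ 0.63
```

Comparing two systems in a parametric setting, as the paper's extension does, amounts to running such estimations for both and testing which estimate is significantly larger.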
We propose a mechanism for the vertical refinement of bigraphical reactive systems, based upon a mechanism for limiting observations and utilising the underlying categorical structure of bigraphs. We present a motivating example to demonstrate that the proposed notion of refinement is sensible with respect to the theory of bigraphical reactive systems, and we propose a sufficient condition for guaranteeing the existence of a safety-preserving vertical refinement. We postulate the existence of a complementary notion of horizontal refinement for bigraphical agents, and finally we discuss the connection of this work to the general refinement of Reeves and Streader. | Bigraphical Refinement | 5,315 |
Before we combine actions and probabilities, two obvious questions should be asked. Firstly, what does "the probability of an action" mean? Secondly, how does probability interact with nondeterminism? Neither question has a single universally agreed-upon answer, but by considering them at the outset we build a novel and, we hope, intuitive probabilistic event-based formalism. In previous work we characterised refinement via the notion of testing: basically, if one system passes all the tests that another system passes (and maybe more), we say the first system is a refinement of the second. This is, in our view, an important way of characterising refinement, as it answers the question "what sort of refinement should I be using?" We use testing in this paper as the basis for our refinement. We develop tests for probabilistic systems by analogy with the tests developed for non-probabilistic systems, and we make sure that our probabilistic tests, when performed on non-probabilistic automata, give refinement relations that agree with those for non-probabilistic automata. We formalise this property as a vertical refinement. | Refinement for Probabilistic Systems with Nondeterminism | 5,316 |
Formally capturing the transition from a continuous model to a discrete model is investigated using model-based refinement techniques. A very simple model of stopping (e.g. of a train) is developed in both the continuous and discrete domains. The difference between the two is quantified using generic results from ODE theory, and these estimates can be compared with the exact solutions. Such results do not fit well into a conventional model-based refinement framework; however, they can be accommodated in a model-based retrenchment. The retrenchment is described, and the way it can interface with refinement developments on both the continuous and discrete sides is outlined. The approach is compared with what can be achieved using hybrid systems techniques. | Formalising the Continuous/Discrete Modeling Step | 5,317 |
This paper reconsiders refinements which introduce actions on the concrete level which were not present at the abstract level. It draws a distinction between concrete actions which are "perspicuous" at the abstract level, and changes of granularity of actions between different levels of abstraction. The main contribution of this paper is in exploring the relation between these different methods of "action refinement", and the basic refinement relation that is used. In particular, it shows how the "refining skip" method is incompatible with failures-based refinement relations, and consequently some decisions in designing Event-B refinement are entangled. | Perspicuity and Granularity in Refinement | 5,318 |
Static program verifiers such as Spec#, Dafny, jStar, and VeriFast define the state of the art in automated functional verification techniques. The next open challenge is to make verification tools usable even by programmers not fluent in formal techniques. This paper presents AutoProof, a verification tool that translates Eiffel programs to Boogie and uses the Boogie verifier to prove them. In an effort to be usable with real programs, AutoProof fully supports several advanced object-oriented features, including polymorphism, inheritance, and function objects. AutoProof also adopts simple strategies to reduce the amount of annotation needed when verifying programs (e.g., frame conditions). The paper illustrates the main features of AutoProof's translation, including some whose implementation is under way, and demonstrates them with examples and a case study. | Verifying Eiffel Programs with Boogie | 5,319 |
Over the past years there has been quite a lot of activity in the algebraic community about using algebraic methods for providing support to model-driven software engineering. The aim of this workshop is to gather researchers working on the development and application of algebraic methods to provide rigorous support to model-based software engineering. The topics relevant to the workshop are all those related to the use of algebraic methods in software engineering, including but not limited to: formally specifying and verifying model-based software engineering concepts and related ones (MDE, UML, OCL, MOF, DSLs, ...); tool support for the above; integration of formal and informal methods; and theoretical frameworks (algebraic, rewriting-based, category theory-based, ...). The workshop's main goal is to examine, discuss, and relate the existing projects within the algebraic community that address common open-issues in model-driven software engineering. | Proceedings Second International Workshop on Algebraic Methods in
Model-based Software Engineering | 5,320 |
A formal definition of the semantics of a domain-specific language (DSL) is a key prerequisite for verifying the correctness of models specified in such a DSL and of transformations applied to those models. For this reason, we implemented a prototype of the semantics of a DSL for the specification of systems consisting of concurrent, communicating objects. Using this prototype, models specified in the DSL can be transformed into labeled transition systems (LTSs). Transforming models to LTSs allows us to apply existing visualization and verification tools to the models with little or no further effort. The prototype is implemented using the ASF+SDF Meta-Environment, an IDE for the algebraic specification language ASF+SDF, which offers efficient execution of the transformation as well as the ability to read models and produce LTSs without any additional pre- or post-processing. | Prototyping the Semantics of a DSL using ASF+SDF: Link to Formal
Verification of DSL Models | 5,321 |
We introduce a technique for reachability analysis of Time-Basic (TB) Petri nets, a powerful formalism for real-time systems where time constraints are expressed as intervals, representing possible transition firing times, whose bounds are functions of the marking's time description. The technique consists of building a symbolic reachability graph relying on a sort of time coverage, and it overcomes the limitations of the only available analyzer for TB nets, based in turn on a time-bounded inspection of a (possibly infinite) reachability tree. The graph construction algorithm has been automated by a tool-set, briefly described in the paper together with its main functionality and analysis capabilities. A running example is used throughout the paper to sketch the symbolic graph construction. A use case describing a small real system, from which the running example is excerpted, has been employed to benchmark the technique and the tool-set; the main outcomes of this test are also presented in the paper. Ongoing work towards integration with a model-checking engine is briefly discussed. | Reachability Analysis of Time Basic Petri Nets: a Time Coverage Approach | 5,322 |
When developing a safety-critical system it is essential to obtain an assessment of different design alternatives. In particular, an early safety assessment of the architectural design of a system is desirable. In spite of the plethora of available formal quantitative analysis methods it is still difficult for software and system architects to integrate these techniques into their every day work. This is mainly due to the lack of methods that can be directly applied to architecture level models, for instance given as UML diagrams. Also, it is necessary that the description methods used do not require a profound knowledge of formal methods. Our approach bridges this gap and improves the integration of quantitative safety analysis methods into the development process. All inputs of the analysis are specified at the level of a UML model. This model is then automatically translated into the analysis model, and the results of the analysis are consequently represented on the level of the UML model. Thus the analysis model and the formal methods used during the analysis are hidden from the user. We illustrate the usefulness of our approach using an industrial strength case study. | QuantUM: Quantitative Safety Analysis of UML Models | 5,323 |
This paper presents the Eclipse plug-ins for the Task Flow model in the Discovery Method. These plug-ins provide an IDE for the Task Algebra compiler and the model-checking tools. The Task Algebra is the formal representation for the Task Model and it is based on simple and compound tasks. The model-checking techniques were developed to validate Task Models represented in the algebra. | An IDE to Build and Check Task Flow Models | 5,324 |
The continued existence of any software industry depends on its capability to develop nearly zero-defect products, which is achievable through effective defect management. Inspection has proven to be one of the most promising defect management techniques. The introduction of metrics such as Depth of Inspection (DI, a process metric) and Inspection Performance Metric (IPM, a people metric) enables an appropriate measurement of the inspection technique. This article elucidates a mathematical approach to estimating the IPM value without depending on a shop-floor defect count every time. By applying a multiple linear regression model, a set of characteristic coefficients of the team is evaluated. These coefficients are calculated from empirical projects sampled from teams in product-based and service-based IT industries. A sample of three verification projects indicates a close match between the IPM values obtained from the defect count (IPMdc) and the IPM values obtained using the team coefficients in the mathematical model (IPMtc). The close agreement between the IPM values observed on site and those produced by our model supports the predictive capability of IPM through team coefficients. Having finalized the IPM value that a company should achieve for a project, it can tune the parameters influencing inspection to realize the desired quality level. Evaluation of team coefficients resolves several defect-associated issues related to management, stakeholders, outsourcing agents, and customers. In addition, the coefficient vector will further aid the strategies of PSP and TSP. | Estimation of Characteristics of a Software Team for Implementing
Effective Inspection Process through Inspection Performance Metric | 5,325 |
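The team-coefficient idea above can be illustrated as follows: IPM is modeled as a linear function of inspection parameters, and the team's characteristic coefficients are fitted from past projects by ordinary least squares. The parameter names (preparation rate, inspection rate) and the sample data are invented for illustration, not taken from the article.

```python
# Sketch: fit team coefficients b with (X^T X) b = X^T y (normal equations,
# solved by Gaussian elimination), then predict IPM without a defect count.
# Parameter names and data are illustrative assumptions.

def fit_team_coefficients(X, y):
    """OLS fit; each row of X is [1, param1, param2, ...] (leading intercept)."""
    n = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(n)]
    for i in range(n):                     # forward elimination
        for j in range(i + 1, n):
            f = A[j][i] / A[i][i]
            A[j] = [a - f * ai for a, ai in zip(A[j], A[i])]
            b[j] -= f * b[i]
    coef = [0.0] * n
    for i in reversed(range(n)):           # back-substitution
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, n))) / A[i][i]
    return coef

def predict_ipm(coef, params):
    return coef[0] + sum(c * p for c, p in zip(coef[1:], params))

# past projects: [1, prep_rate, insp_rate] -> IPM measured from defect counts
X = [[1, 2.0, 3.0], [1, 3.0, 2.5], [1, 4.0, 4.0], [1, 5.0, 3.5]]
y = [10.0, 11.5, 15.0, 16.5]
coef = fit_team_coefficients(X, y)   # ~ [3.0, 2.0, 1.0] on this exactly linear sample
```

Once the coefficients are known, the predicted IPMtc for a new project follows from its inspection parameters alone, with no on-floor defect counting.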
The increasing complexity of software engineering requires effective methods and tools to support requirements analysts' activities. While much of a company's knowledge can be found in text repositories, current content management systems have limited capabilities for structuring and interpreting documents. In this context, we propose a tool for transforming text documents describing users' requirements into a UML model. The tool uses Natural Language Processing (NLP) and semantic rules to generate a UML class diagram. Its main contribution is to assist designers in the transition from a textual description of user requirements to UML diagrams, building on GATE (General Architecture for Text Engineering) and formulating the rules needed to generate new semantic annotations. | Semantic annotation of requirements for automatic UML class diagram
generation | 5,326 |
In this paper we propose an extension of the Rebeca language that can be used to model distributed and asynchronous systems with timing constraints. We provide the formal semantics of the language using Structural Operational Semantics and show its expressiveness by means of examples. We developed a tool for automated translation from Timed Rebeca to the Erlang language, which provides a first implementation of Timed Rebeca. We can use the tool to set the parameters of Timed Rebeca models, which represent the environment and component variables, and use McErlang to run multiple simulations for different settings. Timed Rebeca restricts the modeller to a pure asynchronous actor-based paradigm, where the structure of the model represents the service-oriented architecture, while the computational model matches the network infrastructure. Simulation is shown to be an effective analysis support, especially where model checking faces almost immediate state explosion in an asynchronous setting. | Modelling and Simulation of Asynchronous Real-Time Systems using Timed
Rebeca | 5,327 |
Agile methods provide an organization or a team the flexibility to adopt a selected subset of principles and practices based on their culture, their values, and the types of systems that they develop. More specifically, every organization or team implements a customized agile method, tailored to better accommodate its needs. However, the extent to which a customized method supports the organizational objectives, or rather the 'goodness' of that method is questionable. Existing agile assessment approaches focus on a comparative analysis, or are limited in scope and application. In this research, we propose a structured, systematic and comprehensive approach to assess the 'goodness' of agile methods. We examine an agile method based on (1) its adequacy, (2) the capability of the organization to support the adopted principles and practices specified by the method, and (3) the method's effectiveness. We propose the Objectives, Principles and Practices (OPP) Framework to guide our assessment. The Framework identifies (1) objectives of the agile philosophy, (2) principles that support the objectives, (3) practices that are reflective of the principles, (4) the linkages between the objectives, principles and practices, and (5) indicators for each practice to assess the effectiveness of the practice and the extent to which the organization supports its implementation. In this document, we discuss our solution approach, preliminary results, and future work. | A Methodology for assessing Agile Software Development Approaches | 5,328 |
This paper presents an investigation of the notion of reaction time in some synchronous systems. A state-based description of such systems is given, and the reaction time of such systems under some classic composition primitives is studied. Reaction time is shown to be non-compositional in general. Possible solutions are proposed, and applications to verification are discussed. This framework is illustrated by some examples issued from studies on real-time embedded systems. | On the reaction time of some synchronous systems | 5,329 |
Automated random testing has shown to be an effective approach to finding faults but still faces a major unsolved issue: how to generate test inputs diverse enough to find many faults and find them quickly. Stateful testing, the automated testing technique introduced in this article, generates new test cases that improve an existing test suite. The generated test cases are designed to violate the dynamically inferred contracts (invariants) characterizing the existing test suite. As a consequence, they are in a good position to detect new errors, and also to improve the accuracy of the inferred contracts by discovering those that are unsound. Experiments on 13 data structure classes totalling over 28,000 lines of code demonstrate the effectiveness of stateful testing in improving over the results of long sessions of random testing: stateful testing found 68.4% new errors and improved the accuracy of automatically inferred contracts to over 99%, with just a 7% time overhead. | Stateful Testing: Finding More Errors in Code and Contracts | 5,330 |
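The contract inference and violation-directed generation described above might be sketched, in a much simplified form, as follows. The interval "contract" and the random search loop are invented stand-ins for the dynamically inferred contracts the article relies on; a real tool infers far richer invariants over object state.

```python
import random

# Toy stateful-testing loop: infer an interval "contract" for f from an
# existing test suite, then search for inputs whose outputs violate it.
# Such violations expose unsound inferred contracts and new behaviours.

def infer_interval(values):
    return (min(values), max(values))

def stateful_test(f, seed_inputs, rng, trials=200):
    lo, hi = infer_interval([f(x) for x in seed_inputs])
    violations = []
    for _ in range(trials):
        x = rng.randint(-1000, 1000)
        y = f(x)
        if not (lo <= y <= hi):          # contract violated: keep as new test
            violations.append((x, y))
    return (lo, hi), violations

f = lambda x: x * x
contract, bad = stateful_test(f, seed_inputs=[0, 1, 2, 3], rng=random.Random(0))
# contract inferred from the seeds is (0, 9); any |x| > 3 violates it
```

Each violation both yields a new test case and refines the contract, mirroring the feedback loop between testing and inference in the article.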
In this article, we examine how security applies to Service Oriented Architecture (SOA). Before we discuss security for SOA, let's take a step back and examine what SOA is. SOA is an architectural approach in which applications are exposed as "services". Originally, services in SOA were associated with a stack of technologies that included SOAP, WSDL, and UDDI. This article addresses the defects of traditional enterprise application integration by combining service-oriented architecture and web service technology. Application integration is then simplified to the development and integration of services, to tackle the connectivity of heterogeneous enterprise application integration, security, loose coupling between systems, and process refactoring and optimization. | Security Model For Service-Oriented Architecture | 5,331
Program understanding is an important aspect of Software Maintenance and Reengineering. Understanding a program involves its execution behaviour and the relationships of the variables involved in the program. The set of all statements in a program that directly or indirectly influence the value of a variable at some occurrence of that variable is called a program slice. Program slicing is a technique for extracting parts of computer programs by tracing the programs' control and data flow related to some data item. This technique is applicable in various areas, such as debugging, program comprehension and understanding, program integration, cohesion measurement, re-engineering, maintenance, and testing, where it is useful to be able to focus on relevant parts of large programs. This paper focuses on various slicing techniques (not limited to) such as static slicing, quasi-static slicing, dynamic slicing, and conditional slicing. This paper also covers various methods of performing slicing, such as forward slicing, backward slicing, syntactic slicing, and semantic slicing. The slicing of a program is carried out using Java, which is an object-oriented programming language. | Program slicing techniques and its applications | 5,332
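A minimal sketch of backward static slicing, following data dependences only (control dependences and the paper's Java setting are omitted); the `(line, defs, uses)` encoding of statements is an assumption made for illustration.

```python
def backward_slice(stmts, criterion):
    """Compute a simple backward static slice.
    stmts: list of (line, defs, uses) tuples in program order.
    criterion: (line, variable) whose influencing statements we want.
    Walks backwards, keeping statements that define a currently
    relevant variable and propagating their used variables."""
    line, var = criterion
    relevant, slice_lines = {var}, set()
    for ln, defs, uses in reversed(stmts[:line]):
        if defs & relevant:
            slice_lines.add(ln)
            relevant = (relevant - defs) | uses
    return sorted(slice_lines)

# Toy program:
# 1: a = 1    2: b = 2    3: c = a + b    4: d = 5    5: print(c)
program = [
    (1, {"a"}, set()),
    (2, {"b"}, set()),
    (3, {"c"}, {"a", "b"}),
    (4, {"d"}, set()),
    (5, set(), {"c"}),
]
print(backward_slice(program, (5, "c")))  # -> [1, 2, 3]; line 4 is sliced away
```

Statement 4 does not influence `c` at line 5, so it is excluded: exactly the "focus on relevant parts" property that makes slicing useful for debugging and comprehension.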
Requirements are found to change in various ways during the course of a project. This can affect the process in widely different manners and to different extents. Here we present a case study wherein we investigate the impact of requirement volatility patterns on project performance. The project setting described in the case is emulated on a validated system dynamics model representing the waterfall model. The findings indicate deviations in project outcome from the estimated, thereby corroborating previous findings. The results reinforce the applicability of the system dynamics approach to analyzing project performance under requirement volatility, which is expected to speed up its adoption in organizations and in the process contribute to more project successes. | Impact of Software Requirement Volatility Pattern on Project Dynamics:
Evidences from a Case Study | 5,333 |
The software project development process requires accurate cost and schedule estimation to achieve its goals and succeed. Estimation is often referred to as an "intricate brainteaser" because of attributes impacted by complexity and uncertainty; generally, however, estimation is not as difficult or puzzling as people think. In fact, generating accurate estimates is straightforward once you understand the intensity of the uncertainty and the modules that contribute to the process. In everyday life, we improve our estimates based on past experience: which problem was solved by which method, under which conditions, and which opportunities allowed that method to produce better results. So, instead of unexplained treatises and inflexible modeling techniques, this guide highlights a proven set of procedures, understandable formulas, and heuristics that individuals and complete teams can apply to their projects to help achieve estimation ability and choose appropriate development approaches. In the early stages of the software life cycle, project managers are often unable to estimate effort, schedule, and cost, or to select a development approach. This, in turn, leads managers to bid ineffectively on software projects and choose incorrect development approaches, which directly affects the productivity cycle, increases the level of uncertainty, and becomes a strong cause of project failure. To avoid such problems, knowing the levels and sources of uncertainty in model design will guide developers to design accurate software cost and schedule estimates, which are required for software project success. This paper demonstrates the need for an uncertainty-analysis module in the modeling process, to help recognize modular uncertainty in the system development process and the role of uncertainty at different stages of modeling | Understanding need of "Uncertainty Analysis" in the system Design
process | 5,334 |
Service-based systems are software systems composed of autonomous components or services provided by different vendors, deployed on remote machines and accessible through the web. One of the challenges of modern software engineering is to ensure that such a system behaves as intended by its designer. The Reo coordination language is an extensible notation for formal modeling and execution of service compositions. Services that have no prior knowledge about each other communicate through advanced channel connectors which guarantee that each participant, service or client, receives the right data at the right time. Each channel is a binary relation that imposes synchronization and data constraints on input and output messages. Furthermore, channels are composed together to realize arbitrarily complex behavioral protocols. During this process, a designer may introduce errors into the connector model or the code for their execution, and thus affect the behavior of a composed service. In this paper, we present an approach for model-based testing of coordination protocols designed in Reo. Our approach is based on the input-output conformance (ioco) testing theory and exploits the mapping of automata-based semantic models for Reo to equivalent process algebra specifications. | Input-output Conformance Testing for Channel-based Service Connectors | 5,335 |
Current approaches for the discovery, specification, and provision of services ignore the relationship between the service contract and the conditions in which the service can guarantee its contract. Moreover, they do not use formal methods for specifying services, contracts, and compositions. Without a formal basis it is not possible to justify through formal verification the correctness conditions for service compositions and the satisfaction of contractual obligations in service provisions. We remedy this situation in this paper. We present a formal definition of services with context-dependent contracts. We define a composition theory of services with context-dependent contracts taking into consideration functional, nonfunctional, legal and contextual information. Finally, we present a formal verification approach that transforms the formal specification of service composition into extended timed automata that can be verified using the model checking tool UPPAAL. | Specification and Verification of Context-dependent Services | 5,336 |
PL for SOA proposes, formally, a software engineering methodology, development techniques, and support tools for the provision of service product lines. We propose rigorous modeling techniques for the specification and verification of formal notations and languages for service computing with support for variability. Through these cutting-edge technologies, increased levels of flexibility and adaptivity can be achieved. This will involve developing semantics of variability over behavioural models of services. Such tools will assist organizations to plan, optimize and control the quality of software service provision, both at design and at run time, by making it possible to develop flexible and cost-effective software systems that support high levels of reuse. We tackle this challenge on two levels: we use feature modeling from product line engineering and, from a services point of view, the orchestration language Orc. We introduce the Smart Grid as the service product line to which the techniques are applied. | Product Lines for Service Oriented Applications - PL for SOA | 5,337
Web applications are becoming more and more complex. Testing such applications is an intricate, hard, and time-consuming activity. Therefore, testing is often poorly performed or skipped by practitioners. Test automation can help to avoid this situation. Hence, this paper presents a novel approach to performing automated software testing for web applications based on their navigation. On the one hand, web navigation is the process of traversing a web application using a browser. On the other hand, functional requirements are actions that an application must perform. Therefore, the evaluation of the correct navigation of web applications results in the assessment of the specified functional requirements. The proposed method performs the automation on four levels: test case generation, test data derivation, test case execution, and test case reporting. This method is driven by three kinds of inputs: i) UML models; ii) Selenium scripts; iii) XML files. We have implemented our approach in an open-source testing framework named Automatic Testing Platform. The validation of this work has been carried out by means of a case study, in which the target is a real invoice management system developed using a model-driven approach. | Automated Functional Testing based on the Navigation of Web Applications | 5,338
This paper contributes to the solution of the problem of transforming a process model with an arbitrary topology into an equivalent structured process model. In particular, this paper addresses the subclass of process models that have no equivalent well-structured representation but which, nevertheless, can be partially structured into their maximally-structured representation. The structuring is performed under a behavioral equivalence notion that preserves observed concurrency of tasks in equivalent process models. The paper gives a full characterization of the subclass of acyclic process models that have no equivalent well-structured representation but do have an equivalent maximally-structured one, as well as proposes a complete structuring method. | Maximal Structuring of Acyclic Process Models | 5,339 |
E-commerce is one of the most important web applications. We present here a set of patterns that describe shopping carts, products, catalogue, customer accounts, shipping, and invoices. We combine them in the form of composite patterns, which in turn make up a domain model for business-to-consumer e-commerce. We also indicate how to add security constraints to this model. This domain model can be used as a computation-independent model from which specific applications can be produced using a model-driven architecture approach. | Patterns for Business-to-consumer E-Commerce Applications | 5,340 |
Bug localization in object-oriented programs has always been an important issue in software engineering. In this paper, I propose a source-level bug localization technique for object-oriented embedded programs. My proposed technique presents the idea of debugging an object-oriented program at the class level, incorporating object state information into the Class Dependence Graph (ClDG). Given a program (having a buggy statement) and an input that fails while others pass, my approach uses concrete as well as symbolic execution to synthesize passing inputs that differ marginally from the failing input in their control flow behavior. A comparison of the execution traces of the failing input and the passing input provides necessary clues to the root cause of the failure. A state trace difference over the respective nodes of the ClDG is obtained, which leads to detecting the bug in the program. | OSD: A Source Level Bug Localization Technique Incorporating Control
Flow and State Information in Object Oriented Program | 5,341 |
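The core comparison step, differencing a failing and a passing execution trace, can be sketched as follows; the trace representation and node names are hypothetical, and real ClDG nodes would carry object state information as well.

```python
def trace_difference(failing, passing):
    """Return the nodes executed only by the failing run, plus the index
    where the two control flows first diverge -- clues to the root cause."""
    only_failing = [n for n in failing if n not in set(passing)]
    diverge = next(
        (i for i, (f, p) in enumerate(zip(failing, passing)) if f != p),
        min(len(failing), len(passing)),
    )
    return only_failing, diverge

# Hypothetical execution traces over ClDG-like nodes:
failing_trace = ["entry", "read", "branch_true", "update_state", "crash"]
passing_trace = ["entry", "read", "branch_false", "write", "exit"]
suspicious, diverge_at = trace_difference(failing_trace, passing_trace)
```

Here the divergence at the branch node, plus the nodes unique to the failing run, narrows the search for the buggy statement to a handful of candidates.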
Demand for more advanced Web applications is the driving force behind Web browser evolution. Recent requirements for Rich Internet Applications, such as mashing-up data and background processing, are emphasizing the need for building and executing Web applications as a coordination of browser execution contexts. Since development of such Web applications depends on cross-context communication, many browser primitives and client-side frameworks have been developed to support this communication. In this paper we present a systematization of cross-context communication systems for Web browsers. Based on an analysis of previous research, requirements for modern Web applications and existing systems, we extract a framework for classifying cross-context communication systems. Using the framework, we evaluate the current ecosystem of cross-context communication and outline directions for future Web research and engineering. | A Classification Framework for Web Browser Cross-Context Communication | 5,342
This paper concerns state-based systems that interact with their environment at physically distributed interfaces, called ports. When such a system is used a projection of the global trace, called a local trace, is observed at each port. This leads to the environment having reduced observational power: the set of local traces observed need not uniquely define the global trace that occurred. We consider the previously defined implementation relation $\sqsubseteq_s$ and start by investigating the problem of defining a language ${\mathcal {\tilde L}} (M)$ for a multi-port finite state machine (FSM) $M$ such that $N \sqsubseteq_s M$ if and only if every global trace of $N$ is in ${\mathcal {\tilde L}} (M)$. The motivation is that if we can produce such a language ${\mathcal {\tilde L}} (M)$ then this can potentially be used to inform development and testing. We show that ${\mathcal {\tilde L}} (M)$ can be uniquely defined but need not be regular. We then prove that it is generally undecidable whether $N \sqsubseteq_s M$, a consequence of this result being that it is undecidable whether there is a test case that is capable of distinguishing two states or two multi-port FSM in distributed testing. This result complements a previous result that it is undecidable whether there is a test case that is guaranteed to distinguish two states or multi-port FSMs. We also give some conditions under which $N \sqsubseteq_s M$ is decidable. We then consider the implementation relation $\sqsubseteq_s^k$ that only concerns input sequences of length $k$ or less. Naturally, given FSMs $N$ and $M$ it is decidable whether $N \sqsubseteq_s^k M$ since only a finite set of traces is relevant. We prove that if we place bounds on $k$ and the number of ports then we can decide $N \sqsubseteq_s^k M$ in polynomial time but otherwise this problem is NP-hard. | Checking Finite State Machine Conformance when there are Distributed
Observations | 5,343 |
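The reduced observational power described above can be made concrete: a local trace is the projection of the global trace onto one port, and two distinct global traces can be indistinguishable port by port (the event labels below are hypothetical).

```python
def project(global_trace, port):
    """Local trace at a port: the subsequence of events occurring there.
    A global trace is a list of (port, event) pairs."""
    return [event for p, event in global_trace if p == port]

# Two different global traces (interleavings of events at ports 1 and 2):
t1 = [(1, "x/a"), (2, "y/b")]
t2 = [(2, "y/b"), (1, "x/a")]

# Each port observes the same local trace in both cases, so a distributed
# tester cannot tell which global trace actually occurred.
indistinguishable = all(project(t1, p) == project(t2, p) for p in (1, 2))
```

This is exactly why the set of local traces need not uniquely define the global trace, which motivates implementation relations such as the paper's ⊑_s.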
In past years, ERP systems have become one of the main components of the corporate IT structure. Several problems exist around implementing and operating these systems within companies. In the literature one can find several studies about the problems arising during the implementation of an ERP system. The main problem areas concern the complexity of ERP systems. One vision to overcome some of these problems is federated ERP. Federated ERP systems are built of components from different vendors, which are distributed within a network. All components act as one single ERP system from the user perspective. The decreased complexity of such a system would require lower installation and maintenance costs. Additionally, only the components which are needed to cover the company's business processes would be used. Several theories around this concept exist, but a feasibility assessment of developing a federated ERP system has not been done yet. Based on a literature analysis of existing methods for feasibility studies, this paper applies strategic planning concepts and referential data from traditional ERP development to provide a first assessment of the overall feasibility of developing a platform for federated ERP systems. An analytical hierarchical approach is used to define effort- and effect-related criteria and their domain values. The assessment of the criteria is done in comparison to the development of a classical ERP system. Using the developed criteria, a net present value calculation is done. The calculation of the net present value is done on an overall, not company-specific, level. In order to estimate the weighted average cost of capital, the values from successful software companies are used as a baseline. Additionally, potential risks and obstacles are identified for further clarification. | Assessing the Feasibility of Developing a Federated ERP System | 5,344
Software has gained immense importance in our everyday life and is handling each and every aspect of today's technological world. The idea of software was at the initial phase implemented by a very precise minority of individuals, and now it is everywhere, whether in one's personal life or in an organization. Financially strong organizations and people who can purchase this bounty of the technological era can fulfill their desires efficiently. It is surely not the general case that one is financially strong and can easily afford the desired software; there are numerous users who cannot do so. Open source software has a way out for these users: it provides them the same facilities and functionalities as their equivalent software, irrespective of any financial pressure. So financially constrained persons or organizations can make use of open source software for the achievement of their desired tasks. In this research paper, an analysis of open source software is presented by providing a brief comparison of Ubuntu, an emerging high-quality open source modern operating system, with the well-known Microsoft Windows operating system | Critical Aspects of Modern Open Source Software Technology to Support
Emerging Demands | 5,345 |
In component-based software development, the selection step is very important. It consists of searching for and selecting appropriate software components from a set of candidate components in order to satisfy developer-specific requirements. In the selection process, both functional and non-functional requirements are generally considered. In this paper, we focus only on QoS, a subset of non-functional characteristics, in order to determine the best components for selection. Component selection based on QoS is a hard task due to the heterogeneity of QoS descriptions. Thus, we propose a QoS ontology which provides a formal, common, and explicit description of software components' QoS. We use this ontology in order to semantically select relevant components based on the QoS specified by the developer. Our selection process is performed in two steps: (1) a QoS matching process that uses the relations between QoS concepts to pre-select candidate components, matching each candidate component against the developer's request, and (2) a component ranking process that uses the QoS values to determine the best components for selection from the pre-selected components. The algorithms for QoS matching and component ranking are then presented and experimented with in the domain of multimedia components. | A qos ontology-based component selection | 5,346
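Setting the ontology reasoning itself aside, the two-step structure of the selection process (matching, then ranking) might look like this; the property names, weights, and candidate data below are hypothetical and purely illustrative.

```python
def select_components(request, candidates):
    """Two-step selection sketch.
    Step 1 (matching): keep candidates whose QoS description covers every
    requested property; an ontology would additionally match synonymous or
    related concepts, which this plain name check cannot.
    Step 2 (ranking): order survivors by a weighted score over the
    requested QoS values."""
    matched = [c for c in candidates if set(request).issubset(c["qos"])]
    def score(c):
        return sum(c["qos"][prop] * weight for prop, weight in request.items())
    return sorted(matched, key=score, reverse=True)

request = {"throughput": 0.7, "availability": 0.3}   # property -> weight
candidates = [
    {"name": "A", "qos": {"throughput": 0.9, "availability": 0.8}},
    {"name": "B", "qos": {"throughput": 0.6, "availability": 0.99}},
    {"name": "C", "qos": {"availability": 0.9}},     # no throughput: filtered
]
ranked = select_components(request, candidates)
```

Candidate C is pre-filtered in the matching step, and A outranks B on the weighted score, mirroring the paper's pre-selection/ranking split.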
As hardware components are becoming cheaper and more powerful day by day, the services expected from modern software are increasing enormously. Developing such software has become extremely challenging. Not only the complexity, but also developing such software within the time constraints and budget, has become the real challenge. Quality and maintainability concerns add further flavour to the challenge. Moreover, the requirements of clients change so frequently that it has become extremely tough to manage these changes. More often than not, clients are unhappy with the end product. Large, complex software projects are notoriously late to market, often exhibit quality problems, and don't always deliver on promised functionality. None of the existing models are helpful in addressing the modern software crisis. Hence, a better modern software development process model to handle the present software crisis is badly needed. This paper suggests a new software development process model, BRIDGE, to tackle the present software crisis. | BRIDGE: A Model for Modern Software Development Process to Cater the
Present Software Crisis | 5,347 |
Software (SW) development is a labor-intensive activity. Modern software projects generally have to deal with producing and managing large and complex software products. Developing such software has become an extremely challenging job, not only because of inherent complexity, but mainly because of economic constraints, alongside time, quality, and maintainability concerns. Hence, developing modern software within budget still remains one of the main software crises. The most significant way to reduce software development cost is to use Computer-Aided Software Engineering (CASE) tools over the entire Software Development Life Cycle (SDLC) process as a substitute for expensive human labor. We think that automation of software development methods is a valuable support for software engineers in coping with this complexity and for improving quality too. This paper demonstrates the newly developed CASE tool named "SRS Builder 1.0" for software requirement specification, developed at our university laboratory, University of North Bengal, India. This paper discusses our newly developed product with its functionalities and usages. We believe the tool has the potential to play an important role in the software development process. | SRS BUILDER 1.0: An Upper Type CASE Tool For Requirement Specification | 5,348
Enterprise Architecture defines the overall form and function of systems across an enterprise, involving the stakeholders and providing a framework, standards and guidelines for project-specific architectures. Project-specific Architecture defines the form and function of the systems in a project or program, within the context of the enterprise as a whole, with broad scope and business alignments. Application-specific Architecture defines the form and function of the applications that will be developed to realize the functionality of the system, with narrow scope and technical alignments. Because of the magnitude and complexity of any enterprise integration project, a major engineering and operations planning effort must be accomplished prior to any actual integration work. As the needs and the requirements vary depending on their volume, the entire enterprise problem can be broken into chunks of manageable pieces. These pieces can be implemented and tested individually with high integration effort. Therefore it becomes essential to analyze the economic and technical feasibility of a realizable enterprise solution. It is difficult to migrate from one technological and business aspect to another as the enterprise evolves. The existing process models in system engineering emphasize life-cycle management and low-level activity coordination with milestone verification. Many organizations are developing enterprise architectures to provide a clear vision of how systems will support and enable their business. The paper proposes an approach for the selection of a suitable enterprise architecture depending on a measurement framework. The framework consists of a unique combination of higher-order goals, non-functional requirement support, and inputs-outcomes pair evaluation. Earlier efforts in this regard were concerned only with custom scales indicating the availability of a parameter in a range. | Comprehensive measurement framework for enterprise architectures | 5,349
Background: Aspect-oriented programming (AOP) is an emerging programming paradigm whose focus is about improving modularity, with an emphasis on the modularization of crosscutting concerns. Objective: The goal of this paper is to assess the extent to which an AOP language -ObjectTeams/Java (OT/J) -improves the modularity of a software system. This improvement has been claimed but, to the best of our knowledge, this paper is the first attempting to present quantitative evidence of it. Method: We compare functionally-equivalent implementations of the Gang-of-Four design patterns, developed in Java and OT/J, using software metrics. Results: The results of our comparison support the modularity improvement claims made in the literature. For six of the seven metrics used, the OT/J versions of the patterns obtained significantly better results. Limitations: This work uses a set of metrics originally defined for object-oriented (OO) systems. It may be the case that the metrics are biased, in that they were created in the context of OO programming (OOP), before the advent of AOP. We consider this comparison a stepping stone as, ultimately, we plan to assess the modularity improvements with paradigm independent metrics, which will conceivably eliminate the bias. Each individual example from the sample used in this paper is small. In future, we plan to replicate this experiment using larger systems, where the benefits of AOP may be more noticeable. Conclusion: This work contributes with evidence to fill gaps in the body of quantitative results supporting alleged benefits to software modularity brought by AOP languages, namely OT/J. | Evidence-Based Comparison of Modularity Support Between Java and Object
Teams | 5,350 |
Development and maintenance of Web applications is still a complex and error-prone process. We need integrated techniques and tool support for the automated generation of Web systems and a ready prescription for easy maintenance. The MDA approach proposes an architecture taking into account the development and maintenance of large and complex software. In this paper, we apply the MDA approach to generate a PSM, from a UML design to an MVC 2 Web implementation. To this end, we have developed two meta-models handling UML class diagrams and MVC 2 Web applications, and then set up transformation rules. These rules are expressed in the ATL language. To specify the transformation rules (especially CRUD methods) we used UML profiles. To clearly illustrate the result generated by this transformation, we converted the generated XMI file into an EMF (Eclipse Modeling Framework) model. | MDA-based ATL transformation to generate MVC 2 web models | 5,351
In this report, we present functional models for software and hardware components of Time-Triggered Systems on a Chip (TTSoC). These are modeled in the asynchronous component based language BIP. We demonstrate the usability of our components for simulation of software which is developed for the TTSoC. Our software comprises services and an application part. Our approach allows us to simulate and validate aspects of the software system at an early stage in the development process and without the need to have the TTSoC hardware at hand. | On the Simulation of Time-Triggered Systems on a Chip with BIP | 5,352 |
IT services provisioning is usually underpinned by service level agreements (SLAs), aimed at guaranteeing services quality. However, there is a gap between the customer perspective (business oriented) and that of the service provider (implementation oriented) that becomes more evident while defining and monitoring SLAs. This paper proposes a domain specific language (SLA Language for specificatiOn and Monitoring - SLALOM) to bridge the previous gap. The first step in SLALOM creation was factoring out common concepts, by composing the BPMN metamodel with that of the SLA life cycle, as described in ITIL. The derived metamodel expresses the SLALOM abstract syntax model. The second step was to write concrete syntaxes targeting different aims, such as SLA representation in process models. An example of SLALOM's concrete syntax model instantiation for an IT service supported by self-service financial terminals is presented. | SLALOM: a Language for SLA specification and monitoring | 5,353
Domain Specific Languages (DSLs) can contribute to increased productivity, while reducing the required maintenance and programming expertise. We hypothesize that Software Languages Engineering (SLE) developers consistently skip, or relax, Language Evaluation. Based on the experience of engineering other types of software products, we assume that this may potentially lead to the deployment of inadequate languages. The fact that the languages already deal with concepts from the problem domain, and not the solution domain, is not enough to validate several issues at stake, such as their expressiveness, usability, effectiveness, maintainability, or even the domain experts' productivity while using them. We present a systematic review of articles published in top-ranked venues, from 2001 to 2008, which report DSLs' construction, to characterize the common practice. This work confirms our initial hypothesis and lays the ground for the discussion on how to include a systematic approach to DSL evaluation in the SLE process. | Do Software Languages Engineers Evaluate their Languages? | 5,354
Objective: To present an overview on the current state of the art concerning metrics-based quality evaluation of software components and component assemblies. Method: Comparison of several approaches available in the literature, using a framework comprising several aspects, such as scope, intent, definition technique, and maturity. Results: The identification of common shortcomings of current approaches, such as ambiguity in definition, lack of adequacy of the specifying formalisms and insufficient validation of current quality models and metrics for software components. Conclusions: Quality evaluation of components and component-based infrastructures presents new challenges to the Experimental Software Engineering community. | An overview of metrics-based approaches to support software components
reusability assessment | 5,355 |
Accurate software cost and schedule estimation is essential for software project success. Often referred to as a "black art" because of its complexity and uncertainty, software estimation is not as difficult or puzzling as people think. In fact, generating accurate estimates is straightforward once you understand the intensity of the uncertainty and a framework for the modeling process. The key to successful software estimation is distilling academic information and real-world experience into a practical guide for working software professionals. Instead of arcane treatises and rigid modeling techniques, this guide highlights a proven set of procedures, understandable formulas, and heuristics that individuals and development teams can apply to their projects to help achieve estimation proficiency and choose appropriate development approaches. In the early stages of the software life cycle, project managers are often unable to estimate effort, schedule, and cost, or to select a development approach. This, in turn, leads managers to bid ineffectively on software projects and choose incorrect development approaches, which directly affects the productivity cycle, increases the level of uncertainty, and becomes a strong cause of project failure. To avoid such problems, knowing the levels and sources of uncertainty in model design will guide developers to design accurate software cost and schedule estimates, which are essential for software project success. However, once the required effort has been estimated, little is done to recalibrate and reduce the uncertainty of the initial estimates. | Enhance accuracy in Software cost and schedule estimation by using
"Uncertainty Analysis and Assessment" in the system modeling process | 5,356 |
Inter-package conflicts require the presence of two or more packages in a particular configuration, and thus tend to be harder to detect and localize than conventional (intra-package) defects. Hundreds of such inter-package conflicts go undetected by the normal testing and distribution process until they are later reported by a user. The reason for this is that current meta-data is not fine-grained and accurate enough to cover all common types of conflicts. A case study of inter-package conflicts in Debian has shown that with more detailed package meta-data, at least one third of all package conflicts could be prevented relatively easily, while another one third could be found by targeted testing of packages that share common resources or characteristics. This paper reports the case study and proposes ideas to detect inter-package conflicts in the future. | Sources of Inter-package Conflicts in Debian | 5,357 |
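The "packages that share common resources" heuristic mentioned above can be sketched with a small file-overlap check. The function name and package data below are invented for illustration; real Debian meta-data would also exclude pairs that declare Conflicts/Replaces relationships:

```python
import itertools
from collections import defaultdict

def find_file_conflicts(packages):
    """Flag pairs of packages that ship a file at the same path.

    `packages` maps a package name to the set of file paths it installs.
    Two packages installing the same path are conflict candidates unless
    one declares a Conflicts/Replaces relationship (not modelled here).
    """
    owners = defaultdict(set)
    for name, paths in packages.items():
        for path in paths:
            owners[path].add(name)
    conflicts = set()
    for names in owners.values():
        if len(names) > 1:
            conflicts.update(itertools.combinations(sorted(names), 2))
    return conflicts
```

Targeted testing would then co-install exactly the flagged pairs instead of every possible pair of packages.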
The terminology of sourcing, outsourcing and insourcing is developed in detail on the basis of the preliminary definitions of outsourcing and insourcing and related activities and competences as given in our three previous papers on business mereology, on the concept of a sourcement, and on outsourcing competence respectively. Besides providing a more detailed semantic analysis, we introduce, explain, and illustrate a number of additional concepts including: principal unit of a sourcement, theme of a sourcement, current sourcement, (un)stable sourcement, and sourcement transformation. A three-level terminology is designed: (i) factual level: operational facts that hold for sourcements including histories thereof, (ii) business level: roles and objectives of various parts of the factual level description, thus explaining each partner's business process and business objectives, (iii) contract level: specification of intended facts and intended business models as found at the business level. Orthogonal to these three conceptual levels are four temporal aspects: history, now (actuality), transformation, and transition. A detailed description of the well-known range of sourcement transformations is given. | Stratified Outsourcing Theory | 5,358
Most businesses these days use the web services technology as a medium to allow interaction between a service provider and a service requestor. However, both the service provider and the requestor would be unable to achieve their business goals when there are miscommunications between their processes. This research focuses on the process incompatibility between the web services and the way to automatically resolve them by using a process mediator. This paper presents an overview of the behavioral incompatibility between web services and the overview of process mediation in order to resolve the complications faced due to the incompatibility. Several state-of the-art approaches have been selected and analyzed to understand the existing process mediation components. This paper aims to provide a valuable gap analysis that identifies the important research areas in process mediation that have yet to be fully explored. | A comparative study of process mediator components that support
behavioral incompatibility | 5,359 |
The Java programming environment has recently become very popular. The Java programming language is designed to be portable enough to be executed on a wide range of computers, from cell phones to supercomputers. Computer programs written in Java are compiled into Java bytecode instructions that are suitable for execution by a Java Virtual Machine implementation. The Java Virtual Machine is commonly implemented in software by means of an interpreter for the Java Virtual Machine instruction set. As an object-oriented language, Java utilizes the concept of objects. Our idea is to identify the candidate object references in a Java environment through hierarchical cluster analysis using the reference stack and the execution stack. | Identifying Reference Objects by Hierarchical Clustering in Java
Environment | 5,360 |
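The hierarchical cluster analysis the abstract relies on can be sketched generically. The single-linkage variant and the toy numeric distance below are illustrative choices, not the paper's exact procedure over reference and execution stacks:

```python
def agglomerative(points, dist, k):
    """Single-linkage agglomerative clustering down to k clusters.

    Repeatedly merges the two closest clusters, where the distance
    between clusters is the minimum pairwise distance of their members.
    """
    clusters = [[p] for p in points]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(dist(a, b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i].extend(clusters[j])  # merge j into i, then drop j
        del clusters[j]
    return clusters
```

Applied to object references, `points` would be reference identifiers and `dist` a measure of how far apart they appear on the stacks.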
Modularity is one of the most important principles in software engineering and a necessity for any practical software system. Since the design space of software is generally quite large, it is valuable to provide automatic means to help modularize it. One automatic technique for software modularization is object-oriented concept analysis (OOCA). The X-ray view of a class is one aspect of this object-oriented concept analysis. We use this concept in a Java environment. | X-ray view on a Class using Conceptual Analysis in Java Environment | 5,361
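The core of concept analysis — computing the formal concepts of an object-attribute relation — can be sketched by brute force. The function `concepts` and its input format are illustrative, not an API from the paper:

```python
from itertools import combinations

def concepts(relation):
    """All formal concepts of a binary object-attribute relation.

    `relation` maps each object to its attribute set. A concept is a
    pair (extent, intent) where the extent is exactly the set of
    objects possessing every attribute of the intent -- the closure
    condition behind object-oriented concept analysis.
    """
    objects = list(relation)
    all_attrs = set().union(*relation.values()) if relation else set()
    result = []
    for r in range(len(objects) + 1):
        for extent in combinations(objects, r):
            if extent:
                intent = set.intersection(*(relation[o] for o in extent))
            else:
                intent = all_attrs  # empty extent pairs with all attributes
            closed = {o for o in objects if intent <= relation[o]}
            if set(extent) == closed:  # keep only closed (extent, intent) pairs
                result.append((frozenset(extent), frozenset(intent)))
    return result
```

For an X-ray view, objects would be methods of a class and attributes the fields they access; each concept then suggests a cohesive slice of the class.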
Reusing and integrating Business Components (BC) in a new Information System requires detection and resolution of semantic conflicts. Moreover, most integration and semantic conflict resolution systems rely on ontology alignment methods based on a domain ontology. This work is positioned at the intersection of two research areas: integration of reusable Business Components and alignment of ontologies for semantic conflict resolution. Our contribution concerns both the proposal of a BC integration solution based on ontology alignment and a method for enriching the domain ontology used as a support for alignment. | Semantic conflict resolution for integration of business components | 5,362
Building new business information systems from reusable components is today an approach widely adopted and used. Using this approach in analysis and design phases presents a great interest and requires the use of a particular class of components called Business Components (BC). Business Components are today developed by several manufacturers and are available in many repositories. However, reusing and integrating them in a new Information System requires detection and resolution of semantic conflicts. Moreover, most of integration and semantic conflict resolution systems rely on ontology alignment methods based on domain ontology. This work is positioned at the intersection of two research areas: Integration of reusable Business Components and alignment of ontologies for semantic conflict resolution. Our contribution concerns both the proposal of a BC integration solution based on ontologies alignment and a method for enriching the domain ontology used as a support for alignment. | An Ontology-Based Method for Semantic Integration of Business Components | 5,363 |
Nowadays agile software development is widely used, but mainly in small organizations, whereas MDA is suitable for large organizations yet is not standardized. In this paper the pros and cons of Model Driven Architecture (MDA) and Extreme Programming are discussed. As both have limitations and neither can be used in both large-scale and small-scale organizations, a new architecture is proposed. This model attempts to adopt the advantages and important values of each in order to overcome the limitations of both software development approaches. In support of the proposed architecture, its implementation for an Online Polling System is discussed and all the phases of software development are explained. | Incorporating Agile with MDA Case Study: Online Polling System | 5,364
This report describes the design of an experiment that intends to compare two variants of a model-driven system development method, so as to assess the impact of requirements engineering practice on the quality of the conceptual models. The conceptual modelling method being assessed is the OO-Method [Pastor and Molina 2007]. One of its variants includes Communication Analysis, a communication-oriented requirements engineering method [España, González et al. 2009], and a set of guidelines to derive conceptual models from requirements models [España, Ruiz et al. 2011; González, España et al. 2011]. The other variant is an ad-hoc, text-based requirements practice similar to the one that is applied in industrial projects by OO-Method practitioners. The goal of the research, summarised according to the Goal/Question/Metric template [Basili and Rombach 1988], is to: *) analyse the resulting models of two model-based information systems analysis method variants; namely, the OO-Method (OOM) and the integration of Communication Analysis and the OO-Method (CA+OOM), *) for the purpose of carrying out a comparative evaluation *) with respect to performance of the subject and acceptance of the method; *) from the viewpoint of the information systems researcher *) in the context of bachelor students. | Model-driven system development: Experimental design and report of the
pilot experiment | 5,365 |
Large intelligent systems are so complex these days that there is an urgent need to design such systems in the best available way. Modeling is a useful technique for representing a complex real-world system as an abstraction, so that analysis and implementation of the intelligent system become easy; it is also useful for gathering prior knowledge of a system when experimenting with the real-world complex system is not possible. This paper discusses a formal approach to agent-based modeling of large intelligent systems, which describes design-level precautions, challenges and techniques using autonomous agents as its fundamental modeling abstraction. We discuss an ad-hoc network system as a case study in which we use mobile agents and nodes are free to relocate, as together they form an intelligent system. The design is very critical in this scenario, and it can reduce the whole cost, time duration and risk involved in the project. | A Formal Approach for Agent Based Large Concurrent Intelligent Systems | 5,366
There are non-formal methodologies such as RUP and OpenUP, agile methodologies such as SCRUM and XP, and techniques like those proposed by UML, which allow the development of software. The software industry has struggled to produce quality software, as importance has not been given to requirements engineering, resulting in poor specification of requirements and software of poor quality. In order to contribute to the specification of requirements, this article describes a methodological proposal that applies formal methods to the results of the requirements analysis process of the Áncora methodology. | Towards the integration of formal specification in the Áncora
methodology | 5,367 |
Today's competitive environment drives enterprises to extend their focus and collaborate with their business partners to carry out the necessities. Tight coordination among business partners helps to share and integrate service logic globally. But integrating service logics across diverse enterprises leads to an exponential problem, requiring developers to comprehend the whole service and work out a suitable method to integrate the services, which is a complex and time-consuming task. So the present focus is to have a mechanized system to analyze the business logics and convey the proper mode to integrate them. There is no standard model to undertake these issues, and one such framework proposed in this paper examines the business logics individually and suggests a proper structure to integrate them. One of the innovative concepts of the proposed model is the Property Evaluation System, which scrutinizes the service logics and generates a Business Logic Property Schema (BLPS) for the required services. The BLPS holds the necessary information to recognize the correct structure for integrating the service logics. At the time of integration, the system consumes this BLPS schema and suggests the feasible ways to integrate the service logics. Also, if an attempt is made to integrate the service logics in an invalid structure or to violate accessibility levels, the system will throw an exception with the necessary information. This helps developers to ascertain the efficient structure to integrate the services with the least effort. | Evaluation of Computability Criterions for Runtime Web Service
Integration | 5,368 |
In light of the recent humongous growth of the human population worldwide, there has also been a voluminous and uncontrolled growth of vehicles, which has consequently increased the number of road accidents to a large extent. As a solution to the above-mentioned issue, our system is an attempt to mitigate the problem using a synchronous programming language. The aim is to develop a safety crash warning system that will address rear-end crashes and also take over control of the vehicle when the threat is at a very high level. Adapting to the environmental conditions is also a prominent feature of the system. The safety system provides warnings to drivers to assist in avoiding rear-end crashes with other vehicles. Initially the system provides a low-level alarm, and as the severity of the threat increases, the level of warnings or alerts also rises. At the highest level of threat, the system enters a Cruise Control Mode, wherein the system controls the speed of the vehicle by controlling the engine throttle and, if permitted, the brake system of the vehicle. We focus on this crash area as it accounts for a very high percentage of crash-related fatalities. To prove the feasibility, robustness and reliability of the system, we have also proved some of the properties of the system using temporal logic, along with a reference implementation in ESTEREL. To bolster the same, we have formally verified various properties of the system along with their proofs. | Design and Validation of Safety Cruise Control System for Automobiles | 5,369
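The escalation policy described — louder warnings as the threat grows, then takeover — can be sketched as a simple threshold function. The thresholds and level names below are invented for illustration, not the values verified in the paper:

```python
def warning_level(time_to_collision):
    """Map time-to-collision (seconds) to a warning level.

    Thresholds are illustrative only: below 1 s the system takes over
    (cruise-control mode); otherwise the alarm escalates as the gap
    to the lead vehicle closes.
    """
    if time_to_collision < 1.0:
        return "TAKE_OVER"
    if time_to_collision < 2.5:
        return "HIGH_ALARM"
    if time_to_collision < 5.0:
        return "LOW_ALARM"
    return "NONE"
```

In a synchronous language such as ESTEREL, each level would correspond to a state whose transitions are re-evaluated at every reaction instant, which is what makes the temporal-logic proofs tractable.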
This paper considers an MMLE (Modified Maximum Likelihood Estimation) based scheme to estimate software reliability using the exponential distribution. The MMLE is one of the generalized frameworks of software reliability models of Non-Homogeneous Poisson Processes (NHPPs). The MMLE gives analytical estimators rather than an iterative approximation to estimate the parameters. In this paper we propose an SPC (Statistical Process Control) chart mechanism to determine software quality using inter-failure times data. The control charts can be used to determine whether the software process is statistically under control or not. | Monitoring Software Reliability using Statistical Process control: An
MMLE approach | 5,370 |
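One common way to put inter-failure times on a control chart — plausibly close in spirit to the paper's mechanism, though the exact MMLE estimator is not reproduced here — is to fit an exponential distribution by maximum likelihood and place the limits at extreme quantiles that mirror the 3-sigma limits of a Shewhart chart:

```python
import math

def exp_control_limits(interfailure_times, p=0.00135):
    """Control limits for a chart on exponential inter-failure times.

    The failure rate is estimated by maximum likelihood
    (lambda_hat = n / sum of times); the limits are the p and 1-p
    quantiles of the fitted exponential, with p = 0.00135 mirroring
    the usual 3-sigma false-alarm probability.
    """
    lam = len(interfailure_times) / sum(interfailure_times)

    def quantile(q):
        # Invert the exponential CDF F(t) = 1 - exp(-lam * t).
        return -math.log(1.0 - q) / lam

    return quantile(p), quantile(0.5), quantile(1.0 - p)
```

A new inter-failure time above the upper limit would then suggest (desirable) reliability growth; one below the lower limit signals deterioration, i.e. a process not statistically under control.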
Highly dynamic computing environments, like ubiquitous and pervasive computing environments, require frequent adaptation of applications. This has to be done in a timely fashion, and the adaptation process must be as fast as possible and mastered. Moreover the adaptation process has to ensure a consistent result when finished whereas adaptations to be implemented cannot be anticipated at design time. In this paper we present our mechanism for self-adaptation based on the aspect oriented programming paradigm called Aspect of Assembly (AAs). Using AAs: (1) the adaptations process is fast and its duration is mastered; (2) adaptations' entities are independent of each other thanks to the weaver logical merging mechanism; and (3) the high variability of the software infrastructure can be managed using a mono or multi-cycle weaving approach. | Aspects of Assembly and Cascaded Aspects of Assembly: Logical and
Temporal Properties | 5,371 |
Nowadays organizations are very much concerned with making changes in a shorter time, since reaction-time requirements are decreasing every moment. The Business Logic Evaluation Model (BLEM) is the proposed solution targeting business logic automation, enabling business experts to write sophisticated business rules and complex calculations without costly custom programming. BLEM is powerful enough to handle service manageability issues by analyzing and evaluating the computability, traceability and other criteria of modified business logic at run time. Web services and their QoS depend heavily on the reliability of the service. Hence today's service providers regard reliability as a major factor, and any problem in the reliability of a service should be overcome immediately in order to achieve the expected level of reliability. In this paper we propose a business logic evaluation model for web service reliability analysis using a Finite State Machine (FSM), where the FSM is extended to analyze the reliability of a composed set of services, i.e., services under composition, by analyzing the reliability of each participating service of the composition together with its functional workflow process. The FSM is exploited to measure the quality parameters; if any change occurs in the business logic, the FSM automatically re-measures the reliability. | Finite State Machine Based Evaluation Model for Web Service Reliability
Analysis | 5,372 |
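In the simplest sequential case, the FSM-based computation reduces to multiplying the reliabilities of the services along the executed path. The function below is an illustrative reduction under an independence assumption, not the paper's full model:

```python
def path_reliability(workflow, reliabilities):
    """Reliability of a sequential service composition.

    Assuming independent failures, the reliability of one path through
    the composition FSM is the product of the reliabilities of the
    services visited along that path.
    """
    r = 1.0
    for service in workflow:
        r *= reliabilities[service]
    return r
```

A fuller FSM model would weight each path by its execution probability and sum over all paths from the initial to the final state.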
This paper considers how a formal mathematically-based model can be used in support of evolutionary software development, and in particular how such a model can be kept consistent with the implementation as it changes to meet new requirements. A number of techniques are listed that can make use of such a model to enhance the development process, along with ways to keep model and implementation consistent. The effectiveness of these techniques is investigated through two case studies concerning the development of small e-business applications: a travel agent and a mortgage broker. Some successes are reported, notably in the use of rapid throwaway modelling to investigate design alternatives, and also in the use of close team working and model-based trace-checking to maintain synchronisation between model and implementation throughout the development. The main areas of weakness were seen to derive from deficiencies in tool support. Recommendations are therefore made for future improvements to tools supporting formal models which would, in principle, make this co-evolutionary approach attractive to industrial software developers. It is claimed that in fact tools already exist that provide the desired facilities, but these are not necessarily production-quality, and do not all support the same notations, and hence cannot be used together. | Concurrent Development of Model and Implementation | 5,373
Software has been playing a key role in the development of modern society. The software industry has the option of choosing a suitable methodology/process model for its current needs to provide solutions to given problems. Though some companies have their own customized methodology for developing their software, the majority agree that software methodologies fall under two categories: heavyweight and lightweight. Heavyweight methodologies (Waterfall Model, Spiral Model), also known as traditional methodologies, focus on detailed documentation, comprehensive planning, and extensive up-front design. Lightweight methodologies (XP, SCRUM) are referred to as agile methodologies; they focus mainly on short iterative cycles and rely on the knowledge within a team. The aim of this paper is to describe the characteristics of popular heavyweight and lightweight methodologies that are widely practiced in software industries. We discuss the strengths and weaknesses of the selected models, compare the two opposing classes of methodologies, and illustrate some criteria that help project managers select a suitable model for their projects. | A Comprehensive Study of Commonly Practiced Heavy and Light Weight
Software Methodologies | 5,374 |
This case comprises several primitive tasks that can be solved straight away with most transformation tools. The aim is to cover the most important kinds of primitive operations on models, i.e. create, read, update and delete (CRUD). To this end, tasks such as a constant transformation, a model-to-text transformation, a very basic migration transformation or diverse simple queries or in-place operations on graphs have to be solved. The motivation for this case is that the results expectedly will be very instructive for beginners. Also, it is really hard to compare transformation languages along complex cases, because the complexity of the respective case might hide the basic language concepts and constructs. | HelloWorld! An Instructive Case for the Transformation Tool Contest | 5,375 |
This report presents a partial solution to the Compiler Optimization case study using GROOVE. We explain how the input graphs provided with the case study were adapted into a GROOVE representation and we describe an initial solution for Task 1. This solution allows us to automatically reproduce the steps of the constant folding example given in the case description. We did not solve Task 2. | Solving the TTC 2011 Compiler Optimization Case with GROOVE | 5,376 |
In this short paper we present our solution for the Hello World case study of the Transformation Tool Contest (TTC) 2011 using the QVTR-XSLT tool. The tool supports editing and execution of the graphical notation of QVT Relations language. The case study consists of a set of simple transformation tasks which covers the basic functions required for a transformation language, such as creating, reading/querying, updating and deleting of model elements. We design a transformation for each of the tasks. | Saying HelloWorld with QVTR-XSLT - A Solution to the TTC 2011
Instructive Case | 5,377 |
Epsilon is an extensible platform of integrated and task-specific languages for model management. With solutions to the 2011 TTC Hello World case, this paper demonstrates some of the key features of the Epsilon Object Language (an extension and reworking of OCL), which is at the core of Epsilon. In addition, the paper introduces several of the task-specific languages provided by Epsilon including the Epsilon Generation Language (for model-to-text transformation), the Epsilon Validation Language (for model validation) and Epsilon Flock (for model migration). | Saying Hello World with Epsilon - A Solution to the 2011 Instructive
Case | 5,378 |
A review of 75 formal audit assignments shows that the effort taken to identify defects in financial models taken from the domain of limited recourse (project) finance is uncorrelated with common measures of the physical characteristics of the spreadsheets concerned. | Drivers of the Cost of Spreadsheet Audit | 5,379 |
Recognizing that the use of spreadsheets within finance will likely not subside in the near future, this paper discusses a major barrier that is preventing more organizations from adopting enterprise spreadsheet management programs. But even without a corporate-mandated effort to improve spreadsheet controls, finance functions can still take simple yet effective steps to start managing the risk of errors in key spreadsheets by strategically selecting controls that complement existing user practice. | Leveraging User Profile and Behaviour to Design Practical Spreadsheet
Controls for the Finance Function | 5,380 |
Users wanting to monitor distributed or component-based systems often perceive them as monolithic systems which, seen from the outside, exhibit a uniform behaviour as opposed to many components displaying many local behaviours that together constitute the system's global behaviour. This level of abstraction is often reasonable, hiding implementation details from users who may want to specify the system's global behaviour in terms of an LTL formula. However, the problem that arises then is how such a specification can actually be monitored in a distributed system that has no central data collection point, where all the components' local behaviours are observable. In this case, the LTL specification needs to be decomposed into sub-formulae which, in turn, need to be distributed amongst the components' locally attached monitors, each of which sees only a distinct part of the global behaviour. The main contribution of this paper is an algorithm for distributing and monitoring LTL formulae, such that satisfaction or violation of specifications can be detected by local monitors alone. We present an implementation and show that our algorithm introduces only a minimum delay in detecting satisfaction/violation of a specification. Moreover, our practical results show that the communication overhead introduced by the local monitors is considerably lower than the number of messages that would need to be sent to a central data collection point. | Decentralised LTL Monitoring | 5,381
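The centralised baseline such work builds on — rewriting ("progressing") an LTL formula over each observed state until it collapses to true or false — can be sketched for a tiny fragment. The tuple encoding and the restriction of F to propositional operands are illustrative simplifications:

```python
# Formulas as tuples: ("ap", name), ("and", f, g), ("G", f), ("F", f).
TRUE, FALSE = ("true",), ("false",)

def progress(f, state):
    """Rewrite formula f over one observed state (the set of propositions
    holding now); a result of TRUE/FALSE means a verdict is reached."""
    op = f[0]
    if op in ("true", "false"):
        return f
    if op == "ap":
        return TRUE if f[1] in state else FALSE
    if op == "and":
        left, right = progress(f[1], state), progress(f[2], state)
        if FALSE in (left, right):
            return FALSE
        if left == TRUE:
            return right
        if right == TRUE:
            return left
        return ("and", left, right)
    if op == "G":  # G f  ==  f and X G f
        now = progress(f[1], state)
        return FALSE if now == FALSE else (f if now == TRUE else ("and", now, f))
    if op == "F":  # F f  ==  f or X F f; operand assumed propositional here
        return TRUE if progress(f[1], state) == TRUE else f
```

A decentralised monitor would split such residual obligations among components, each progressing only the part of the formula it can observe and exchanging the rest.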
We derive an abstract computational model from a sequential computational model that is generally used for function execution. This abstract computational model allows for the concurrent execution of functions. We discuss concurrent models for function execution as implementations from the abstract computational model. We give an example of a particular concurrent function construct that can be implemented on a concurrent machine model using multi-threading. The result is a framework of computational models at different levels of abstraction that can be used in further development of concurrent computational models that deal with the problems inherent with concurrency. | Concurrent Models for Function Execution | 5,382 |
In this article we extend the framework from previous work for executing concurrent functions at different levels of abstraction with communication between the concurrent functions. We classify the communications and identify problems that can occur with these communications. We present solutions for the problems based on encapsulation and abstraction to obtain correct behaviours. The result is that communication on a low level of abstraction, in the form of shared memory and message passing, is dealt with on a higher level of abstraction. | Communicating Concurrent Functions | 5,383
In this article, we describe the regression test process used to test and verify changes made to software. A technique is developed that uses automated testing based on a decision tree, together with a test selection process, in order to reduce testing cost. The developed technique is applied to a practical case, and the results show its improvement. | A New Proposed Technique to Improve Software Regression Testing Cost | 5,384
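Decision-tree details aside, the selection step such techniques rely on boils down to keeping only the tests whose coverage touches a changed artifact. The function name and data format below are illustrative, not from the paper:

```python
def select_tests(coverage, changed):
    """Pick only the tests that exercise a changed function.

    `coverage` maps a test name to the set of functions it executes;
    `changed` is the collection of functions modified in this release.
    Running just the selected subset is the basic cost reduction
    behind regression test selection.
    """
    return sorted(t for t, funcs in coverage.items() if funcs & set(changed))
```

The saving is the ratio of selected to total tests, at the cost of maintaining up-to-date coverage data.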
In Service-Oriented Virtual Organization Breeding Environments (SOVOBEs), services performed by people, organizations and information systems are composed in potentially complex business processes performed by a set of partners. In a SOVOBE, the success of a virtual organization depends largely on the partner and service selection process, which determines the composition of services performed by the VO partners. In this paper requirements for a partner and service selection method for SOVOBEs are defined and a novel Multi-Aspect Partner and Service Selection method, MAPSS, is presented. The MAPSS method allows a VO planner to select appropriate services and partners based on their competences and their relations with other services/partners. The MAPSS method relies on a genetic algorithm to select the most appropriate set of partners and services. | MAPSS, a Multi-Aspect Partner and Service Selection Method | 5,385 |
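A genetic algorithm of the kind MAPSS relies on can be sketched as a toy subset-selection search. The fitness here is a simple additive competence score; the real MAPSS fitness also weighs relations between partners, which is omitted, and all names and parameters below are invented:

```python
import random

def ga_select(scores, k, generations=200, pop_size=30, seed=0):
    """Toy GA: pick k partners maximising an additive competence score.

    Individuals are k-subsets of candidate indices; fitness is the sum
    of their scores. Elitism keeps the best half of each generation.
    """
    rng = random.Random(seed)
    n = len(scores)
    fitness = lambda ind: sum(scores[i] for i in ind)
    pop = [tuple(sorted(rng.sample(range(n), k))) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)           # crossover: mix two parents
            child = set(rng.sample(list(set(a) | set(b)), k))
            if rng.random() < 0.2:                    # mutation: swap one member
                child.discard(rng.choice(list(child)))
                while len(child) < k:
                    child.add(rng.randrange(n))
            children.append(tuple(sorted(child)))
        pop = survivors + children
    return max(pop, key=fitness)
```

In a SOVOBE setting, `scores` would come from competence assessments and the fitness would additionally reward well-connected partner sets.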
Software systems endure many noteworthy changes throughout their life-cycle in order to follow the evolution of the problem domains. Generally, the software system architecture cannot follow the rapid evolution of a problem domain, which results in discrepancies between the implemented and the designed architecture. Software architecture illustrates a system's structure and global properties and consequently determines not only how the system should be constructed but also guides its evolution. Architecture plays an important role in ensuring that a system satisfies its business and mission goals during implementation and evolution. However, the capabilities of the designed architecture may be lost when the implementation does not conform to the designed architecture. Such a loss of consistency introduces the risk of architectural decay. Architectural decay can be avoided if architectural changes are made as early as possible. The paper presents a Process Model for Architecture-Centric Evolution which improves the quality of software systems by maintaining consistency between the designed architecture and the implementation. It also increases the architecture awareness of developers, which helps minimize the risk of architectural decay. In the proposed approach, consistency checks are performed before and after the change implementation. | Minimizing the Risk of Architectural Decay by using Architecture-Centric
Evolution Process | 5,386 |
This Paper summarises the operation of software developed for the analysis of workbook structure. This comprises: the identification of layout in terms of filled areas formed into "Stripes", the identification of all the Formula Blocks/Cells and the identification of Data Blocks/Cells referenced by those formulas. This development forms part of our FormulaDataSleuth toolset. It is essential for the initial "Watching" of an existing workbook and enables the workbook to be subsequently managed and protected from damage. | Workbook Structure Analysis - "Coping with the Imperfect" | 5,387 |
Spreadsheet technology is a cornerstone of IT systems in most organisations. It is often the glue that binds more structured transaction-based systems together. Financial operations are a case in point where spreadsheets fill the gaps left by dedicated accounting systems, particularly covering reporting and business process operations. However, little is understood as to the nature of spreadsheet usage in organisations and the contents and structure of these spreadsheets as they relate to key business functions with few, if any, comprehensive analyses of spreadsheet repositories in real organisations. As such this paper represents an important attempt at profiling real and substantial spreadsheet repositories. Using the Luminous technology an analysis of 65,000 spreadsheets for the financial departments of both a government and a private commercial organisation was conducted. This provides an important insight into the nature and structure of these spreadsheets, the links between them, the existence and nature of macros and the level of repetitive processes performed through the spreadsheets. Furthermore it highlights the organisational dependence on spreadsheets and the range and number of spreadsheets dealt with by individuals on a daily basis. In so doing, this paper prompts important questions that can frame future research in the domain. | Spreadsheets in Financial Departments: An Automated Analysis of 65,000
Spreadsheets using the Luminous Technology | 5,388 |
Practitioners often argue that range names make spreadsheets easier to understand and use, akin to the role of good variable names in traditional programming languages, yet there is no supporting scientific evidence. The authors previously published experiments that disproved this theory in relation to debugging, and now turn their focus to development. This paper presents the results of two iterations of a new experiment, which measure the effect of range names on the correctness of, and the time it takes to develop, simple summation formulas. Our findings, supported by statistically significant results, show that formulas developed by non-experts using range names are more likely to contain errors and take longer to develop. Taking these findings with the findings from previous experiments, we conclude that range names do not improve the quality of spreadsheets developed by novice and intermediate users. This paper is important in that it finds that the choice of naming convention can have a significant impact on novice and intermediate users' performance in formula development, with less structured naming conventions resulting in poorer performance by users. | Effect of Range Naming Conventions on Reliability and Development Time
for Simple Spreadsheet Formulas | 5,389 |
Thanks to the enormous flexibility they provide, spreadsheets are considered a priceless blessing by many end-users. Many spreadsheets, however, contain errors which can lead to severe consequences in some cases. To manage these risks, quality managers in companies are often asked to develop appropriate policies for preventing spreadsheet errors. Good policies should specify rules which are based on "known-good" practices. While there are many proposals for such practices in literature written by practitioners and researchers, they are often not consistent with each other. Therefore no general agreement has been reached yet and no science-based "golden rules" have been published. This paper proposes an expert-based, retrospective approach to the identification of good practices for spreadsheets. It is based on an evaluation loop that cross-validates the findings of human domain experts against rules implemented in a semi-automated spreadsheet workbench, taking into account the context in which the spreadsheets are used. | From Good Practices to Effective Policies for Preventing Errors in
Spreadsheets | 5,390 |
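The cross-validation loop described above can be sketched as a comparison between expert-labelled issues and rule-generated findings. The Python sketch below uses assumed data: the file names, issue labels, and the precision/recall scoring are all hypothetical and are not the paper's actual workbench:

```python
# Hedged sketch of the cross-validation loop: automated rule findings are compared
# against expert-labelled issues per spreadsheet; rules whose findings rarely agree
# with the experts are candidates for revision. All names and data are illustrative.

expert_findings = {"budget.xls": {"hardcoded_constant", "circular_ref"},
                   "forecast.xls": {"hardcoded_constant"}}
rule_findings   = {"budget.xls": {"hardcoded_constant"},
                   "forecast.xls": {"hardcoded_constant", "hidden_column"}}

def agreement(expert, auto):
    """Per-spreadsheet (precision, recall) of the automated rules vs the experts."""
    scores = {}
    for sheet in expert:
        tp = len(expert[sheet] & auto[sheet])            # findings both agree on
        precision = tp / len(auto[sheet]) if auto[sheet] else 1.0
        recall = tp / len(expert[sheet]) if expert[sheet] else 1.0
        scores[sheet] = (precision, recall)
    return scores

scores = agreement(expert_findings, rule_findings)
assert scores["budget.xls"] == (1.0, 0.5)   # rules missed the circular reference
```

Iterating this loop over many spreadsheets, and over the contexts in which they are used, is what would gradually turn candidate practices into defensible policy rules.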
A huge amount of data is managed every day in large organizations in many critical business sectors with the support of spreadsheet applications. The process of elaborating spreadsheet data is often performed in a distributed, collaborative way, where many actors enter data belonging to their local business domain to contribute to a global business view. The manual fusion of such data may lead to errors in copy-paste operations, loss of alignment and coherency due to multiple spreadsheet copies in circulation, as well as loss of data due to broken cross-spreadsheet links. In this paper we describe a methodology, based on a Spreadsheet Composition Platform, which greatly reduces these risks. The proposed platform seamlessly integrates the distributed spreadsheet elaboration, supports the commonly known spreadsheet tools for data processing and helps organizations to adopt a more controlled and secure environment for data fusion. | A Platform for Spreadsheet Composition | 5,391
Spreadsheets are used extensively in industry, often for business-critical purposes. In previous work we analyzed the information needs of spreadsheet professionals and addressed their need for support when handing a spreadsheet over to a colleague by generating data flow diagrams. In this paper we describe the application of these data flow diagrams to the task of understanding a spreadsheet, illustrated with three example cases. We furthermore suggest an additional application of the data flow diagrams: assessing the quality of the spreadsheet's design. | Breviz: Visualizing Spreadsheets using Dataflow Diagrams | 5,392
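A minimal sketch of the underlying idea, recovering data flow from formulas, follows. It is a simplified illustration in Python, not the Breviz implementation; the formulas and the single-column range restriction are assumptions made for brevity:

```python
import re

# Illustrative sketch: recover data flow edges from spreadsheet formulas by
# extracting the cell references each formula reads.

formulas = {
    "C1": "=A1+B1",
    "C2": "=A2+B2",
    "D1": "=SUM(C1:C2)",
}

CELL = re.compile(r"([A-Z]+)([0-9]+)")

def expand(ref_range):
    """Expand a single-column range like C1:C2 into its individual cells."""
    (c1, r1), (c2, r2) = CELL.findall(ref_range)
    assert c1 == c2, "sketch handles single-column ranges only"
    return [f"{c1}{r}" for r in range(int(r1), int(r2) + 1)]

def dependencies(formula):
    """List the cells a formula reads: ranges first, then lone references."""
    deps = []
    for rng in re.findall(r"[A-Z]+[0-9]+:[A-Z]+[0-9]+", formula):
        deps.extend(expand(rng))
        formula = formula.replace(rng, "")   # avoid re-matching range endpoints
    deps.extend(m.group(0) for m in CELL.finditer(formula))
    return deps

edges = {cell: dependencies(f) for cell, f in formulas.items()}
assert edges["D1"] == ["C1", "C2"]
```

Drawing an edge from each referenced cell to the formula cell yields the data flow diagram; grouping cells into coherent blocks, as a tool like Breviz would, is a further aggregation step.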
The use of spreadsheets is widespread. Be it in business, finance, engineering or other areas, spreadsheets are valued for their flexibility and the ease with which a problem can be quickly modelled. Very often they evolve from simple prototypes into implementations of crucial business logic. Spreadsheets that play a crucial role in an organization will naturally have a long lifespan and will be maintained and evolved by several people. Therefore, it is important not only to look at their reliability, i.e., how well the intended functionality is implemented, but also at their maintainability, i.e., how easy it is to diagnose a spreadsheet for deficiencies and modify it without degrading its quality. In this position paper we argue for the need to create a model to estimate the maintainability of a spreadsheet based on (automated) measurement. We propose to do so by applying a structured methodology that has already shown its value in the estimation of maintainability of software products. We also argue for the creation of a curated, community-contributed repository of spreadsheets. | Requirements for Automated Assessment of Spreadsheet Maintainability | 5,393
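As a flavour of what "maintainability estimation based on automated measurement" could look like, the sketch below computes two simple per-formula metrics and aggregates them into a worksheet-level score. The metric choice and the aggregation are hypothetical illustrations, not the model the position paper calls for:

```python
# Illustrative sketch of maintainability via automated measurement: simple,
# computable formula metrics (the names and aggregation here are hypothetical).

def formula_metrics(formula):
    """Measure formula length and maximum parenthesis-nesting depth."""
    depth = max_depth = 0
    for ch in formula:
        if ch == "(":
            depth += 1
            max_depth = max(max_depth, depth)
        elif ch == ")":
            depth -= 1
    return {"length": len(formula), "nesting": max_depth}

def worksheet_score(formulas):
    """Aggregate per-formula nesting into one rating; lower is easier to maintain."""
    nestings = [formula_metrics(f)["nesting"] for f in formulas]
    return max(nestings) if nestings else 0

assert formula_metrics("=IF(A1>0,SUM(B1:B9),0)")["nesting"] == 2
```

A full model, following the software-product methodology the authors invoke, would calibrate such raw metrics against a benchmark repository before mapping them to maintainability ratings.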
We consider the challenge of creating guidelines to evaluate the quality of a spreadsheet model. We suggest four principles. First, state the domain: the spreadsheets to which the guidelines apply. Second, distinguish the process by which a spreadsheet is constructed from the resulting spreadsheet artifact. Third, guidelines should be written in terms of the artifact, independent of the process. Fourth, the meaning of "quality" must be defined. We illustrate these principles with an example. We define the domain of "analytical spreadsheet models", which are used in business, finance, engineering, and science. We propose for discussion a framework and terminology for evaluating the quality of analytical spreadsheet models. This framework categorizes and generalizes the findings of previous work on the narrower domain of financial spreadsheet models. We suggest that the ultimate goal is a set of guidelines for an evaluator, and a checklist for a developer. | Towards Evaluating the Quality of a Spreadsheet: The Case of the
Analytical Spreadsheet Model | 5,394 |
Most organizations use large and complex spreadsheets that are embedded in their mission-critical processes and are used for decision-making purposes. Identification of the various types of errors that can be present in these spreadsheets is, therefore, an important control that organizations can use to govern their spreadsheets. In this paper, we propose a taxonomy for categorizing qualitative errors in spreadsheet models that offers a framework for evaluating the readiness of a spreadsheet model before it is released for use by others in the organization. The classification was developed based on types of qualitative errors identified in the literature and errors committed by end-users in developing a spreadsheet model for Panko's (1996) "Wall problem". Closer inspection of the errors reveals four logical groupings, yielding four categories of qualitative errors. The usability and limitations of the proposed taxonomy and areas for future extension are discussed. | In Search of a Taxonomy for Classifying Qualitative Spreadsheet Errors | 5,395
Cloud Computing has caused a paradigm shift in the world of computing, and several use case scenarios have been proposed in relation to it. Applications such as spreadsheets can use the cloud framework to create complex web-based applications. In our effort to do the same, we propose a spreadsheet on the cloud as a framework for building new web applications, useful in various scenarios, specifically a school administration system and governance scenarios such as health and administration. This paper presents this work and contains some use cases and architectures which can be used to realize these scenarios in the most efficient manner. | Spreadsheet on Cloud -- Framework for Learning and Health Management
System | 5,396 |
The refinement calculus provides a methodology for transforming an abstract specification into a concrete implementation by following a succession of refinement rules. These rules have been mechanized in theorem provers, providing a formal and rigorous way to prove that a given program refines another. In previous work, we extended this mechanization to object-oriented programs, where the memory is represented as a graph, and integrated our approach within the rCOS tool, a model-driven software development tool providing a refinement language. Hence, for any refinement step, the tool automatically generates the corresponding proof obligations, which the user can discharge manually using a provided library of refinement lemmas. In this work, we propose an approach to automate the search for possible refinement rules from one program to another, using the rewriting tool Maude. Each refinement rule in Maude is associated with the corresponding lemma in Isabelle, allowing the tool to generate the Isabelle proof automatically when a refinement rule can be found automatically. The user can add a new refinement rule by providing the corresponding Maude rule and Isabelle lemma. | A Framework for Automated and Certified Refinement Steps | 5,397
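The search the paper automates can be illustrated as a breadth-first search over rewrite rules, where each applied rule records the name of a certifying lemma. The sketch below is a toy string-rewriting version in Python; the rule patterns and lemma names are invented, and Maude actually matches on term structure rather than on raw strings:

```python
from collections import deque

# Toy sketch of certified refinement search: each rewrite rule carries the name
# of the Isabelle lemma that certifies it. Rules and lemma names are illustrative.

rules = [
    ("skip; ", "",                     "lemma_skip_left"),    # eliminate leading skip
    ("x := e", "var t := e; x := t",   "lemma_intro_temp"),   # introduce a temporary
    ("var t := e; x := t", "x := e",   "lemma_elim_temp"),    # eliminate a temporary
]

def find_refinement(src, dst, limit=5):
    """BFS over rewrite steps; returns the list of lemma names, or None."""
    queue, seen = deque([(src, [])]), {src}
    while queue:
        prog, proof = queue.popleft()
        if prog == dst:
            return proof
        if len(proof) == limit:          # bound the search depth
            continue
        for lhs, rhs, lemma in rules:
            if lhs in prog:
                nxt = prog.replace(lhs, rhs, 1)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, proof + [lemma]))
    return None

assert find_refinement("skip; x := e", "var t := e; x := t") == \
    ["lemma_skip_left", "lemma_intro_temp"]
```

On success, the returned lemma sequence is exactly the material from which the corresponding Isabelle proof script would be assembled, one certified step per applied rule.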
This paper, framed within a broader investigation, describes the application of Organizational Engineering techniques and methodologies to the risk associated with the processes of the Emergency Service of an important Portuguese hospital. The identification of the transactions performed in an emergency service, and of the associated risks (negative behaviour linked to those transactions), is based on static and dynamic models developed during business modelling. Any non-trivial system is better portrayed through a small number of reasonably independent models. From this point of view it is important to look at systems from a "micro" perspective, which allows us to analyse the system at the transaction level. All processes have some associated risk (inherent risk). Its identification will be decisive for future analysis and for the consequent decision over whether or not to study internal control mechanisms. This decision will depend on the risk level that the organization considers acceptable. | Identification of the Risk Related to a Process on Hospital Emergency
Service: a Case Study | 5,398 |
The growing complexity and sophistication of organizational information systems, and of hospital systems in particular, make them difficult to comprehend and, consequently, make it hard to implement control mechanisms that assure, at all times, their auditability without resorting to models. This paper, framed within a wider investigation, describes the application of Organizational Engineering techniques and methodologies to the modelling of the business processes of the main Operating Theatre of the Coimbra's University Hospital Emergency Service, as support for the implementation of an information system architecture, using for that purpose the CEO framework, developed and proposed by the Centre for Organizational Engineering (CEO) and based on the UML language. | Urgency/Emergency Health Processes' Modelling: A Case Study | 5,399