In business, engineering, and manufacturing, quality – or high quality – has a pragmatic interpretation as the non-inferiority or superiority of something (goods or services); it is also defined as being suitable for the intended purpose (fitness for purpose) while satisfying customer expectations. Quality is a perceptual, conditional, and somewhat subjective attribute and may be understood differently by different people.[1][2] Consumers may focus on the specification quality of a product/service, or how it compares to competitors in the marketplace. Producers might measure the conformance quality, or degree to which the product/service was produced correctly. Support personnel may measure quality in the degree that a product is reliable, maintainable, or sustainable. In such ways, the subjectivity of quality is rendered objective via operational definitions and measured with metrics such as proxy measures.
In a general manner, quality in business consists of "producing a good or service that conforms [to the specification of the client] the first time, in the right quantity, and at the right time".[3] The product or service should not be lower or higher than the specification (under- or over-quality). Over-quality leads to unnecessary additional production costs.
There are many aspects of quality in a business context, though primary is the idea that the business produces something, whether it be a physical good or a particular service. These goods and/or services and how they are produced involve many types of processes, procedures, equipment, personnel, and investments, which all fall under the quality umbrella. Key aspects of quality and how it's diffused throughout the business are rooted in the concept of quality management:[1][2]
While quality management and its tenets are relatively recent phenomena, the idea of quality in business is not new. In the early 1900s, pioneers such as Frederick Winslow Taylor and Henry Ford recognized the limitations of the methods being used in mass production at the time and the subsequent varying quality of output, implementing quality control, inspection, and standardization procedures in their work.[4][5] Later in the twentieth century, the likes of William Edwards Deming and Joseph M. Juran helped take quality to new heights, initially in Japan and later (in the late '70s and early '80s) globally.[2][6]
Customers recognize that quality is an important attribute in products and services, and suppliers recognize that quality can be an important differentiator between their own offerings and those of competitors (the quality gap). In the past two decades this quality gap has been gradually decreasing between competitive products and services. This is partly due to the contracting (also called outsourcing) of manufacturing to countries like China and India, as well as the internationalization of trade and competition. These countries, among many others, have raised their own standards of quality in order to meet international standards and customer demands.[7][8] The ISO 9000 series of standards are probably the best known international standards for quality management, though specialized standards such as ISO 15189 (for medical laboratories) and ISO 14001 (for environmental management) also exist.[9]
The business meanings of quality have developed over time. Various interpretations are given below:
Traditionally, quality acts as one of five operations/project performance objectives dictated by operations management policy. Operations management, by definition, focuses on the most effective and efficient ways for creating and delivering a good or service that satisfies customer needs and expectations.[23] As such, its ties to quality are apparent. The five performance objectives, which give businesses a way to measure their operational performance, are:[24][25]
Based on an earlier model called the sand cone model, these objectives support each other, with quality at the base.[26][25] By extension, quality increases dependability, reduces cost, and increases customer satisfaction.[25]
The early 1920s saw a slow but gradual movement among manufacturers away from a "maximum production" philosophy to one aligned more closely with "positive and continuous control of quality to definite standards in the factory."[27][5] That standardization, further pioneered by Deming and Juran later in the twentieth century,[2][6] has become deeply integrated into how manufacturing businesses operate today. The introduction of the ISO 9001, 9002, and 9003 standards in 1987 — based on work from previous British and U.S. military standards — sought to "provide organizations with the requirements to create a quality management system (QMS) for a range of different business activities."[28] Additionally, good manufacturing practice (GMP) standards became more commonplace in countries around the world, laying out the minimum requirements manufacturers in industries including food and beverages,[29] cosmetics,[30] pharmaceutical products,[31] dietary supplements,[32] and medical devices[33] must meet to assure their products are consistently high in quality. Process improvement philosophies such as Six Sigma and Lean Six Sigma have further pushed quality to the forefront of business management and operations. At the heart of these and other efforts is often the QMS, a documented collection of processes, management models, business strategies, human capital, and information technology used to plan, develop, deploy, evaluate, and improve a set of models, methods, and tools across an organization for the purpose of improving quality in alignment with the organization's strategic goals.[34][35]
The push to integrate the concept of quality into the functions of the service industry takes a slightly different path from manufacturing. Where manufacturers focus on "tangible, visible, persistent issues," many — but not all — quality aspects of the service provider's output are intangible and fleeting.[36][37][38] Other obstacles include management's perceptions not aligning with customer expectations due to a lack of communication and market research, and the improper delivery, or lack of delivery, of skill-based knowledge to personnel.[36][37] As in manufacturing, customer expectations are key in the service industry, though the degree to which the service interacts with the customer strongly shapes perceived service quality. Perceptions such as being dependable, responsive, understanding, competent, and clean (which are difficult to describe tangibly) may drive service quality,[39] somewhat in contrast to the factors that drive measurement of manufacturing quality.
In Japanese culture, there are two types of quality: atarimae hinshitsu and miryokuteki hinshitsu.[40]
In the design of goods or services, atarimae hinshitsu and miryokuteki hinshitsu together ensure that a creation will both work to customers' expectations and also be desirable to have.
Source: https://en.wikipedia.org/wiki/Quality_(business)
Total quality management (TQM) is an organization-wide effort to "install and make a permanent climate where employees continuously improve their ability to provide on-demand products and services that customers will find of particular value."[1] Total emphasizes that departments in addition to production (for example sales and marketing, accounting and finance, engineering and design) are obligated to improve their operations; management emphasizes that executives are obligated to actively manage quality through funding, training, staffing, and goal setting. While there is no widely agreed-upon approach, TQM efforts typically draw heavily on the previously developed tools and techniques of quality control. TQM received widespread attention during the late 1980s and early 1990s before being overshadowed by ISO 9000, Lean manufacturing, and Six Sigma.
In the late 1970s and early 1980s, the developed countries of North America and Western Europe suffered economically in the face of stiff competition from Japan's ability to produce high-quality goods at competitive cost. For the first time since the start of the Industrial Revolution, the United Kingdom became a net importer of finished goods. The United States undertook its own soul-searching, expressed most pointedly in the television broadcast of If Japan Can... Why Can't We? Firms began reexamining the techniques of quality control invented over the past 50 years and how those techniques had been so successfully employed by the Japanese. It was in the midst of this economic turmoil that TQM took root.
The exact origin of the term "total quality management" is uncertain.[2] It is almost certainly inspired by Armand V. Feigenbaum's multi-edition book Total Quality Control (OCLC 299383303) and Kaoru Ishikawa's What Is Total Quality Control? The Japanese Way (OCLC 11467749). It may have been first coined in the United Kingdom by the Department of Trade and Industry during its 1983 "National Quality Campaign".[2] Or it may have been first coined in the United States by the Naval Air Systems Command to describe its quality-improvement efforts in 1985.[2]
In the spring of 1984, an arm of the United States Navy asked some of its civilian researchers to assess statistical process control and the work of several prominent quality consultants and to make recommendations as to how to apply their approaches to improve the Navy's operational effectiveness.[3] The recommendation was to adopt the teachings of W. Edwards Deming.[3][4] The Navy branded the effort "Total Quality Management" in 1985.[3][Note 1]
From the Navy, TQM spread throughout the US Federal Government, resulting in the following:
The US Environmental Protection Agency's Underground Storage Tanks program, which was established in 1985, also employed Total Quality Management to develop its management style.[8] The private sector followed suit, flocking to TQM principles not only as a means to recapture market share from the Japanese, but also to remain competitive when bidding for contracts from the Federal Government,[9] since "total quality" requires involving suppliers, not just employees, in process improvement efforts.
There is no widespread agreement as to what TQM is and what actions it requires of organizations;[10][11][12] however, a review of the original United States Navy effort gives a rough understanding of what is involved in TQM.
The key concepts in the TQM effort undertaken by the Navy in the 1980s include:[13]
The Navy used the following tools and techniques:
While there is no generally accepted definition of TQM, several notable organizations have attempted to define it. These include:
"Total Quality Management (TQM) in the Department of Defense is a strategy for continuously improving performance at every level, and in all areas of responsibility. It combines fundamental management techniques, existing improvement efforts, and specialized technical tools under a disciplined structure focused on continuously improving all processes. Improved performance is directed at satisfying such broad goals as cost, quality, schedule, and mission need and suitability. Increasing user satisfaction is the overriding objective. The TQM effort builds on the pioneering work of Dr. W. E. Deming, Dr. J. M. Juran, and others, and benefits from both private and public sector experience with continuous process improvement."[14]
"A management philosophy and company practices that aim to harness the human and material resources of an organization in the most effective way to achieve the objectives of the organization."[15]
"A management approach of an organisation centred on quality, based on the participation of all its members and aiming at long term success through customer satisfaction and benefits to all members of the organisation and society."[16]
"A term first used to describe a management approach to quality improvement. Since then, TQM has taken on many meanings. Simply put, it is a management approach to long-term success through customer satisfaction. TQM is based on all members of an organization participating in improving processes, products, services and the culture in which they work. The methods for implementing this approach are found in the teachings of such quality leaders as Philip B. Crosby, W. Edwards Deming, Armand V. Feigenbaum, Kaoru Ishikawa and Joseph M. Juran."[17]
"TQM is a philosophy for managing an organization in a way which enables it to meet stakeholder needs and expectations efficiently and effectively, without compromising ethical values."[18]
In the United States, the Baldrige Award, created by Public Law 100–107, annually recognizes American businesses, education institutions, health care organizations, and government or nonprofit organizations that are role models for organizational performance excellence. Organizations are judged on criteria from seven categories:[19]
Example criteria are:[20]
Joseph M. Juran believed the Baldrige Award judging criteria to be the most widely accepted description of what TQM entails.[10]: 650
During the 1990s, standards bodies in Belgium, France, Germany, Turkey, and the United Kingdom attempted to standardize TQM. While many of these standards have since been explicitly withdrawn, they are all effectively superseded by ISO 9000:
Interest in TQM as an academic subject peaked around 1993.[2]
The Federal Quality Institute was shuttered in September 1995 as part of the Clinton administration's efforts to streamline government.[21] The European Centre for Total Quality Management closed in August 2009.[22]
TQM, as a vaguely defined quality management approach, was largely supplanted by the ISO 9000 collection of standards and their formal certification processes in the 1990s. Business interest in quality improvement under the TQM name also faded as Jack Welch's success attracted attention to Six Sigma and Toyota's success attracted attention to lean manufacturing, though the three share many of the same tools, techniques, and significant portions of the same philosophy.
TQM lives on in various national quality awards around the globe.[23]
Source: https://en.wikipedia.org/wiki/Total_quality_management
Requirements management is the process of documenting, analyzing, tracing, prioritizing, and agreeing on requirements and then controlling change and communicating to relevant stakeholders. It is a continuous process throughout a project. A requirement is a capability to which a project outcome (product or service) should conform.
The purpose of requirements management is to ensure that an organization documents, verifies, and meets the needs and expectations of its customers and internal or external stakeholders.[1] Requirements management begins with the analysis and elicitation of the objectives and constraints of the organization. It further includes planning for requirements, integrating requirements into the organization's way of working with them (attributes for requirements), maintaining relationships with other information delivered against requirements, and managing changes to all of these.
The traceability thus established is used in managing requirements to report back fulfilment of company and stakeholder interests in terms of compliance, completeness, coverage, and consistency. Traceability also supports change management as part of requirements management, by helping to understand the impacts of changes through requirements or other related elements (e.g., functional impacts through relations to functional architecture) and by facilitating the introduction of these changes.[2]
Requirements management involves communication between the project team members and stakeholders, and adjustment to requirements changes throughout the course of the project.[3] To prevent one class of requirements from overriding another, constant communication among members of the development team is critical. For example, in software development for internal applications, the business has such strong needs that it may ignore user requirements, or believe that in creating use cases, the user requirements are being taken care of.
Requirements traceability is concerned with documenting the life of a requirement.[4] It should be possible to trace back to the origin of each requirement, and every change made to the requirement should therefore be documented in order to achieve traceability.[5] Even the use of the requirement after the implemented features have been deployed and used should be traceable.[5]
Requirements come from different sources, like the business person ordering the product, the marketing manager, and the actual user. These people all have different requirements for the product. Using requirements traceability, an implemented feature can be traced back to the person or group that wanted it during the requirements elicitation. This can, for example, be used during the development process to prioritize the requirement,[6] determining how valuable the requirement is to a specific user. It can also be used after the deployment, when user studies show that a feature is not used, to see why it was required in the first place.
At each stage in a development process, there are key requirements management activities and methods. To illustrate, consider a standard five-phase development process with Investigation, Feasibility, Design, Construction and Test, and Release stages.
In Investigation, the first three classes of requirements are gathered from the users, from the business, and from the development team. In each area, similar questions are asked: what are the goals, what are the constraints, what are the current tools or processes in place, and so on. Only when these requirements are well understood can functional requirements be developed.
In the common case, requirements cannot be fully defined at the beginning of the project. Some requirements will change, either because they simply weren’t extracted, or because internal or external forces at work affect the project in mid-cycle.
The deliverable from the Investigation stage is a requirements document that has been approved by all members of the team. Later, in the thick of development, this document will be critical in preventing scope creep or unnecessary changes. As the system develops, each new feature opens a world of new possibilities, so the requirements specification anchors the team to the original vision and permits a controlled discussion of scope change.[citation needed]
While many organizations still use only documents to manage requirements, others manage their requirements baselines using software tools. These tools allow requirements to be managed in a database, and usually have functions to automate traceability (e.g., by allowing electronic links to be created between parent and child requirements, or between test cases and requirements), electronic baseline creation, version control, and change management. Usually such tools contain an export function that allows a specification document to be created by exporting the requirements data into a standard document application.[citation needed]
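A minimal sketch of how such a tool might represent trace links, assuming a simple parent/child model; the `Requirement` class, the IDs, and the requirement texts below are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Requirement:
    # Minimal requirement record: an ID, the requirement text,
    # and an optional trace link to the parent it was derived from.
    req_id: str
    text: str
    parent_id: Optional[str] = None

def trace_to_origin(baseline, req_id):
    """Walk parent links from a low-level requirement back to its root."""
    chain = []
    current = req_id
    while current is not None:
        chain.append(current)
        current = baseline[current].parent_id
    return chain

# Hypothetical baseline: a user need decomposed into system and software levels.
baseline = {
    "UR-1": Requirement("UR-1", "Operator sees data-entry errors immediately"),
    "SR-4": Requirement("SR-4", "System validates fields on entry", parent_id="UR-1"),
    "SW-9": Requirement("SW-9", "Reject malformed dates in the entry form", parent_id="SR-4"),
}

print(trace_to_origin(baseline, "SW-9"))  # ['SW-9', 'SR-4', 'UR-1']
```

Commercial tools store the same links in a database and add version control and change management on top, but the traceability query itself is essentially this walk.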
In the Feasibility stage, costs of the requirements are determined. For user requirements, the current cost of work is compared to the future projected costs once the new system is in place. Questions such as these are asked: "What are data entry errors costing us now?" or "What is the cost of scrap due to operator error with the current interface?" In fact, the need for the new tool is often recognized as these questions come to the attention of financial people in the organization.
Business costs would include questions such as: "What department has the budget for this?" "What is the expected rate of return on the new product in the marketplace?" "What's the internal rate of return in reducing costs of training and support if we make a new, easier-to-use system?"
Technical costs are related to software development costs and hardware costs. "Do we have the right people to create the tool?" "Do we need new equipment to support expanded software roles?" This last question is an important one. The team must inquire into whether the newest automated tools will add sufficient processing power to shift some of the burden from the user to the system, in order to save people time.
The question also raises a fundamental point about requirements management. A human and a tool form a system, and this realization is especially important if the tool is a computer or a new application on a computer. The human mind excels in parallel processing and interpretation of trends with insufficient data. The CPU excels in serial processing and accurate mathematical computation. The overarching goal of the requirements management effort for a software project would thus be to make sure the work being automated gets assigned to the proper processor. For instance, "Don't make the human remember where she is in the interface. Make the interface report the human's location in the system at all times." Or "Don't make the human enter the same data in two screens. Make the system store the data and fill in the second screen as needed."
The deliverable from the Feasibility stage is the budget and schedule for the project.
Assuming that costs are accurately determined and benefits to be gained are sufficiently large, the project can proceed to the Design stage. In Design, the main requirements management activity is comparing the results of the design against the requirements document to make sure that work is staying in scope.
Again, flexibility is paramount to success. Here's a classic story of scope change in mid-stream that actually worked well. Ford auto designers in the early '80s were expecting gasoline prices to hit $3.18 per gallon by the end of the decade. Midway through the design of the Ford Taurus, prices had settled to around $1.50 a gallon. The design team decided they could build a larger, more comfortable, and more powerful car if gas prices stayed low, so they redesigned the car. The Taurus launch set nationwide sales records when the new car came out, primarily because it was so roomy and comfortable to drive.
In most cases, however, departing from the original requirements to that degree does not work. So the requirements document becomes a critical tool that helps the team make decisions about design changes.[7]
In the construction and testing stage, the main activity of requirements management is to make sure that work and cost stay within schedule and budget, and that the emerging tool does in fact meet the requirements set. A main tool used in this stage is prototype construction and iterative testing. For a software application, the user interface can be created on paper and tested with potential users, while the framework of the software is being built. Results of these tests are recorded in a user interface design guide and handed off to the design team when they are ready to develop the interface.
An important aspect of this stage is verification. This effort confirms that each requirement has been implemented correctly. There are four methods of verification: analysis, inspection, testing, and demonstration. Numerical software execution results or throughput on a network test, for example, provide analytical evidence that the requirement has been met. Inspection of vendor documentation or spec sheets also verifies requirements. Testing or demonstrating the software in a lab environment likewise verifies requirements; a test type of verification occurs when test equipment not normally part of the lab (or of the system under test) is used. Comprehensive test procedures outline the steps and their expected results, clearly identifying what is to be seen as a result of performing each step. After a step or set of steps is completed, the last step's expected result calls out what has been seen and identifies which requirement or requirements have been verified (identified by number). The requirement number, title, and verbiage are tied together in another location in the test document.
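One way a test procedure might tie steps and expected results to requirement numbers can be sketched as follows; the step texts and requirement IDs are invented for illustration:

```python
# Each step: (action, expected result, requirement IDs verified if the step passes).
procedure = [
    ("Power on the unit", "Status LED turns green", []),
    ("Enter a malformed date", "Error dialog is shown", ["SW-9"]),
    ("Submit a valid form", "Record is saved within 2 s", ["SW-4", "SW-7"]),
]

def verified_requirements(steps, passed):
    """Collect the requirement IDs verified by the steps whose
    observed result matched the expected result."""
    ids = set()
    for (action, expected, req_ids), ok in zip(steps, passed):
        if ok:
            ids.update(req_ids)
    return sorted(ids)

# Steps 1 and 2 passed; step 3 failed, so SW-4 and SW-7 remain unverified.
print(verified_requirements(procedure, [True, True, False]))  # ['SW-9']
```

Keeping this mapping explicit is what lets the last step of a procedure "call out" exactly which requirements a test run has verified.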
Hardly any software development project is completed without some changes being asked of it. Changes can stem from changes in the environment in which the finished product is envisaged to be used, business changes, regulation changes, errors in the original definition of requirements, limitations in technology, changes in the security environment, and so on. The activities of requirements change management include receiving change requests from the stakeholders, recording the received change requests, analyzing and determining the desirability and process of implementation, implementing the change request, quality assurance for the implementation, and closing the change request. The data from change requests are then compiled and analyzed, and appropriate metrics are derived and dovetailed into the organizational knowledge repository.[8]
Requirements management does not end with product release. From that point on, the data coming in about the application’s acceptability is gathered and fed into the Investigation phase of the next generation or release. Thus the process begins again.
Acquiring a tool to support requirements management is no trivial matter and needs to be undertaken as part of a broader process improvement initiative. There has long been a perception that a tool, once acquired and installed on a project, can address all of its requirements management-related needs. However, the purchase or development of a tool to support requirements management can be a costly decision: organizations may get burdened with expensive support contracts, disproportionate effort can get misdirected towards learning to use the tool and configuring it to address particular needs, and inappropriate use can lead to erroneous decisions. Organizations should follow an incremental process to make decisions about tools to support their particular needs from within the wider context of their development process and tooling.[9] The tools are presented in Requirements traceability.
Source: https://en.wikipedia.org/wiki/Requirements_management
In project management, scope is the defined features and functions of a product, or the scope of work needed to finish a project.[1] Scope involves getting information required to start a project, including the features the product needs to meet its stakeholders' requirements.[2][3]: 116
Project scope is oriented towards the work required and methods needed, while product scope is more oriented toward functional requirements. If requirements are not completely defined and described, and if there is no effective change control in a project, scope or requirement creep may ensue.[4][5]: 434[3]: 13
Scope management is the process of defining[3]: 481–483 and managing the scope of a project to ensure that it stays on track, within budget, and meets the expectations of stakeholders.
Source: https://en.wikipedia.org/wiki/Scope_(project_management)
Software architecture is the set of structures needed to reason about a software system and the discipline of creating such structures and systems. Each structure comprises software elements, relations among them, and properties of both elements and relations.[1]
The architecture of a software system is a metaphor, analogous to the architecture of a building.[2] It functions as the blueprints for the system and the development project, which project management can later use to extrapolate the tasks necessary to be executed by the teams and people involved.
Software architecture is about making fundamental structural choices that are costly to change once implemented. Software architecture choices include specific structural options from possibilities in the design of the software. There are two fundamental laws in software architecture:[3][4]
"Architectural Kata" is a team exercise that can be used to produce an architectural solution that fits the needs. Each team extracts and prioritizes architectural characteristics (aka non-functional requirements), then models the components accordingly. The team can use the C4 Model, which is a flexible method to model the architecture just enough. Note that synchronous communication between architectural components entangles them, and they must then share the same architectural characteristics.[4]
Documenting software architecture facilitates communication between stakeholders, captures early decisions about the high-level design, and allows the reuse of design components between projects.[5]: 29–35
Software architecture design is commonly juxtaposed with software application design. Whilst application design focuses on the design of the processes and data supporting the required functionality (the services offered by the system), software architecture design focuses on designing the infrastructure within which application functionality can be realized and executed such that the functionality is provided in a way which meets the system's non-functional requirements.
Software architectures can be categorized into two main types: monolithic and distributed architecture, each having its own subcategories.[4]
Software architecture tends to become more complex over time. Software architects should use "fitness functions" to continuously keep the architecture in check.[4]
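As an illustration, a fitness function can be an automated check over a dependency graph. The sketch below enforces one hypothetical layering rule (UI modules must not depend on database modules directly); the module names and the dependency map are invented for the example:

```python
def layering_violations(deps, src_layer="ui", dst_layer="db"):
    """Return (module, dependency) pairs where a module in src_layer
    depends directly on a module in dst_layer."""
    return sorted(
        (mod, dep)
        for mod, targets in deps.items()
        for dep in targets
        if mod.startswith(src_layer + ".") and dep.startswith(dst_layer + ".")
    )

# Hypothetical dependency map extracted from a codebase.
deps = {
    "ui.forms": ["service.orders"],
    "ui.reports": ["db.connection"],   # violates the layering rule
    "service.orders": ["db.connection"],
}

print(layering_violations(deps))  # [('ui.reports', 'db.connection')]
```

A check like this can run in continuous integration, so that architectural drift is flagged on every commit rather than discovered during a later review.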
Opinions vary as to the scope of software architectures:[6]
There is no sharp distinction between software architecture versus design and requirements engineering (see Related fields below). They are all part of a "chain of intentionality" from high-level intentions to low-level details.[12]: 18
Software Architecture Pattern refers to a reusable, proven solution to a recurring problem at the system level, addressing concerns related to the overall structure, component interactions, and quality attributes of the system. Software architecture patterns operate at a higher level of abstraction than software design patterns, solving broader system-level challenges. While these patterns typically affect system-level concerns, the distinction between architectural patterns and architectural styles can sometimes be blurry. Examples include Circuit Breaker.[13][14][15]
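A minimal sketch of the Circuit Breaker idea in Python (the thresholds, names, and fail-fast exception are illustrative, and the half-open state is reduced to a single trial call after the cool-down):

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: after max_failures consecutive failures the
    circuit opens and calls fail fast; after reset_after seconds one trial
    call is allowed through again (a simplified half-open behavior)."""

    def __init__(self, max_failures=3, reset_after=30.0, clock=time.monotonic):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.clock = clock
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if self.clock() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            # Cool-down elapsed: close tentatively and allow a trial call.
            self.opened_at = None
            self.failures = 0
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = self.clock()
            raise
        self.failures = 0
        return result
```

Wrapping calls to an unreliable downstream service in such a breaker protects the caller from piling up slow failures, which is the system-level concern this pattern addresses.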
Software Architecture Style refers to a high-level structural organization that defines the overall system organization, specifying how components are organized, how they interact, and the constraints on those interactions. Architecture styles typically include a vocabulary of component and connector types, as well as semantic models for interpreting the system's properties. These styles represent the most coarse-grained level of system organization. Examples include Layered Architecture, Microservices, and Event-Driven Architecture.[13][14][15]
The following architectural anti-patterns can arise when architects make decisions. These anti-patterns often follow a progressive sequence, where resolving one may lead to the emergence of another.[4]
Software architecture exhibits the following:
Multitude of stakeholders: software systems have to cater to a variety of stakeholders such as business managers, owners, users, and operators. These stakeholders all have their own concerns with respect to the system. Balancing these concerns and demonstrating that they are addressed is part of designing the system.[5]: 29–31 This implies that architecture involves dealing with a broad variety of concerns and stakeholders, and has a multidisciplinary nature.
Separation of concerns: the established way for architects to reduce complexity is to separate the concerns that drive the design. Architecture documentation shows that all stakeholder concerns are addressed by modeling and describing the architecture from separate points of view associated with the various stakeholder concerns.[16] These separate descriptions are called architectural views (see for example the 4+1 architectural view model).
Quality-driven: classic software design approaches (e.g. Jackson Structured Programming) were driven by required functionality and the flow of data through the system, but the current insight[5]: 26–28 is that the architecture of a software system is more closely related to its quality attributes such as fault-tolerance, backward compatibility, extensibility, reliability, maintainability, availability, security, usability, and other such –ilities. Stakeholder concerns often translate into requirements on these quality attributes, which are variously called non-functional requirements, extra-functional requirements, behavioral requirements, or quality attribute requirements.
Recurring styles:like building architecture, the software architecture discipline has developed standard ways to address recurring concerns. These "standard ways" are called by various names at various levels of abstraction. Common terms for recurring solutions are architectural style,[12]: 273–277tactic,[5]: 70–72reference architectureandarchitectural pattern.[17][18][5]: 203–205
Conceptual integrity:a term introduced byFred Brooksin his 1975 bookThe Mythical Man-Monthto denote the idea that the architecture of a software system represents an overall vision of what it should do and how it should do it. This vision should be separated from its implementation. The architect assumes the role of "keeper of the vision", making sure that additions to the system are in line with the architecture, hence preservingconceptual integrity.[19]: 41–50
Cognitive constraints:An observation first made in a 1967 paper by computer programmerMelvin Conwaythat organizations which design systems are constrained to produce designs which are copies of the communication structures of these organizations.[20]Fred Brooks introduced it to a wider audience when he cited the paper and the idea inThe Mythical Man-Month, calling itConway's Law.
Software architecture is an "intellectually graspable" abstraction of a complex system,[5]: 5–6 and this abstraction provides a number of benefits.
The comparison between software design and (civil) architecture was first drawn in the late 1960s,[23] but the term "software architecture" did not see widespread usage until the 1990s.[24] The field of computer science had encountered problems associated with complexity since its formation.[25] Earlier problems of complexity were solved by developers by choosing the right data structures, developing algorithms, and applying the concept of separation of concerns. Although the term "software architecture" is relatively new to the industry, the fundamental principles of the field have been applied sporadically by software engineering pioneers since the mid-1980s. Early attempts to capture and explain the software architecture of a system were imprecise and disorganized, often characterized by a set of box-and-line diagrams.[26]
Software architecture as a concept has its origins in the research of Edsger Dijkstra in 1968 and David Parnas in the early 1970s. These scientists emphasized that the structure of a software system matters and that getting the structure right is critical. During the 1990s there was a concerted effort to define and codify fundamental aspects of the discipline, with research work concentrating on architectural styles (patterns), architecture description languages, architecture documentation, and formal methods.[27]
Research institutions have played a prominent role in furthering software architecture as a discipline. Mary Shaw and David Garlan of Carnegie Mellon wrote a book titled Software Architecture: Perspectives on an Emerging Discipline in 1996, which promoted software architecture concepts such as components, connectors, and styles. The efforts of the University of California, Irvine's Institute for Software Research in software architecture research are directed primarily at architectural styles, architecture description languages, and dynamic architectures.
IEEE 1471-2000, "Recommended Practice for Architecture Description of Software-Intensive Systems", was the first formal standard in the area of software architecture. It was adopted in 2007 by ISO as ISO/IEC 42010:2007. In November 2011, IEEE 1471-2000 was superseded by ISO/IEC/IEEE 42010:2011, "Systems and software engineering – Architecture description" (jointly published by IEEE and ISO).[16]
While in IEEE 1471, software architecture was about the architecture of "software-intensive systems", defined as "any system where software contributes essential influences to the design, construction, deployment, and evolution of the system as a whole", the 2011 edition goes a step further by including the ISO/IEC 15288 and ISO/IEC 12207 definitions of a system, which embrace not only hardware and software, but also "humans, processes, procedures, facilities, materials and naturally occurring entities". This reflects the relationship between software architecture, enterprise architecture, and solution architecture.
Making architectural decisions involves collecting sufficient relevant information, providing justification for the decision, documenting the decision and its rationale, and communicating it effectively to the appropriate stakeholders.[4]
It is the software architect's responsibility to match architectural characteristics (also known as non-functional requirements) with business requirements.[4]
There are four core activities in software architecture design.[28] These core architecture activities are performed iteratively and at different stages of the initial software development life-cycle, as well as over the evolution of a system.
Architectural analysis is the process of understanding the environment in which a proposed system will operate and determining the requirements for the system. The input or requirements to the analysis activity can come from any number of stakeholders.
The outputs of the analysis activity are those requirements that have a measurable impact on a software system's architecture, called architecturally significant requirements.[31]
Architectural synthesis or design is the process of creating an architecture. Given the architecturally significant requirements determined by the analysis, the current state of the design, and the results of any evaluation activities, the design is created and improved.[28][5]: 311–326
Architecture evaluation is the process of determining how well the current design or a portion of it satisfies the requirements derived during analysis. An evaluation can occur whenever an architect is considering a design decision, after some portion of the design has been completed, after the final design has been completed, or after the system has been constructed. Available software architecture evaluation techniques include the Architecture Tradeoff Analysis Method (ATAM) and TARA.[32] Frameworks for comparing the techniques are discussed in the SARA Report[21] and in Architecture Reviews: Practice and Experience.[33]
Architecture evolutionis the process of maintaining and adapting an existing software architecture to meet changes in requirements and environment. As software architecture provides a fundamental structure of a software system, its evolution and maintenance would necessarily impact its fundamental structure. As such, architecture evolution is concerned with adding new functionality as well as maintaining existing functionality and system behavior.
Architecture requires critical supporting activities. These supporting activities take place throughout the core software architecture process. They include knowledge management and communication, design reasoning and decision-making, and documentation.
Software architecture supporting activities are carried out during core software architecture activities. These supporting activities assist a software architect to carry out analysis, synthesis, evaluation, and evolution. For instance, an architect has to gather knowledge, make decisions, and document during the analysis phase.
Software architecture inherently deals with uncertainties, and the size of architectural components can significantly influence a system's outcomes, both positively and negatively. Neal Ford and Mark Richards propose an iterative approach to address the challenge of identifying and right-sizing components. This method emphasizes continuous refinement as teams develop a more nuanced understanding of system behavior and requirements.[4]
The approach typically involves an iterative cycle with several stages; this cycle serves as a general framework and can be adapted to different domains.[4]
There are also concerns that software architecture leads to too much big design up front, especially among proponents of agile software development. A number of methods have been developed to balance the trade-offs of up-front design and agility,[38] including the agile method DSDM, which mandates a "Foundations" phase during which "just enough" architectural foundations are laid. IEEE Software devoted a special issue to the interaction between agility and architecture.
Software architecture erosion refers to a gradual gap between the intended and implemented architecture of a software system over time.[39]The phenomenon of software architecture erosion was initially brought to light in 1992 by Perry and Wolf alongside their definition of software architecture.[2]
Software architecture erosion may occur in each stage of the software development life cycle and has varying impacts on development speed and the cost of maintenance. It occurs for various reasons, such as architectural violations, the accumulation of technical debt, and knowledge vaporization.[40] A famous case of architecture erosion is the Mozilla web browser.[41] Mozilla, an application created by Netscape, had a complex codebase that became harder to maintain due to continuous changes. Due to initial poor design and growing architecture erosion, Netscape spent two years redeveloping the Mozilla web browser, demonstrating the importance of proactive architecture management in preventing costly repairs and project delays.
Architecture erosion can decrease software performance, substantially increase evolutionary costs, and degrade software quality. Various approaches and tools have been proposed to detect architecture erosion. These approaches are primarily classified into four categories: consistency-based, evolution-based, defect-based, and decision-based approaches.[39]For instance, automated architecture conformance checks, static code analysis tools, and refactoring techniques help identify and mitigate erosion early.
In addition, the measures used to address architecture erosion fall into two main types: preventative and remedial.[39] Preventative measures include enforcing architectural rules, regular code reviews, and automated testing, while remedial measures involve refactoring, redesign, and documentation updates.
Software architecture recovery (or reconstruction, or reverse engineering) includes the methods, techniques, and processes to uncover a software system's architecture from available information, including its implementation and documentation. Architecture recovery is often necessary to make informed decisions in the face of obsolete or out-of-date documentation and architecture erosion: implementation and maintenance decisions diverging from the envisioned architecture.[42] Practices exist to recover software architecture through static program analysis. This is part of the subjects covered by the software intelligence practice.
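As a tiny, hypothetical sketch of static-analysis-based recovery, Python's standard ast module can extract a module's import dependencies, one small facet of its structure, without ever executing the code. The example source below is invented:

```python
import ast

# Invented example source for a module whose dependencies we want to recover.
source = """
import os
import json
from collections import OrderedDict
"""

def recover_dependencies(src):
    """Statically extract imported module names without executing the code."""
    deps = set()
    for node in ast.walk(ast.parse(src)):
        if isinstance(node, ast.Import):
            deps.update(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom):
            deps.add(node.module)
    return sorted(deps)

print(recover_dependencies(source))  # ['collections', 'json', 'os']
```

Real recovery tools aggregate many such facts (call graphs, package boundaries, layering violations) into an architectural view, but the underlying mechanism is the same: facts derived from the artifacts, not from the documentation.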
Architecture is design, but not all design is architectural.[1] In practice, the architect is the one who draws the line between software architecture (architectural design) and detailed design (non-architectural design). There are no rules or guidelines that fit all cases, although there have been attempts to formalize the distinction.
According to the Intension/Locality Hypothesis,[43] the distinction between architectural and detailed design is defined by the Locality Criterion,[43] according to which a statement about software design is non-local (architectural) if and only if a program that satisfies it can be expanded into a program that does not. For example, the client–server style is architectural (strategic) because a program that is built on this principle can be expanded into a program that is not client–server, for example by adding peer-to-peer nodes.
Requirements engineering and software architecture can be seen as complementary approaches: while software architecture targets the 'solution space' or the 'how', requirements engineering addresses the 'problem space' or the 'what'.[44] Requirements engineering entails the elicitation, negotiation, specification, validation, documentation, and management of requirements. Both requirements engineering and software architecture revolve around stakeholder concerns, needs, and wishes.
There is considerable overlap between requirements engineering and software architecture, as evidenced for example by a study into five industrial software architecture methods that concludes that "the inputs (goals, constraints, etc.) are usually ill-defined, and only get discovered or better understood as the architecture starts to emerge" and that while "most architectural concerns are expressed as requirements on the system, they can also include mandated design decisions".[28] In short, required behavior impacts solution architecture, which in turn may introduce new requirements.[45] Approaches such as the Twin Peaks model[46] aim to exploit the synergistic relation between requirements and architecture.
https://en.wikipedia.org/wiki/Software_architecture
Software quality control is the set of procedures used by organizations[1] to ensure that a software product will meet its quality goals at the best value to the customer,[2] and to continually improve the organization's ability to produce software products in the future.[1]
Software quality control refers to specified functional requirements as well as non-functional requirements such as supportability, performance, and usability.[2] It also refers to the ability of software to perform well in unforeseeable scenarios and to keep a relatively low defect rate.
These specified procedures and outlined requirements lead to the ideas of verification and validation and of software testing.
Software quality control is distinct from software quality assurance, which encompasses processes and standards for the ongoing maintenance of high-quality products, e.g. software deliverables, documentation, and processes (avoiding defects), whereas software quality control is the validation of artifacts' compliance against established criteria (finding defects).
Software quality control is a function that checks whether a software component or supporting artifact meets requirements, or is "fit for use". Software quality control is commonly referred to as testing.
Verification and validation assure that a software system meets a user's needs.
Verification: "Are we building the product right?" The software should conform to its specification.
Validation: "Are we building the right product?" The software should do what the user really requires.
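The two questions can be made concrete with a toy example; the specification and function below are invented for illustration:

```python
# Invented specification: discount(price) returns the price reduced by 10%,
# and never returns a negative amount.

def discount(price):
    return max(price * 0.9, 0.0)

# Verification ("are we building the product right?"): check the
# implementation against the written specification.
assert abs(discount(100) - 90.0) < 1e-9   # nominal case from the specification
assert discount(-5) == 0.0                # boundary case from the specification

# Validation ("are we building the right product?") asks a different
# question: even with every check above passing, is a flat 10% discount
# what the user actually requires? That is answered with stakeholders,
# not with more assertions.
```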
Verification and validation together serve two principal objectives: discovering defects in a system, and assessing whether the system meets its specification and the user's real needs.
https://en.wikipedia.org/wiki/Software_quality_control
In computer programming, reusability describes the quality of a software asset that affects its ability to be used in a software system for which it was not specifically designed. An asset that is easy to reuse and provides utility is considered to have high reusability. A related concept, leverage, involves modifying an existing asset to meet system requirements.[1]
The ability to reuse can be viewed as the ability to build larger things from smaller parts, and to identify commonality among the parts. Reusability is often a required characteristic of platform software. Reusability brings several aspects to software development that do not need to be considered when reusability is not required.
Reusability may be impacted by various DevOps aspects including build, packaging, distribution, installation, configuration, deployment, maintenance, and upgrade. If these aspects are not considered, software may seem to be reusable based on its design, but may not be reusable in practice.
Many reuse design principles were developed at the WISR workshops,[2] although there is no consensus on candidate design features for software reuse.
https://en.wikipedia.org/wiki/Software_reusability
A software standard is a standard, protocol, or other common format of a document, file, or data transfer accepted and used by one or more software developers while working on one or more computer programs. Software standards enable interoperability between different programs created by different developers.
Software standards consist of certain terms, concepts, data formats, document styles and techniques agreed upon by software creators so that their software can understand the files and data created by a different computer program. To be considered a standard, a certain protocol needs to be accepted and incorporated by a group of developers who contribute to the definition and maintenance of the standard.
Some developers prefer using standards for software development because of the efficiencies they provide for code development[1] and the wider user acceptance and use of the resulting application.[2]
HTML, TCP/IP, SMTP, POP, and FTP are examples of software standards that application designers must understand and follow if their software expects to interface with these standards. For instance, in order for an email sent using Microsoft Outlook to be read by someone using Yahoo! Mail, the email must be sent using SMTP so that the recipient's software can understand and correctly parse and display the email. Without such a standardized protocol, two different software applications would be unable to accurately share and display the information delivered between each other.
Some other widely used data formats, while understood and used by a variety of computer programs, are not considered software standards. Microsoft Office file formats, such as .doc and .xls, are commonly converted by other computer programs for use, but are still owned and controlled by Microsoft, unlike text files (TXT or RTF[3]).
Representatives from standards organizations, like the W3C[4] and ISOC,[5] collaborate on how to make unified software standards to ensure seamless communication between software applications. These organisations include groups of larger software companies like Microsoft and Apple Inc.
The complexity of a standard varies based on the specific problem it aims to address, but it needs to remain simple, maintainable, and understandable. The standard document must comprehensively outline various conditions, types, and elements to ensure practicality and fulfill its intended purpose. For instance, although both FTP (File Transfer Protocol) and SMTP (Simple Mail Transfer Protocol) facilitate computer-to-computer communication, FTP specifically handles the exchange of files, while SMTP focuses on the transmission of emails.
A standard can be a closed standard or an open standard. The documentation for an open standard is open to the public, and anyone can create software that implements and uses the standard. The documentation and specification for closed standards are not available to the public, enabling their developers to sell and license the code that manages their data format to other interested software developers. While this process increases the revenue potential for a useful file format, it may limit acceptance and drive the adoption of a similar, open standard instead.[6]
https://en.wikipedia.org/wiki/Software_standard
Software testability is the degree to which a software artifact (e.g. a software system, module, requirement, or design document) supports testing in a given test context. If the testability of an artifact is high, then finding faults in the system (if any) by means of testing is easier.
Formally, some systems are testable and some are not. This classification can be made by noticing that, for a functionality of the system under test S, which takes input I, to be testable, a computable functional predicate V must exist such that V(S, I) is true when S, given input I, produces a valid output, and false otherwise. This function V is known as the verification function for the system with input I.
Many software systems are untestable, or not immediately testable. For example, Google's ReCAPTCHA, without any metadata about the images, is not a testable system. ReCAPTCHA can, however, be immediately tested if, for each image shown, a tag is stored elsewhere. Given this meta information, one can test the system.
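A minimal rendering of such a verification predicate in Python, with an invented system under test (a sorting routine) whose output validity is decidable without any knowledge of the implementation:

```python
# V(S, I): a computable predicate that is true exactly when system S,
# given input I, produces a valid output. The system here is an invented
# sorting routine; validity of its output can be decided from the
# input/output pair alone.

def S(items):
    return sorted(items)

def V(system, items):
    out = system(items)
    is_ordered = all(a <= b for a, b in zip(out, out[1:]))
    is_permutation = sorted(out) == sorted(items)
    return is_ordered and is_permutation

assert V(S, [3, 1, 2])                      # S is testable on this input
assert not V(lambda items: [], [3, 1, 2])   # a broken system fails the predicate
```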
Therefore, testability is often thought of as an extrinsic property which results from the interdependency of the software to be tested and the test goals, test methods used, and test resources (i.e., the test context). Even though testability cannot be measured directly (such as software size), it should be considered an intrinsic property of a software artifact because it is highly correlated with other key software qualities such as encapsulation, coupling, cohesion, and redundancy.
The correlation of 'testability' to good design can be observed by seeing that code that has weak cohesion, tight coupling, redundancy and lack of encapsulation is difficult to test.[1]
A lower degree of testability results in increased test effort. In extreme cases, a lack of testability may prevent testing parts of the software, or of the software requirements, at all.
Testability, a property applying to an empirical hypothesis, involves two components.
The effort and effectiveness of software tests depend on numerous factors.
The testability of software components (modules, classes) is determined by several factors, and it can be improved in various ways.
Requirements must fulfill certain criteria in order to be testable.
Treating the requirements as axioms, testability can be treated via asserting the existence of a function F_S (the software) such that input I_k generates output O_k, that is, F_S : I → O. The ideal software therefore generates the tuples (I_k, O_k), which constitute the input–output set Σ, standing for the specification.
Now, take a test input I_t, which generates the output O_t, that is, the test tuple τ = (I_t, O_t). The question is whether or not τ ∈ Σ. If it is in the set, the test tuple τ passes; otherwise the system fails the test input. It is therefore of central importance to determine whether we can create a function that effectively serves as the set indicator function for the specification set Σ.
By this notion, 1_Σ is the testability function for the specification Σ.
The existence of such a function should not merely be asserted; it should be proven rigorously. Without algebraic consistency no such function can be found, and the specification therefore ceases to be termed testable.
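With a finite specification set, the indicator function 1_Σ reduces to plain set membership over test tuples. The specification values below are invented for illustration:

```python
# The specification Σ as a finite set of (input, output) tuples; the
# invented spec is "double the input". The indicator function 1_Σ is
# then set membership over test tuples τ = (I_t, O_t).

SPEC = {(1, 2), (2, 4), (3, 6)}

def indicator(tau):
    return tau in SPEC

def system(i):
    return 2 * i

tau = (3, system(3))
assert indicator(tau)          # τ ∈ Σ: this test input passes
assert not indicator((3, 7))   # τ ∉ Σ: such an output would fail the test
```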
https://en.wikipedia.org/wiki/Software_testability
The Linux kernel provides multiple interfaces to user-space and kernel-mode code. The interfaces can be classified as either application programming interface (API) or application binary interface (ABI), and they can be classified as either kernel–user space or kernel-internal.
The Linux API includes the kernel–user space API, which allows code in user space to access system resources and services of the Linux kernel.[3] It is composed of the system call interface of the Linux kernel and the subroutines in the C standard library. The focus of the development of the Linux API has been to provide the usable features of the specifications defined in POSIX in a way which is reasonably compatible, robust, and performant, and to provide additional useful features not defined in POSIX, just as the kernel–user space APIs of other systems implementing the POSIX API also provide additional features not defined in POSIX.
The Linux API has, by choice, been kept stable over the decades through a policy of not introducing breaking changes; this stability guarantees the portability of source code.[4] At the same time, Linux kernel developers have historically been conservative and meticulous about introducing new system calls.[citation needed]
Much available free and open-source software is written for the POSIX API. Since so much more development flows into the Linux kernel as compared to the other POSIX-compliant combinations of kernel and C standard library,[citation needed] the Linux kernel and its API have been augmented with additional features. Programming for the full Linux API, rather than just the POSIX API, may provide advantages in cases where those additional features are useful. Well-known examples are udev, systemd, and Weston.[5] People such as Lennart Poettering openly advocate preferring the Linux API over the POSIX API where it offers advantages.[6]
At FOSDEM 2016, Michael Kerrisk explained some of the perceived issues with the Linux kernel's user-space API, describing it as containing multiple design errors: it is non-extensible, unmaintainable, overly complex, of limited purpose, in violation of standards, and inconsistent. Most of those mistakes cannot be fixed because doing so would break the ABI that the kernel presents to user space.[7]
The system call interface of a kernel is the set of all implemented and available system calls in that kernel. In the Linux kernel, various subsystems, such as the Direct Rendering Manager (DRM), define their own system calls, all of which are part of the system call interface.
Various issues with the organization of the Linux kernel system calls are publicly discussed. Issues have been pointed out by Andy Lutomirski, Michael Kerrisk, and others.[8][9][10][11]
A C standard library for Linux includes wrappers around the system calls of the Linux kernel; the combination of the Linux kernel system call interface and a C standard library is what builds the Linux API. Several implementations of the C standard library exist for Linux. Although the landscape is shifting, among these options glibc remains the most popular implementation, to the point that many treat it as the default and the term libc as equivalent to it.
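The wrapper relationship can be observed from Python via ctypes, assuming a Linux system where the process's already-loaded C library can be reached with CDLL(None):

```python
import ctypes
import os

# CDLL(None) dlopen()s the running process itself, which on Linux exposes
# the symbols of the already-loaded C library (glibc on most distributions).
libc = ctypes.CDLL(None)

# Both calls below are thin wrappers over the same getpid system call:
# one goes through the C library, the other through Python's os module.
assert libc.getpid() == os.getpid()
```

Either path ends at the same kernel–user space interface; the C library merely provides the conventional, portable entry point.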
As in other Unix-like systems, the Linux kernel provides additional capabilities that are not part of POSIX.
DRM, for example, has been paramount for the development and implementation of well-defined and performant free and open-source graphics device drivers, without which no rendering acceleration would be available at all and only the 2D drivers would be available in the X.Org Server. DRM was developed for Linux, and has since been ported to other operating systems as well.[14]
The Linux ABI is a kernel–user space ABI. As an ABI is a machine code interface, the Linux ABI is bound to the instruction set. Defining a useful ABI and keeping it stable is less the responsibility of the Linux kernel developers or of the developers of the GNU C Library, and more the task for Linux distributions and independent software vendors (ISVs) who wish to sell and provide support for their proprietary software as binaries only for a single Linux ABI, as opposed to supporting multiple Linux ABIs.
An ABI has to be defined for every instruction set, such as x86, x86-64, MIPS, ARMv7-A (32-bit), ARMv8-A (64-bit), etc., including the endianness if both endiannesses are supported.
It should be possible to compile software with different compilers against the definitions specified in the ABI and achieve full binary compatibility. Compilers that are free and open-source software include the GNU Compiler Collection and LLVM/Clang.
Many kernel-internal APIs exist, allowing kernel subsystems to interface with one another. These are kept fairly stable, but there is no guarantee of stability: a kernel-internal API can be changed when new research or insights indicate the need, in which case all necessary modifications and testing have to be done by the author of the change.
The Linux kernel is a monolithic kernel, hence device drivers are kernel components. To ease the burden on companies maintaining their (proprietary) device drivers outside of the main kernel tree, stable APIs for device drivers have been repeatedly requested. The Linux kernel developers have repeatedly declined to guarantee stable in-kernel APIs for device drivers: such a guarantee would have hampered the development of the Linux kernel in the past, would still do so in the future, and, due to the nature of free and open-source software, is not necessary. By choice, therefore, the Linux kernel has no stable in-kernel API.[15]
Since there are no stable in-kernel APIs, there cannot be stable in-kernel ABIs.[16]
For many use cases, the Linux API is considered too low-level, so APIs of higher abstraction, implemented on top of the lower-level APIs, are used instead.
https://en.wikipedia.org/wiki/Linux_kernel_interfaces
In the Standard Generalized Markup Language (SGML), an entity is a primitive data type which associates a string with either a unique alias (such as a user-specified name) or an SGML reserved word (such as #DEFAULT). Entities are foundational to the organizational structure and definition of SGML documents. The SGML specification defines numerous entity types, which are distinguished by keyword qualifiers and context. An entity string value may variously consist of plain text, SGML tags, and/or references to previously defined entities. Certain entity types may also invoke external documents. Entities are called by reference.
Entities are classified as either general or parameter entities, and are further classified as either parsed or unparsed.
An internal entity has a value that is either a literal string, or a parsed string comprising markup and entities defined in the same document (such as a Document Type Declaration or subdocument). In contrast, an external entity has a declaration that invokes an external document, thereby necessitating the intervention of an entity manager to resolve the external document reference.
An entity declaration may have a literal value, or may have some combination of an optional SYSTEM identifier, which allows SGML parsers to process an entity's string referent as a resource identifier, and an optional PUBLIC identifier, which identifies the entity independent of any particular representation. In XML, a subset of SGML, an entity declaration may not have a PUBLIC identifier without a SYSTEM identifier.
When an external entity references a complete SGML document, it is known in the calling document as an SGML document entity. An SGML document is a text document with SGML markup defined in an SGML prologue (i.e., the DTD and subdocuments). A complete SGML document comprises not only the document instance itself, but also the prologue and, optionally, the SGML declaration (which defines the document's markup syntax and declares the character encoding).[1]
An entity is defined via an entity declaration in a document's document type definition (DTD); the declaration binds the entity's name to its replacement value.
Names for entities must follow the rules for SGML names, and there are limitations on where entities can be referenced.
Parameter entities are referenced by placing the entity name between "%" and ";". Parsed general entities are referenced by placing the entity name between "&" and ";". Unparsed entities are referenced by placing the entity name in the value of an attribute declared as type ENTITY.
When a document containing general entity references is parsed, the references are replaced by their values, and the document is reported to the downstream application as if the replacement text had been written out literally; for instance, an external entity resolving to a hello.txt file that contains the text Salutations is reported as that text.
A reference to an undeclared entity is an error unless a default entity has been defined.
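Entity declaration and reference can be sketched with XML (the SGML subset), whose internal-subset entities Python's standard library parser expands during parsing. The entity name and document below are invented for illustration:

```python
import xml.etree.ElementTree as ET

# An invented XML document whose internal DTD subset declares a general
# entity "who" and references it as &who; in the document body.
document = (
    '<!DOCTYPE greeting [ <!ENTITY who "world"> ]>'
    '<greeting>Hello, &who;!</greeting>'
)

# The parser replaces the reference with the entity's value, so the
# application sees only the expanded text.
root = ET.fromstring(document)
print(root.text)  # Hello, world!
```

The same expansion mechanism applied to external entities is what makes careless entity processing a security concern (the "XML external entity" class of attacks).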
Additional markup constructs and processor options may affect whether and how entities are processed. For example, a processor may optionally ignore external entities.
Standard entity sets for SGML and some of its derivatives have been developed asmnemonicdevices, to ease document authoring when there is a need to use characters that are not easily typed or that are not widely supported by legacy character encodings. Each such entity consists of just one character from theUniversal Character Set. Although any character can be referenced using anumeric character reference, acharacter entity referenceallows characters to be referenced by name instead ofcode point.
For example, HTML 4 has 252 built-in character entities that do not need to be explicitly declared, while XML has five. XHTML has the same five as XML, but if its DTDs are explicitly used, then it has 253 (&apos;, the apostrophe, being the extra entity beyond those in HTML 4).
https://en.wikipedia.org/wiki/XML_external_entity
Free and open-source software (FOSS) is software available under a license that grants users the right to use, modify, and distribute the software – modified or not – to everyone free of charge. FOSS is an inclusive umbrella term encompassing free software and open-source software.[a][1] The rights guaranteed by FOSS originate from the "Four Essential Freedoms" of The Free Software Definition and the criteria of The Open Source Definition.[4][6] All FOSS must have publicly available source code, but not all source-available software is FOSS. FOSS is the opposite of proprietary software, which is licensed restrictively or has undisclosed source code.[4]
The historical precursor to FOSS was the hobbyist and academic public domain software ecosystem of the 1960s to 1980s. Free and open-source operating systems such as Linux distributions and descendants of BSD are widely used, powering millions of servers, desktops, smartphones, and other devices.[9][10] Free-software licenses and open-source licenses have been adopted by many software packages. Reasons for using FOSS include decreased software costs, increased security against malware, stability, privacy, opportunities for educational usage, and giving users more control over their own hardware.
The free software movement and the open-source software movement are online social movements behind the widespread production, adoption and promotion of FOSS, with the former preferring to use the equivalent term free/libre and open-source software (FLOSS). FOSS is supported by a loosely associated movement of multiple organizations, foundations, communities and individuals who share basic philosophical perspectives and collaborate practically, but may diverge on points of detail.
"Free and open-source software" (FOSS) is an umbrella term for software that is considered both free software and open-source software.[1] The terms "free software" and "open-source software" apply to any software distributed under terms that allow users to use, modify, and redistribute said software in any manner they see fit, without requiring that they pay the author(s) of the software a royalty or fee for engaging in the listed activities.[11]
Although there is an almost complete overlap between free-software licenses and open-source-software licenses, there is a strong philosophical disagreement between the advocates of these two positions. The term FOSS was created to be neutral on these philosophical disagreements between the Free Software Foundation (FSF) and the Open Source Initiative (OSI), and to provide a single unified term that could refer to both concepts, although Richard Stallman argues that it fails to be neutral, unlike the similar term "Free/Libre and Open Source Software" (FLOSS).[12]
Richard Stallman's The Free Software Definition, adopted by the FSF, defines free software as a matter of liberty, not price,[13][14] and as that which upholds the Four Essential Freedoms. The earliest known publication of this definition was in the February 1986 edition[15] of the FSF's now-discontinued GNU's Bulletin. The canonical source for the document is in the philosophy section of the GNU Project website. As of August 2017[update], it is published in 40 languages.[16]
To meet the definition of "free software", the FSF requires that the software's licensing respect the civil liberties / human rights of what the FSF calls the software user's "Four Essential Freedoms".[17]
The Open Source Definition is used by the Open Source Initiative (OSI) to determine whether a software license qualifies for the organization's insignia for open-source software. The definition was based on the Debian Free Software Guidelines, written and adapted primarily by Bruce Perens.[18][19] Perens did not base his writing on the Four Essential Freedoms of free software from the Free Software Foundation, which were only later available on the web.[20] Perens subsequently stated that he felt Eric Raymond's promotion of open source unfairly overshadowed the Free Software Foundation's efforts and reaffirmed his support for free software.[21] In the 2000s, he spoke about open source again.[22][23]
In the early decades of computing, particularly from the 1950s through the 1970s, software development was largely collaborative. Programs were commonly shared in source code form among academics, researchers, and corporate developers. Most companies at the time made their revenue from hardware sales, and software—including source code—was distributed freely alongside it, often as public-domain software.[24][25]
By the late 1960s and 1970s, a distinct software industry began to emerge. Companies started selling software as a separate product, leading to the use of restrictive licenses and technical measures—such as distributing only binary executables—to limit user access and control. This shift was driven by growing competition and the U.S. government's antitrust scrutiny of bundled software, exemplified by the 1969 antitrust caseUnited States v. IBM.[26]
A key turning point came in 1980 when U.S. copyright law was formally extended to cover computer software.[27][28]This enabled companies like IBM to further enforce closed-source distribution models. In 1983, IBM introduced its "object code only" policy, ceasing the distribution of source code for its system software.[29]
In response to the growing restrictions on software, Richard Stallman launched the GNU Project in 1983 at MIT. His goal was to develop a complete free-software operating system and restore user freedom. The Free Software Foundation (FSF) was established in 1985 to support this mission. Stallman's GNU Manifesto and the Four Essential Freedoms outlined the movement's ethical stance, emphasizing user control over software.[17]
The release of the Linux kernel by Linus Torvalds in 1991, and its relicensing under the GNU General Public License (GPL) in 1992, marked a major step toward a fully free operating system.[30] Other free-software projects like FreeBSD, NetBSD, and OpenBSD also gained traction following the resolution of the USL v. BSDi lawsuit in 1993.
In 1997, Eric Raymond's essay The Cathedral and the Bazaar explored the development model of free software, influencing Netscape's decision in 1998 to release the source code for its browser suite. This code base later became Mozilla Firefox and Thunderbird.
To broaden business adoption, a group of developers including Raymond, Bruce Perens, Tim O'Reilly, and Linus Torvalds rebranded the free software movement as "open source". The Open Source Initiative (OSI) was founded in 1998 to promote the new term and emphasize the benefits of collaborative development over ideology.[31]
Despite initial resistance—such as Microsoft's 2001 claim that "Open-source is an intellectual property destroyer"—FOSS eventually gained widespread acceptance in the corporate world. Companies like Red Hat proved that commercial success and Free software principles could coexist.[32][33][34]
Users of FOSS benefit from the Four Essential Freedoms to make unrestricted use of, and to study, copy, modify, and redistribute such software with or without modification. If they would like to change the functionality of software, they can bring about changes to the code and, if they wish, distribute such modified versions of the software or often, depending on the software's decision-making model and its other users, even push or request such changes to be made via updates to the original software.[35][36][37][38][39]
Manufacturers of proprietary, closed-source software are sometimes pressured into building backdoors or other covert, undesired features into their software.[40][41][42][43] Instead of having to trust software vendors, users of FOSS can inspect and verify the source code themselves and can place trust in a community of volunteers and users.[39] As proprietary code is typically hidden from public view, only the vendors themselves and hackers may be aware of any vulnerabilities in it,[39] while FOSS involves as many people as possible in exposing bugs quickly.[44][45]
FOSS is often free of charge although donations are often encouraged. This also allows users to better test and compare software.[39]
FOSS allows for better collaboration among various parties and individuals with the goal of developing the most efficient software for its users or use-cases, while proprietary software is typically meant to generate profits. Furthermore, in many cases more organizations and individuals contribute to such projects than to proprietary software.[39] It has been shown that technical superiority is typically the primary reason why companies choose open-source software.[39]
According to Linus's law, the more people who can see and test a set of code, the more likely any flaws will be caught and fixed quickly. However, this does not guarantee a high level of participation. Having a grouping of full-time professionals behind a commercial product can in some cases be superior to FOSS.[39][44][46]
Furthermore, publicized source code might make it easier for hackers to find vulnerabilities in it and write exploits. This, however, assumes that such malicious hackers are more effective than white-hat hackers who responsibly disclose or help fix the vulnerabilities, that no code leaks or exfiltrations occur, and that reverse engineering of proprietary code is a significant hindrance for malicious hackers.[44]
Sometimes, FOSS is not compatible with proprietary hardware or specific software. This is often due to manufacturers obstructing FOSS, such as by not disclosing the interfaces or other specifications needed for members of the FOSS movement to write drivers for their hardware – for instance, because they wish customers to run only their own proprietary software or because they might benefit from partnerships.[47][48][49][50][51][52][53]
While FOSS can be superior to proprietary equivalents in terms of software features and stability, in many cases it has more unfixed bugs and missing features when compared to similar commercial software.[54][additional citation(s) needed] This varies per case, and usually depends on the level of interest in a particular project. However, unlike closed-source software, improvements can be made by anyone who has the motivation, time and skill to do so.[46][additional citation(s) needed]
A common obstacle in FOSS development is the lack of access to some common official standards, due to costly royalties or required non-disclosure agreements (e.g., for the DVD-Video format).[55]
There is often less certainty of FOSS projects gaining the required resources and participation for continued development than of commercial software backed by companies.[56][additional citation(s) needed] However, companies also often abandon projects for being unprofitable, and large companies may rely on, and hence co-develop, open-source software.[45] On the other hand, if the vendor of proprietary software ceases development, there are no alternatives; whereas with FOSS, any user who needs it still has the right, and the source code, to continue to develop it themselves, or to pay a third party to do so.
Because the FOSS operating system distributions of Linux have a lower market share among end users, there are also fewer applications available.[57][58]
"We migrated key functions from Windows to Linux because we needed an operating system that was stable and reliable -- one that would give us in-house control. So if we needed to patch, adjust, or adapt, we could."
In 2017, the European Commission stated that "EU institutions should become open source software users themselves, even more than they already are" and listed open source software as one of the nine key drivers of innovation, together with big data, mobility, cloud computing and the internet of things.[96]
In 2020, the European Commission adopted its Open Source Strategy 2020-2023,[97] including encouraging sharing and reuse of software and publishing the Commission's source code as key objectives. Among the concrete actions was the setting up of an Open Source Programme Office in 2020,[98] and in 2022 it launched its own FOSS repository at https://code.europa.eu/.[99]
In 2021, the Commission Decision on the open source licensing and reuse of Commission software (2021/C 495 I/01)[100] was adopted, under which, as a general principle, the European Commission may release software under the EUPL or another FOSS license, if more appropriate. There are exceptions, though.
In May 2022,[101] the Expert Group on the Interoperability of European Public Services published 27 recommendations to strengthen the interoperability of public administrations across the EU. These recommendations were to be taken into account later in the same year in the Commission's proposal for the "Interoperable Europe Act".
Open-source software development (OSSD) is the process by which open-source software is developed. The software's source code is publicly available to be used, modified, and enhanced.[102] Notable examples of open-source software products are Mozilla Firefox, Android, and VLC media player.[103] The development process is typically different from traditional methods such as the waterfall model, instead favoring early releases and community involvement.[103] Agile development strategies, characterized by their iterative and incremental frameworks, are most often employed in OSSD.[104] Open-source software developers will typically use methods such as e-mail, wikis, web forums, and instant messaging services for communication, as individuals are not typically working in close proximity to one another.[105] Version control systems such as Git are used to make code collaboration easier.[103]
The GNU General Public License (GPL) is one of the most widely used copyleft licenses in the free and open-source software (FOSS) community and was created by the Free Software Foundation (FSF). Version 2 (GPLv2), published in 1991, played a central role in protecting the freedom of software to be run, studied, modified, and shared by users.[106] However, as technology and legal landscapes evolved, particularly with the rise of Digital Rights Management (DRM) and software patents, some developers and legal experts argued that GPLv2 did not adequately protect user freedoms in newer contexts.[107] This led to the development of GPLv3, which sought to address these concerns.[108]
While copyright is the primary legal mechanism that FOSS authors use to ensure license compliance for their software, other mechanisms such as legislation, patents, and trademarks have implications as well. In response to legal issues with patents and the Digital Millennium Copyright Act (DMCA), the Free Software Foundation released version 3 of its GNU General Public License (GNU GPLv3) in 2007, which explicitly addressed the DMCA and patent rights.
One of the key issues GPLv3 aimed to address was a practice known as Tivoization, named after the company TiVo, which used GPL-covered software but implemented hardware restrictions that prevented users from running modified versions of the software. This was seen by the Free Software Foundation (FSF) as a direct violation of software freedom, prompting GPLv3 to include language explicitly forbidding such restrictions.[109] Additionally, GPLv3 introduced clauses to protect users against aggressive enforcement of software patents and reinforced the idea that users should retain control over the software they use.
After the development of the GNU GPLv3 in 2007, the FSF (as the copyright holder of many pieces of the GNU system) updated many[citation needed] of the GNU programs' licenses from GPLv2 to GPLv3. Meanwhile, the adoption of the new GPL version was heavily discussed in the FOSS ecosystem,[110] and several projects decided against upgrading to GPLv3, including the Linux kernel,[111][112] the BusyBox project,[113][114] AdvFS,[115] Blender,[116] and the VLC media player.[117]
Apple, a user of GCC and a heavy user of both DRM and patents, switched the compiler in its Xcode IDE from GCC to Clang, which is another FOSS compiler[118] but is under a permissive license.[119] LWN speculated that Apple was motivated partly by a desire to avoid GPLv3.[118] The Samba project also switched to GPLv3, so Apple replaced Samba in their software suite with a closed-source, proprietary software alternative.[120]
The controversy over GPLv3 mirrored a more general philosophical split in the open-source community: whether licenses should aggressively defend user freedoms (as with copyleft) or take a more permissive, collaborative yet ambiguous approach. Supporters applauded GPLv3 for fortifying protections against restrictions imposed by hardware and patent threats,[121] while critics felt it created legal and ideological barriers that complicated its development and made it less appealing to adopt.[122] The fallout helped raise the acceptance of permissive licenses like the MIT and Apache licenses, especially among commercial software developers.[123]
Leemhuis criticizes the prioritization of skilled developers who, instead of fixing issues in already popular open-source applications and desktop environments, create new, mostly redundant software to gain fame and fortune.[124]
He also criticizes notebook manufacturers for optimizing their own products only privately or creating workarounds instead of helping fix the actual causes of the many issues with Linux on notebooks, such as unnecessary power consumption.[124]
Mergers have affected major open-source software. Sun Microsystems (Sun) acquired MySQL AB, owner of the popular open-source MySQL database, in 2008.[125]
Oracle in turn purchased Sun in January 2010, acquiring their copyrights, patents, and trademarks. Thus, Oracle became the owner of both the most popular proprietary database and the most popular open-source database. Oracle's attempts to commercialize the open-source MySQL database have raised concerns in the FOSS community.[126] Partly in response to uncertainty about the future of MySQL, the FOSS community forked the project into new database systems outside of Oracle's control. These include MariaDB, Percona, and Drizzle.[127] All of these have distinct names; they are distinct projects and cannot use the trademarked name MySQL.[128]
In August 2010, Oracle sued Google, claiming that its use of Java in Android infringed on Oracle's copyrights and patents. In May 2012, the trial judge determined that Google did not infringe on Oracle's patents and ruled that the structure of the Java APIs used by Google was not copyrightable. The jury found that Google infringed a small number of copied files, but the parties stipulated that Google would pay no damages.[129] Oracle appealed to the Federal Circuit, and Google filed a cross-appeal on the literal copying claim.[130]
By defying ownership regulations in the construction and use of information – a key area of contemporary growth – the free/open-source software (FOSS) movement counters neoliberalism and privatization in general.[131][132]
By realizing the historical potential of an "economy of abundance" for the new digital world, FOSS may lay down a plan for political resistance or show the way towards a potential transformation of capitalism.[132]
According to Yochai Benkler, Jack N. and Lillian R. Berkman Professor for Entrepreneurial Legal Studies at Harvard Law School, free software is the most visible part of a new economy of commons-based peer production of information, knowledge, and culture. As examples, he cites a variety of FOSS projects, including both free software and open source.[133]
https://en.wikipedia.org/wiki/Free_and_open_source
Cure53 is a German cybersecurity firm.[1][2][3][4] The company was founded by Mario Heiderich, a security researcher.
After a report from Cure53 on the South Korean security app Smart Sheriff, which described the app's security holes as "catastrophic", the South Korean government ordered Smart Sheriff to be shut down.[1][2]
Software audited by Cure53 includes Mastodon, OnionShare, Bitwarden, Mailvelope, GlobaLeaks, SecureDrop, Obsidian, OpenPGP, Onion Browser, F-Droid, Nitrokey, Peerio, OpenKeychain, cURL, Briar, Mozilla Thunderbird, Threema, MetaMask, Proton Pass, Enpass and Passbolt, as well as many VPN and password manager providers.[5]
Cure53 created the DOMPurify JavaScript library for the prevention of cross-site scripting.[6]
https://en.wikipedia.org/wiki/Cure53
Web Messaging, or cross-document messaging, is an API introduced in the WHATWG HTML5 draft specification, allowing documents to communicate with one another across different origins, or source domains,[1] while rendered in a web browser. Prior to HTML5, web browsers disallowed cross-site scripting, to protect against security attacks. This practice barred communication between non-hostile pages as well, making document interaction of any kind difficult.[1][2] Cross-document messaging allows scripts to interact across these boundaries, while providing a rudimentary level of security.
Using the Messaging API's postMessage method, plain text messages can be sent from one domain to another, e.g. from a parent document to an IFRAME.[3] This requires that the author first obtain the Window object of the receiving document. As a result, messages can be posted to the following:[2]
The message event being received has the following attributes:
postMessage is not a blocking call; messages are processed asynchronously.[4]
Suppose we want document A, loaded from example.net, to communicate with document B, loaded from example.com into an iframe or popup window.[1] The JavaScript for document A will look as follows:
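The original listing did not survive extraction; a sketch consistent with the surrounding description (the iframe id is hypothetical) might be:

```javascript
// Document A, served from http://example.net
var iframe = document.getElementById("documentB"); // iframe loading document B

// Send a message to document B's window; the second argument
// is the target origin the message is allowed to reach.
iframe.contentWindow.postMessage("Hello, document B!", "http://example.com");
```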
The origin of our contentWindow object is passed to postMessage. It must match the origin of the document we wish to communicate with (in this case, document B). Otherwise, a security error will be thrown and the script will stop.[3] The JavaScript for document B will look as follows:
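The receiving listing is likewise missing; a sketch matching the description that follows (origins as in the example scenario) could be:

```javascript
// Document B, served from http://example.com
window.addEventListener("message", function (event) {
    // Accept messages only from the expected sender origin.
    if (event.origin !== "http://example.net")
        return;

    // event.data holds the message text; reply to the sender window.
    event.source.postMessage("Hello, document A!", event.origin);
}, false);
```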
An event listener is set up to receive messages from document A. Using the origin property, it then checks that the domain of the sender is the expected domain. Document B then looks at the message, either displaying it to the user, or responding in turn with a message of its own for document A.[1]
Poor origin checking can pose a risk for applications which employ cross-document messaging.[5] To safeguard against malicious code from foreign domains, authors should check the origin attribute to ensure messages are accepted only from domains they expect to receive messages from. The format of incoming data should also be checked to confirm that it matches the expected format.[1]
Support for cross-document messaging exists in current versions of Internet Explorer, Mozilla Firefox, Safari, Google Chrome, Opera, Opera Mini, Opera Mobile, and the Android web browser.[6] Support for the API exists in the Trident, Gecko, WebKit and Presto layout engines.[7]
https://en.wikipedia.org/wiki/Cross-document_messaging
Samy (also known as JS.Spacehero) is a cross-site scripting worm (XSS worm) that was designed to propagate across the social networking site MySpace by Samy Kamkar. Within just 20 hours[1] of its October 4, 2005 release, over one million users had run the payload,[2] making Samy the fastest-spreading virus of all time.[3]
The worm itself was relatively harmless; it carried a payload that would display the string "but most of all, samy is my hero" on a victim's MySpace profile page as well as send Samy a friend request. When a user viewed that profile page, the payload would then be replicated and planted on their own profile page, continuing the distribution of the worm. MySpace has since secured its site against the vulnerability.[1]
Samy Kamkar, the author of the worm, was raided by the United States Secret Service and Electronic Crimes Task Force in 2006 for releasing the worm.[4] He entered a plea agreement on January 31, 2007, to a felony charge.[5] The action resulted in Kamkar being sentenced to three years' probation with only one (remotely monitored) computer and no access to the Internet for life (this provision was later struck off by a judge), 90 days' community service, and $15,000–$100,000,000 in restitution, as well as a 20-year suspended prison sentence, as directly reported by Kamkar himself on "Greatest Moments in Hacking History" by Vice Media's video website, Motherboard.[6]
https://en.wikipedia.org/wiki/Samy_(computer_worm)
In computer software, the term parameter validation[1][2] is the automated processing, in a module, to validate the spelling or accuracy of parameters passed to that module. The term has been in common use for over 30 years.[1] Specific best practices have been developed, for decades, to improve the handling of such parameters.[1][2][3]
Parameter validation can be used to defend against cross-site scripting attacks.[4]
https://en.wikipedia.org/wiki/Parameter_validation
Format is a function in Common Lisp that can produce formatted text using a format string similar to the printf format string. It provides more functionality than print, allowing the user to output numbers in various formats (including, for instance: hex, binary, octal, Roman numerals, and English), apply certain format specifiers only under certain conditions, iterate over data structures, output data tabularly, and even recurse, calling format internally to handle data structures that include their own preferred formatting strings. This functionality originates in MIT's Lisp Machine Lisp,[1] where it was based on Multics.
The format function is specified by the syntax:[2]
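The syntax line itself is not preserved in this copy; per the ANSI standard, the signature is:

```lisp
(format destination control-string &rest format-arguments)
;; => NIL, or a fresh string when DESTINATION is NIL
```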
Directives in the control string are interpolated using the format arguments, and the character sequence thus constructed is written to the destination.
The destination may either be a stream, a dynamic string, T, or the NIL constant; the latter presents a special case in that it creates, formats and returns a new string object, while T refers to the standard output, usually equivalent to the console. Streams in Common Lisp include, among others, string output streams and file streams; hence, being capable of writing to such a variety of destinations, this function unifies capabilities distributed among distinct commands in some other programming languages, such as C's printf for console output, sprintf for string formatting, and fprintf for file writing.
The multitude of destination types is exemplified in the following:
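The original listing is missing here; a sketch covering the destination types described above might be:

```lisp
;; NIL destination: format builds and returns a fresh string.
(format nil "~D" 42)          ; => "42"

;; T destination: output goes to the standard output; NIL is returned.
(format t "~D~%" 42)          ; prints 42

;; Stream destination: here, a string output stream.
(with-output-to-string (s)
  (format s "~D" 42))         ; => "42"
```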
The control string may contain literal characters as well as the meta character ~ (tilde), which demarcates format directives. While literals in the input are echoed verbatim, directives produce a special output, often consuming one or more format arguments.
A format directive, introduced by a ~, is followed by zero or more prefix parameters, zero or more modifiers, and the directive type. A directive definition, hence, must conform to the following pattern:
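The pattern itself was not preserved; per the ANSI specification, a directive has the shape:

```text
~[comma-separated prefix parameters][: and/or @ modifiers]directive-type
```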
The directive type is always specified by a single character, case-insensitive in the case of letters. The data to be processed by a format directive, if any is necessary, is called its format argument and may be zero or more objects of any compatible type. Whether and in which quantity such data is accepted depends on the directive and the potential modifiers applied to it. The directive type ~%, for instance, abstains from the consumption of any format arguments, whereas ~D expects exactly one integer number to print, and ~@{, a directive influenced by the at-sign modifier, processes all remaining arguments.
The following directive, ~B, expects one number object from the format arguments and writes its binary (radix 2) equivalent to the standard output.
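The listing itself is missing from this copy; a minimal reconstruction:

```lisp
(format t "~B" 5)   ; prints 101
```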
Where a directive permits them, prefix parameters may be specified.
Prefix parameters enable an injection of additional information into a directive to operate upon, similar to the operation of parameters provided to a function. Prefix parameters are always optional and, if provided, must be located between the introducing ~ and either the modifiers or, if none are present, the directive type. The values are separated by commas, but do not tolerate white space on either side. The number and type of these parameters depend on the directive and the influence of potential modifiers.
Two particular characters may be utilized as prefix parameter values with distinctive interpretation: v or V acts as a placeholder for an integer number or character from the format arguments, which is consumed and placed into its stead. The second special character, #, is substituted by the tally of format arguments not yet consumed. Both V and # enable behavior defined by dynamic content injected into the prefix parameter list.
The V parameter value introduces a functionality equivalent to a variable in the context of general programming. Given this simple scenario, in order to left-pad a binary representation of the integer number 5 to at least eight digits with zeros, the literal solution is as follows:
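The literal solution did not survive extraction; it can be reconstructed as:

```lisp
;; Pad to a width of at least 8, using the character 0 for padding.
(format t "~8,'0B" 5)   ; prints 00000101
```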
The first prefix parameter controlling the output width may, however, be defined in terms of the V character, delegating the parameter value specification to the next format argument, in our case 8.
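Reconstructing the missing listing:

```lisp
;; The v placeholder consumes the next format argument (8) as the width.
(format t "~v,'0B" 8 5)   ; prints 00000101
```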
Solutions of this kind are particularly beneficial if parts of the prefix parameter list shall be described by variables or function arguments instead of literals, as is the case in the following piece of code:
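The piece of code is lost in this copy; a sketch in the same spirit, binding the width to a variable:

```lisp
(let ((width 8))
  (format t "~v,'0B" width 5))   ; prints 00000101
```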
Even more fitting in situations involving external input, a function argument may be passed into the format directive:
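Again the listing is missing; a sketch with a hypothetical function name:

```lisp
(defun print-padded-binary (value width)
  (format t "~v,'0B~%" width value))

(print-padded-binary 5 8)   ; prints 00000101
```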
# as a prefix parameter tallies those format arguments not yet processed by preceding directives, doing so without actually consuming anything from this list. The utility of such a dynamically inserted value is preponderantly restricted to use cases pertaining to conditional processing. As the argument number can only be an integer number greater than or equal to zero, its significance coincides with that of an index into the clauses of a conditional ~[ directive.
The interplay of the special # prefix parameter value with the conditional selection directive ~[ is illustrated in the following example. The condition states four clauses, accessible via the indices 0, 1, 2, and 3 respectively. The number of format arguments is employed as the means for the clause index retrieval; to do so, we insert # into the conditional directive, which permits the index to be a prefix parameter. # computes the tally of format arguments and suggests this number as the selection index. The arguments, not consumed by this act, are then available to and processed by the selected clause's directives.
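The example itself is not preserved; a reconstruction with four clauses (three indexed clauses plus a ~:; default), selected by the argument count:

```lisp
(defparameter *ctl*
  "~#[no args~;one arg: ~S~;two args: ~S and ~S~:;many args~]")

(format nil *ctl*)        ; => "no args"
(format nil *ctl* 'a)     ; => "one arg: A"
(format nil *ctl* 'a 'b)  ; => "two args: A and B"
```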
Modifiers act in the capacity of flags intended to influence the behavior of a directive. The admission, magnitude of behavioral modification and effect, as with prefix parameters, depend upon the directive. In some severe cases, the syntax of a directive may be varied to a degree that invalidates certain prefix parameters; this power especially distinguishes modifiers from most parameters. The two valid modifier characters are @ (at-sign) and : (colon), possibly in combination as either :@ or @:.
The following example illustrates a rather mild case of influence exerted upon a directive by the @ modifier: it merely ensures that the binary representation of a formatted number is always preceded by the number's sign:
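Reconstructing the missing listing:

```lisp
(format t "~@B" 5)    ; prints +101
(format t "~@B" -5)   ; prints -101
```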
An enumeration of the format directives, including their complete syntax and modifier effects, is given below.[3]
Additionally, zero or more arguments may be specified if the function is also to permit prefix parameters.
An example of a C printf call is the following:
Using Common Lisp, this is equivalent to:
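The Lisp listing is likewise missing; assuming a C call such as printf("%d red balloons\n", 99), an equivalent would be:

```lisp
;; ~D corresponds to C's %d, and ~% to \n.
(format t "~D red balloons~%" 99)   ; prints 99 red balloons
```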
Another example would be to print every element of a list delimited with commas, which can be done using the ~{, ~^ and ~} directives:[4]
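Reconstructing the missing listing:

```lisp
;; ~{ ... ~} iterates over the list; ~^ aborts the iteration before
;; printing the separator when no elements remain.
(format nil "~{~A~^, ~}" '(1 2 3))   ; => "1, 2, 3"
```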
Note that not only is the list of values iterated over directly by format, but the commas are correctly printed between items, not after them. A yet more complex example would be printing out a list using customary English phrasing:
The ability to define a new directive through ~/functionName/ provides the means for customization. The next example implements a function which prints an input string in lowercase, uppercase, or reversed form, and also permits configuring the number of repetitions.
While format is somewhat infamous for its tendency to become opaque and hard to read, it provides a remarkably concise yet powerful syntax for a specialized and common need.[4]
A Common Lisp FORMAT summary table is available.[5]
https://en.wikipedia.org/wiki/Format_(Common_Lisp)
The C standard library, sometimes referred to as libc,[1] is the standard library for the C programming language, as specified in the ISO C standard.[2] Starting from the original ANSI C standard, it was developed at the same time as the C POSIX library, which is a superset of it.[3] Since ANSI C was adopted by the International Organization for Standardization,[4] the C standard library is also called the ISO C library.[5]
The C standard library provides macros, type definitions, and functions for tasks such as string manipulation, mathematical computation, input/output processing, and memory management.
The application programming interface (API) of the C standard library is declared in a number of header files. Each header file contains one or more function declarations, data type definitions, and macros.
After a long period of stability, three new header files (iso646.h, wchar.h, and wctype.h) were added with Normative Addendum 1 (NA1), an addition to the C Standard ratified in 1995. Six more header files (complex.h, fenv.h, inttypes.h, stdbool.h, stdint.h, and tgmath.h) were added with C99, a revision to the C Standard published in 1999; five more files (stdalign.h, stdatomic.h, stdnoreturn.h, threads.h, and uchar.h) with C11 in 2011; and two more files (stdbit.h and stdckdint.h) with C23 in 2023. In total, there are now 31 header files:
Three of the header files (complex.h, stdatomic.h, and threads.h) are conditional features that implementations are not required to support.
The POSIX standard added several nonstandard C headers for Unix-specific functionality; many have found their way to other architectures. Examples include fcntl.h and unistd.h. A number of other groups are using other nonstandard headers – the GNU C Library has alloca.h, and OpenVMS has the va_count() function.
On Unix-like systems, the authoritative documentation of the API is provided in the form of man pages. On most systems, man pages on standard library functions are in section 3; section 7 may contain some more generic pages on underlying concepts (e.g. man 7 math_error in Linux).
Unix-like systems typically have a C library in shared library form, but the header files (and compiler toolchain) may be absent from an installation, so C development may not be possible. The C library is considered part of the operating system on Unix-like systems; in addition to functions specified by the C standard, it includes other functions that are part of the operating system API, such as functions specified in the POSIX standard. The C library functions, including the ISO C standard ones, are widely used by programs, and are regarded as if they were not only an implementation of something in the C language, but also de facto part of the operating system interface. Unix-like operating systems generally cannot function if the C library is erased; this is true for applications which are dynamically, as opposed to statically, linked. Further, the kernel itself (at least in the case of Linux) operates independently of any libraries.
On Microsoft Windows, the core system dynamic libraries (DLLs) provide an implementation of the C standard library for the Microsoft Visual C++ compiler v6.0; the C standard library for newer versions of the Microsoft Visual C++ compiler is provided by each compiler individually, as well as by redistributable packages. Compiled applications written in C are either statically linked with a C library, or linked to a dynamic version of the library that is shipped with these applications, rather than relied upon to be present on the targeted systems. Functions in a compiler's C library are not regarded as interfaces to Microsoft Windows.
Many C library implementations exist, provided with both various operating systems and C compilers. Some of the popular implementations are the following:
Some compilers (for example, GCC[8]) provide built-in versions of many of the functions in the C standard library; that is, the implementations of the functions are written into the compiled object file, and the program calls the built-in versions instead of the functions in the C library shared object file. This reduces function-call overhead, especially if function calls are replaced with inline variants, and allows other forms of optimization (as the compiler knows the control-flow characteristics of the built-in variants), but may cause confusion when debugging (for example, the built-in versions cannot be replaced with instrumented variants).
However, the built-in functions must behave like ordinary functions in accordance with ISO C. The main implication is that the program must be able to create a pointer to these functions by taking their address, and invoke the function by means of that pointer. If two pointers to the same function are derived in two different translation units in the program, these two pointers must compare equal; that is, the address is obtained by resolving the name of the function, which has external (program-wide) linkage.
Under FreeBSD[9] and glibc,[10] some functions such as sin() are not linked in by default and are instead bundled in the mathematical library libm. If any of them are used, the linker must be given the directive -lm. POSIX requires that the c99 compiler supports -lm, and that the functions declared in the headers math.h, complex.h, and fenv.h are available for linking if -lm is specified, but does not specify whether the functions are linked by default.[11] musl satisfies this requirement by putting everything into a single libc library and providing an empty libm.[12]
According to the C standard, the macro __STDC_HOSTED__ shall be defined to 1 if the implementation is hosted. A hosted implementation has all the headers specified by the C standard. An implementation can also be freestanding, which means that these headers will not be present. If an implementation is freestanding, it shall define __STDC_HOSTED__ to 0.
Some functions in the C standard library have been notorious for having buffer overflow vulnerabilities and generally encouraging buggy programming ever since their adoption.[13][a] The most criticized items are:
Except for the extreme case of gets(), all the security vulnerabilities can be avoided by introducing auxiliary code to perform memory management, bounds checking, input checking, and so on. This is often done in the form of wrappers that make standard library functions safer and easier to use. The practice dates back at least to The Practice of Programming by B. Kernighan and R. Pike, where the authors commonly use wrappers that print error messages and quit the program if an error occurs.
The ISO C committee published Technical Report TR 24731-1[14] and worked on TR 24731-2[15] to propose adoption of some functions with bounds checking and automatic buffer allocation, respectively. The former has met with severe criticism alongside some praise,[16][17] and the latter received a mixed response.
Despite these concerns, TR 24731-1 was integrated into the C standards track in ISO/IEC 9899:2011 (C11), Annex K (Bounds-checking interfaces), and implemented approximately in Microsoft's C/C++ runtime (CRT) library for the Win32 and Win64 platforms.
(By default, Microsoft Visual Studio's C and C++ compilers issue warnings when older, "insecure" functions are used. However, Microsoft's implementation of TR 24731-1 is subtly incompatible with both TR 24731-1 and Annex K,[18] so it is common for portable projects to disable or ignore these warnings. They can be disabled directly by issuing #pragma warning(disable : 4996) before or around the call site(s) in question, or indirectly by defining the macro _CRT_SECURE_NO_WARNINGS before including any headers.[19] The command-line option /D_CRT_SECURE_NO_WARNINGS=1 should have the same effect as this #define.)
The strerror() routine is criticized for being thread unsafe and otherwise vulnerable to race conditions.
The error handling of the functions in the C standard library is not consistent and sometimes confusing. According to the Linux manual page math_error, "The current (version 2.8) situation under glibc is messy. Most (but not all) functions raise exceptions on errors. Some also set errno. A few functions set errno, but do not raise an exception. A very few functions do neither."[20]
The original C language provided no built-in functions such as I/O operations, unlike traditional languages such as COBOL and Fortran.[citation needed] Over time, user communities of C shared ideas and implementations of what is now called C standard libraries. Many of these ideas were eventually incorporated into the definition of the standardized C language.
Both Unix and C were created at AT&T's Bell Laboratories in the late 1960s and early 1970s. During the 1970s the C language became increasingly popular. Many universities and organizations began creating their own variants of the language for their own projects. By the beginning of the 1980s compatibility problems between the various C implementations became apparent. In 1983 the American National Standards Institute (ANSI) formed a committee to establish a standard specification of C known as "ANSI C". This work culminated in the creation of the so-called C89 standard in 1989. Part of the resulting standard was a set of software libraries called the ANSI C standard library.
POSIX, as well as the SUS, specify a number of routines that should be available over and above those in the basic C standard library. The POSIX specification includes header files for, among other uses, multi-threading, networking, and regular expressions. These are often implemented alongside the C standard library functionality, with varying degrees of closeness. For example, glibc implements functions such as fork within libc.so, but before NPTL was merged into glibc it constituted a separate library with its own linker flag argument. Often, this POSIX-specified functionality will be regarded as part of the library; the basic C library may be identified as the ANSI or ISO C library.
BSD libc is a superset of the POSIX standard library supported by the C libraries included with BSD operating systems such as FreeBSD, NetBSD, OpenBSD and macOS. BSD libc has some extensions that are not defined in the original standard, many of which first appeared in 1994's 4.4BSD release (the first to be largely developed after the first standard was issued in 1989). Some of the extensions of BSD libc are:
Some languages include the functionality of the standard C library in their own libraries. The library may be adapted to better suit the language's structure, but the operational semantics are kept similar.
The C++ language incorporates the majority of the C standard library's constructs into its own, excluding C-specific machinery. C standard library functions are exported from the C++ standard library in two ways.
For backwards- and cross-compatibility with C and pre-standard C++, functions can be accessed in the global namespace (::) after including the C standard header name as in C.[42] Thus, the C++98 program
should exhibit (apparently) identical behavior to the C95 program
From C++98 on, C functions are also made available in namespace ::std (e.g., C printf as C++ ::std::printf, atoi as ::std::atoi, feof as ::std::feof), by including the header <chdrname> instead of the corresponding C header <hdrname.h>. For example, <cstdio> substitutes for <stdio.h> and <cmath> for <math.h>; note the lack of a .h extension on the C++ header names.
Thus, an equivalent (and generally preferable) C++≥98 program to the above two is:
A using namespace ::std; declaration above or within main can be issued to apply the ::std:: prefix automatically, although it is generally considered poor practice to use it globally in headers because it pollutes the global namespace.[43]
A few of the C++≥98 versions of C's headers are missing; e.g., the C≥11 headers <stdnoreturn.h> and <threads.h> have no C++ counterparts.[44]
Others are reduced to placeholders, such as (until C++20) <ciso646> for the C95 <iso646.h>, all of whose requisite macros are rendered as keywords in C++98. C-specific syntactic constructs are generally not supported, even if their header is.[45]
Several C headers exist primarily for C++ compatibility, and these tend to be near-empty in C++. For example, the C99–C17 <stdbool.h> need only define the macros bool (as _Bool), true, false, and __bool_true_false_are_defined in order to feign support for the C++98 bool, false, and true keywords in C. C++11 requires <stdbool.h> and <cstdbool> for compatibility, but all they need to define is __bool_true_false_are_defined. C23 obsoletes the older _Bool keyword in favor of new, C++98-equivalent bool, false, and true keywords, so the C≥23 and C++≥11 <stdbool.h>/<cstdbool> headers are fully equivalent. (In particular, C23 does not require any __STDC_VERSION_BOOL_H__ macro for <stdbool.h>.)
Access to C library functions via namespace ::std and the C++≥98 header names is preferred where possible. To encourage adoption, C++98 deprecates the C (*.h) header names, so it is possible that use of the C compatibility headers will cause an especially strict C++98–C++20 preprocessor to raise a diagnostic of some sort. However, C++23 (unusually) un-deprecates these headers, so newer C++ implementations and modes should not complain unless specifically asked to.[46]
Other languages take a similar approach, placing C compatibility functions and routines under a common namespace; these include D, Perl, and Ruby.
CPython includes wrappers for some of the C library functions in its own common library, and it also grants more direct access to C functions and variables via its ctypes package.[47]
More generally, Python 2.x specifies the built-in file objects as being "implemented using C's stdio package",[48] and frequent reference is made to C standard library behaviors; the available operations (open, read, write, etc.) are expected to have the same behavior as the corresponding C functions (fopen, fread, fwrite, etc.).
Python 3's specification, however, relies considerably less on C specifics than Python 2's.
Rust offers the libc crate, which allows various C standard (and other) library functions and type definitions to be used.[49]
The C standard library is small compared to the standard libraries of some other languages. The C library provides a basic set of mathematical functions, string manipulation, type conversions, and file and console-based I/O. It does not include a standard set of "container types" like the C++ Standard Template Library, let alone the complete graphical user interface (GUI) toolkits, networking tools, and profusion of other functionality that Java and the .NET Framework provide as standard. The main advantage of the small standard library is that providing a working ISO C environment is much easier than it is with other languages, and consequently porting C to a new platform is comparatively easy.
https://en.wikipedia.org/wiki/C_standard_library
In the C++ programming language, the input/output library refers to a family of class templates and supporting functions in the C++ Standard Library that implement stream-based input/output capabilities.[1][2] It is an object-oriented alternative to C's FILE-based streams from the C standard library.[3][4]
Bjarne Stroustrup, the creator of C++, wrote the first version of the stream I/O library in 1984, as a type-safe and extensible alternative to C's I/O library.[5] The library has undergone a number of enhancements since this early version, including the introduction of manipulators to control formatting, and templatization to allow its use with character types other than char.
Standardization in 1998 saw the library moved into the std namespace, and the main header changed from <iostream.h> to <iostream>. It is this standardized version that is covered in the rest of the article.
In the C++23 revision, the header <print> was added, which adds std::print() and std::println(), allowing for formatted printing to any output or file stream.
Most of the classes in the library are actually very generalized class templates. Each template can operate on various character types, and even the operations themselves, such as how two characters are compared for equality, can be customized. However, the majority of code needs to do input and output operations using only one or two character types, so most of the time the functionality is accessed through several typedefs, which specify names for commonly used combinations of template and character type.
For example, basic_fstream<CharT,Traits> refers to the generic class template that implements input/output operations on file streams. It is usually used as fstream, which is an alias for basic_fstream<char,char_traits<char>>; in other words, basic_fstream working on characters of type char with the default character operation set.
The classes in the library can be divided into roughly two categories: abstractions and implementations. Classes in the abstractions category provide an interface which is sufficient for working with any type of stream. Code using such classes does not depend on the exact location the data is read from or written to; for example, such code could write data to a file, a memory buffer or a web socket without recompilation. The implementation classes inherit from the abstraction classes and provide an implementation for a concrete type of data source or sink. The library provides implementations only for file-based streams and memory buffer-based streams.
The classes in the library can also be divided into two groups by whether they implement low-level or high-level operations. The classes that deal with low-level operations are called stream buffers. They operate on characters without providing any formatting functionality, and are very rarely used directly. The high-level classes are called streams and provide various formatting capabilities. They are built on top of stream buffers.
The following table lists and categorizes all classes provided by the input-output library.
The classes of the input/output library reside in several headers.
There are twelve stream buffer classes defined in the C++ language, as listed in the table.
ios_base and basic_ios are two classes that manage the lower-level bits of a stream. ios_base stores formatting information and the state of the stream, while basic_ios manages the associated stream buffer. basic_ios is commonly known as simply ios or wios, which are two typedefs for basic_ios with a specific character type. basic_ios and ios_base are very rarely used directly by programmers; usually, their functionality is accessed through other classes such as iostream, which inherit from them.[6][7]
C++ input/output streams are primarily defined by iostream, a header file that is part of the C++ standard library (the name stands for input/output stream). In C++ and its predecessor, the C programming language, there is no special syntax for streaming data input or output; instead, these are combined as a library of functions. Like the cstdio header inherited from C's stdio.h, iostream provides basic input and output services for C++ programs. iostream uses the objects cin, cout, cerr, and clog for sending data to and from the standard streams input, output, error (unbuffered), and log (buffered) respectively. As part of the C++ standard library, these objects are a part of the std namespace.[8]
The cout object is of type ostream, which overloads the left bit-shift operator (<<) to make it perform an operation completely unrelated to bitwise operations, and notably to evaluate to a reference to its left argument. This allows multiple operations on the same ostream object, essentially as a different syntax for method cascading, exposing a fluent interface. The cerr and clog objects are also of type ostream, so they overload that operator as well. The cin object is of type istream, which overloads the right bit-shift operator (>>). The directions of the bit-shift operators make it seem as though data is flowing towards the output stream or flowing away from the input stream.
Manipulators are objects that can modify a stream using the << or >> operators.
Other manipulators can be found in the header iomanip.
The formatting manipulators must be "reset" at the end, or their effects will unexpectedly persist into subsequent output statements.
Some implementations of the C++ standard library have significant amounts of dead code. For example, GNU libstdc++ automatically constructs a locale when building an ostream even if a program never uses any types (date, time or money) that a locale affects,[9] and a statically linked "Hello, World!" program that uses <iostream> of GNU libstdc++ produces an executable an order of magnitude larger than an equivalent program that uses <cstdio>.[10] There exist partial implementations of the C++ standard library designed for space-constrained environments; their <iostream> may leave out features that programs in such environments may not need, such as locale support.[11]
The pre-C++23 canonical "Hello, World!" program, which uses the <iostream> library, can be expressed as follows:
This program would output "Hello, world!" followed by a newline, and flush the standard output stream buffer.
The following example, which uses the <fstream> library, creates a file called 'file.txt' and puts the text 'Hello, world!' followed by a newline into it.
Using the <print> library added in C++23 (which is also imported by the standard library module std), the post-C++23 canonical "Hello, World!" program is expressed as:
https://en.wikipedia.org/wiki/Iostream
ML (Meta Language) is a general-purpose, high-level, functional programming language. It is known for its use of the polymorphic Hindley–Milner type system, which automatically assigns the data types of most expressions without requiring explicit type annotations (type inference), and ensures type safety; there is a formal proof that a well-typed ML program does not cause runtime type errors.[1] ML provides pattern matching for function arguments, garbage collection, imperative programming, call-by-value and currying. While a general-purpose programming language, ML is used heavily in programming language research and is one of the few languages to be completely specified and verified using formal semantics. Its types and pattern matching make it well-suited and commonly used to operate on other formal languages, such as in compiler writing, automated theorem proving, and formal verification.
Features of ML include a call-by-value evaluation strategy, first-class functions, automatic memory management through garbage collection, parametric polymorphism, static typing, type inference, algebraic data types, pattern matching, and exception handling. ML uses static scoping rules.[2]
ML can be referred to as an impure functional language, because although it encourages functional programming, it does allow side-effects[3] (like languages such as Lisp, but unlike a purely functional language such as Haskell). Like most programming languages, ML uses eager evaluation, meaning that all subexpressions are always evaluated, though lazy evaluation can be achieved through the use of closures. Thus, infinite streams can be created and used as in Haskell, but their expression is indirect.
ML's strengths are mostly applied in language design and manipulation (compilers, analyzers, theorem provers), but it is a general-purpose language also used in bioinformatics and financial systems.
ML was developed by Robin Milner and others in the early 1970s at the University of Edinburgh,[4] and its syntax is inspired by ISWIM. Historically, ML was conceived to develop proof tactics in the LCF theorem prover (whose language, pplambda, a combination of the first-order predicate calculus and the simply typed polymorphic lambda calculus, had ML as its metalanguage).
Today there are several languages in the ML family; the three most prominent are Standard ML (SML), OCaml and F#. Ideas from ML have influenced numerous other languages, like Haskell, Cyclone, Nemerle,[5] ATS, and Elm.[6]
The following examples use the syntax of Standard ML. Other ML dialects such as OCaml and F# differ in small ways.
The factorial function expressed as pure ML:
This describes the factorial as a recursive function, with a single terminating base case. It is similar to the descriptions of factorials found in mathematics textbooks. Much ML code is similar to mathematics in facility and syntax.
Part of the definition shown is optional, and describes the types of this function. The notation E : t can be read as expression E has type t. For instance, the argument n is assigned type integer (int), and fac (n : int), the result of applying fac to the integer n, also has type integer. The function fac as a whole then has type function from integer to integer (int -> int); that is, fac accepts an integer as an argument and returns an integer result. Thanks to type inference, the type annotations can be omitted and will be derived by the compiler. Rewritten without the type annotations, the example looks like:
The function also relies on pattern matching, an important part of ML programming. Note that parameters of a function are not necessarily in parentheses, but separated by spaces. When the function's argument is 0 (zero), it will return the integer 1 (one). For all other cases the second line is tried. This is the recursion, and it executes the function again until the base case is reached.
This implementation of the factorial function is not guaranteed to terminate, since a negative argument causes an infinite descending chain of recursive calls. A more robust implementation would check for a nonnegative argument before recursing, as follows:
The problematic case (when n is negative) demonstrates a use of ML's exception system.
The function can be improved further by writing its inner loop as a tail call, such that the call stack need not grow in proportion to the number of function calls. This is achieved by adding an extra accumulator parameter to the inner function. At last, we arrive at
The following function reverses the elements in a list. More precisely, it returns a new list whose elements are in reverse order compared to the given list.
This implementation of reverse, while correct and clear, is inefficient, requiring quadratic time for execution. The function can be rewritten to execute in linear time:
This function is an example of parametric polymorphism. That is, it can consume lists whose elements have any type, and return lists of the same type.
Modules are ML's system for structuring large projects and libraries. A module consists of a signature file and one or more structure files. The signature file specifies the API to be implemented (like a C header file, or a Java interface file). The structure implements the signature (like a C source file or Java class file). For example, the following define an Arithmetic signature and an implementation of it using rational numbers:
These are imported into the interpreter by the 'use' command. Interaction with the implementation is only allowed via the signature functions; for example, it is not possible to create a 'Rat' data object directly via this code. The 'structure' block hides all the implementation detail from outside.
ML's standard libraries are implemented as modules in this way.
https://en.wikipedia.org/wiki/ML_(programming_language)
In engineering, debugging is the process of finding the root cause, workarounds, and possible fixes for bugs.
For software, debugging tactics can involve interactive debugging, control flow analysis, log file analysis, monitoring at the application or system level, memory dumps, and profiling. Many programming languages and software development tools also offer programs to aid in debugging, known as debuggers.
The term bug, in the sense of defect, dates back at least to 1878, when Thomas Edison described "little faults and difficulties" in his inventions as "Bugs".
A popular story from the 1940s concerns Admiral Grace Hopper.[1] While she was working on a Mark II computer at Harvard University, her associates discovered a moth stuck in a relay that impeded operation, and wrote in a log book "First actual case of a bug being found". Although probably a joke conflating the two meanings of bug (biological and defect), the story indicates that the term was used in the computer field at that time.
Similarly, the term debugging was used in aeronautics before entering the world of computers. J. Robert Oppenheimer, director of the WWII atomic bomb Manhattan Project at Los Alamos, used the term in a letter to Dr. Ernest Lawrence at UC Berkeley, dated October 27, 1944,[2] regarding the recruitment of additional technical staff.
The Oxford English Dictionary entry for debug uses the term debugging in reference to airplane engine testing in a 1945 article in the Journal of the Royal Aeronautical Society.
An article in Airforce (June 1945, p. 50) refers to debugging aircraft cameras.
The seminal article by Gill[3] in 1951 is the earliest in-depth discussion of programming errors, but it does not use the term bug or debugging.
In the ACM's digital library, the term debugging is first used in three papers from the 1952 ACM National Meetings.[4][5][6] Two of the three use the term in quotation marks.
By 1963, debugging was a common enough term to be mentioned in passing without explanation on page 1 of the CTSS manual.[7]
As software and electronic systems have become generally more complex, the various common debugging techniques have expanded with more methods to detect anomalies, assess impact, and schedule software patches or full updates to a system. The words "anomaly" and "discrepancy" can be used as more neutral terms, to avoid the words "error", "defect" or "bug" where there might be an implication that all so-called errors, defects or bugs must be fixed (at all costs). Instead, an impact assessment can be made to determine whether changes to remove an anomaly (or discrepancy) would be cost-effective for the system, or whether a scheduled new release might render the change(s) unnecessary. Not all issues are safety-critical or mission-critical in a system. It is also important to avoid the situation where a change might be more upsetting to users, long-term, than living with the known problem(s) (where the "cure would be worse than the disease"). Basing decisions on the acceptability of some anomalies can avoid a culture of a "zero-defects" mandate, where people might be tempted to deny the existence of problems so that the result would appear as zero defects. Considering the collateral issues, such as the cost-versus-benefit impact assessment, broader debugging techniques expand to determine the frequency of anomalies (how often the same "bugs" occur) to help assess their impact on the overall system.
Debugging ranges in complexity from fixing simple errors to performing lengthy and tiresome tasks of data collection, analysis, and scheduling updates. The debugging skill of the programmer can be a major factor in the ability to debug a problem, but the difficulty of software debugging varies greatly with the complexity of the system, and also depends, to some extent, on the programming language(s) used and the available tools, such as debuggers. Debuggers are software tools which enable the programmer to monitor the execution of a program, stop it, restart it, set breakpoints, and change values in memory. The term debugger can also refer to the person who is doing the debugging.
Generally, high-level programming languages, such as Java, make debugging easier, because they have features such as exception handling and type checking that make real sources of erratic behaviour easier to spot. In programming languages such as C or assembly, bugs may cause silent problems such as memory corruption, and it is often difficult to see where the initial problem happened. In those cases, memory debugger tools may be needed.
In certain situations, general-purpose software tools that are language-specific in nature can be very useful. These take the form of static code analysis tools. These tools look for a very specific set of known problems, some common and some rare, within the source code, concentrating more on the semantics (e.g. data flow) than on the syntax, as compilers and interpreters do.
Both commercial and free tools exist for various languages; some claim to be able to detect hundreds of different problems. These tools can be extremely useful when checking very large source trees, where it is impractical to do code walk-throughs. A typical example of a problem detected would be a variable dereference that occurs before the variable is assigned a value. As another example, some such tools perform strong type checking when the language does not require it. Thus, they are better at locating likely errors in code that is syntactically correct. But these tools have a reputation for false positives, where correct code is flagged as dubious. The old Unix lint program is an early example.
For debugging electronic hardware (e.g., computer hardware) as well as low-level software (e.g., BIOSes, device drivers) and firmware, instruments such as oscilloscopes, logic analyzers, or in-circuit emulators (ICEs) are often used, alone or in combination. An ICE may perform many of the typical software debugger's tasks on low-level software and firmware.
The debugging process normally begins with identifying the steps to reproduce the problem. This can be a non-trivial task, particularly with parallel processes and some Heisenbugs, for example. The specific user environment and usage history can also make it difficult to reproduce the problem.
After the bug is reproduced, the input of the program may need to be simplified to make it easier to debug. For example, a bug in a compiler can make it crash when parsing a large source file. However, after simplification of the test case, only a few lines from the original source file can be sufficient to reproduce the same crash. Simplification may be done manually using a divide-and-conquer approach, in which the programmer attempts to remove some parts of the original test case and then checks if the problem still occurs. When debugging in a GUI, the programmer can try skipping some user interaction from the original problem description to check if the remaining actions are sufficient for causing the bug to occur.
After the test case is sufficiently simplified, a programmer can use a debugger tool to examine program states (values of variables, plus the call stack) and track down the origin of the problem(s). Alternatively, tracing can be used. In simple cases, tracing is just a few print statements which output the values of variables at particular points during the execution of the program.
In contrast to the general-purpose computer software design environment, a primary characteristic of embedded environments is the sheer number of different platforms available to the developers (CPU architectures, vendors, operating systems, and their variants). Embedded systems are, by definition, not general-purpose designs: they are typically developed for a single task (or a small range of tasks), and the platform is chosen specifically to optimize that application. This fact not only makes life tough for embedded system developers, it also makes debugging and testing of these systems harder, since different debugging tools are needed for different platforms.
Despite the challenge of heterogeneity mentioned above, some debuggers have been developed commercially as well as research prototypes. Examples of commercial solutions come from Green Hills Software,[19] Lauterbach GmbH[20] and Microchip's MPLAB-ICD (for in-circuit debugger). Two examples of research prototype tools are Aveksha[21] and Flocklab.[22] They all leverage a functionality available on low-cost embedded processors, an On-Chip Debug Module (OCDM), whose signals are exposed through a standard JTAG interface. They are benchmarked based on how much change to the application is needed and the rate of events that they can keep up with.
In addition to the typical task of identifying bugs in the system, embedded system debugging also seeks to collect information about the operating states of the system that may then be used to analyze the system: to find ways to boost its performance or to optimize other important characteristics (e.g. energy consumption, reliability, real-time response, etc.).
Anti-debugging is "the implementation of one or more techniques within computer code that hinders attempts at reverse engineering or debugging a target process".[23] It is actively used by recognized publishers in copy-protection schemas, but is also used by malware to complicate its detection and elimination.[24] Techniques used in anti-debugging include:
An early example of anti-debugging existed in early versions of Microsoft Word which, if a debugger was detected, produced a message that said, "The tree of evil bears bitter fruit. Now trashing program disk.", after which it caused the floppy disk drive to emit alarming noises with the intent of scaring the user away from attempting it again.[25][26]
|
https://en.wikipedia.org/wiki/Printf_debugging
|
printf is a shell command that formats and outputs text like the same-named C function. It is available in a variety of Unix and Unix-like systems. Some shells implement the command as a builtin and some provide it as a utility program.[2]
The command has similar syntax and semantics to the library function. The command outputs text to standard output[3] as specified by a format string and a list of values. Characters of the format string are copied to the output verbatim, except when a format specifier is found, which causes a value to be output per the specifier.
The command has some aspects unlike the library function. In addition to the library function's format specifiers, %b causes the command to expand backslash escape sequences (for example \n for newline), and %q outputs an item that can be used as shell input.[3] The value used for an unmatched specifier (too few values) is an empty string for %s or 0 for a numeric specifier. If there are more values than specifiers, then the command restarts processing the format string from its beginning.
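These behaviors can be demonstrated at a POSIX-style shell prompt (a sketch; exact diagnostics may vary between shells):

```shell
# %b expands backslash escape sequences in its argument
printf '%b' 'one\ntwo\n'            # prints "one" and "two" on separate lines

# with more values than specifiers, the format string is reused
printf '%s\n' alpha beta gamma      # prints each word on its own line

# an unmatched %s becomes the empty string, an unmatched %d becomes 0
printf '[%s][%d]\n' onlyone         # prints [onlyone][0]
```

The format-string recycling in the second command is what makes `printf '%s\n' ...` a common idiom for printing one item per line.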
The command is part of the X/Open Portability Guide since issue 4 of 1992. It was inherited into the first version of POSIX.1 and the Single Unix Specification.[4] It first appeared in 4.3BSD-Reno.[5]
The implementation bundled in GNU Core Utilities was written by David MacKenzie. It has an extension %q for escaping strings in POSIX-shell format.[3]
This prints a list of numbers:
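The original listing is not reproduced here; one minimal command that does this, relying on the format string being reused for each value, is:

```shell
# the %d\n format is recycled once per argument
printf '%d\n' 1 2 3 4 5
```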
This produces output for a directory's content similar tols:
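The original command is elided; one way to get ls-like output with printf alone is to let the shell expand the * glob and print each resulting name on its own line:

```shell
# the shell expands * to the names in the current directory;
# printf then prints each one on its own line, similar to `ls -1`
printf '%s\n' *
```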
|
https://en.wikipedia.org/wiki/Printf_(Unix)
|
printk is a printf-like function of the Linux kernel interface for formatting and writing kernel log entries.[1] Since the C standard library (which contains the ubiquitous printf-like functions) is not available in kernel mode, printk provides for general-purpose output in the kernel.[2] Due to limitations of the kernel design, the function is often used to aid debugging of kernel mode software.[1]
printk can be called from anywhere in the kernel except during the early stages of the boot process, before the system console is initialized.[3] The alternative function early_printk is implemented on some architectures and is used identically to printk, but during the early stages of the boot process.[3]
printk has the same syntax as printf, but somewhat different semantics. Like printf, printk accepts a format C-string argument and a list of value arguments.[1] Both format text based on the input parameters with substantially similar behavior, but there are also significant differences.[1] The printk function prototype (which matches that of printf) is:
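The prototype, as commonly declared in kernel headers, is:

```c
int printk(const char *fmt, ...);
```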
The features different fromprintfare described below.
printk allows a caller to specify a log level – the type and importance of the message being sent. The level is specified by prepending text that identifies a log level. Typically the text is prepended via C's string literal concatenation and via one of the macros designed for this purpose. For example, a message could be logged at the informational level as:[1]
The text specifying the log level consists of the ASCII SOH character followed by a digit that identifies the log level or the letter 'c' to indicate the message is a continuation of the previous message.[1][4] The following table lists each log level with its canonical meaning.[3]
When no log level is specified, the entry is logged at the default level, which is typically KERN_WARNING[1] but can be changed, such as via the loglevel= boot argument.[5]
Log levels are defined in the header file <linux/kern_levels.h>.[4] Which log levels are printed is configured using the sysctl file /proc/sys/kernel/printk.[1]
The %p format specifier, which is supported by printf, is extended with additional formatting modes. For example, requesting to print a struct sockaddr * using %pISpc formats an IPv4/v6 address and port in a human-friendly format such as 1.2.3.4:12345 or [1:2:3:4:5:6:7:8]:12345.[6]
While printf supports formatting floating point numbers, printk does not,[6] since the Linux kernel does not support floating point numbers.[7]
The function tries to lock the semaphore controlling access to the Linux system console.[1][8] If it succeeds, the output is logged and the console drivers are called.[1] If it is not possible to acquire the semaphore, the output is placed into the log buffer, and the current holder of the console semaphore will notice the new output when they release the console semaphore and will send the buffered output to the console before releasing the semaphore.[1]
One effect of this deferred printing is that code which calls printk and then changes the log levels to be printed may break. This is because the log level to be printed is inspected when the actual printing occurs.[1]
|
https://en.wikipedia.org/wiki/Printk
|
In computer programming, string interpolation (or variable interpolation, variable substitution, or variable expansion) is the process of evaluating a string literal containing one or more placeholders, yielding a result in which the placeholders are replaced with their corresponding values. It is a form of simple template processing[1] or, in formal terms, a form of quasi-quotation (or logic substitution interpretation). The placeholder may be a variable name, or in some languages an arbitrary expression, in either case evaluated in the current context.
String interpolation is an alternative to building a string via concatenation, which requires repeated quoting and unquoting,[2] or substituting into a printf format string, where the variable is far from where it is used. Compare:
Two types of literal expression are usually offered: one with interpolation enabled, the other without. Non-interpolated strings may also leave escape sequences unprocessed, in which case they are termed raw strings, though in other cases this is separate, yielding three classes: raw strings, non-interpolated (but escaped) strings, and interpolated (and escaped) strings. For example, in Unix shells, single-quoted strings are raw, while double-quoted strings are interpolated. Placeholders are usually represented by a bare or a named sigil (typically $ or %), e.g. $apples or %apples, or with braces, e.g. {apples}, sometimes both, e.g. ${apples}. In some cases additional formatting specifiers can be used (as in printf), e.g. {apples:3}, and in some cases the formatting specifiers themselves can be interpolated, e.g. {apples:width}. Expansion of the string usually occurs at run time.
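The Unix-shell case can be shown directly; in a POSIX shell, double quotes interpolate while single quotes do not:

```shell
apples=4
echo "I have $apples apples"     # double quotes: interpolated -> I have 4 apples
echo 'I have $apples apples'     # single quotes: raw, printed literally
echo "I have ${apples} apples"   # braces delimit the variable name
```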
Language support for string interpolation varies widely. Some languages do not offer string interpolation, instead using concatenation, simple formatting functions, or template libraries. String interpolation is common in many programming languages which make heavy use of string representations of data, such as Apache Groovy, Julia, Kotlin, Perl, PHP, Python, Ruby, Scala, Swift, Tcl and most Unix shells.
There are two main types of variable-expanding algorithms for variable interpolation:[3]
String interpolation, like string concatenation, may lead to security problems. If user input data is improperly escaped or filtered, the system will be exposed to SQL injection, script injection, XML external entity (XXE) injection, and cross-site scripting (XSS) attacks.[4]
An SQL injection example:
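The vulnerable query itself is not reproduced here; the following Python sketch (function and table names are illustrative) shows how naive interpolation builds such a query:

```python
# Hypothetical sketch: building an SQL query by naive string interpolation,
# the pattern the text warns against.
def build_query(user_id: str) -> str:
    return f"SELECT * FROM Table WHERE id='{user_id}'"

print(build_query("42"))  # the intended, benign query

# A crafted value smuggles extra statements into the query string:
evil = "'; DELETE FROM Table; SELECT * FROM Table WHERE id='"
print(build_query(evil))

# The defense is a parameterized query, e.g. with sqlite3:
#   cur.execute("SELECT * FROM Table WHERE id=?", (user_id,))
```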
If $id is replaced with "'; DELETE FROM Table; SELECT * FROM Table WHERE id='", executing this query will wipe out all the data in Table.
The output will be:
The output will be:
The output will be:
The output will be:
ColdFusion Markup Language(CFML) script syntax:
Tag syntax:
The output will be:
The output will be:
The output will be:
As of 2025, Go does not have string interpolation. There have been some proposals for string interpolation, which have been rejected.[6][7][8]
In Groovy, interpolated strings are known as GStrings:[9]
The output will be:
The output will be:[10]
Java had interpolated strings as a preview feature in Java 21 and Java 22. You could use the constant STR of java.lang.StringTemplate directly.
They were removed in Java 23 due to design issues.[11]
JavaScript, as of the ECMAScript 2015 (ES6) standard, supports string interpolation using backticks ``. This feature is called template literals.[12] Here is an example:
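The original example is elided; a minimal illustration of template literals (variable names are illustrative):

```javascript
const name = "World";
// placeholders in a template literal are evaluated in the current scope
const greeting = `Hello, ${name}!`;
console.log(greeting);            // Hello, World!

// arbitrary expressions are allowed inside ${ }
console.log(`2 + 3 = ${2 + 3}`);  // 2 + 3 = 5
```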
The output will be:
Template literals can also be used for multi-line strings:
The output will be:
The output will be:
The output will be:
It also supports advanced formatting features, such as:
The output will be:
Nim provides string interpolation via the strutils module.
Formatted string literals inspired by Python F-string are provided via the strformat module,
the strformat macro verifies that the format string is well-formed and well-typed,
and then are expanded into Nim source code at compile-time.
The output will be:
The output will be:
The output will be:
The output will be:
The output will be:
Python supports string interpolation as of version 3.6, referred to as "formatted string literals" or "f-strings".[13][14][15] Such a literal begins with an f or F before the opening quote, and uses braces for placeholders:
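The original example is elided; a minimal illustration of f-strings (the variable name is illustrative):

```python
apples = 4
print(f"I have {apples} apples")       # simple variable placeholder
print(f"{apples} * 2 = {apples * 2}")  # arbitrary expressions are allowed
print(f"[{apples:>5}]")                # format specifier: right-align, width 5
```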
The output will be:
The output will be:
Rust does not have general string interpolation, but provides similar functionality via macros, referred to as "Captured identifiers in format strings", introduced in version 1.58.0, released 2022-01-13.[16]
Rust provides formatting via the std::fmt module, which is interfaced with through various macros such as format!, write!, and print!. These macros are converted into Rust source code at compile-time, whereby each argument interacts with a formatter. The formatter supports positional parameters, named parameters, argument types, defining various formatting traits, and capturing identifiers from the environment.
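A short sketch of captured identifiers and named parameters (the variable names are illustrative):

```rust
fn main() {
    let apples = 4;
    // Since Rust 1.58, plain identifiers can be captured directly
    // in the format string.
    let s = format!("I have {apples} apples");
    println!("{}", s);
    // Named parameters and format specifiers (here: right-align in
    // a field of width 5) still work as before.
    println!("[{count:>5}]", count = apples);
}
```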
The output will be:
Scala 2.10+ provides a general facility to allow arbitrary processing of a string literal, and supports string interpolation using the included s and f string interpolators. It is also possible to write custom ones or override the standard ones.
The f interpolator is a compiler macro that rewrites a format string with embedded expressions as an invocation of String.format. It verifies that the format string is well-formed and well-typed.
Scala 2.10+'s string interpolation allows embedding variable references directly in processed string literals. Here is an example:
The output will be:
In Sciter, any function whose name starts with $ is considered an interpolating function, so interpolation is customizable and context-sensitive:
Where
gets compiled to this:
The output will be:
In Swift, a new String value can be created from a mix of constants, variables, literals, and expressions by including their values inside a string literal.[17] Each item inserted into the string literal is wrapped in a pair of parentheses, prefixed by a backslash.
The output will be:
The Tool Command Language has always supported string interpolation in all quote-delimited strings.
The output will be:
In order to actually format – and not simply replace – the values, there is a formatting function.
As of version 1.4, TypeScript supports string interpolation using backticks ``. Here is an example:
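The original example is elided; a minimal illustration (variable names are illustrative):

```typescript
const user: string = "Ada";
const items: number = 3;
// expressions, including a conditional, may appear inside ${ }
const line: string = `${user} has ${items} item${items === 1 ? "" : "s"}`;
console.log(line);  // Ada has 3 items
```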
The output will be:
The console.log function can be used as a printf function. The above example can be rewritten as follows:
The output remains the same.
As of Visual Basic 14, string interpolation is supported in Visual Basic.[18]
The output will be:
|
https://en.wikipedia.org/wiki/String_interpolation
|
C (pronounced /ˈsiː/ – like the letter c)[6] is a general-purpose programming language. It was created in the 1970s by Dennis Ritchie and remains very widely used and influential. By design, C's features cleanly reflect the capabilities of the targeted CPUs. It has found lasting use in operating systems code (especially in kernels[7]), device drivers, and protocol stacks, but its use in application software has been decreasing.[8] C is commonly used on computer architectures that range from the largest supercomputers to the smallest microcontrollers and embedded systems.
A successor to the programming language B, C was originally developed at Bell Labs by Ritchie between 1972 and 1973 to construct utilities running on Unix. It was applied to re-implementing the kernel of the Unix operating system.[9] During the 1980s, C gradually gained popularity. It has become one of the most widely used programming languages,[10][11] with C compilers available for practically all modern computer architectures and operating systems. The book The C Programming Language, co-authored by the original language designer, served for many years as the de facto standard for the language.[12][1] C has been standardized since 1989 by the American National Standards Institute (ANSI) and, subsequently, jointly by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC).
C is an imperative procedural language, supporting structured programming, lexical variable scope, and recursion, with a static type system. It was designed to be compiled to provide low-level access to memory and language constructs that map efficiently to machine instructions, all with minimal runtime support. Despite its low-level capabilities, the language was designed to encourage cross-platform programming. A standards-compliant C program written with portability in mind can be compiled for a wide variety of computer platforms and operating systems with few changes to its source code.
Since 2000, C has consistently ranked among the top four languages in the TIOBE index, a measure of the popularity of programming languages.[13]
C is an imperative, procedural language in the ALGOL tradition. It has a static type system. In C, all executable code is contained within subroutines (also called "functions", though not in the sense of functional programming). Function parameters are passed by value, although arrays are passed as pointers, i.e. the address of the first item in the array. Pass-by-reference is simulated in C by explicitly passing pointers to the thing being referenced.
C program source text is free-form code. Semicolons terminate statements, while curly braces are used to group statements into blocks.
The C language also exhibits the following characteristics:
While C does not include certain features found in other languages (such as object orientation and garbage collection), these can be implemented or emulated, often through the use of external libraries (e.g., the GLib Object System or the Boehm garbage collector).
Many later languages have borrowed directly or indirectly from C, including C++, C#, Unix's C shell, D, Go, Java, JavaScript (including transpilers), Julia, Limbo, LPC, Objective-C, Perl, PHP, Python, Ruby, Rust, Swift, Verilog and SystemVerilog (hardware description languages).[5] These languages have drawn many of their control structures and other basic features from C. Most of them also express highly similar syntax to C, and they tend to combine the recognizable expression and statement syntax of C with underlying type systems, data models, and semantics that can be radically different.
The origin of C is closely tied to the development of the Unix operating system, originally implemented in assembly language on a PDP-7 by Dennis Ritchie and Ken Thompson, incorporating several ideas from colleagues. Eventually, they decided to port the operating system to a PDP-11. The original PDP-11 version of Unix was also developed in assembly language.[9]
Thompson wanted a programming language for developing utilities for the new platform. He first tried writing a Fortran compiler, but he soon gave up the idea and instead created a cut-down version of the recently developed systems programming language called BCPL. The official description of BCPL was not available at the time,[14] and Thompson modified the syntax to be less 'wordy' and similar to a simplified ALGOL known as SMALGOL.[15] He called the result B,[9] describing it as "BCPL semantics with a lot of SMALGOL syntax".[15] Like BCPL, B had a bootstrapping compiler to facilitate porting to new machines.[15] Ultimately, few utilities were written in B because it was too slow and could not take advantage of PDP-11 features such as byte addressability.
In 1971 Ritchie started to improve B, to use the features of the more-powerful PDP-11. A significant addition was a character data type. He called this New B (NB).[15] Thompson started to use NB to write the Unix kernel, and his requirements shaped the direction of the language development.[15][16] Through to 1972, richer types were added to the NB language: NB had arrays of int and char. Pointers, the ability to generate pointers to other types, arrays of all types, and types to be returned from functions were all also added. Arrays within expressions became pointers. A new compiler was written, and the language was renamed C.[9]
The C compiler and some utilities made with it were included in Version 2 Unix, which is also known as Research Unix.[17]
At Version 4 Unix, released in November 1973, the Unix kernel was extensively re-implemented in C.[9] By this time, the C language had acquired some powerful features such as struct types.
The preprocessor was introduced around 1973 at the urging of Alan Snyder and also in recognition of the usefulness of the file-inclusion mechanisms available in BCPL and PL/I. Its original version provided only included files and simple string replacements: #include and #define of parameterless macros. Soon after that, it was extended, mostly by Mike Lesk and then by John Reiser, to incorporate macros with arguments and conditional compilation.[9]
Unix was one of the first operating system kernels implemented in a language other than assembly. Earlier instances include the Multics system (which was written in PL/I) and the Master Control Program (MCP) for the Burroughs B5000 (which was written in ALGOL) in 1961. In around 1977, Ritchie and Stephen C. Johnson made further changes to the language to facilitate portability of the Unix operating system. Johnson's Portable C Compiler served as the basis for several implementations of C on new platforms.[16]
In 1978 Brian Kernighan and Dennis Ritchie published the first edition of The C Programming Language.[18] Known as K&R from the initials of its authors, the book served for many years as an informal specification of the language. The version of C that it describes is commonly referred to as "K&R C". As this was released in 1978, it is now also referred to as C78.[19] The second edition of the book[20] covers the later ANSI C standard, described below.
K&R introduced several language features:
Even after the publication of the 1989 ANSI standard, for many years K&R C was still considered the "lowest common denominator" to which C programmers restricted themselves when maximum portability was desired, since many older compilers were still in use, and because carefully written K&R C code can be legal Standard C as well.
In early versions of C, only functions that returned types other than int had to be declared if used before the function definition; functions used without prior declaration were presumed to return type int.
For example:
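The original listing is elided; the following reconstruction shows the kind of K&R-style declarations intended (note that such implicit-int code is rejected by current standards unless the commented-out int specifiers are restored):

```c
long some_function();
/* int */ other_function();

/* int */ calling_function()
{
    long test1;
    register /* int */ test2;

    test1 = some_function();
    if (test1 > 1)
        test1 = 0;
    else
        test1 = other_function();
    return test1;
}
```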
The int type specifiers which are commented out could be omitted in K&R C, but are required in later standards.
Since K&R function declarations did not include any information about function arguments, function parameter type checks were not performed, although some compilers would issue a warning message if a local function was called with the wrong number of arguments, or if different calls to an external function used different numbers or types of arguments. Separate tools such as Unix's lint utility were developed that (among other things) could check for consistency of function use across multiple source files.
In the years following the publication of K&R C, several features were added to the language, supported by compilers from AT&T (in particular PCC[21]) and some other vendors. These included:
The large number of extensions and lack of agreement on a standard library, together with the language's popularity and the fact that not even the Unix compilers precisely implemented the K&R specification, led to the necessity of standardization.[22]
During the late 1970s and 1980s, versions of C were implemented for a wide variety of mainframe computers, minicomputers, and microcomputers, including the IBM PC, as its popularity began to increase significantly.
In 1983 the American National Standards Institute (ANSI) formed a committee, X3J11, to establish a standard specification of C. X3J11 based the C standard on the Unix implementation; however, the non-portable portion of the Unix C library was handed off to the IEEE working group 1003 to become the basis for the 1988 POSIX standard. In 1989, the C standard was ratified as ANSI X3.159-1989 "Programming Language C". This version of the language is often referred to as ANSI C, Standard C, or sometimes C89.
In 1990 the ANSI C standard (with formatting changes) was adopted by the International Organization for Standardization (ISO) as ISO/IEC 9899:1990, which is sometimes called C90. Therefore, the terms "C89" and "C90" refer to the same programming language.
ANSI, like other national standards bodies, no longer develops the C standard independently, but defers to the international C standard, maintained by the working group ISO/IEC JTC1/SC22/WG14. National adoption of an update to the international standard typically occurs within a year of ISO publication.
One of the aims of the C standardization process was to produce a superset of K&R C, incorporating many of the subsequently introduced unofficial features. The standards committee also included several additional features such as function prototypes (borrowed from C++), void pointers, support for international character sets and locales, and preprocessor enhancements. Although the syntax for parameter declarations was augmented to include the style used in C++, the K&R interface continued to be permitted, for compatibility with existing source code.
C89 is supported by current C compilers, and most modern C code is based on it. Any program written only in Standard C and without any hardware-dependent assumptions will run correctly on any platform with a conforming C implementation, within its resource limits. Without such precautions, programs may compile only on a certain platform or with a particular compiler, due, for example, to the use of non-standard libraries, such as GUI libraries, or to a reliance on compiler- or platform-specific attributes such as the exact size of data types and byte endianness.
In cases where code must be compilable by either standard-conforming or K&R C-based compilers, the __STDC__ macro can be used to split the code into Standard and K&R sections to prevent the use on a K&R C-based compiler of features available only in Standard C.
After the ANSI/ISO standardization process, the C language specification remained relatively static for several years. In 1995, Normative Amendment 1 to the 1990 C standard (ISO/IEC 9899/AMD1:1995, known informally as C95) was published, to correct some details and to add more extensive support for international character sets.[23]
The C standard was further revised in the late 1990s, leading to the publication of ISO/IEC 9899:1999 in 1999, which is commonly referred to as "C99". It has since been amended three times by Technical Corrigenda.[24]
C99 introduced several new features, including inline functions, several new data types (including long long int and a complex type to represent complex numbers), variable-length arrays and flexible array members, improved support for IEEE 754 floating point, support for variadic macros (macros of variable arity), and support for one-line comments beginning with //, as in BCPL or C++. Many of these had already been implemented as extensions in several C compilers.
C99 is for the most part backward compatible with C90, but is stricter in some ways; in particular, a declaration that lacks a type specifier no longer has int implicitly assumed. A standard macro __STDC_VERSION__ is defined with value 199901L to indicate that C99 support is available. GCC, Solaris Studio, and other C compilers now support many or all of the new features of C99. The C compiler in Microsoft Visual C++, however, implements the C89 standard and those parts of C99 that are required for compatibility with C++11.[25]
In addition, the C99 standard requires support for identifiers using Unicode in the form of escaped characters (e.g. \u0040 or \U0001f431) and suggests support for raw Unicode names.
Work began in 2007 on another revision of the C standard, informally called "C1X" until its official publication of ISO/IEC 9899:2011 on December 8, 2011. The C standards committee adopted guidelines to limit the adoption of new features that had not been tested by existing implementations.
The C11 standard adds numerous new features to C and the library, including type generic macros, anonymous structures, improved Unicode support, atomic operations, multi-threading, and bounds-checked functions. It also makes some portions of the existing C99 library optional, and improves compatibility with C++. The standard macro __STDC_VERSION__ is defined as 201112L to indicate that C11 support is available.
C17 is an informal name for ISO/IEC 9899:2018, a standard for the C programming language published in June 2018. It introduces no new language features, only technical corrections and clarifications to defects in C11. The standard macro __STDC_VERSION__ is defined as 201710L to indicate that C17 support is available.
C23 is an informal name for the current major C language standard revision. It was informally known as "C2X" through most of its development. C23 was published in October 2024 as ISO/IEC 9899:2024.[26] The standard macro __STDC_VERSION__ is defined as 202311L to indicate that C23 support is available.
C2Y is an informal name for the next major C language standard revision, after C23 (C2X), that is hoped to be released later in the 2020s, hence the '2' in "C2Y". An early working draft of C2Y was released in February 2024 as N3220 by the working group ISO/IEC JTC1/SC22/WG14.[27]
Historically, embedded C programming has required non-standard extensions to the C language to support exotic features such as fixed-point arithmetic, multiple distinct memory banks, and basic I/O operations.
In 2008, the C Standards Committee published a technical report extending the C language[28] to address these issues by providing a common standard for all implementations to adhere to. It includes a number of features not available in normal C, such as fixed-point arithmetic, named address spaces, and basic I/O hardware addressing.
C has a formal grammar specified by the C standard.[29] Line endings are generally not significant in C; however, line boundaries do have significance during the preprocessing phase. Comments may appear either between the delimiters /* and */, or (since C99) following // until the end of the line. Comments delimited by /* and */ do not nest, and these sequences of characters are not interpreted as comment delimiters if they appear inside string or character literals.[30]
C source files contain declarations and function definitions. Function definitions, in turn, contain declarations and statements. Declarations either define new types using keywords such as struct, union, and enum, or assign types to and perhaps reserve storage for new variables, usually by writing the type followed by the variable name. Keywords such as char and int specify built-in types. Sections of code are enclosed in braces ({ and }, sometimes called "curly brackets") to limit the scope of declarations and to act as a single statement for control structures.
As an imperative language, C usesstatementsto specify actions. The most common statement is anexpression statement, consisting of an expression to be evaluated, followed by a semicolon; as aside effectof the evaluation,functions may be calledandvariables assignednew values. To modify the normal sequential execution of statements, C provides several control-flow statements identified by reserved keywords.Structured programmingis supported byif... [else] conditional execution and bydo...while,while, andforiterative execution (looping). Theforstatement has separate initialization, testing, and reinitialization expressions, any or all of which can be omitted.breakandcontinuecan be used within the loop. Break is used to leave the innermost enclosing loop statement and continue is used to skip to its reinitialisation. There is also a non-structuredgotostatement which branches directly to the designatedlabelwithin the function.switchselects acaseto be executed based on the value of an integer expression. Different from many other languages, control-flow willfall throughto the nextcaseunless terminated by abreak.
Expressions can use a variety of built-in operators and may contain function calls. The order in which arguments to functions and operands to most operators are evaluated is unspecified. The evaluations may even be interleaved. However, all side effects (including storage to variables) will occur before the next "sequence point"; sequence points include the end of each expression statement, and the entry to and return from each function call. Sequence points also occur during evaluation of expressions containing certain operators (&&, ||, ?: and the comma operator). This permits a high degree of object code optimization by the compiler, but requires C programmers to take more care to obtain reliable results than is needed for other programming languages.
Kernighan and Ritchie say in the Introduction of The C Programming Language: "C, like any other language, has its blemishes. Some of the operators have the wrong precedence; some parts of the syntax could be better."[31] The C standard did not attempt to correct many of these blemishes, because of the impact of such changes on already existing software.
The basic C source character set includes the following characters:
The newline character indicates the end of a text line; it need not correspond to an actual single character, although for convenience C treats it as such.
Additional multi-byte encoded characters may be used in string literals, but they are not entirely portable. Since C99, multi-national Unicode characters can be embedded portably within C source text by using \uXXXX or \UXXXXXXXX encoding (where X denotes a hexadecimal character).
The basic C execution character set contains the same characters, along with representations for alert, backspace, and carriage return. Run-time support for extended character sets has increased with each revision of the C standard.
The following reserved words are case sensitive.
C89 has 32 reserved words, also known as 'keywords', which cannot be used for any purposes other than those for which they are predefined:
C99 added five more reserved words: (‡ indicates an alternative spelling alias for a C23 keyword)
C11 added seven more reserved words:[32](‡ indicates an alternative spelling alias for a C23 keyword)
C23 reserved fifteen more words:
Most of the recently reserved words begin with an underscore followed by a capital letter, because identifiers of that form were previously reserved by the C standard for use only by implementations. Since existing program source code should not have been using these identifiers, it would not be affected when C implementations started supporting these extensions to the programming language. Some standard headers do define more convenient synonyms for underscored identifiers. Some of those words were added as keywords with their conventional spelling in C23 and the corresponding macros were removed.
Prior to C89, entry was reserved as a keyword. In the second edition of their book The C Programming Language, which describes what became known as C89, Kernighan and Ritchie wrote, "The ... [keyword] entry, formerly reserved but never used, is no longer reserved." and "The stillborn entry keyword is withdrawn."[33]
C supports a rich set of operators, which are symbols used within an expression to specify the manipulations to be performed while evaluating that expression. C has operators for:
C uses the operator = (used in mathematics to express equality) to indicate assignment, following the precedent of Fortran and PL/I, but unlike ALGOL and its derivatives. C uses the operator == to test for equality. The similarity between the operators for assignment and equality may result in the accidental use of one in place of the other, and in many cases the mistake does not produce an error message (although some compilers produce warnings). For example, the conditional expression if (a == b + 1) might mistakenly be written as if (a = b + 1), which will be evaluated as true unless the value of a is 0 after the assignment.[34]
The C operator precedence is not always intuitive. For example, the operator == binds more tightly than (is executed prior to) the operators & (bitwise AND) and | (bitwise OR) in expressions such as x & 1 == 0, which must be written as (x & 1) == 0 if that is the coder's intent.[35]
The "hello, world" example that appeared in the first edition of K&R has become the model for an introductory program in most programming textbooks. The program prints "hello, world" to the standard output, which is usually a terminal or screen display.
The original version was:[36]
A standard-conforming "hello, world" program is:[a]
The first line of the program contains a preprocessing directive, indicated by #include. This causes the compiler to replace that line of code with the entire text of the stdio.h header file, which contains declarations for standard input and output functions such as printf and scanf. The angle brackets surrounding stdio.h indicate that the header file can be located using a search strategy that prefers headers provided with the compiler to other headers having the same name (as opposed to double quotes, which typically include local or project-specific header files).
The second line indicates that a function named main is being defined. The main function serves a special purpose in C programs; the run-time environment calls the main function to begin program execution. The type specifier int indicates that the value returned to the invoker (in this case the run-time environment) as a result of evaluating the main function is an integer. The keyword void as a parameter list indicates that the main function takes no arguments.[b]
The opening curly brace indicates the beginning of the code that defines the main function.
The next line of the program is a statement that calls (i.e. diverts execution to) a function named printf, which in this case is supplied from a system library. In this call, the printf function is passed (i.e. provided with) a single argument, which is the address of the first character in the string literal "hello, world\n". The string literal is an unnamed array set up automatically by the compiler, with elements of type char and a final null character (ASCII value 0) marking the end of the array (to allow printf to determine the length of the string). The null character can also be written as the escape sequence \0. The \n is a standard escape sequence that C translates to a newline character, which, on output, signifies the end of the current line. The return value of the printf function is of type int, but it is silently discarded since it is not used. (A more careful program might test the return value to check that the printf function succeeded.) The semicolon ; terminates the statement.
The closing curly brace indicates the end of the code for the main function. According to the C99 specification and newer, the main function (unlike any other function) will implicitly return a value of 0 upon reaching the } that terminates the function.[c] The return value of 0 is interpreted by the run-time system as an exit code indicating successful execution of the function.[37]
The type system in C is static and weakly typed, which makes it similar to the type system of ALGOL descendants such as Pascal.[38] There are built-in types for integers of various sizes, both signed and unsigned, floating-point numbers, and enumerated types (enum). The integer type char is often used for single-byte characters. C99 added a Boolean data type. There are also derived types including arrays, pointers, records (struct), and unions (union).
C is often used in low-level systems programming where escapes from the type system may be necessary. The compiler attempts to ensure type correctness of most expressions, but the programmer can override the checks in various ways, either by using a type cast to explicitly convert a value from one type to another, or by using pointers or unions to reinterpret the underlying bits of a data object in some other way.
Some find C's declaration syntax unintuitive, particularly for function pointers. (Ritchie's idea was to declare identifiers in contexts resembling their use: "declaration reflects use".)[39]
C's usual arithmetic conversions allow for efficient code to be generated, but can sometimes produce unexpected results. For example, a comparison of signed and unsigned integers of equal width requires a conversion of the signed value to unsigned. This can generate unexpected results if the signed value is negative.
C supports the use of pointers, a type of reference that records the address or location of an object or function in memory. Pointers can be dereferenced to access data stored at the address pointed to, or to invoke a pointed-to function. Pointers can be manipulated using assignment or pointer arithmetic. The run-time representation of a pointer value is typically a raw memory address (perhaps augmented by an offset-within-word field), but since a pointer's type includes the type of the thing pointed to, expressions including pointers can be type-checked at compile time. Pointer arithmetic is automatically scaled by the size of the pointed-to data type.
Pointers are used for many purposes in C. Text strings are commonly manipulated using pointers into arrays of characters. Dynamic memory allocation is performed using pointers; the result of a malloc is usually cast to the data type of the data to be stored. Many data types, such as trees, are commonly implemented as dynamically allocated struct objects linked together using pointers. Pointers to other pointers are often used in multi-dimensional arrays and arrays of struct objects. Pointers to functions (function pointers) are useful for passing functions as arguments to higher-order functions (such as qsort or bsearch), in dispatch tables, or as callbacks to event handlers.[37]
A null pointer value explicitly points to no valid location. Dereferencing a null pointer value is undefined, often resulting in a segmentation fault. Null pointer values are useful for indicating special cases such as no "next" pointer in the final node of a linked list, or as an error indication from functions returning pointers. In appropriate contexts in source code, such as for assigning to a pointer variable, a null pointer constant can be written as 0, with or without explicit casting to a pointer type, as the NULL macro defined by several standard headers, or, since C23, with the constant nullptr. In conditional contexts, null pointer values evaluate to false, while all other pointer values evaluate to true.
Void pointers (void *) point to objects of unspecified type, and can therefore be used as "generic" data pointers. Since the size and type of the pointed-to object is not known, void pointers cannot be dereferenced, nor is pointer arithmetic on them allowed, although they can easily be (and in many contexts implicitly are) converted to and from any other object pointer type.[37]
Careless use of pointers is potentially dangerous. Because they are typically unchecked, a pointer variable can be made to point to any arbitrary location, which can cause undesirable effects. Although properly used pointers point to safe places, they can be made to point to unsafe places by using invalid pointer arithmetic; the objects they point to may continue to be used after deallocation (dangling pointers); they may be used without having been initialized (wild pointers); or they may be directly assigned an unsafe value using a cast, union, or through another corrupt pointer. In general, C is permissive in allowing manipulation of and conversion between pointer types, although compilers typically provide options for various levels of checking. Some other programming languages address these problems by using more restrictive reference types.
Array types in C are traditionally of a fixed, static size specified at compile time. The more recent C99 standard also allows a form of variable-length arrays. However, it is also possible to allocate a block of memory (of arbitrary size) at run-time, using the standard library's malloc function, and treat it as an array.
Since arrays are always accessed (in effect) via pointers, array accesses are typically not checked against the underlying array size, although some compilers may provide bounds checking as an option.[40][41] Array bounds violations are therefore possible and can lead to various repercussions, including illegal memory accesses, corruption of data, buffer overruns, and run-time exceptions.
C does not have a special provision for declaring multi-dimensional arrays, but rather relies on recursion within the type system to declare arrays of arrays, which effectively accomplishes the same thing. The index values of the resulting "multi-dimensional array" can be thought of as increasing in row-major order. Multi-dimensional arrays are commonly used in numerical algorithms (mainly from applied linear algebra) to store matrices. The structure of the C array is well suited to this particular task. However, in early versions of C the bounds of the array must be known fixed values or else explicitly passed to any subroutine that requires them, and dynamically sized arrays of arrays cannot be accessed using double indexing. (A workaround for this was to allocate the array with an additional "row vector" of pointers to the columns.) C99 introduced "variable-length arrays" which address this issue.
The following example using modern C (C99 or later) shows allocation of a two-dimensional array on the heap and the use of multi-dimensional array indexing for accesses (which can use bounds-checking on many C compilers):
And here is a similar implementation using C99's auto VLA feature:[d]
The subscript notation x[i] (where x designates a pointer) is syntactic sugar for *(x + i).[42] Taking advantage of the compiler's knowledge of the pointer type, the address that x + i points to is not the base address (pointed to by x) incremented by i bytes, but rather is defined to be the base address incremented by i multiplied by the size of an element that x points to. Thus, x[i] designates the (i + 1)-th element of the array.
Furthermore, in most expression contexts (a notable exception is as operand of sizeof), an expression of array type is automatically converted to a pointer to the array's first element. This implies that an array is never copied as a whole when named as an argument to a function, but rather only the address of its first element is passed. Therefore, although function calls in C use pass-by-value semantics, arrays are in effect passed by reference.
The total size of an array A can be determined by applying sizeof to an expression of array type. The size of an element can be determined by applying the operator sizeof to any dereferenced element of the array, as in n = sizeof A[0]. Thus, the number of elements in a declared array A can be determined as sizeof A / sizeof A[0]. Note that if only a pointer to the first element is available, as is often the case in C code because of the automatic conversion described above, the information about the full type of the array and its length is lost.
One of the most important functions of a programming language is to provide facilities for managing memory and the objects that are stored in memory. C provides three principal ways to allocate memory for objects:[37]
These three approaches are appropriate in different situations and have various trade-offs. For example, static memory allocation has little allocation overhead, automatic allocation may involve slightly more overhead, and dynamic memory allocation can potentially have a great deal of overhead for both allocation and deallocation. The persistent nature of static objects is useful for maintaining state information across function calls, automatic allocation is easy to use but stack space is typically much more limited and transient than either static memory or heap space, and dynamic memory allocation allows convenient allocation of objects whose size is known only at run-time. Most C programs make extensive use of all three.
Where possible, automatic or static allocation is usually simplest because the storage is managed by the compiler, freeing the programmer of the potentially error-prone chore of manually allocating and releasing storage. However, many data structures can change in size at runtime, and since static allocations (and automatic allocations before C99) must have a fixed size at compile-time, there are many situations in which dynamic allocation is necessary.[37] Prior to the C99 standard, variable-sized arrays were a common example of this. (See the article on C dynamic memory allocation for an example of dynamically allocated arrays.) Unlike automatic allocation, which can fail at run time with uncontrolled consequences, the dynamic allocation functions return an indication (in the form of a null pointer value) when the required storage cannot be allocated. (Static allocation that is too large is usually detected by the linker or loader, before the program can even begin execution.)
Unless otherwise specified, static objects contain zero or null pointer values upon program startup. Automatically and dynamically allocated objects are initialized only if an initial value is explicitly specified; otherwise they initially have indeterminate values (typically, whatever bit pattern happens to be present in the storage, which might not even represent a valid value for that type). If the program attempts to access an uninitialized value, the results are undefined. Many modern compilers try to detect and warn about this problem, but both false positives and false negatives can occur.
Heap memory allocation has to be synchronized with its actual usage in any program for it to be reused as much as possible. For example, if the only pointer to a heap memory allocation goes out of scope or has its value overwritten before it is deallocated explicitly, then that memory cannot be recovered for later reuse and is essentially lost to the program, a phenomenon known as a memory leak. Conversely, it is possible for memory to be freed but then referenced subsequently, leading to unpredictable results. Typically, the failure symptoms appear in a portion of the program unrelated to the code that causes the error, making it difficult to diagnose the failure. Such issues are ameliorated in languages with automatic garbage collection.
The C programming language uses libraries as its primary method of extension. In C, a library is a set of functions contained within a single "archive" file. Each library typically has a header file, which contains the prototypes of the functions contained within the library that may be used by a program, and declarations of special data types and macro symbols used with these functions. For a program to use a library, it must include the library's header file, and the library must be linked with the program, which in many cases requires compiler flags (e.g., -lm, shorthand for "link the math library").[37]
The most common C library is the C standard library, which is specified by the ISO and ANSI C standards and comes with every C implementation (implementations which target limited environments such as embedded systems may provide only a subset of the standard library). This library supports stream input and output, memory allocation, mathematics, character strings, and time values. Several separate standard headers (for example, stdio.h) specify the interfaces for these and other standard library facilities.
Another common set of C library functions are those used by applications specifically targeted for Unix and Unix-like systems, especially functions which provide an interface to the kernel. These functions are detailed in various standards such as POSIX and the Single UNIX Specification.
Since many programs have been written in C, there are a wide variety of other libraries available. Libraries are often written in C because C compilers generate efficient object code; programmers then create interfaces to the library so that the routines can be used from higher-level languages like Java, Perl, and Python.[37]
File input and output (I/O) is not part of the C language itself but instead is handled by libraries (such as the C standard library) and their associated header files (e.g. stdio.h). File handling is generally implemented through high-level I/O which works through streams. A stream is, from this perspective, a data flow that is independent of devices, while a file is a concrete device. The high-level I/O is done through the association of a stream to a file. In the C standard library, a buffer (a memory area or queue) is temporarily used to store data before it is sent to the final destination. This reduces the time spent waiting for slower devices, for example a hard drive or solid-state drive. Low-level I/O functions are not part of the standard C library but are generally part of "bare metal" programming (programming that is independent of any operating system, such as most embedded programming). With few exceptions, implementations include low-level I/O.
A number of tools have been developed to help C programmers find and fix statements with undefined behavior or possibly erroneous expressions, with greater rigor than that provided by the compiler.
Automated source code checking and auditing tools exist, such as Lint. A common practice is to use Lint to detect questionable code when a program is first written. Once a program passes Lint, it is then compiled using the C compiler. Also, many compilers can optionally warn about syntactically valid constructs that are likely to actually be errors. MISRA C is a proprietary set of guidelines to avoid such questionable code, developed for embedded systems.[43]
There are also compilers, libraries, and operating system level mechanisms for performing actions that are not a standard part of C, such as bounds checking for arrays, detection of buffer overflow, serialization, dynamic memory tracking, and automatic garbage collection.
Memory management checking tools like Purify or Valgrind and linking with libraries containing special versions of the memory allocation functions can help uncover runtime errors in memory usage.[44][45]
C is widely used for systems programming in implementing operating systems and embedded system applications.[46] This is for several reasons:
C enables programmers to create efficient implementations of algorithms and data structures, because the layer of abstraction from hardware is thin, and its overhead is low, an important criterion for computationally intensive programs. For example, the GNU Multiple Precision Arithmetic Library, the GNU Scientific Library, Mathematica, and MATLAB are completely or partially written in C. Many languages support calling library functions in C; for example, the Python-based framework NumPy uses C for the high-performance and hardware-interacting aspects.
Computer games are often built from a combination of languages. C has featured significantly, especially for those games attempting to obtain best performance from computer platforms. Examples include Doom from 1993.[47]
C is sometimes used as an intermediate language by implementations of other languages. This approach may be used for portability or convenience; by using C as an intermediate language, additional machine-specific code generators are not necessary. C has some features, such as line-number preprocessor directives and optional superfluous commas at the end of initializer lists, that support compilation of generated code. However, some of C's shortcomings have prompted the development of other C-based languages specifically designed for use as intermediate languages, such as C--. Also, contemporary major compilers GCC and LLVM both feature an intermediate representation that is not C, and those compilers support front ends for many languages including C.
A consequence of C's wide availability and efficiency is that compilers, libraries, and interpreters of other programming languages are often implemented in C.[48] For example, the reference implementations of Python,[49] Perl,[50] Ruby,[51] and PHP[52] are written in C.
Historically, C was sometimes used for web development using the Common Gateway Interface (CGI) as a "gateway" for information between the web application, the server, and the browser.[53] C may have been chosen over interpreted languages because of its speed, stability, and near-universal availability.[54] It is no longer common practice for web development to be done in C,[55] and many other web development languages are popular. Applications where C-based web development continues include the HTTP configuration pages on routers, IoT devices, and similar, although even here some projects have parts in higher-level languages, e.g. the use of Lua within OpenWrt.
The two most popular web servers, Apache HTTP Server and Nginx, are both written in C. These web servers interact with the operating system, listen on TCP ports for HTTP requests, and then serve up static web content, or cause the execution of other languages to 'render' content such as PHP, which is itself primarily written in C. C's close-to-the-metal approach allows for the construction of these high-performance software systems.
C has also been widely used to implement end-user applications.[56] However, such applications can also be written in newer, higher-level languages.
the power of assembly language and the convenience of ... assembly language
While C has been popular, influential and hugely successful, it has drawbacks, including:
For some purposes, restricted styles of C have been adopted, e.g. MISRA C or CERT C, in an attempt to reduce the opportunity for bugs. Databases such as CWE attempt to catalogue the ways in which C and other languages can be vulnerable, along with recommendations for mitigation.
There are tools that can mitigate some of the drawbacks. Contemporary C compilers include checks which may generate warnings to help identify many potential bugs.
C has both directly and indirectly influenced many later languages such as C++ and Java.[65] The most pervasive influence has been syntactical; all of the languages mentioned combine the statement and (more or less recognizably) expression syntax of C with type systems, data models, or large-scale program structures that differ from those of C, sometimes radically.
Several C or near-C interpreters exist, including Ch and CINT, which can also be used for scripting.
When object-oriented programming languages became popular, C++ and Objective-C were two different extensions of C that provided object-oriented capabilities. Both languages were originally implemented as source-to-source compilers; source code was translated into C, and then compiled with a C compiler.[66]
The C++ programming language (originally named "C with Classes") was devised by Bjarne Stroustrup as an approach to providing object-oriented functionality with a C-like syntax.[67] C++ adds greater typing strength, scoping, and other tools useful in object-oriented programming, and permits generic programming via templates. Nearly a superset of C, C++ now supports most of C, with a few exceptions.
Objective-C was originally a very "thin" layer on top of C, and remains a strict superset of C that permits object-oriented programming using a hybrid dynamic/static typing paradigm. Objective-C derives its syntax from both C and Smalltalk: syntax that involves preprocessing, expressions, function declarations, and function calls is inherited from C, while the syntax for object-oriented features was originally taken from Smalltalk.
In addition to C++ and Objective-C, Ch, Cilk, and Unified Parallel C are nearly supersets of C.
https://en.wikipedia.org/wiki/C_programming_language
printf is a C standard library function that formats text and writes it to standard output. The function accepts a format C-string argument and a variable number of value arguments that the function serializes per the format string. A mismatch between the format specifiers and the count and type of the values results in undefined behavior, and possibly a program crash or other vulnerability.
The format string is encoded as a template language consisting of verbatim text and format specifiers that each specify how to serialize a value. As the format string is processed left-to-right, a subsequent value is used for each format specifier found. A format specifier starts with a % character and has one or more following characters that specify how to serialize a value.
The standard library provides other, similar functions that form a family of printf-like functions. The functions share the same formatting capabilities but provide different behavior, such as output to a different destination or safety measures that limit exposure to vulnerabilities. Functions of the printf family have been implemented in other programming contexts (i.e. languages) with the same or similar syntax and semantics.
The scanf C standard library function complements printf by providing formatted input (a.k.a. lexing, a.k.a. parsing) via a similar format string syntax.
The name printf is short for print formatted, where print refers to output to a printer, although the function is not limited to printer output. Today, print refers to output to any text-based environment such as a terminal or a file.
Early programming languages like Fortran used special statements with different syntax from other calculations to build formatting descriptions.[1] In this example, the format is specified on line 601, and the PRINT[a] command refers to it by line number:
Where:
An output with input arguments 100, 200, and 1500.25 might look like this:
In 1967, BCPL appeared.[2] Its library included the writef routine.[3] An example application looks like this:
Where:
In 1968, ALGOL 68 had a more function-like API, but still used special syntax (the $ delimiters surround special formatting syntax):
In contrast to Fortran, using normal function calls and data types simplifies the language and compiler, and allows the implementation of the input/output to be written in the same language.
These advantages were thought to outweigh the disadvantages (such as a complete lack oftype safetyin many instances) up until the 2000s, and in most newer languages of that era I/O is not part of the syntax.
People have since learned[4] that this potentially results in consequences ranging from security exploits to hardware failures (e.g., a phone's networking capabilities being permanently disabled after trying to connect to an access point named "%p%s%s%s%s%n"[5]). Modern languages, such as C++20 and later, tend to include format specifications as a part of the language syntax,[6] which restores type safety in formatting to an extent, and allows the compiler to detect some invalid combinations of format specifiers and data types at compile time.
In 1973, printf was included as a C standard library routine as part of Version 4 Unix.[7]
In 1990, the printf shell command, modeled after the C standard library function, was included with 4.3BSD-Reno.[8] In 1991, a printf command was included with GNU shellutils (now part of GNU Core Utilities).
The need to do something about the range of problems resulting from lack of type safety has prompted attempts to make the C++ compiler printf-aware.
The -Wformat option of GCC enables compile-time checks of printf calls, allowing the compiler to detect a subset of invalid calls (and issue either a warning or an error that stops compilation altogether, depending on other flags).[9]
Since the compiler inspects printf format specifiers, enabling this effectively extends the C++ syntax by making formatting a part of it.
To address usability issues with the existing C++ input/output support, as well as to avoid the safety issues of printf,[10] the C++ standard library was revised[11] to support new type-safe formatting starting with C++20.[12] The design of std::format resulted from incorporating Victor Zverovich's libfmt[13] API into the language specification[14] (Zverovich wrote[15] the first draft of the new format proposal); consequently, libfmt is an implementation of the C++20 format specification. In C++23, another function, std::print, was introduced that combines formatting and outputting and is therefore a functional replacement for printf().[16]
As the format specification has become part of the language syntax, a C++ compiler is able to prevent invalid combinations of types and format specifiers in many cases. Unlike the -Wformat option, this is not an optional feature.
The format specification of libfmt and std::format is, in itself, an extensible "mini-language" (referred to as such in the specification),[17] an example of a domain-specific language. As such, std::print completes a historical cycle, bringing the state of the art (as of 2024) back to where it was with Fortran's first PRINT implementation in the 1950s.
Formatting of a value is specified as markup in the format string. For example, the following outputs "Your age is " followed by the value of the variable age in decimal format.
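A minimal sketch of this in C (the helper names are illustrative, not from the source):

```c
#include <stdio.h>

/* %d in the format string is replaced by the value of `age`
   rendered in decimal. */
void print_age(int age)
{
    printf("Your age is %d\n", age);
}

/* The same markup works across the printf family, e.g. into a buffer: */
int format_age(char *buf, size_t n, int age)
{
    return snprintf(buf, n, "Your age is %d", age);
}
```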
The syntax for a format specifier is:
The parameter field is optional. If included, then matching of specifiers to values is not sequential: the numeric value n selects the n-th value parameter. This is a POSIX extension, not C99.[citation needed]
This field allows for using the same value multiple times in a format string instead of having to pass the value multiple times. If a specifier includes this field, then subsequent specifiers must also.
For example, printf("%1$d %1$#x; %2$d %2$#x", 17, 16);
outputs: 17 0x11; 16 0x10
This field is particularly useful for localizing messages into different natural languages that use different word orders.
In the Windows API, support for this feature is provided by a different function, printf_p.
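The word-order use case can be sketched as follows (a hedged example; the format strings and helper name are illustrative, and the %n$ form is the POSIX extension described above, so it is not portable to all C libraries):

```c
#include <stdio.h>

/* A "translatable" message: the format string may name the two
   values in either order via %1$ and %2$ without changing the
   call site's argument order. */
int report(char *buf, size_t n, const char *fmt, int count, const char *thing)
{
    return snprintf(buf, n, fmt, count, thing);
}
```

For example, report(buf, sizeof buf, "%1$d %2$s", 3, "files") produces "3 files", while a translation catalog could supply "%2$s: %1$d" to produce "files: 3" from the same call. Note that passing a non-literal format string defeats the static format checking discussed later in this article.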
The flags field can be zero or more of (in any order):
The width field specifies theminimumnumber of characters to output. If the value can be represented in fewer characters, then the value is left-padded with spaces so that output is the number of characters specified. If the value requires more characters, then the output is longer than the specified width. A value is never truncated.
For example, printf("%3d", 12); specifies a width of 3 and outputs " 12", with a space on the left, to fill 3 characters. The call printf("%3d", 1234); outputs "1234", which is 4 characters long, since that is the minimum width for that value even though the specified width is 3.
If the width field is omitted, the output is the minimum number of characters for the value.
If the field is specified as *, then the width value is read from the list of values in the call.[18] For example, printf("%*d", 3, 10); outputs " 10", where the second argument, 3, is the width (matching *) and 10 is the value to serialize (matching d).
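Both behaviours can be sketched with one helper (the name is illustrative), using the dynamic-width form just described:

```c
#include <stdio.h>

/* Right-align `value` in a field of at least `width` characters;
   "%*d" reads the width from the argument list. Values wider than
   the field are emitted in full, never truncated. */
int pad_int(char *buf, size_t n, int width, int value)
{
    return snprintf(buf, n, "%*d", width, value);
}
```

For example, pad_int(buf, sizeof buf, 3, 12) stores " 12", while pad_int(buf, sizeof buf, 3, 1234) stores "1234".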
Though not part of the width field, a leading zero is interpreted as the zero-padding flag mentioned above, and a negative value is treated as the positive value in conjunction with the left-alignment flag, also mentioned above.
The width field can be used to format values as a table (tabulated output). However, columns do not align if any value is larger than fits in the width specified; for example, a value of 1234 in a column of width 3 widens that row and breaks the alignment.
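A hypothetical two-column table row might be formatted like this (field names and widths are illustrative):

```c
#include <stdio.h>

/* One table row: an id right-aligned in 3 characters, and a price
   right-aligned in 8 characters with 2 decimal places. Rows align
   only as long as every value fits its field. */
int format_row(char *buf, size_t n, int id, double price)
{
    return snprintf(buf, n, "%3d %8.2f", id, price);
}
```

Printing rows for ids 1, 12, and 1234 would show the last row spilling out of its 3-character column, exactly the misalignment described above.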
The precision field usually specifies a maximum limit on the output, depending on the particular formatting type. For floating-point numeric types, it specifies the number of digits to the right of the decimal point to which the output should be rounded; for %g and %G it specifies the total number of significant digits (before and after the decimal, not including leading or trailing zeroes) to round to. For the string type, it limits the number of characters that should be output, after which the string is truncated.
The precision field may be omitted, given as a numeric integer value, or supplied as a dynamic value via another argument when indicated by an asterisk (*). For example, printf("%.*s", 3, "abcdef"); outputs abc.
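The dynamic-precision form can be wrapped in a small helper (the name is illustrative):

```c
#include <stdio.h>

/* "%.*s" reads the precision from the argument list, capping how many
   characters of the string `s` are copied. */
int take_prefix(char *buf, size_t n, int len, const char *s)
{
    return snprintf(buf, n, "%.*s", len, s);
}
```

As in the text, take_prefix(buf, sizeof buf, 3, "abcdef") stores "abc".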
The length field can be omitted or be any of:
For floating-point types, this is ignored: float arguments are always promoted to double when used in a varargs call.[19]
Platform-specific length options came to exist prior to widespread use of the ISO C99 extensions, including:
ISO C99 includes the inttypes.h header file, which defines a number of macros for platform-independent printf coding. For example, printf("%" PRId64, t); specifies decimal format for a 64-bit signed integer. Since the macros evaluate to a string literal, and the compiler concatenates adjacent string literals, the expression "%" PRId64 compiles to a single string.
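A small sketch of this idiom (the helper name is illustrative):

```c
#include <stdio.h>
#include <inttypes.h>

/* "%" PRId64 concatenates to a single literal (e.g. "%ld" or "%lld",
   depending on the platform's int64_t), giving a portable decimal
   format for 64-bit signed integers. */
int format_i64(char *buf, size_t n, int64_t t)
{
    return snprintf(buf, n, "%" PRId64, t);
}
```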
Macros include:
The type field can be any of:
A common way to handle formatting of a custom data type is to format the custom value into a string, then use the %s specifier to include the serialized value in a larger message.
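For example, with a hypothetical custom type (the struct and function names are illustrative):

```c
#include <stdio.h>

/* A custom data type with no printf specifier of its own. */
struct point { int x, y; };

/* Step 1: serialize the custom value into a string. */
int point_to_string(char *buf, size_t n, struct point p)
{
    return snprintf(buf, n, "(%d, %d)", p.x, p.y);
}

/* Step 2: embed the serialized value in a larger message via %s. */
int describe_click(char *buf, size_t n, struct point p)
{
    char tmp[32];
    point_to_string(tmp, sizeof tmp, p);
    return snprintf(buf, n, "clicked at %s", tmp);
}
```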
Some printf-like functions allow extensions to the escape-character-based mini-language, thus allowing the programmer to use a specific formatting function for non-builtin types. One is the (now deprecated) glibc register_printf_function(). However, it is rarely used because it conflicts with static format string checking. Another is Vstr custom formatters, which allows adding multi-character format names.
Some applications (such as the Apache HTTP Server) include their own printf-like function, and embed extensions into it. However, these all tend to have the same problems that register_printf_function() has.
The Linux kernel printk function supports a number of ways to display kernel structures using the generic %p specification, by appending additional format characters.[23] For example, %pI4 prints an IPv4 address in dotted-decimal form. This allows static format string checking (of the %p portion) at the expense of full compatibility with normal printf.
Extra value arguments are ignored, but if the format string has more format specifiers than value arguments passed, the behavior is undefined. For some C compilers, an extra format specifier results in consuming a value even though there isn't one, which enables a format string attack. Generally, for C, arguments are passed on the stack; if too few arguments are passed, then printf can read past the end of the stack frame, allowing an attacker to read the stack.
Some compilers, like the GNU Compiler Collection, will statically check the format strings of printf-like functions and warn about problems (when using the flags -Wall or -Wformat). GCC will also warn about user-defined printf-style functions if the non-standard "format" __attribute__ is applied to the function.
The format string is often a string literal, which allows static analysis of the function call. However, the format string can also be the value of a variable, which allows for dynamic formatting but also enables a security vulnerability known as an uncontrolled format string exploit.
Although an output function on the surface, printf allows writing to a memory location specified by an argument via %n. This functionality is occasionally used as part of more elaborate format-string attacks.[24]
The %n functionality also makes printf accidentally Turing-complete, even with a well-formed set of arguments. A game of tic-tac-toe written in the format string is a winner of the 27th IOCCC.[25]
Variants of printf in the C standard library include:
fprintf outputs to a file instead of standard output.
sprintf writes to a string buffer instead of standard output.
snprintf provides a level of safety over sprintf, since the caller provides a length n that is the size of the output buffer in bytes (including space for the trailing nul).
asprintf provides safety by accepting a string handle (char **) argument. The function allocates a buffer of sufficient size to hold the formatted text and returns the buffer via the handle.
For each function of the family, including printf, there is also a variant that accepts a single va_list argument rather than a variable list of arguments. Typically, these variants start with "v", for example: vprintf, vfprintf, vsprintf.
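The snprintf safety contract can be sketched as follows (the helper name and message are illustrative): snprintf writes at most n bytes including the trailing nul, and returns the length the untruncated text would have had, so a caller can detect truncation by comparing the return value with the buffer size.

```c
#include <stdio.h>

/* Returns 1 if the formatted text fit in `buf`, 0 if it was
   truncated (or an encoding error occurred). */
int format_checked(char *buf, size_t n, int value)
{
    int needed = snprintf(buf, n, "value=%d", value);
    return needed >= 0 && (size_t)needed < n;
}
```

Even on truncation, snprintf leaves a nul-terminated prefix in the buffer, so the result is always safe to treat as a string.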
Generally, printf-like functions return the number of bytes output or -1 to indicate failure.[26]
The following list includes notable programming languages that provide (directly or via a standard library) functionality that is the same as or similar to the C printf-like functions. Excluded are languages that use format strings that deviate from the style in this article (such as AMPL and Elixir), languages that inherit their implementation from the JVM or another environment (such as Clojure and Scala), and languages that do not have a standard native printf implementation but have external libraries that emulate printf behavior (such as JavaScript).
|
https://en.wikipedia.org/wiki/Printf_format_string
|
An audit trail (also called audit log) is a security-relevant chronological record, set of records, and/or destination and source of records that provide documentary evidence of the sequence of activities that have affected at any time a specific operation, procedure, event, or device.[1][2] Audit records typically result from activities such as financial transactions,[3] scientific research and health care data transactions,[4] or communications by individual people, systems, accounts, or other entities.
The process that creates an audit trail is typically required to always run in a privileged mode, so it can access and supervise all actions from all users; a normal user should not be allowed to stop or change it. Furthermore, for the same reason, the trail file or database table containing the trail should not be accessible to normal users. Another way of handling this issue is through the use of a role-based security model in the software.[5] The software can operate with closed-loop controls, or as a 'closed system', as required by many companies when using audit trail functionality.
In telecommunications, the term means a record of both completed and attempted accesses and service, or data forming a logical path linking a sequence of events, used to trace the transactions that have affected the contents of a record.
In information or communications security, an information audit means a chronological record of system activities to enable the reconstruction and examination of the sequence of events and/or changes in an event. This covers information stored or transmitted in binary form that may be relied upon in court. An audit trail is a series of records of computer events about an operating system, an application, or user activities. Computer systems may have several audit trails, each devoted to a particular type of activity.[6][circular reference] Together with appropriate tools and procedures, audit trails can assist in detecting security violations, performance problems, and application flaws. Routine log reviews and analysis are valuable for identifying security incidents, policy violations, fraudulent activity, and operational problems shortly after they have occurred, and for providing information useful for resolving such problems.[7] Audit logs can likewise be valuable for performing forensic investigation, supporting internal investigations, establishing baselines, and identifying operational trends and long-term problems.
In nursing research, it refers to the act of maintaining a running log or journal of decisions relating to a research project, thus making clear the steps taken and changes made to the original protocol.
In accounting, it refers to documentation of detailed transactions supporting summary ledger entries. This documentation may be on paper or in electronic records.
In finance, it refers to an order (any firm indication of a willingness to buy or sell a security) tracking system, or consolidated audit trail, with respect to the trading of securities, that would capture order event information for orders in securities from the time of the receipt of an order, and further documenting the life of the order through the process of routing, modification, cancellation, and execution (in whole or in part) of the order.[8][9]
In online proofing, it pertains to the version history of a piece of artwork, design, photograph, video, or web design proof in a project.
In clinical research, server-based systems such as clinical trial management systems (CTMS) require audit trails. Anything regulatory or QA/QC-related also requires audit trails.
In pharmaceutical manufacturing, it is a Good Manufacturing Practice regulatory requirement that software generate audit trails, but not all software has audit trail functionality built in. The first 'generic' audit-trail-generating software came out in late 2021.[citation needed] The software, called Audit Trail Control, is capable of fulfilling regulatory requirements for any software used in pharmaceutical manufacturing.[citation needed]
In voting, a voter-verified paper audit trail is a method of providing feedback to voters using a ballotless voting system.
|
https://en.wikipedia.org/wiki/Audit_trail
|
For computer log management, the Common Log Format,[1] also known as the NCSA Common log format[2] (after NCSA HTTPd), is a standardized text file format used by web servers when generating server log files.[3] Because the format is standardized, the files can be readily analyzed by a variety of web analysis programs, for example Webalizer and Analog.
Each line in a file stored in the Common Log Format has the following syntax:[4][5]
The format is extended by the Combined Log Format with referer and user-agent fields.[6][5]
A field set to dash (-) indicates missing data.
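A Common Log Format entry typically looks like the following (a widely cited illustrative line; the request shown is an example, not from this source):

```
127.0.0.1 user-identifier frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326
```

The fields are, in order: the client host, the RFC 1413 identity of the client (often just "-"), the authenticated userid, the timestamp of the request, the request line in double quotes, the HTTP status code, and the size of the response in bytes.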
Log files are a standard tool for computer systems developers and administrators. They record the "what happened, when, by whom" of the system. This information can record faults and help their diagnosis. It can identify security breaches and other computer misuse. It can be used for auditing. It can be used for accounting purposes.[citation needed]
The information stored is only available for later analysis if it is stored in a form that can be analysed. This data can be structured in many ways for analysis. For example, storing it in a relational database would force the data into a query-able format. However, it would also make it more difficult to retrieve if the computer crashed, and logging would not be available unless the database was available. A plain text format minimises dependencies on other system processes, and assists logging at all phases of computer operation, including start-up and shut-down, where such processes might be unavailable.[citation needed]
|
https://en.wikipedia.org/wiki/Common_Log_Format
|
A terminal server connects devices with a serial port to a local area network (LAN). Products marketed as terminal servers can be very simple devices that do not offer any security functionality, such as data encryption and user authentication. The primary application scenario is to enable serial devices to access network server applications, or vice versa, where security of the data on the LAN is not generally an issue. There are also many terminal servers on the market that have highly advanced security functionality to ensure that only qualified personnel can access various servers and that any data that is transmitted across the LAN, or over the Internet, is encrypted. Usually, companies that need a terminal server with these advanced functions want to remotely control, monitor, diagnose and troubleshoot equipment over a telecommunications network.
Although primarily used as an Interface Message Processor starting in 1971, the Honeywell 316 could also be configured as a Terminal Interface Processor (TIP) and provide terminal server support for up to 63 ASCII serial terminals through a multi-line controller in place of one of the hosts.[1]
Historically, a terminal server was a device that attached to serial RS-232 devices, such as "green screen" text terminals or serial printers, and transported traffic via TCP/IP, Telnet, SSH, or other vendor-specific network protocols (e.g., LAT) over an Ethernet connection.
Digital Equipment Corporation's DECserver 100 (1985), 200 (1986) and 300 (1991) are early examples of this technology. (An earlier version of this product, known as the DECSA Terminal Server, was actually a test-bed or proof-of-concept for using the proprietary LAT protocol in commercial production networks.) With the introduction of inexpensive flash memory components, Digital's later DECserver 700 (1991) and 900 (1995) no longer shared with their earlier units the need to download their software from a "load host" (usually a Digital VAX or Alpha) using Digital's proprietary Maintenance Operations Protocol (MOP). In fact, these later terminal server products also included much larger flash memory and full support for the Telnet part of the TCP/IP protocol suite. Many other companies entered the terminal-server market with devices pre-loaded with software fully compatible with LAT and Telnet.
A "terminal server" is used many ways but from a basic sense if a user has a serial device and they need to move data over the LAN, this is the product they need.
A console server (console access server, console management server, serial concentrator, or serial console server) is a device or service that provides access to the system console of a computing device via networking technologies.
Most commonly, a console server provides a number of serial ports, which are then connected to the serial ports of other equipment, such as servers, routers, or switches. The consoles of the connected devices can then be accessed by connecting to the console server over a serial link such as a modem, or over a network with terminal emulator software such as telnet or ssh, maintaining survivable connectivity that allows remote users to log in to the various consoles without being physically nearby.
Dedicated console server appliances are available from a number of manufacturers in many configurations, with the number of serial ports ranging from one to 96. These console servers are primarily used for secure remote access to Unix servers, Linux servers, switches, routers, firewalls, and any other device on the network with a console port. The purpose is to allow network operations center (NOC) personnel to perform secure remote data center management and out-of-band management of IT assets from anywhere in the world. Products marketed as console servers usually have highly advanced security functionality to ensure that only qualified personnel can access various servers and that any data transmitted across the LAN, or over the Internet, is encrypted. Marketing a product as a console server is very application-specific because it really refers to what the user wants to do: remotely control, monitor, diagnose, and troubleshoot equipment over a network or the Internet.
Some users have created their own console servers using off-the-shelf commodity computer hardware, usually with multiport serial cards, typically running a slimmed-down Unix-like operating system such as Linux. Such "home-grown" console servers can be less expensive, especially if built from components that have been retired in upgrades, and allow greater flexibility by putting full control of the software driving the device in the hands of the administrator. This includes full access to and configurability of a wide array of security protocols and encryption standards, making it possible to create a console server that is more secure. However, this solution may have a higher TCO, less reliability, and higher rack-space requirements, since most industrial console servers have the physical dimension of one rack unit (1U), whereas a desktop computer with full-size PCI cards requires at least 3U, making the home-grown solution more costly in the case of a co-located infrastructure.
An alternative approach to a console server, used in some cluster setups, is to null-modem wire and daisy-chain consoles to otherwise unused serial ports on nodes with some other primary function.
|
https://en.wikipedia.org/wiki/Console_server
|
A data logger (also datalogger or data recorder) is an electronic device that records data over time or about location, either with a built-in instrument or sensor or via external instruments and sensors. Increasingly, but not entirely, they are based on a digital processor (or computer) and called digital data loggers (DDL). They generally are small, battery-powered, portable, and equipped with a microprocessor, internal memory for data storage, and sensors. Some data loggers interface with a personal computer and use software to activate the data logger and view and analyze the collected data, while others have a local interface device (keypad, LCD) and can be used as a stand-alone device.
Data loggers vary from general-purpose devices for various measurement applications to very specific devices for measuring in one environment or application type only. While it is common for general-purpose types to be programmable, many remain static machines with only a limited number of changeable parameters, or none. Electronic data loggers have replaced chart recorders in many applications.
One primary benefit of using data loggers is their ability to automatically collect data on a 24-hour basis. Upon activation, data loggers are typically deployed and left unattended to measure and record information for the duration of the monitoring period. This allows for a comprehensive, accurate picture of the environmental conditions being monitored, such as air temperature and relative humidity.
The cost of data loggers has been declining over the years as technology improves and costs are reduced. Simple single-channel data loggers can cost as little as $25, while more complicated loggers may cost hundreds or thousands of dollars.
Standardization of protocols and data formats has been a problem but is now growing in the industry, and XML, JSON, and YAML are increasingly being adopted for data exchange. The development of the Semantic Web and the Internet of Things is likely to accelerate this trend.
Several protocols have been standardized, including a smart protocol, SDI-12, that allows some instrumentation to be connected to a variety of data loggers. The use of this standard has not gained much acceptance outside the environmental industry. SDI-12 also supports multi-drop instruments. Some data logging companies support the MODBUS standard; this has traditionally been used in the industrial control area, and many industrial instruments support this communication standard. Another multi-drop protocol that is now starting to become more widely used is based upon CAN-Bus (ISO 11898). Some data loggers use a flexible scripting environment to adapt to various non-standard protocols.
The terms data logging and data acquisition are often used interchangeably. However, in a historical context, they are quite different. A data logger is a data acquisition system, but a data acquisition system is not necessarily a data logger.
Applications of data logging include:
Data loggers are changing more rapidly now than ever before. The original model of a stand-alone data logger has changed to one of a device that collects data but also has access to wireless communications for alarming of events, automatic reporting of data, and remote control. Data loggers are beginning to serve web pages for current readings, e-mail their alarms, and FTP their daily results into databases or directly to the users. Very recently, there has been a trend to move away from proprietary products with commercial software toward open-source software and hardware devices. The Raspberry Pi single-board computer is, among others, a popular platform hosting real-time Linux or preemptive-kernel Linux operating systems.
|
https://en.wikipedia.org/wiki/Data_logging
|
Log management is the process of generating, transmitting, storing, accessing, and disposing of log data. Log data (or logs) is composed of entries (records), and each entry contains information related to a specific event that occurs within an organization's computing assets, including physical and virtual platforms, networks, services, and cloud environments.[1]
The process of log management generally breaks down into:[2]
The primary drivers for log management implementations are concerns about security,[3] system and network operations (such as system or network administration), and regulatory compliance. Logs are generated by nearly every computing device and can often be directed to different locations, on either a local file system or a remote system.
Effectively analyzing large volumes of diverse logs can pose many challenges, such as:
Users and potential users of log management may purchase complete commercial tools, build their own log-management and intelligence tools by assembling the functionality from various open-source components, or acquire (sub-)systems from commercial vendors. Log management is a complicated process, and organizations often make mistakes while approaching it.[4]
Logging can produce technical information usable for the maintenance of applications or websites. It can serve:
Suggestions were made[by whom?] to change the definition of logging. This change would keep matters both purer and more easily maintainable:
One view[citation needed] of assessing the maturity of an organization in terms of the deployment of log-management tools might use[original research?] successive levels such as:
|
https://en.wikipedia.org/wiki/Log_management_and_intelligence
|
logparser is a flexible command line utility that was initially written by Gabriele Giuseppini,[1] a Microsoft employee, to automate tests for IIS logging. It was intended for use with the Windows operating system and was included with the IIS 6.0 Resource Kit Tools. The default behavior of logparser works like a "data processing pipeline", taking an SQL expression on the command line and outputting the lines containing matches for the SQL expression.
Microsoft describes Logparser as a powerful, versatile tool that provides universal query access to text-based data such as log files, XML files, and CSV files, as well as key data sources on the Windows operating system such as the Event Log, the Registry, the file system, and Active Directory. The results of the input query can be custom-formatted in text-based output, or they can be persisted to more specialty targets like SQL, SYSLOG, or a chart.
Common use:
Example: selecting date, time, and client username for requests to ASPX files, taken from all .log files in the current directory.
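Such an invocation might look like the following sketch (the field names date, time, cs-username, and cs-uri-stem are IIS W3C log fields, and the exact query text is illustrative, not taken from the source):

```
LogParser.exe "SELECT date, time, cs-username FROM *.log WHERE EXTRACT_EXTENSION(cs-uri-stem) LIKE '%aspx%'"
```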
The following links are only available through the Internet Archive:
|
https://en.wikipedia.org/wiki/Logparser
|
The Network Configuration Protocol (NETCONF) is a network management protocol developed and standardized by the IETF. It was developed in the NETCONF working group[1] and published in December 2006 as RFC 4741,[2] and later revised in June 2011 and published as RFC 6241.[3] The NETCONF protocol specification is an Internet Standards Track document.
NETCONF provides mechanisms to install, manipulate, and delete the configuration of network devices. Its operations are realized on top of a simple Remote Procedure Call (RPC) layer. The NETCONF protocol uses an Extensible Markup Language (XML) based data encoding for the configuration data as well as the protocol messages. The protocol messages are exchanged on top of a secure transport protocol.
The NETCONF protocol can be conceptually partitioned into four layers:
The NETCONF protocol has been implemented in network devices such as routers and switches by some major equipment vendors. One particular strength of NETCONF is its support for robust configuration change using transactions involving a number of devices.
The IETF developed the Simple Network Management Protocol (SNMP) in the late 1980s, and it proved to be a very popular network management protocol. In the early part of the 21st century it became apparent that, in spite of what was originally intended, SNMP was not being used to configure network equipment but was mainly being used for network monitoring. In June 2002, the Internet Architecture Board and key members of the IETF's network management community got together with network operators to discuss the situation. The results of this meeting are documented in RFC 3535. It turned out that each network operator was primarily using a different proprietary command-line interface (CLI) to configure their devices. This had a number of features that the operators liked, including the fact that it was text-based, as opposed to the BER-encoded SNMP. In addition, many equipment vendors did not provide the option to completely configure their devices via SNMP. As operators generally liked to write scripts to help manage their boxes, they found scripting the CLI lacking in a number of ways. Most notable was the unpredictable nature of the output: the content and formatting of output was prone to change in unpredictable ways.
Around this same time, Juniper Networks had been using an XML-based network management approach. This was brought to the IETF and shared with the broader community. Collectively, these two events led the IETF in May 2003 to the creation of the NETCONF working group. This working group was chartered to work on a network configuration protocol, which would better align with the needs of network operators and equipment vendors. The first version of the base NETCONF protocol was published as RFC 4741 in December 2006. Several extensions were published in subsequent years (notifications in RFC 5277 in July 2008, partial locks in RFC 5717 in December 2009, with-defaults in RFC 6243 in June 2011, system notifications in RFC 6470 in February 2012, access control in RFC 6536 in March 2012). A revised version of the base NETCONF protocol was published as RFC 6241 in June 2011.
The content of NETCONF operations is well-formed XML. Most content is related to network management. Subsequently, support for encoding in JavaScript Object Notation (JSON) was also added.
The NETMOD working group has completed work to define a "human-friendly" modeling language for defining the semantics of operational data, configuration data, notifications, and operations, called YANG. YANG is defined in RFC 6020 (version 1) and RFC 7950 (version 1.1), and is accompanied by the "Common YANG Data Types" found in RFC 6991.
During the summer of 2010, the NETMOD working group was re-chartered to work on core configuration models (system, interface, and routing) as well as work on compatibility with the SNMP modeling language.
The base protocol defines the following protocol operations:
Basic NETCONF functionality can be extended by the definition of NETCONF capabilities. The set of additional protocol features that an implementation supports is communicated between the server and the client during the capability exchange portion of session setup. Mandatory protocol features are not included in the capability exchange since they are assumed. RFC 4741 defines a number of optional capabilities, including :xpath and :validate. Note that RFC 6241 obsoletes RFC 4741.
A capability to support subscribing and receiving asynchronous event notifications is published in RFC 5277. This document defines the <create-subscription> operation, which enables creating real-time and replay subscriptions. Notifications are then sent asynchronously using the <notification> construct. It also defines the :interleave capability, which when supported with the basic :notification capability facilitates the processing of other NETCONF operations while the subscription is active.
A capability to support partial locking of the running configuration is defined in RFC 5717. This allows multiple sessions to edit non-overlapping sub-trees within the running configuration. Without this capability, the only lock available is one covering the entire configuration.
A capability to monitor the NETCONF protocol is defined in RFC 6022. This document contains a data model including information about NETCONF datastores, sessions, locks, and statistics that facilitates the management of a NETCONF server. It also defines methods for NETCONF clients to discover data models supported by a NETCONF server and defines the <get-schema> operation to retrieve them.
The NETCONF messages layer provides a simple, transport-independent framing mechanism for encoding RPCs and notifications.
Every NETCONF message is a well-formed XML document. An RPC result is linked to an RPC invocation by a message-id attribute. NETCONF messages can be pipelined, i.e., a client can invoke multiple RPCs without having to wait for RPC result messages first. RPC messages are defined in RFC 6241 and notification messages are defined in RFC 5277.
|
https://en.wikipedia.org/wiki/Netconf
|
NXLog[1] is a multi-platform log collection and centralization tool that offers log processing features, including log enrichment (parsing, filtering, and conversion) and log forwarding.[2] In concept NXLog is similar to syslog-ng or Rsyslog, but it is not limited to UNIX and syslog. It supports all major operating systems, such as Windows,[3] macOS,[4] and IBM AIX,[5] and is compatible with virtually any SIEM, log analytics suite, and many other platforms. NXLog can handle different log sources and formats,[6] so it can be used to implement a secure, centralized,[7] scalable logging system. NXLog Community Edition is proprietary and can be downloaded free of charge with no license costs or limitations.[8]
NXLog can be installed on many operating systems and can operate in a heterogeneous environment, collecting event logs from thousands of different sources in many formats. It can accept event logs from TCP, UDP,[9] files, databases, and various other sources in formats such as syslog and the Windows event log.[10] It supports SSL/TLS encryption to ensure data security in transit.
It can perform log rewriting, correlation, alerting, and pattern matching, can execute scheduled jobs, and can perform log rotation. It was designed to fully utilize modern multi-core CPU systems: its multi-threaded architecture enables input, log processing, and output tasks to be executed in parallel. Using its I/O layer, it is capable of handling thousands of simultaneous client connections and processing log volumes above 100,000 events per second (EPS).
NXLog does not drop any log messages unless instructed to. It can process input sources in a prioritized order, meaning that a higher-priority source will always be processed before others. This can further help avoid UDP message loss, for example. In case of network congestion or other log transmission problems, NXLog can buffer messages on disk or in memory. Using loadable modules, it supports different input sources and log formats, not limited to syslog: the Windows event log, audit logs, and custom binary application logs are also supported.
With NXLog it is possible to use custom loadable modules, similarly to the Apache web server. In addition to the online log processing mode, it can be used to process logs in batch mode, in an offline fashion. NXLog's configuration language, with an Apache-style configuration file syntax, enables it to rewrite logs, send alerts, or execute any external script based on the specified criteria.
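As an illustration of that configuration style, the following sketch collects BSD syslog over UDP and writes it to a file (im_udp, om_file, and xm_syslog are standard NXLog modules; the paths and instance names are assumptions):

```
# Load the syslog parsing extension
<Extension syslog>
    Module  xm_syslog
</Extension>

# Listen for BSD syslog datagrams on UDP port 514
<Input in_udp>
    Module  im_udp
    Host    0.0.0.0
    Port    514
    Exec    parse_syslog_bsd();
</Input>

# Write events to a local file
<Output out_file>
    Module  om_file
    File    "/var/log/collected.log"
</Output>

# Connect the input to the output
<Route r1>
    Path    in_udp => out_file
</Route>
```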
Back in 2009, the developer of NXLog was using a modified version of msyslog to suit his needs, but when he needed to implement a high-performance, scalable, centralized log management solution, no such modern logging solution was available. There were some alternatives to msyslog with attractive features (e.g. Rsyslog, syslog-ng, etc.), but none of them qualified: most were still single-threaded and syslog-oriented, lacked native support for MS Windows, and came with an ambiguous configuration syntax, ugly source code, and so on.
He decided to design and write NXLog from scratch instead of adapting an existing project. Thus, NXLog was born in 2009 as a closed-source product, heavily used in several production deployments. The source code of NXLog Community Edition was released in November 2011 and has been freely available since.
Most log processing solutions are built around the same concept: input is read from a source, the log messages are processed, and finally the output is written or sent to a destination, also called a sink.
When an event occurs in an application or a device, depending on its configuration, a log message is emitted. This is usually referred to as an "event log" or "log message". These log messages can have different formats and can be transmitted over different protocols depending on the actual implementation.
There is one thing common to all event log messages: all contain important data such as user names, IP addresses, application names, etc. An event can therefore be represented as a list of key-value pairs, each called a "field". The name of the field is the key and the field data is the value. In other terminology, this metadata is sometimes referred to as an event property or message tag.
For example, a syslog line such as "Nov 21 11:40:27 myhost sshd[26459]: Accepted publickey for joe" (an illustrative message) can be decomposed into fields: the event time (Nov 21 11:40:27), the host name (myhost), the source name (sshd), the process ID (26459), and the free-form message text.
NXLog will try to use the Common Event Expression standard for the field names once the standard is stable.
NXLog has a special field, $raw_event. This field is handled by the transport modules (UDP, TCP, file, etc.), which read input into it and write output from it. This field is also used later to parse the log message into further fields by various functions, procedures, and modules.
By utilizing loadable modules, the plugin architecture of NXLog allows it to read data from any kind of input, parse and convert the format of the messages, and then send them to any kind of output. Different input, processor, and output modules can be used at the same time to cover all the requirements of the logging environment.
The core of NXLog is responsible for parsing the configuration file, monitoring files and sockets, and managing internal events. It has an event-based architecture: all modules can dispatch events to the core, which takes care of each event and optionally passes it to a module for processing. NXLog is a multi-threaded application; the main thread is responsible for monitoring the files and sockets registered with the core by the different input and output modules. A dedicated thread handles internal events: it sleeps until the next event is due to be processed, then wakes up and dispatches the event to a worker thread. NXLog implements a worker thread-pool model; worker threads receive events which must be processed immediately. In this way the NXLog core can centrally control all events and the order of their execution, making prioritized processing possible. Modules which handle sockets or files are written to use non-blocking I/O in order to ensure that the worker threads never block. The files and sockets monitored by the main thread also dispatch events, which are then delegated to the workers. Events belonging to the same module are executed in sequential order, not concurrently; this preserves message order and prevents concurrency issues within modules. The modules (worker threads) nevertheless run concurrently, so the global log processing flow is greatly parallelized.
When an input module receives data, it creates an internal representation of the log message, essentially a structure containing the raw event data and any optional fields. This log message is then pushed onto the queue of the next module in the route, and an internal event is generated to signal the availability of the data. The next module after the input module in a route can be either a processor module or an output module. An input or output module can also process data itself, through built-in code or the NXLog language execution framework; the only difference is that processor modules run in another worker thread, parallelizing log processing even more. Considering that processor modules can also be chained, this can efficiently distribute work among multiple CPUs or CPU cores in the system.
NXLog Community Edition is licensed under the NXLOG PUBLIC LICENSE v1.0.[12]
|
https://en.wikipedia.org/wiki/NXLog
|
Rsyslog is an open-source software utility used on UNIX and Unix-like computer systems for forwarding log messages in an IP network. It implements the basic syslog protocol and extends it with content-based filtering, rich filtering capabilities, queued operations to handle offline outputs,[2] support for different module outputs,[3] and flexible configuration options, and adds features such as using TCP for transport.
The official RSYSLOG website defines the utility as "the rocket-fast system for log processing".[4]
Rsyslog uses the standard BSD syslog protocol, specified in RFC 3164. As the text of RFC 3164 is an informational description and not a standard, various incompatible extensions of it emerged. Rsyslog supports many of these extensions. The format of relayed messages can be customized.
The most important extensions of the original protocol supported by rsyslog include transmission over TCP and TLS, the RELP protocol for reliable delivery, and support for the newer RFC 5424 syslog format with high-precision timestamps.
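For illustration, a minimal rsyslog configuration that accepts UDP syslog and relays everything to a central host over TCP might look like this (the host name and file path are assumptions):

```
# Accept BSD syslog over UDP on port 514
module(load="imudp")
input(type="imudp" port="514")

# Keep a local copy of everything
*.*   /var/log/messages

# Relay all messages to a central server; @@ selects TCP, a single @ would be UDP
*.*   @@logs.example.com:514
```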
The rsyslog project began in 2004, when Rainer Gerhards, the primary author of rsyslog, decided to write a new, strong syslog daemon to compete with syslog-ng because, according to the author, "A new major player will prevent monocultures and provide a rich freedom of choice."[5] Rainer Gerhards worked on rsyslog inside his own company, Adiscon GmbH.
|
https://en.wikipedia.org/wiki/Rsyslog
|
Security event management (SEM), and the related SIM and SIEM, are computer security disciplines that use data inspection tools to centralize the storage and interpretation of logs or events generated by other software running on a network.[1][2][3]
The acronyms SEM, SIM, and SIEM have sometimes been used interchangeably,[3]:3[4] but generally refer to products with different primary focuses.
Many systems and applications which run on a computer network generate events which are kept in event logs. These logs are essentially lists of activities that occurred, with records of new events being appended to the end of the logs as they occur. Protocols, such as syslog and SNMP, can be used to transport these events, as they occur, to logging software that is not on the same host on which the events are generated. The better SEMs provide a flexible array of supported communication protocols to allow for the broadest range of event collection.
It is beneficial to send all events to a centralized SEM system for a number of reasons, including easier consolidated analysis and protecting logs from loss or tampering on the originating host.
Although centralised logging has existed for a long time, SEMs are a relatively new idea, pioneered in 1999 by a small company called E-Security,[8] and are still evolving rapidly. The key feature of a Security Event Management tool is the ability to analyse the collected logs to highlight events or behaviors of interest, for example an Administrator or Super User logon outside of normal business hours. This may include attaching contextual information, such as host information (value, owner, location, etc.), identity information (user info related to accounts referenced in the event, like first/last name, workforce ID, manager's name, etc.), and so forth. This contextual information can be leveraged to provide better correlation and reporting capabilities and is often referred to as metadata. Products may also integrate with external remediation, ticketing, and workflow tools to assist with the process of incident resolution. The better SEMs will provide a flexible, extensible set of integration capabilities to ensure that the SEM works with most customer environments.
SEMs are often sold to help satisfy U.S. regulatory requirements such as those of Sarbanes–Oxley, PCI DSS, and GLBA.[citation needed]
One of the major problems in the SEM space is the difficulty in consistently analyzing event data. Every vendor, and indeed in many cases different products from one vendor, uses a different proprietary event data format and delivery method. Even in cases where a "standard" is used for some part of the chain, like Syslog, the standards typically do not contain enough guidance to assist developers in how to generate events, administrators in how to gather them correctly and reliably, and consumers in how to analyze them effectively.
As an attempt to combat this problem, a couple of parallel standardization efforts are underway. First, The Open Group is updating their circa-1997 XDAS standard, which never made it past draft status. This new effort, dubbed XDAS v2, will attempt to formalize an event format, including which data should be included in events and how it should be expressed.[citation needed] The XDAS v2 standard will not include event delivery standards, but other standards in development by the Distributed Management Task Force may provide a wrapper.
In addition, MITRE developed an effort to unify event reporting with the Common Event Expression (CEE), which was somewhat broader in scope as it attempted to define an event structure as well as delivery methods. The project, however, ran out of funding in 2014.
|
https://en.wikipedia.org/wiki/Security_Event_Manager
|
In computing, logging is the act of keeping a log of events that occur in a computer system, such as problems, errors, or just information on current operations. These events may occur in the operating system or in other software. A message or log entry is recorded for each such event. These log messages can then be used to monitor and understand the operation of the system, to debug problems, or during an audit. Logging is particularly important in multi-user software, to provide a central overview of the operation of the system.
In the simplest case, messages are written to a file, called a log file.[1] Alternatively, the messages may be written to a dedicated logging system or to log management software, where they are stored in a database or on a different computer system.
Specifically, atransaction logis a log of the communications between a system and the users of that system,[2]or a data collection method that automatically captures the type, content, or time of transactions made by a person from a terminal with that system.[3]For Web searching, a transaction log is an electronic record of interactions that have occurred during a searching episode between a Web search engine and users searching for information on that Web search engine.
Many operating systems, software frameworks, and programs include a logging system. A widely used logging standard is Syslog, defined in IETF RFC 5424.[4] The Syslog standard enables a dedicated, standardized subsystem to generate, filter, record, and analyze log messages. This relieves software developers of having to design and code their own ad hoc logging systems.[5][6][7]
Event logs record events taking place in the execution of a system and can be used to understand the activity of the system and to diagnose problems. They are essential for understanding the behavior of a system, particularly in the case of applications with little user interaction.
It can also be useful to combine log file entries from multiple sources; such a combination may reveal correlations between related events occurring on different servers. Other solutions employ network-wide querying and reporting.[8][9]
Most database systems maintain some kind of transaction log, which is not mainly intended as an audit trail for later analysis and is not intended to be human-readable. These logs record changes to the stored data to allow the database to recover from crashes or other data errors and to maintain the stored data in a consistent state. Thus, database systems usually have both general event logs and transaction logs.[10][11][12][13]
The use of data stored in transaction logs of Web search engines, Intranets, and Web sites can provide valuable insight into understanding the information-searching process of online searchers.[14]This understanding can enlighten information system design, interface development, and devising the information architecture for content collections.
Internet Relay Chat (IRC), instant messaging (IM) programs, peer-to-peer file sharing clients with chat functions, and multiplayer games (especially MMORPGs) commonly have the ability to automatically save textual communication, both public (IRC channel/IM conference/MMO public/party chat messages) and private chat between users, as message logs.[15] Message logs are almost universally plain text files, but IM and VoIP clients (which support textual chat, e.g. Skype) might save them in HTML files or in a custom format to ease reading or enable encryption.
In the case of IRC software, message logs often include system/server messages and entries related to channel and user changes (e.g. topic changes, user joins/exits/kicks/bans, nickname changes, user status changes), making them more like a combined message/event log of the channel in question. Such a log is not comparable to a true IRC server event log, however, because it only records user-visible events for the time frame the user spent connected to a certain channel.
Instant messaging and VoIP clients often offer the chance to store encrypted logs to enhance the user's privacy. These logs require a password to be decrypted and viewed, and they are often handled by their respective writing application. Some privacy-focused messaging services, such as Signal, record minimal logs about users, limiting their information to connection times.[16]
A server log is a log file (or several files) automatically created and maintained by a server, consisting of a list of activities it performed.
A typical example is a web server log, which maintains a history of page requests. The W3C maintains a standard format (the Common Log Format) for web server log files, but other proprietary formats exist.[9] Some servers can log information in computer-readable formats (such as JSON) as opposed to the human-readable standard.[17] More recent entries are typically appended to the end of the file. Information about the request, including client IP address, request date/time, page requested, HTTP code, bytes served, user agent, and referrer, is typically added. This data can be combined into a single file or separated into distinct logs, such as an access log, error log, or referrer log. However, server logs typically do not collect user-specific information.
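A single request recorded in the Common Log Format looks like the following (the example line widely used in the format's documentation); the fields are the client IP address, RFC 1413 identity, authenticated user, timestamp, request line, HTTP status code, and bytes served:

```
127.0.0.1 user-identifier frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326
```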
These files are usually not accessible to general Internet users, only to the webmaster or other administrative person of an Internet service. A statistical analysis of the server log may be used to examine traffic patterns by time of day, day of week, referrer, or user agent. Efficient web site administration, adequate hosting resources, and the fine-tuning of sales efforts can be aided by analysis of the web server logs.
|
https://en.wikipedia.org/wiki/Server_log
|
syslog-ng is a free and open-source implementation of the syslog protocol for Unix and Unix-like systems. It extends the original syslogd model with content-based filtering, rich filtering capabilities, and flexible configuration options, and adds important features to syslog, like using TCP for transport. Syslog-ng is developed in the Budapest office of One Identity LLC. It has three editions with a common codebase. The first is called syslog-ng, also referred to as syslog-ng Open Source Edition (OSE), with the license LGPL + GPLv2. The second is called syslog-ng Premium Edition (PE) and has additional plugins (modules) under a proprietary license. The third is called syslog-ng Store Box (SSB), which comes as an appliance with a Web-based UI as well as additional features including ultra-fast text search, unified search, content-based alerting, and a premier tier of support.[2]
In January 2018, syslog-ng, as part of Balabit, was acquired by One Identity under the Quest Software umbrella. The syslog-ng team remains an independent business within the One Identity organization and continues under the syslog-ng brand.
In May 2024, the original author of syslog-ng, Balázs Scheidler, forked syslog-ng and launched AxoSyslog, a fully open-source, drop-in replacement that develops syslog-ng into a generic security data processor, integrating it with various cloud-native tools and services.
syslog-ng supports a wide variety of protocols to receive or send log data. While its origins are in syslog, today it supports modern, cloud native transports such as OpenTelemetry (OTLP), Google PubSub or Kafka. syslog-ng interoperates with a variety of devices, and is capable of consuming and transforming data between various sources and destinations.
syslog-ng supports a number of extensions to the original syslog protocol, including reliable transport over TCP and the newer RFC 5424 message format.
The syslog-ng project began in 1998, when Balázs Scheidler, the primary author of syslog-ng, ported the existing nsyslogd code to Linux. The 1.0.x branch of syslog-ng was still based on the nsyslogd sources, which are available in the syslog-ng source archive.[4]
Right after the release of syslog-ng 1.0.x, a reimplementation of the code base was started to address some of the shortcomings of syslog and the licensing concerns of Darren Reed, the original nsyslogd author. This reimplementation was declared stable in October 1999 with the release of version 1.2.0. This time around, syslog-ng depended on some code originally developed for lsh by Niels Möller.
Three major releases (1.2, 1.4 and 1.6) used this code base, with the last release of the 1.6.x branch appearing in February 2007. In this period of about eight years, syslog-ng became a popular alternative syslog implementation.
In a volunteer-based effort, yet another rewrite was started back in 2001, dropping the lsh code and using the more widely available GLib library. This rewrite of the codebase took its time: the first stable release, 2.0.0, happened in October 2006.
Development efforts were focused on improving the 2.0.x branch; support for 1.6.x was dropped at the end of 2007. Support for 2.x was dropped at the end of 2009, but it is still used in some Linux distributions.[5][6]Balabit, the company behind syslog-ng, started a parallel, commercial fork of syslog-ng, called syslog-ng Premium Edition. Portions of the commercial income are used to sponsor development of the free version.
Syslog-ng version 3.0 was released in the fourth quarter of 2008.
Starting with the 3.0 version, development efforts proceeded in parallel on the Premium and Open Source Editions. PE efforts focused on quality, transport reliability, performance, and encrypted log storage. The Open Source Edition efforts focused on improving the flexibility of the core infrastructure to allow more and more different, non-syslog message sources.
The syslog-ng 3.X series brought many major changes to syslog-ng without breaking backwards compatibility. Syslog-ng became modular and multi-threaded. Support for various document stores and message queuing systems was added. Many message types are now automatically parsed and turned into name-value-pairs. Extending syslog-ng using Java and Python became possible.
Version 4.0 of syslog-ng was released in December, 2022. The main version number change was necessary due to a major change in type support for name-value pairs, which was incompatible with the 3.X series. It allows more precise filtering and sending data with proper type information to databases and document stores.
While syslog-ng PE is based on the open-source edition, its version numbering is completely independent of it.
syslog-ng provides a number of features beyond transporting syslog messages and storing them in plain text log files, including content-based filtering, message parsing and rewriting, and a variety of non-file destinations.
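As an illustration, a minimal syslog-ng configuration wiring a network source through a filter to a file destination might look like this (instance names and the file path are assumptions):

```
@version: 4.0

# Receive syslog over UDP
source s_net {
    network(transport("udp") port(514));
};

# Keep only auth-related messages
filter f_auth {
    facility(auth, authpriv);
};

destination d_authlog {
    file("/var/log/remote-auth.log");
};

# A log path ties sources, filters and destinations together
log {
    source(s_net);
    filter(f_auth);
    destination(d_authlog);
};
```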
syslog-ng is available on a number of different Linux and Unix distributions. Some install it as the system default or provide it as a package that replaces the previous standard syslogd. Several Linux distributions that used syslog-ng have replaced it with rsyslog.[citation needed]
syslog-ng is highly portable to many Unix systems, old and new alike. The set of platforms known to work is based on BalaBit's first-hand experience; other platforms may also work, but your mileage may vary.
|
https://en.wikipedia.org/wiki/Syslog-ng
|
A web counter or hit counter is a publicly displayed running tally of the number of visits a webpage has received.
Web counters are usually displayed as an inline digital image or in plain text. Image rendering of digits may use a variety of fonts and styles, with a classic design imitating the wheels of an odometer. Web counters were often accompanied by the date they were set up or last reset, to give readers context for interpreting the number shown. Although initially a way to publicly showcase a site's popularity to its visitors, some early web counters were simply web bugs used by webmasters to track hits and included no visible on-page elements.
Counters were popular in the 1990s but were later replaced by other web traffic measures, first by self-hosted scripts like Analog, and later by remote systems that used JavaScript, like Google Analytics. These systems typically do not include on-page elements displaying the count. Thus, seeing a web counter on a modern web page is one example of retrocomputing on the Internet.
Owing to their ubiquity, hit counters were also a useful tool for collecting data on the global usage share of web browsers for a time.
In one SEO spamming technique, companies paid to have their site listed in the HTML code of a free hit counter. When a webmaster put it on their page, a small link appeared at the bottom, providing a way for sites to artificially accumulate inbound links. This was often done by sites in very competitive industries like online gambling. In 2008, Google removed a number of high-ranking mesothelioma sites that had been using counters from the top results.[1][failed verification]
|
https://en.wikipedia.org/wiki/Web_counter
|
Web log analysis software (also called a web log analyzer) is a kind of web analytics software that parses a server log file from a web server and, based on the values contained in the log file, derives indicators about when, how, and by whom a web server is visited. Reports are usually generated immediately, but data extracted from the log files can alternatively be stored in a database, allowing various reports to be generated on demand.
Features supported by log analysis packages may include "hit filters", which use pattern matching to examine selected log data.[citation needed]
|
https://en.wikipedia.org/wiki/Web_log_analysis_software
|
In class-based programming, downcasting, or type refinement, is the act of casting a base- or parent-class reference to a more restricted derived-class reference.[1] This is only allowable if the object is already an instance of the derived class, and so this conversion is inherently fallible.
In many environments, type introspection can be used to obtain the type of an object instance at runtime, and this result can then be used to explicitly evaluate its type compatibility with another type. Besides the two polymorphic types being equivalent (identical) or unrelated (incompatible), two additional cases are possible: the first type may be derived from the second, or the other way around (see: Subtyping § Subsumption).
With this information, a program can test, before performing an operation such as storing an object into a typed variable, whether that operation is type safe, or whether it would result in an error. If the type of the runtime instance is derived from (a child of) the type of the target variable (therefore, the parent), downcasting is possible.
Some languages, such as OCaml, disallow downcasting.[2]
Downcasting is useful when the type of the value referenced by the Parent variable is known and often is used when passing a value as a parameter. In the below example, the method objectToString takes an Object parameter which is assumed to be of type String.
In this approach, downcasting prevents the compiler from detecting a possible error and instead causes a run-time error.
Downcasting myObject to String ((String) myObject) cannot be checked at compile time, because the compiler cannot know whether the Object passed in is really a String; only at run time can it be determined whether the parameter passed in is valid. While we could also convert myObject to a compile-time String using the universal java.lang.Object.toString(), this would risk calling the default implementation of toString() where it was unhelpful or insecure, and exception handling could not prevent this.
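The method described above can be sketched as follows (a minimal sketch; the enclosing class name and the demonstration in main are illustrative):

```java
public class DowncastDemo {
    // Accepts any Object, but assumes the caller actually passed a String.
    public static String objectToString(Object myObject) {
        // The compiler cannot verify this downcast; it is checked at run time
        // and throws ClassCastException if myObject is not a String.
        return (String) myObject;
    }

    public static void main(String[] args) {
        System.out.println(objectToString("hello"));  // the Object really is a String
        try {
            objectToString(Integer.valueOf(42));      // not a String
        } catch (ClassCastException e) {
            System.out.println("run-time error: " + e);
        }
    }
}
```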
In C++, run-time type checking is implemented through dynamic_cast. Compile-time downcasting is implemented by static_cast, but this operation performs no type check; if used improperly, it could produce undefined behavior.
A popular example of a badly considered design is containers of top types,[citation needed] like the Java containers before Java generics were introduced, which require downcasting of the contained objects before they can be used again.
|
https://en.wikipedia.org/wiki/Downcasting
|
In computer programming, run-time type information or run-time type identification (RTTI)[1] is a feature of some programming languages (such as C++,[2] Object Pascal, and Ada[3]) that exposes information about an object's data type at runtime. Run-time type information may be available for all types or only for types that explicitly have it (as is the case with Ada). Run-time type information is a specialization of a more general concept called type introspection.
In the original C++ design, Bjarne Stroustrup did not include run-time type information, because he thought this mechanism was often misused.[4]
In C++, RTTI can be used to do safe typecasts using the dynamic_cast<> operator, and to manipulate type information at runtime using the typeid operator and std::type_info class. In Object Pascal, RTTI can be used to perform safe type casts with the as operator, test the class to which an object belongs with the is operator, and manipulate type information at run time with classes contained in the RTTI unit[5] (i.e. classes: TRttiContext, TRttiInstanceType, etc.). In Ada, objects of tagged types also store a type tag, which permits the identification of the type of these objects at runtime. The in operator can be used to test, at runtime, whether an object is of a specific type and may be safely converted to it.[6]
RTTI is available only for classes that are polymorphic, which means they have at least one virtual method. In practice, this is not a limitation, because base classes must have a virtual destructor to allow objects of derived classes to perform proper cleanup if they are deleted from a base pointer.
Some compilers have flags to disable RTTI. Using these flags may reduce the overall size of the application, making them especially useful when targeting systems with a limited amount of memory.[7]
The typeid reserved word (keyword) is used to determine the class of an object at runtime. It returns a reference to a std::type_info object, which exists until the end of the program.[8] The use of typeid, in a non-polymorphic context, is often preferred over dynamic_cast<class_type> in situations where just the class information is needed, because typeid is always a constant-time procedure, whereas dynamic_cast may need to traverse the class derivation lattice of its argument at runtime.[citation needed] Some aspects of the returned object are implementation-defined, such as std::type_info::name(), and cannot be relied on to be consistent across compilers.
Objects of class std::bad_typeid are thrown when the expression for typeid is the result of applying the unary * operator to a null pointer. Whether an exception is thrown for other null reference arguments is implementation-dependent. In other words, for the exception to be guaranteed, the expression must take the form typeid(*p), where p is any expression resulting in a null pointer.
Output (exact output varies by system and compiler):
The dynamic_cast operator in C++ is used for downcasting a reference or pointer to a more specific type in the class hierarchy. Unlike static_cast, the target of dynamic_cast must be a pointer or reference to class. Unlike static_cast and the C-style typecast (where the type check occurs at compile time), a type safety check is performed at runtime. If the types are not compatible, an exception will be thrown (when dealing with references) or a null pointer will be returned (when dealing with pointers).
A Java typecast behaves similarly; if the object being cast is not actually an instance of the target type, and cannot be converted to one by a language-defined method, an instance of java.lang.ClassCastException will be thrown.[9]
Suppose some function takes an object of type A as its argument, and wishes to perform some additional operation if the object passed is an instance of B, a subclass of A. This can be done using dynamic_cast as follows.
Console output:
A similar version of MyFunction can be written with pointers instead of references:
In Object Pascal and Delphi, the operator is is used to check the type of a class at run time. It tests whether an object belongs to a given class, including the individual ancestor classes present in the inheritance hierarchy tree (e.g. Button1 is a TButton class with ancestors TWinControl → TControl → TComponent → TPersistent → TObject, where the latter is the ancestor of all classes). The operator as is used when an object needs to be treated at run time as if it belonged to an ancestor class.
The RTTI unit is used to manipulate object type information at run time. This unit contains a set of classes that allow you to get information about an object's class and its ancestors, properties, methods, and events, change property values, and call methods. The following example shows the use of the RTTI unit to obtain information about the class to which an object belongs, to create an instance of it, and to call its method. The example assumes that the TSubject class has been declared in a unit named SubjectUnit.
https://en.wikipedia.org/wiki/Run-time_type_information#C++_–_dynamic_cast_and_Java_cast
In computer science, type punning is any programming technique that subverts or circumvents the type system of a programming language in order to achieve an effect that would be difficult or impossible to achieve within the bounds of the formal language.
In C and C++, constructs such as pointer type conversion and union – C++ adds reference type conversion and reinterpret_cast to this list – are provided in order to permit many kinds of type punning, although some kinds are not actually supported by the standard language.
In the Pascal programming language, records with variants may be used to treat a particular data type in more than one manner, or in a manner not normally permitted.
One classic example of type punning is found in the Berkeley sockets interface. The function to bind an opened but uninitialized socket to an IP address is declared as follows:
The bind function is usually called as follows:
The Berkeley sockets library fundamentally relies on the fact that in C, a pointer to struct sockaddr_in is freely convertible to a pointer to struct sockaddr, and, in addition, that the two structure types share the same memory layout. Therefore, a reference to the structure field my_addr->sin_family (where my_addr is of type struct sockaddr*) will actually refer to the field sa.sin_family (where sa is of type struct sockaddr_in). In other words, the sockets library uses type punning to implement a rudimentary form of polymorphism or inheritance.
Often seen in the programming world is the use of "padded" data structures to allow for the storage of different kinds of values in what is effectively the same storage space. This is often seen when two structures are used in mutual exclusivity for optimization.
Not all examples of type punning involve structures, as the previous example did. Suppose we want to determine whether a floating-point number is negative. We could write:
However, supposing that floating-point comparisons are expensive, and also supposing that float is represented according to the IEEE floating-point standard and integers are 32 bits wide, we could engage in type punning to extract the sign bit of the floating-point number using only integer operations:
Note that the behaviour will not be exactly the same: in the special case of x being negative zero, the first implementation yields false while the second yields true. Also, the first implementation will return false for any NaN value, but the latter might return true for NaN values with the sign bit set. Lastly, the floating-point data may be stored in big-endian or little-endian memory order, so the sign bit could sit in the least significant byte or the most significant byte. Therefore, type punning with floating-point data is a questionable method with unpredictable results.
This kind of type punning is more dangerous than most. Whereas the former example relied only on guarantees made by the C programming language about structure layout and pointer convertibility, the latter example relies on assumptions about a particular system's hardware. The C99 language specification (ISO 9899:1999) has the following warning in section 6.3.2.3, Pointers: "A pointer to an object or incomplete type may be converted to a pointer to a different object or incomplete type. If the resulting pointer is not correctly aligned for the pointed-to type, the behavior is undefined." Therefore, one should be very careful with the use of type punning.
Some situations, such as time-critical code that the compiler otherwise fails to optimize, may require dangerous code. In these cases, documenting all such assumptions in comments, and introducing static assertions to verify portability expectations, helps to keep the code maintainable.
Practical examples of floating-point punning include fast inverse square root popularized by Quake III, fast FP comparison as integers,[1] and finding neighboring values by incrementing as an integer (implementing nextafter).[2]
In addition to the assumption about the bit representation of floating-point numbers, the above floating-point type-punning example also violates the C language's constraints on how objects are accessed:[3] the declared type of x is float, but it is read through an expression of type unsigned int. On many common platforms, this use of pointer punning can create problems if different pointers are aligned in machine-specific ways. Furthermore, pointers of different sizes can alias accesses to the same memory, causing problems that are unchecked by the compiler. Even when data size and pointer representation match, however, compilers can rely on the non-aliasing constraints to perform optimizations that would be unsafe in the presence of disallowed aliasing.
A naive attempt at type punning can be achieved by using pointers. (The following running example assumes IEEE-754 bit representation for type float.)
The C standard's aliasing rules state that an object shall have its stored value accessed only by an lvalue expression of a compatible type.[4] The types float and int32_t are not compatible, therefore this code's behavior is undefined. Although on GCC and LLVM this particular program compiles and runs as expected, more complicated examples may interact with assumptions made by strict aliasing and lead to unwanted behavior. The option -fno-strict-aliasing will ensure correct behavior of code using this form of type punning, although using other forms of type punning is recommended.[5]
In C, but not in C++, it is sometimes possible to perform type punning via a union.
Accessing my_union.i after most recently writing to the other member, my_union.d, is an allowed form of type punning in C,[6] provided that the member read is not larger than the one whose value was set (otherwise the read has unspecified behavior[7]). The same is syntactically valid but has undefined behavior in C++,[8] where only the last-written member of a union is considered to have any value at all.
For another example of type punning, see Stride of an array.
In C++20, the std::bit_cast function allows type punning with no undefined behavior. It can also be used in functions marked constexpr.
A variant record permits treating a data type as multiple kinds of data, depending on which variant is being referenced. In the following example, integer is presumed to be 16 bits, while longint and real are presumed to be 32 bits, and char is presumed to be 8 bits:
In Pascal, copying a real to an integer normally converts it to the truncated value. This method instead translates the binary representation of the floating-point number directly into a long integer (32 bits), which will not be the same value and may be incompatible with the long-integer value on some systems.
These examples could be used to create strange conversions, although, in some cases, there may be legitimate uses for these types of constructs, such as determining the locations of particular pieces of data. In the following example, a pointer and a longint are both presumed to be 32 bits:
Where "new" is the standard routine in Pascal for allocating memory for a pointer, and "hex" is presumably a routine to print the hexadecimal string describing the value of an integer. This would allow the display of the address of a pointer, something which is not normally permitted. (Pointers cannot be read or written, only assigned.) Assigning a value to an integer variant of a pointer would allow examining or writing to any location in system memory:
This construct may cause a program check or protection violation if address 0 is protected against reading on the machine the program is running upon or the operating system it is running under.
The reinterpret-cast technique from C/C++ also works in Pascal. It can be useful when, e.g., reading dwords from a byte stream that we want to treat as floats. Here is a working example, in which we reinterpret-cast a dword to a float:
In C# (and other .NET languages), type punning is a little harder to achieve because of the type system, but it can be done nonetheless, using pointers or struct unions.
C# only allows pointers to so-called native types, i.e. any primitive type (except string), enum, array, or struct that is composed only of other native types. Note that pointers are only allowed in code blocks marked 'unsafe'.
Struct unions are allowed without any notion of 'unsafe' code, but they do require the definition of a new type.
Raw CIL can be used instead of C#, because it does not have most of the type limitations. This allows one to, for example, combine two enum values of a generic type:
This can be circumvented by the following CIL code:
The cpblk CIL opcode allows for some other tricks, such as converting a struct to a byte array:
https://en.wikipedia.org/wiki/Type_punning
In computer science, reification is the process by which an abstract idea about a program is turned into an explicit data model or other object created in a programming language. A computable/addressable object – a resource – is created in a system as a proxy for a non-computable/addressable object. By means of reification, something that was previously implicit, unexpressed, and possibly inexpressible is explicitly formulated and made available to conceptual (logical or computational) manipulation. Informally, reification is often referred to as "making something a first-class citizen" within the scope of a particular system. Some aspect of a system can be reified at language design time, which is related to reflection in programming languages. It can be applied as a stepwise refinement at system design time. Reification is one of the most frequently used techniques of conceptual analysis and knowledge representation.
In the context of programming languages, reification is the process by which a user program, or any aspect of a programming language that was implicit in the translated program and the run-time system, is expressed in the language itself. This process makes it available to the program, which can inspect all these aspects as ordinary data. In reflective languages, reification data is causally connected to the related reified aspect, such that a modification to one of them affects the other. Therefore, the reification data is always a faithful representation of the related reified aspect.[clarification needed] Reification data is often said to be made a first-class object.[citation needed] Reification, at least partially, has been experienced in many languages to date: in early Lisp dialects and in current Prolog dialects, programs have been treated as data, although the causal connection has often been left to the responsibility of the programmer. In Smalltalk-80, the compiler from the source text to bytecode has been part of the run-time system since the very first implementations of the language.[1]
Data reification (stepwise refinement) involves finding a more concrete representation of the abstract data types used in a formal specification.
Data reification is the terminology of the Vienna Development Method (VDM); most other people would call it data refinement. An example is taking a step towards an implementation by replacing a data representation without a counterpart in the intended implementation language, such as sets, by one that does have a counterpart (such as maps with fixed domains, which can be implemented by arrays), or at least one that is closer to having a counterpart, such as sequences. The VDM community prefers the word "reification" over "refinement", as the process has more to do with concretising an idea than with refining it.[4]
For similar usages, see Reification (linguistics).
Reification is widely used in conceptual modeling.[5] Reifying a relationship means viewing it as an entity. The purpose of reifying a relationship is to make it explicit, when additional information needs to be added to it. Consider the relationship type IsMemberOf(member:Person, Committee). An instance of IsMemberOf is a relationship that represents the fact that a person is a member of a committee. The figure below shows an example population of the IsMemberOf relationship in tabular form. Person P1 is a member of committees C1 and C2. Person P2 is a member of committee C1 only.
The same fact, however, could also be viewed as an entity. Viewing a relationship as an entity, one can say that the entity reifies the relationship. This is called reification of a relationship. Like any other entity, it must be an instance of an entity type. In the present example, the entity type has been named Membership. For each instance of IsMemberOf, there is one and only one instance of Membership, and vice versa. Now it becomes possible to add more information to the original relationship. As an example, we can express the fact that "person p1 was nominated to be the member of committee c1 by person p2". The reified relationship Membership can be used as the source of a new relationship IsNominatedBy(Membership, Person).
For related usages, see Reification (knowledge representation).
UML provides an association class construct for defining reified relationship types. The association class is a single model element that is both a kind of association[6] and a kind of class.[7]
The association and the entity type that reifies it are the same model element. Note that attributes cannot be reified.
In Semantic Web languages, such as the Resource Description Framework (RDF) and the Web Ontology Language (OWL), a statement is a binary relation. It is used to link two individuals, or an individual and a value. Applications sometimes need to describe other RDF statements, for instance, to record information like when statements were made, or who made them, which is sometimes called "provenance" information. As an example, we may want to represent properties of a relation, such as our certainty about it, the severity or strength of a relation, the relevance of a relation, and so on.
The example from the conceptual modeling section describes a particular person with URIref person:p1, who is a member of the committee:c1. The RDF triple from that description is
Consider storing two further facts: (i) recording who nominated this particular person to this committee (a statement about the membership itself), and (ii) recording who added the fact to the database (a statement about the statement).
The first case is a case of classical reification, as above in UML: reify the membership and store its attributes, roles, etc.:
Additionally, RDF provides a built-in vocabulary intended for describing RDF statements. A description of a statement using this vocabulary is called a reification of the statement. The RDF reification vocabulary consists of the type rdf:Statement and the properties rdf:subject, rdf:predicate, and rdf:object.[8]
Using the reification vocabulary, a reification of the statement about the person's membership would be given by assigning the statement a URIref such as committee:membership12345, so that describing statements can be written as follows:
These statements say that the resource identified by the URIref committee:membership12345Stat is an RDF statement, that the subject of the statement refers to the resource identified by person:p1, the predicate of the statement refers to the resource identified by committee:isMemberOf, and the object of the statement refers to the resource committee:c1. Assuming that the original statement is actually identified by committee:membership12345, it should be clear by comparing the original statement with the reification that the reification actually does describe it. The conventional use of the RDF reification vocabulary always involves describing a statement using four statements in this pattern. Therefore, they are sometimes referred to as the "reification quad".[8]
Using reification according to this convention, we could record the fact that person:p3 added the statement to the database by
It is important to note that in the conventional use of reification, the subject of the reification triples is assumed to identify a particular instance of a triple in a particular RDF document, rather than some arbitrary triple having the same subject, predicate, and object. This particular convention is used because reification is intended for expressing properties such as dates of composition and source information, as in the examples given already, and these properties need to be applied to specific instances of triples.
Note that the described triple (subject predicate object) itself is not implied by such a reification quad (and it is not necessary that it actually exists in the database). This also allows the mechanism to be used to express which triples do not hold.
The power of the reification vocabulary in RDF is restricted by the lack of a built-in means for assigning URIrefs to statements, so in order to express "provenance" information of this kind in RDF, one has to use some mechanism (outside of RDF) to assign URIs to individual RDF statements, then make further statements about those individual statements, using their URIs to identify them.[8]
In an XML Topic Map (XTM), only a topic can have a name or play a role in an association. One may use an association to make an assertion about a topic, but one cannot directly make assertions about that assertion. However, it is possible to create a topic that reifies a non-topic construct in a map, thus enabling the association to be named and treated as a topic itself.[9]
In Semantic Web languages, such as RDF and OWL, a property is a binary relation used to link two individuals, or an individual and a value. However, in some cases, the natural and convenient way to represent certain concepts is to use relations to link an individual to more than just one individual or value. These relations are called n-ary relations. Examples are representing relations among multiple individuals, such as a committee, a person who is a committee member and another person who has nominated the first person to become the committee member; or a buyer, a seller, and an object that was bought when describing the purchase of a book.
A more general approach to reification is to create an explicit new class and n new properties to represent an n-ary relation, making an instance of the relation linking the n individuals an instance of this class. This approach can also be used to represent provenance information and other properties for an individual relation instance.[10]
It is also important to note that the reification described here is not the same as "quotation" found in other languages. Instead, the reification describes the relationship between a particular instance of a triple and the resources the triple refers to. The reification can be read intuitively as saying "this RDF triple talks about these things", rather than (as in quotation) "this RDF triple has this form." For instance, in the reification example used in this section, the triple:
describing the rdf:subject of the original statement says that the subject of the statement is the resource (the person) identified by the URIref person:p1. It does not state that the subject of the statement is the URIref itself (i.e., a string beginning with certain characters), as quotation would.
https://en.wikipedia.org/wiki/Reification_(computer_science)
sizeof is a unary operator in the C and C++ programming languages that evaluates to the storage size of an expression or a data type, measured in units sized as char. Consequently, the expression sizeof(char) evaluates to 1. The number of bits in type char is specified by the preprocessor macro CHAR_BIT, defined in the standard include file limits.h. On most modern computing platforms this is eight bits. The result of sizeof is an unsigned integer that is usually typed as size_t.
The operator accepts a single operand, which is either a data type expressed as a cast – the name of a data type enclosed in parentheses – or a non-type expression, for which parentheses are not required.
Many programs must know the storage size of a particular datatype. Though for any given implementation of C or C++ the size of a particular datatype is constant, the sizes of even primitive types in C and C++ may be defined differently for different platforms of implementation. For example, runtime allocation of array space may use the following code, in which the sizeof operator is applied to the cast of the type int:
In this example, the malloc function allocates memory and returns a pointer to the memory block. The size of the block allocated is equal to the number of bytes for a single object of type int multiplied by 10, providing space for ten integers.
It is generally not safe to assume the size of any datatype. For example, even though most implementations of C and C++ on 32-bit systems define type int to be four octets, this size may change when code is ported to a different system, breaking the code. The exception to this is the data type char, which always has size 1 in any standards-compliant C implementation. In addition, it is frequently difficult to predict the sizes of compound datatypes such as a struct or union, due to padding. The use of sizeof enhances readability, since it avoids unnamed numeric constants (magic numbers).
An equivalent syntax for allocating the same array space results from using the dereferenced form of the pointer to the storage address, this time applying the operator to a pointer variable:
The operator sizeof produces the required memory storage space of its operand when the code is compiled. The operand is written following the keyword sizeof and may be the symbol of a storage space, e.g., a variable, an expression, or a type name. Parentheses for the operand are optional, except when specifying a type name. The result of the operator is the size of the operand in bytes, or the size of the memory storage requirement. For expressions, it evaluates to the representation size for the type that would result from evaluation of the expression, which is not performed.
For example, since sizeof(char) is defined to be 1[1] and assuming the integer type is four bytes long, the following code fragment prints 1,4:
Certain standard header files, such as stddef.h, define size_t to denote the unsigned integral type of the result of a sizeof expression. The printf width specifier z is intended to format that type.
sizeof cannot be used in C preprocessor expressions, such as #if, because it is an element of the programming language, not of the preprocessor syntax, which has no data types.
The following example in C++ uses the operator sizeof with variadic templates.
sizeof can be used with variadic templates in C++11 and above on a parameter pack to determine the number of arguments.
When sizeof is applied to the name of an array, the result is the number of bytes required to store the entire array. This is one of the few exceptions to the rule that the name of an array is converted to a pointer to the first element of the array, and is possible only because the actual array size is fixed and known at compile time, when the sizeof operator is evaluated. The following program uses sizeof to determine the size of a declared array, avoiding a buffer overflow when copying characters:
Here, sizeof buffer is equivalent to 10 * sizeof buffer[0], which evaluates to 10, because the size of the type char is defined as 1.
C99 adds support for flexible array members to structures. This form of array declaration is allowed as the last element in structures only, and differs from normal arrays in that no length is specified to the compiler. For a structure named s containing a flexible array member named a, sizeof s is therefore equivalent to offsetof(s, a):
In this case the sizeof operator returns the size of the structure, including any padding, but without any storage allowed for the array. Most platforms produce the following output:
C99 also allows variable-length arrays that have the length specified at runtime,[2] although the feature is considered an optional implementation in later versions of the C standard. In such cases, the sizeof operator is evaluated in part at runtime to determine the storage occupied by the array.
sizeof can be used to determine the number of elements in an array, by dividing the size of the entire array by the size of a single element. This should be used with caution: when an array is passed to another function, it "decays" to a pointer type, at which point sizeof will return the size of the pointer, not the total size of the array. As an example with a proper array:
sizeof can only be applied to "completely" defined types. With arrays, this means that the dimensions of the array must be present in its declaration, and that the type of the elements must be completely defined. For structs and unions, this means that there must be a member list of completely defined types. For example, consider the following two source files:
Both files are perfectly legal C, and code in file1.c can apply sizeof to arr and struct x. However, it is illegal for code in file2.c to do this, because the definitions in file2.c are not complete. In the case of arr, the code does not specify the dimension of the array; without this information, the compiler has no way of knowing how many elements are in the array, and cannot calculate the array's overall size. Likewise, the compiler cannot calculate the size of struct x because it does not know what members it is made up of, and therefore cannot calculate the sum of the sizes of the structure's members (and padding). If the programmer provided the size of the array in its declaration in file2.c, or completed the definition of struct x by supplying a member list, this would allow the application of sizeof to arr or struct x in that source file.
C++11 introduced the possibility of applying the sizeof operator to specific members of a class without the necessity of instantiating an object to achieve this.[3] The following example, for instance, yields 4 and 8 on most platforms.
C++11 introduced variadic templates; the keyword sizeof followed by ellipsis returns the number of elements in a parameter pack.
When applied to a fixed-length datatype or variable, expressions with the operator sizeof are evaluated during program compilation; they are replaced by constant result values. The C99 standard introduced variable-length arrays (VLAs), which require evaluation of such expressions during program execution. In many cases, the implementation specifics may be documented in an application binary interface (ABI) document for the platform, specifying formats, padding, and alignment for the data types, to which the compiler must conform.
When calculating the size of any object type, the compiler must take into account any required data structure alignment to meet efficiency or architectural constraints. Many computer architectures do not support multiple-byte access starting at any byte address that is not a multiple of the word size, and even when the architecture allows it, usually the processor can fetch a word-aligned object faster than it can fetch an object that straddles multiple words in memory.[4] Therefore, compilers usually align data structures to at least a word boundary, and also align individual members to their respective boundaries. In the following example, the structure student is likely to be aligned on a word boundary, which is also where the member grade begins, and the member age is likely to start at the next word address. The compiler accomplishes the latter by inserting padding bytes between members as needed to satisfy the alignment requirements. There may also be padding at the end of a structure to ensure proper alignment in case the structure is used as an element of an array.
Thus, the aggregate size of a structure in C can be greater than the sum of the sizes of its individual members. For example, on many systems the following code prints 8:
https://en.wikipedia.org/wiki/Sizeof
In the C++ programming language, decltype is a keyword used to query the type of an expression. Introduced in C++11, its primary intended use is in generic programming, where it is often difficult, or even impossible, to express types that depend on template parameters.
As generic programming techniques became increasingly popular throughout the 1990s, the need for a type-deduction mechanism was recognized. Many compiler vendors implemented their own versions of the operator, typically called typeof, and some portable implementations with limited functionality, based on existing language features, were developed. In 2002, Bjarne Stroustrup proposed that a standardized version of the operator be added to the C++ language, and suggested the name "decltype", to reflect that the operator would yield the "declared type" of an expression.
decltype's semantics were designed to cater to both generic library writers and novice programmers. In general, the deduced type matches the type of the object or function exactly as declared in the source code. Like the sizeof[1] operator, decltype's operand is not evaluated.
With the introduction of templates into the C++ programming language, and the advent of generic programming techniques pioneered by the Standard Template Library, the need for a mechanism for obtaining the type of an expression, commonly referred to as typeof, was recognized. In generic programming, it is often difficult or impossible to express types that depend on template parameters,[2][3] in particular the return type of function template instantiations.[2]
Many vendors provide the typeof operator as a compiler extension.[4] As early as 1997, before C++ was fully standardized, Brian Parker proposed a portable solution based on the sizeof operator.[4] His work was expanded on by Bill Gibbons, who concluded that the technique had several limitations and was generally less powerful than an actual typeof mechanism.[4] In an October 2000 article of Dr. Dobb's Journal, Andrei Alexandrescu remarked that "having a typeof would make much template code easier to write and understand."[5] He also noted that "typeof and sizeof share the same backend, because sizeof has to compute the type anyway."[5] Andrew Koenig and Barbara E. Moo also recognized the usefulness of a built-in typeof facility, with the caveat that "using it often invites subtle programming errors, and there are some problems that it cannot solve."[6] They characterized the use of type conventions, like the typedefs provided by the Standard Template Library, as a more powerful and general technique.[6] However, Steve Dewhurst argued that such conventions are "costly to design and promulgate", and that it would be "much easier to ... simply extract the type of the expression."[7] In a 2011 article on C++0x, Koenig and Moo predicted that "decltype will be widely used to make everyday programs easier to write."[8]
In 2002,Bjarne Stroustrupsuggested extending the C++ language with mechanisms for querying the type of an expression, and initializing objects without specifying the type.[2]Stroustrup observed that the reference-dropping semantics offered by thetypeofoperator provided by theGCCandEDGcompilers could be problematic.[2]Conversely, an operator returning a reference type based on thelvalue-ness of the expression was deemed too confusing. The initial proposal to the C++ standards committee outlined a combination of the two variants; the operator would return a reference type only if the declared type of the expression included a reference. To emphasize that the deduced type would reflect the "declared type" of the expression, the operator was proposed to be nameddecltype.[2]
One of the cited main motivations for thedecltypeproposal was the ability to write perfectforwarding functiontemplates.[9]It is sometimes desirable to write a generic forwarding function that returns the same type as the wrapped function, regardless of the type it is instantiated with. Withoutdecltype, it is not generally possible to accomplish this.[9]An example, which also utilizes thetrailing-return-type:[9]
decltype is essential here because it preserves the information about whether the wrapped function returns a reference type.[10]
Similarly to the sizeof operator, the operand of decltype is unevaluated, so an expression like decltype(i++) will not result in an increment of the variable i.[11] Informally, the type returned by decltype(e) is deduced as follows:[2]
These semantics were designed to fulfill the needs of generic library writers, while at the same time being intuitive for novice programmers, because the return type of decltype always matches the type of the object or function exactly as declared in the source code.[2] More formally, Rule 1 applies to unparenthesized id-expressions and class member access expressions.[12][13] For example:[12] note that for a function bar() declared to return const int, the type deduced for decltype(bar()) is plain int, not const int, because prvalues of non-class types always have cv-unqualified types, despite the statically declared type.
The reason for the difference between the latter two invocations of decltype is that the parenthesized expression (a->x) is neither an id-expression nor a member access expression, and therefore does not denote a named object.[14] Because the expression is an lvalue, its deduced type is "reference to the type of the expression", or const double&.[11] The fact that extra parentheses introduce a reference qualifier to the type can be a source of errors for programmers who do not fully understand decltype.[15]
In December 2008, a concern was raised to the committee by Jaakko Järvi over the inability to use decltype to form a qualified-id,[1] which is inconsistent with the intent that decltype(e) should be treated "as if it were a typedef-name".[16] While commenting on the formal Committee Draft for C++0x, the Japanese ISO member body noted that "a scope operator (::) cannot be applied to decltype, but it should be. It would be useful in the case to obtain member type (nested-type) from an instance" as follows:[17]
This, and similar issues pertaining to the wording inhibiting the use of decltype in the declaration of a derived class and in a destructor call, were addressed by David Vandevoorde, and voted into the working paper in March 2010.[18][19]
decltype has been included in the C++ language standard since C++11.[12] It is provided by a number of compilers as an extension. Microsoft's Visual C++ 2010 and later compilers provide a decltype type specifier that closely mimics the semantics described in the standards committee proposal. It can be used with both managed and native code.[10] The documentation states that it is "useful primarily to developers who write template libraries."[10] decltype was added to the mainline of the GCC C++ compiler in version 4.3,[20] released on March 5, 2008.[21] decltype is also present in Codegear's C++ Builder 2009,[22] the Intel C++ Compiler,[23] and Clang.[24]
|
https://en.wikipedia.org/wiki/Decltype
|
Programming languages and computing platforms that typically support reflective programming (reflection) include dynamically typed languages such as Smalltalk, Perl, PHP, Python, VBScript, and JavaScript. The .NET languages and the Maude system of rewriting logic also support reflection. More rarely, some non-dynamic or unmanaged languages support it as well; notable examples are Delphi, eC, and Objective-C.
|
https://en.wikipedia.org/wiki/List_of_reflective_programming_languages_and_platforms
|
In computer programming, a mirror is a reflection mechanism that is completely decoupled from the object whose structure is being introspected. This is as opposed to traditional reflection, for example in Java, where one introspects an object using methods from the object itself (e.g. getClass()).
Mirrors adhere to the qualities of encapsulation, stratification, and ontological correspondence.[1]
Decoupling the reflection mechanism from the objects themselves allows for a few benefits:
|
https://en.wikipedia.org/wiki/Mirror_(programming)
|
A programming paradigm is a relatively high-level way to conceptualize and structure the implementation of a computer program. A programming language can be classified as supporting one or more paradigms.[1]
Paradigms are separated along and described by different dimensions of programming. Some paradigms are about implications of the execution model, such as allowing side effects, or whether the sequence of operations is defined by the execution model. Other paradigms are about the way code is organized, such as grouping into units that include both state and behavior. Yet others are about syntax and grammar.
Some common programming paradigms include (shown in hierarchical relationship):[2][3][4]
Programming paradigms come from computer science research into existing practices of software development. The findings allow for describing and comparing programming practices and the languages used to code programs. For perspective, other fields of research study software engineering processes and propose various methodologies to describe and compare them.
A programming language can be described in terms of paradigms. Some languages support only one paradigm. For example, Smalltalk supports object-oriented programming and Haskell supports functional programming. Most languages support multiple paradigms. For example, a program written in C++, Object Pascal, or PHP can be purely procedural, purely object-oriented, or can contain aspects of both paradigms, or others.
When using a language that supports multiple paradigms, the developer chooses which paradigm elements to use. But this choice may not involve considering paradigms per se. The developer often uses the features of a language as the language provides them and to the extent that the developer knows them. Categorizing the resulting code by paradigm is often an academic activity done in retrospect.
Languages categorized in the imperative paradigm have two main features: they state the order in which operations occur, with constructs that explicitly control that order, and they allow side effects, in which state can be modified at one point in time, within one unit of code, and then later read at a different point in time inside a different unit of code. The communication between the units of code is not explicit.
In contrast, languages in the declarative paradigm do not state the order in which to execute operations. Instead, they supply a number of available operations in the system, along with the conditions under which each is allowed to execute.[7] The implementation of the language's execution model tracks which operations are free to execute and chooses the order independently. More at Comparison of multi-paradigm programming languages.
In object-oriented programming, code is organized into objects that contain state that is owned by and (usually) controlled by the code of the object. Most object-oriented languages are also imperative languages.
In object-oriented programming, programs are treated as a set of interacting objects. In functional programming, programs are treated as a sequence of stateless function evaluations. When programming computers or systems with many processors, in process-oriented programming, programs are treated as sets of concurrent processes that act on logical shared data structures.
Many programming paradigms are as well known for the techniques they forbid as for those they support. For instance, pure functional programming disallows side effects, while structured programming disallows the goto construct. Partly for this reason, new paradigms are often regarded as doctrinaire or overly rigid by those accustomed to older ones.[8] Yet, avoiding certain techniques can make it easier to understand program behavior, and to prove theorems about program correctness.
Programming paradigms can also be compared with programming models, which allow invoking an execution model by using only an API. Programming models can also be classified into paradigms based on features of the execution model.
For parallel computing, using a programming model instead of a language is common. The reason is that details of the parallel hardware leak into the abstractions used to program the hardware. This causes the programmer to have to map patterns in the algorithm onto patterns in the execution model (which have been inserted due to leakage of hardware into the abstraction). As a consequence, no one parallel programming language maps well to all computation problems. Thus, it is more convenient to use a base sequential language and insert API calls to parallel execution models via a programming model. Such parallel programming models can be classified according to abstractions that reflect the hardware, such as shared memory, distributed memory with message passing, notions of place visible in the code, and so forth. These can be considered flavors of programming paradigm that apply to only parallel languages and programming models.
Some programming language researchers criticise the notion of paradigms as a classification of programming languages, e.g. Harper[9] and Krishnamurthi.[10] They argue that many programming languages cannot be strictly classified into one paradigm, but rather include features from several paradigms. See Comparison of multi-paradigm programming languages.
Different approaches to programming have developed over time. Each approach was either described at the time it was first developed or, often, not until some time later, retrospectively. An early approach consciously identified as such is structured programming, advocated since the mid 1960s. The concept of a programming paradigm as such dates at least to 1978, in the Turing Award lecture of Robert W. Floyd, entitled The Paradigms of Programming, which cites the notion of paradigm as used by Thomas Kuhn in his The Structure of Scientific Revolutions (1962).[11] Early programming languages did not have clearly defined programming paradigms, and programs sometimes made extensive use of goto statements; liberal use of these led to spaghetti code, which is difficult to understand and maintain. This led to the development of structured programming paradigms that disallowed the use of goto statements, allowing only more structured programming constructs.[12]
Machine code is the lowest level of computer programming: its machine instructions define behavior at the lowest level of abstraction possible for a computer. As the most prescriptive way to code, it is classified as imperative.
It is sometimes called the first-generation programming language.
Assembly language introduced mnemonics for machine instructions and memory addresses. Assembly is classified as imperative and is sometimes called the second-generation programming language.
In the 1960s, assembly languages were developed to support library COPY and quite sophisticated conditional macro generation and preprocessing abilities, CALL to subroutine, external variables and common sections (globals), enabling significant code re-use and isolation from hardware specifics via the use of logical operators such as READ/WRITE/GET/PUT. Assembly was, and still is, used for time-critical systems and often in embedded systems as it gives the most control of what the machine does.
Procedural languages, also called the third-generation programming languages, are the first described as high-level languages. They support vocabulary related to the problem being solved. For example,
These languages are classified as procedural paradigm. They directly control the step-by-step process that a computer program follows. The efficacy and efficiency of such a program is therefore highly dependent on the programmer's skill.
In an attempt to improve on procedural languages, object-oriented programming (OOP) languages were created, such as Simula, Smalltalk, C++, Eiffel, Python, PHP, Java, and C#. In these languages, data and the methods to manipulate the data are in the same code unit called an object. This encapsulation ensures that the only way that an object can access data is via methods of the object that contains the data. Thus, an object's inner workings may be changed without affecting code that uses the object.
There is controversy raised by Alexander Stepanov, Richard Stallman,[13] and other programmers, concerning the efficacy of the OOP paradigm versus the procedural paradigm. The need for every object to have associative methods leads some skeptics to associate OOP with software bloat; an attempt to resolve this dilemma came through polymorphism.
Although most OOP languages are third-generation, it is possible to create an object-oriented assembler language. High Level Assembly (HLA) is an example of this that fully supports advanced data types and object-oriented assembly language programming – despite its early origins. Thus, differing programming paradigms can be seen rather like motivational memes of their advocates, rather than necessarily representing progress from one level to the next.[citation needed] Precise comparisons of competing paradigms' efficacy are frequently made more difficult because of new and differing terminology applied to similar entities and processes together with numerous implementation distinctions across languages.
A declarative program describes what the problem is, not how to solve it. The program is structured as a set of properties to find in the expected result, not as a procedure to follow. Given a database or a set of rules, the computer tries to find a solution matching all the desired properties. Archetypes of declarative languages are the fourth-generation language SQL, the family of functional languages, and logic programming.
Functional programming is a subset of declarative programming. Programs written using this paradigm use functions, blocks of code intended to behave like mathematical functions. Functional languages discourage changes in the value of variables through assignment, making a great deal of use of recursion instead.
The logic programming paradigm views computation as automated reasoning over a body of knowledge. Facts about the problem domain are expressed as logic formulas, and programs are executed by applying inference rules over them until an answer to the problem is found, or the set of formulas is proved inconsistent.
Symbolic programming is a paradigm that describes programs able to manipulate formulas and program components as data.[4] Programs can thus effectively modify themselves, and appear to "learn", making them suited for applications such as artificial intelligence, expert systems, natural-language processing, and computer games. Languages that support this paradigm include Lisp and Prolog.[14]
Differentiable programming structures programs so that they can be differentiated throughout, usually via automatic differentiation.[15][16]
Literate programming, as a form of imperative programming, structures programs as a human-centered web, as in a hypertext essay: documentation is integral to the program, and the program is structured following the logic of prose exposition, rather than compiler convenience.
Symbolic programming techniques such as reflective programming (reflection), which allow a program to refer to itself, might also be considered as a programming paradigm. However, this is compatible with the major paradigms and thus is not a real paradigm in its own right.
|
https://en.wikipedia.org/wiki/Programming_paradigm
|
Template metaprogramming (TMP) is a metaprogramming technique in which templates are used by a compiler to generate temporary source code, which is merged by the compiler with the rest of the source code and then compiled. The output of these templates can include compile-time constants, data structures, and complete functions. The use of templates can be thought of as compile-time polymorphism. The technique is used by a number of languages, the best-known being C++, but also Curl, D, Nim, and XL.
Template metaprogramming was, in a sense, discovered accidentally.[1][2]
Some other languages support similar, if not more powerful, compile-time facilities (such as Lisp macros), but those are outside the scope of this article.
The use of templates as a metaprogramming technique requires two distinct operations: a template must be defined, and a defined template must be instantiated. The generic form of the generated source code is described in the template definition, and when the template is instantiated, the generic form in the template is used to generate a specific set of source code.
Template metaprogramming is Turing-complete, meaning that any computation expressible by a computer program can be computed, in some form, by a template metaprogram.[3]
Templates are different from macros. A macro is a piece of code that executes at compile time and either performs textual manipulation of code to be compiled (e.g. C++ macros) or manipulates the abstract syntax tree being produced by the compiler (e.g. Rust or Lisp macros). Textual macros are notably more independent of the syntax of the language being manipulated, as they merely change the in-memory text of the source code right before compilation.
Template metaprograms have no mutable variables; that is, no variable can change value once it has been initialized, so template metaprogramming can be seen as a form of functional programming. In fact, many template implementations implement flow control only through recursion, as seen in the example below.
Though the syntax of template metaprogramming is usually very different from the programming language it is used with, it has practical uses. Some common reasons to use templates are to implement generic programming (avoiding sections of code which are similar except for some minor variations) or to perform automatic compile-time optimization such as doing something once at compile time rather than every time the program is run — for instance, by having the compiler unroll loops to eliminate jumps and loop count decrements whenever the program is executed.
What exactly "programming at compile-time" means can be illustrated with an example of a factorial function, which in non-template C++ can be written using recursion as follows:
The code above will execute at run time to determine the factorial value of the literals 0 and 4.
By using template metaprogramming and template specialization to provide the ending condition for the recursion, the factorials used in the program—ignoring any factorial not used—can be calculated at compile time by this code:
The code above calculates the factorial value of the literals 0 and 4 at compile time and uses the results as if they were precalculated constants.
To be able to use templates in this manner, the compiler must know the value of its parameters at compile time, which has the natural precondition that factorial<X>::value can only be used if X is known at compile time. In other words, X must be a constant literal or a constant expression.
InC++11andC++20,constexprand consteval were introduced to let the compiler execute code. Using constexpr and consteval, one can use the usual recursive factorial definition with the non-templated syntax.[4]
The factorial example above is one example of compile-time code optimization in that all factorials used by the program are pre-compiled and injected as numeric constants at compilation, saving both run-time overhead and memory footprint. It is, however, a relatively minor optimization.
As another, more significant, example of compile-time loop unrolling, template metaprogramming can be used to create length-n vector classes (where n is known at compile time). The benefit over a more traditional length-n vector is that the loops can be unrolled, resulting in very optimized code. As an example, consider the addition operator. A length-n vector addition might be written as
When the compiler instantiates the function template defined above, the following code may be produced:[citation needed]
The compiler's optimizer should be able to unroll the for loop because the template parameter length is a constant at compile time.
Some caution is warranted, however, as this may cause code bloat: separate unrolled code is generated for each N (vector size) the template is instantiated with.
Polymorphism is a common standard programming facility where derived objects can be used as instances of their base object but where the derived objects' methods will be invoked, as in this code
where all invocations of virtual methods will be those of the most-derived class. This dynamically polymorphic behaviour is (typically) obtained by the creation of virtual look-up tables for classes with virtual methods, tables that are traversed at run time to identify the method to be invoked. Thus, run-time polymorphism necessarily entails execution overhead (though on modern architectures the overhead is small).
However, in many cases the polymorphic behaviour needed is invariant and can be determined at compile time. Then the Curiously Recurring Template Pattern (CRTP) can be used to achieve static polymorphism, which is an imitation of polymorphism in programming code but which is resolved at compile time and thus does away with run-time virtual-table lookups. For example:
Here the base class template will take advantage of the fact that member function bodies are not instantiated until after their declarations, and it will use members of the derived class within its own member functions, via the use of a static_cast, thus at compilation generating an object composition with polymorphic characteristics. As an example of real-world usage, the CRTP is used in the Boost iterator library.[5]
Another similar use is the "Barton–Nackman trick", sometimes referred to as "restricted template expansion", where common functionality can be placed in a base class that is used not as a contract but as a necessary component to enforce conformant behaviour while minimising code redundancy.
The benefit of static tables is the replacement of "expensive" calculations with a simple array indexing operation (for examples, see lookup table). In C++, there exists more than one way to generate a static table at compile time. The following listing shows an example of creating a very simple table by using recursive structs and variadic templates.
The table has a size of ten. Each value is the square of the index.
The idea behind this is that the struct Helper recursively inherits from a struct with one more template argument (in this example calculated as INDEX * INDEX) until the specialization of the template ends the recursion at a size of 10 elements. The specialization simply uses the variable argument list as elements for the array.
The compiler will produce code similar to the following (taken from clang called with -Xclang -ast-print -fsyntax-only).
Since C++17 this can be more readably written as:
To show a more sophisticated example the code in the following listing has been extended to have a helper for value calculation (in preparation for more complicated computations), a table specific offset and a template argument for the type of the table values (e.g. uint8_t, uint16_t, ...).
Which could be written as follows using C++17:
The C++20 standard brought C++ programmers a new tool for template metaprogramming: concepts.[6]
Concepts allow programmers to specify requirements on a type that must be satisfied to make instantiation of a template possible. When several constrained templates are viable, the compiler selects the one whose concept imposes the strictest requirements.
Here is an example of the famous Fizz buzz problem solved with template metaprogramming.
Compile-time versus execution-time tradeoffs become visible if a great deal of template metaprogramming is used.
|
https://en.wikipedia.org/wiki/Template_metaprogramming
|
Monomorphization is a compile-time process where polymorphic functions are replaced by many monomorphic functions for each unique instantiation.[1] This transformation is considered beneficial because it results in an output intermediate representation (IR) with specific types, which allows for more effective optimization. Additionally, many IRs are intended to be low-level and do not accommodate polymorphism. The resulting code is generally faster than dynamic dispatch, but may require more compilation time and storage space due to duplicating the function body.[2][3][4][5][6][7]
This is an example of the use of a generic identity function in Rust:
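The listing did not survive extraction; a minimal generic identity function of the kind described is:

```rust
// One polymorphic definition in the source code.
fn id<T>(x: T) -> T {
    x
}

fn main() {
    let i = id(10);           // instantiated with T = i32
    let s = id("some text");  // instantiated with T = &str
    println!("{}, {}", i, s);
}
```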
After monomorphization, this would become equivalent to
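The monomorphized listing did not survive extraction either; conceptually, the compiler emits one concrete copy per instantiation, roughly equivalent to the following (the suffixed names are illustrative; real compilers mangle these symbols internally):

```rust
// One monomorphic function per concrete type the generic was used with.
fn id_i32(x: i32) -> i32 {
    x
}

fn id_str(x: &str) -> &str {
    x
}

fn main() {
    let i = id_i32(10);
    let s = id_str("some text");
    println!("{}, {}", i, s);
}
```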
|
https://en.wikipedia.org/wiki/Monomorphization
|
Generic programming is a style of computer programming in which algorithms are written in terms of data types to-be-specified-later that are then instantiated when needed for specific types provided as parameters. This approach, pioneered in the programming language ML in 1973,[1][2] permits writing common functions or data types that differ only in the set of types on which they operate when used, thus reducing duplicate code.
Generic programming was introduced to the mainstream with Ada in 1977. With templates in C++, generic programming became part of the repertoire of professional library design. The techniques were further improved and parameterized types were introduced in the influential 1994 book Design Patterns.[3]
New techniques were introduced by Andrei Alexandrescu in his 2001 book Modern C++ Design: Generic Programming and Design Patterns Applied. Subsequently, D implemented the same ideas.
Such software entities are known as generics in Ada, C#, Delphi, Eiffel, F#, Java, Nim, Python, Go, Rust, Swift, TypeScript, and Visual Basic (.NET). They are known as parametric polymorphism in ML, Scala, Julia, and Haskell. (Haskell terminology also uses the term generic for a related but somewhat different concept.)
The term generic programming was originally coined by David Musser and Alexander Stepanov[4] in a more specific sense than the above, to describe a programming paradigm in which fundamental requirements on data types are abstracted from across concrete examples of algorithms and data structures and formalized as concepts, with generic functions implemented in terms of these concepts, typically using language genericity mechanisms as described above.
Generic programming is defined in Musser & Stepanov (1989) as follows:
Generic programming centers around the idea of abstracting from concrete, efficient algorithms to obtain generic algorithms that can be combined with different data representations to produce a wide variety of useful software.
The "generic programming" paradigm is an approach to software decomposition whereby fundamental requirements on types are abstracted from across concrete examples of algorithms and data structures and formalized as concepts, analogously to the abstraction of algebraic theories in abstract algebra.[6] Early examples of this programming approach were implemented in Scheme and Ada,[7] although the best known example is the Standard Template Library (STL),[8][9] which developed a theory of iterators that is used to decouple sequence data structures and the algorithms operating on them.
For example, given N sequence data structures, e.g. singly linked list, vector etc., and M algorithms to operate on them, e.g. find, sort etc., a direct approach would implement each algorithm specifically for each data structure, giving N × M combinations to implement. However, in the generic programming approach, each data structure returns a model of an iterator concept (a simple value type that can be dereferenced to retrieve the current value, or changed to point to another value in the sequence) and each algorithm is instead written generically with arguments of such iterators, e.g. a pair of iterators pointing to the beginning and end of the subsequence or range to process. Thus, only N + M data structure–algorithm combinations need be implemented. Several iterator concepts are specified in the STL, each a refinement of more restrictive concepts, e.g. forward iterators only provide movement to the next value in a sequence (e.g. suitable for a singly linked list or a stream of input data), whereas a random-access iterator also provides direct constant-time access to any element of the sequence (e.g. suitable for a vector). An important point is that a data structure will return a model of the most general concept that can be implemented efficiently; computational complexity requirements are explicitly part of the concept definition. This limits the data structures a given algorithm can be applied to, and such complexity requirements are a major determinant of data structure choice. Generic programming similarly has been applied in other domains, e.g. graph algorithms.[10]
Although this approach often uses language features of compile-time genericity and templates, it is independent of particular language-technical details. Generic programming pioneer Alexander Stepanov wrote,
Generic programming is about abstracting and classifying algorithms and data structures. It gets its inspiration from Knuth and not from type theory. Its goal is the incremental construction of systematic catalogs of useful, efficient and abstract algorithms and data structures. Such an undertaking is still a dream.
I believe that iterator theories are as central to Computer Science as theories of rings or Banach spaces are central to Mathematics.
Bjarne Stroustrup noted,
Following Stepanov, we can define generic programming without mentioning language features: Lift algorithms and data structures from concrete examples to their most general and abstract form.
Other programming paradigms that have been described as generic programming includeDatatype generic programmingas described in "Generic Programming – an Introduction".[14]TheScrap yourboilerplateapproach is a lightweight generic programming approach for Haskell.[15]
In this article we distinguish the high-levelprogramming paradigmsofgeneric programming, above, from the lower-level programming languagegenericity mechanismsused to implement them (seeProgramming language support for genericity). For further discussion and comparison of generic programming paradigms, see.[16]
Genericity facilities have existed in high-level languages since at least the 1970s in languages such asML,CLUandAda, and were subsequently adopted by manyobject-basedandobject-orientedlanguages, includingBETA,C++,D,Eiffel,Java, andDEC's now defunctTrellis-Owl.
Genericity is implemented and supported differently in various programming languages; the term "generic" has also been used differently in various programming contexts. For example, inForththecompilercan execute code while compiling and one can create newcompiler keywordsand new implementations for those words on the fly. It has fewwordsthat expose the compiler behaviour and therefore naturally offersgenericitycapacities that, however, are not referred to as such in most Forth texts. Similarly, dynamically typed languages, especially interpreted ones, usually offergenericityby default as both passing values to functions and value assignment are type-indifferent and such behavior is often used for abstraction or code terseness, however this is not typically labeledgenericityas it's a direct consequence of the dynamic typing system employed by the language.[citation needed]The term has been used infunctional programming, specifically inHaskell-like languages, which use astructural type systemwhere types are always parametric and the actual code on those types is generic. These uses still serve a similar purpose of code-saving and rendering an abstraction.
Arrays and structs can be viewed as predefined generic types. Every usage of an array or struct type instantiates a new concrete type, or reuses a previously instantiated type. Array element types and struct element types are parameterized types, which are used to instantiate the corresponding generic type. All this is usually built into the compiler, and the syntax differs from other generic constructs. Some extensible programming languages try to unify built-in and user-defined generic types.
A broad survey of genericity mechanisms in programming languages follows. For a specific survey comparing the suitability of mechanisms for generic programming, see [17].
When creating container classes in statically typed languages, it is inconvenient to write specific implementations for each datatype contained, especially if the code for each datatype is virtually identical. For example, in C++, this duplication of code can be circumvented by defining a class template:
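A minimal sketch of such a class template (the name `List` and its interface are illustrative, not a real library's API; a production container would manage memory itself rather than wrap `std::vector`):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative class template: one definition serves every element type T.
template <typename T>
class List {
public:
    void push_back(const T& value) { items_.push_back(value); }
    const T& at(std::size_t index) const { return items_.at(index); }
    std::size_t size() const { return items_.size(); }

private:
    std::vector<T> items_;  // the element type is fixed at instantiation
};
```

`List<int>` and `List<std::string>` are then two distinct concrete classes generated from the same source.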
Above, T is a placeholder for whatever type is specified when the list is created. These "containers-of-type-T", commonly called templates, allow a class to be reused with different datatypes as long as certain contracts such as subtypes and signatures are kept. This genericity mechanism should not be confused with inclusion polymorphism, which is the algorithmic usage of exchangeable sub-classes: for instance, a list of objects of type Moving_Object containing objects of type Animal and Car. Templates can also be used for type-independent functions, as in the Swap example below:
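A sketch of such a Swap function template (the standard library already provides `std::swap`; this hand-written version only illustrates a type-independent function):

```cpp
#include <cassert>
#include <utility>

// Works for any movable/copyable type T; T is deduced at the call site.
template <typename T>
void Swap(T& a, T& b) {
    T tmp = std::move(a);
    a = std::move(b);
    b = std::move(tmp);
}
```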
The C++ template construct used above is widely cited[citation needed] as the genericity construct that popularized the notion among programmers and language designers, and it supports many generic programming idioms. The D language also offers fully generic-capable templates based on the C++ precedent but with a simplified syntax. The Java language has provided genericity facilities syntactically based on C++'s since the introduction of Java Platform, Standard Edition (J2SE) 5.0.
C# 2.0, Oxygene 1.5 (formerly Chrome) and Visual Basic (.NET) 2005 have constructs that exploit the support for generics present in the Microsoft .NET Framework since version 2.0.
Ada has had generics since it was first designed in 1977–1980. The standard library uses generics to provide many services. Ada 2005 adds a comprehensive generic container library to the standard library, which was inspired by C++'s Standard Template Library.
A generic unit is a package or a subprogram that takes one or more generic formal parameters.[18]
A generic formal parameter is a value, a variable, a constant, a type, a subprogram, or even an instance of another, designated, generic unit. For generic formal types, the syntax distinguishes between discrete, floating-point, fixed-point, access (pointer) types, etc. Some formal parameters can have default values.
To instantiate a generic unit, the programmer passes actual parameters for each formal. The generic instance then behaves just like any other unit. It is possible to instantiate generic units at run-time, for example inside a loop.
The specification of a generic package:
Instantiating the generic package:
Using an instance of a generic package:
The language syntax allows precise specification of constraints on generic formal parameters. For example, it is possible to specify that a generic formal type will only accept a modular type as the actual. It is also possible to express constraints between generic formal parameters; for example:
In this example, Array_Type is constrained by both Index_Type and Element_Type. When instantiating the unit, the programmer must pass an actual array type that satisfies these constraints.
The disadvantage of this fine-grained control is a complicated syntax, but, because all generic formal parameters are completely defined in the specification, the compiler can instantiate generics without looking at the body of the generic.
Unlike C++, Ada does not allow specialised generic instances, and requires that all generics be instantiated explicitly. These rules have several consequences:
C++ uses templates to enable generic programming techniques. The C++ Standard Library includes the Standard Template Library (STL), which provides a framework of templates for common data structures and algorithms. Templates in C++ may also be used for template metaprogramming, which is a way of pre-evaluating some of the code at compile time rather than run time. Using template specialization, C++ templates are Turing complete.
There are many kinds of templates, the most common being function templates and class templates. A function template is a pattern for creating ordinary functions based upon the parameterizing types supplied when instantiated. For example, the C++ Standard Template Library contains the function template max(x, y) that creates functions that return either x or y, whichever is larger. max() could be defined like this:
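One possible definition, close in spirit to the standard `std::max` (shown here outside namespace `std` for illustration):

```cpp
#include <cassert>

// Returns the larger of x and y; requires only that T defines operator<.
template <typename T>
const T& max(const T& x, const T& y) {
    return x < y ? y : x;
}
```

Calls such as `max(3, 7)` or `max(2.5, 0.5)` then instantiate `max<int>` and `max<double>` respectively.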
Specializations of this function template, instantiations with specific types, can be called just like an ordinary function:
The compiler examines the arguments used to call max and determines that this is a call to max(int, int). It then instantiates a version of the function where the parameterizing type T is int, making the equivalent of the following function:
This works whether the arguments x and y are integers, strings, or any other type for which the expression x < y is sensible, or more specifically, for any type for which operator< is defined. Common inheritance is not needed for the set of types that can be used, and so it is very similar to duck typing. A program defining a custom data type can use operator overloading to define the meaning of < for that type, thus allowing its use with the max() function template. While this may seem a minor benefit in this isolated example, in the context of a comprehensive library like the STL it allows the programmer to get extensive functionality for a new data type, just by defining a few operators for it. Merely defining < allows a type to be used with the standard sort(), stable_sort(), and binary_search() algorithms or to be put inside data structures such as sets, heaps, and associative arrays.
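For instance, a user-defined type (`Money` here is a made-up example) becomes usable with `std::max`, `std::sort`, and ordered containers just by defining `operator<`:

```cpp
#include <algorithm>
#include <cassert>

struct Money {
    long cents;
};

// Defining operator< is the only requirement imposed by max-like templates.
bool operator<(const Money& a, const Money& b) {
    return a.cents < b.cents;
}
```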
C++ templates are completely type safe at compile time. As a demonstration, the standard type complex does not define the < operator, because there is no strict order on complex numbers. Therefore, max(x, y) will fail with a compile error if x and y are complex values. Likewise, other templates that rely on < cannot be applied to complex data unless a comparison (in the form of a functor or function) is provided. E.g.: A complex cannot be used as a key for a map unless a comparison is provided. Unfortunately, compilers historically generate somewhat esoteric, long, and unhelpful error messages for this sort of error. Ensuring that a certain object adheres to a method protocol can alleviate this issue. Languages which use compare instead of < can also use complex values as keys.
Another kind of template, a class template, extends the same concept to classes. A class template specialization is a class. Class templates are often used to make generic containers. For example, the STL has a linked list container. To make a linked list of integers, one writes list<int>. A list of strings is denoted list<string>. A list has a set of standard functions associated with it, that work for any compatible parameterizing types.
A powerful feature of C++'s templates is template specialization. This allows alternative implementations to be provided based on certain characteristics of the parameterized type that is being instantiated. Template specialization has two purposes: to allow certain forms of optimization, and to reduce code bloat.
For example, consider a sort() template function. One of the primary activities that such a function does is to swap or exchange the values in two of the container's positions. If the values are large (in terms of the number of bytes it takes to store each of them), then it is often quicker to first build a separate list of pointers to the objects, sort those pointers, and then build the final sorted sequence. If the values are quite small, however, it is usually fastest to just swap the values in-place as needed. Furthermore, if the parameterized type is already of some pointer type, then there is no need to build a separate pointer array. Template specialization allows the template creator to write different implementations and to specify the characteristics that the parameterized type(s) must have for each implementation to be used.
Unlike function templates, class templates can be partially specialized. That means that an alternate version of the class template code can be provided when some of the template parameters are known, while leaving other template parameters generic. This can be used, for example, to create a default implementation (the primary specialization) that assumes that copying a parameterizing type is expensive, and then create partial specializations for types that are cheap to copy, thus increasing overall efficiency. Clients of such a class template just use specializations of it without needing to know whether the compiler used the primary specialization or some partial specialization in each case. Class templates can also be fully specialized, which means that an alternate implementation can be provided when all of the parameterizing types are known.
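A minimal illustration of partial specialization, equivalent in effect to the standard `std::is_pointer` trait:

```cpp
#include <cassert>

// Primary template: matches any type T.
template <typename T>
struct IsPointer {
    static const bool value = false;
};

// Partial specialization: chosen whenever the argument has the form T*.
template <typename T>
struct IsPointer<T*> {
    static const bool value = true;
};
```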
Some uses of templates, such as the max() function, were formerly filled by function-like preprocessor macros (a legacy of the C language). For example, here is a possible implementation of such a macro:
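One possible implementation (the parentheses guard against operator-precedence surprises, but the arguments are still evaluated twice, unlike with a template):

```cpp
#include <cassert>

// Function-like macro: pure textual substitution, no type checking.
#define MAX(a, b) ((a) < (b) ? (b) : (a))
```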
Macros are expanded (copy-pasted) by the preprocessor before compilation proper; templates are real functions. Macros are always expanded inline; templates can also be inline functions when the compiler deems it appropriate.
However, templates are generally considered an improvement over macros for these purposes. Templates are type-safe. Templates avoid some of the common errors found in code that makes heavy use of function-like macros, such as evaluating parameters with side effects twice. Perhaps most importantly, templates were designed to be applicable to much larger problems than macros.
There are four primary drawbacks to the use of templates: supported features, compiler support, poor error messages (usually with pre-C++20 substitution failure is not an error (SFINAE)), and code bloat:
Derivation can be used to reduce the problem of code replicated because templates are used: a template is derived from an ordinary (non-template) class that holds the type-independent parts of the implementation. This technique proved successful in curbing code bloat in real use. People who do not use a technique like this have found that replicated code can cost megabytes of code space even in moderate-size programs.
The extra instantiations generated by templates can also cause some debuggers to have difficulty working gracefully with templates. For example, setting a debug breakpoint within a template from a source file may either miss setting the breakpoint in the actual instantiation desired or may set a breakpoint in every place the template is instantiated.
Also, the implementation source code for the template must be completely available (e.g. included in a header) to the translation unit (source file) using it. Templates, including much of the Standard Library, cannot be compiled if they are not included in header files. (This is in contrast to non-templated code, which may be compiled to binary, providing only a declarations header file for code using it.) This may be a disadvantage by exposing the implementing code, which removes some abstractions, and could restrict its use in closed-source projects.[citation needed]
The D language supports templates whose design is based on C++'s. Most C++ template idioms work in D without alteration, but D adds some functionality:
Templates in D use a different syntax than in C++: whereas in C++ template parameters are wrapped in angle brackets (Template<param1, param2>), D uses an exclamation sign and parentheses: Template!(param1, param2). This avoids the C++ parsing difficulties due to ambiguity with comparison operators. If there is only one parameter, the parentheses can be omitted.
Conventionally, D combines the above features to provide compile-time polymorphism using trait-based generic programming.
For example, an input range is defined as any type that satisfies the checks performed by isInputRange, which is defined as follows:
A function that accepts only input ranges can then use the above template in a template constraint:
In addition to template metaprogramming, D also provides several features to enable compile-time code generation:
Combining the above allows generating code based on existing declarations. For example, D serialization frameworks can enumerate a type's members and generate specialized functions for each serialized type to perform serialization and deserialization. User-defined attributes could further indicate serialization rules. The import expression and compile-time function execution also allow efficiently implementing domain-specific languages. For example, given a function that takes a string containing an HTML template and returns equivalent D source code, it is possible to use it in the following way:
Generic classes have been a part of Eiffel since the original method and language design. The foundation publications of Eiffel[22][23] use the term genericity to describe creating and using generic classes.
Generic classes are declared with their class name and a list of one or more formal generic parameters. In the following code, class LIST has one formal generic parameter G.
The formal generic parameters are placeholders for arbitrary class names that will be supplied when a declaration of the generic class is made, as shown in the two generic derivations below, where ACCOUNT and DEPOSIT are other class names. ACCOUNT and DEPOSIT are considered actual generic parameters as they provide real class names to substitute for G in actual use.
Within the Eiffel type system, although class LIST [G] is considered a class, it is not considered a type. However, a generic derivation of LIST [G] such as LIST [ACCOUNT] is considered a type.
For the list class shown above, an actual generic parameter substituting for G can be any other available class. To constrain the set of classes from which valid actual generic parameters can be chosen, a generic constraint can be specified. In the declaration of class SORTED_LIST below, the generic constraint dictates that any valid actual generic parameter will be a class that inherits from class COMPARABLE. The generic constraint ensures that elements of a SORTED_LIST can in fact be sorted.
Support for generics, or "containers-of-type-T", was added to the Java programming language in 2004 as part of J2SE 5.0. In Java, generics are only checked at compile time for type correctness. The generic type information is then removed via a process called type erasure, to maintain compatibility with old JVM implementations, making it unavailable at runtime.[24] For example, a List<String> is converted to the raw type List. The compiler inserts type casts to convert the elements to the String type when they are retrieved from the list, reducing performance compared to other implementations such as C++ templates.
Generics were added as part of .NET Framework 2.0 in November 2005, based on a research prototype from Microsoft Research started in 1999.[25] Although similar to generics in Java, .NET generics do not apply type erasure,[26]: 208–209 but implement generics as a first-class mechanism in the runtime using reification. This design choice provides additional functionality, such as allowing reflection with preservation of generic types, and alleviating some of the limits of erasure (such as being unable to create generic arrays).[27][28] This also means that there is no performance hit from runtime casts and normally expensive boxing conversions. When primitive and value types are used as generic arguments, they get specialized implementations, allowing for efficient generic collections and methods. As in C++ and Java, nested generic types such as Dictionary<string, List<int>> are valid types; however, they are advised against for member signatures in code analysis design rules.[29]
.NET allows six varieties of generic type constraints using the where keyword, including restricting generic types to be value types, to be classes, to have constructors, and to implement interfaces.[30] Below is an example with an interface constraint:
The MakeAtLeast() method allows operation on arrays with elements of generic type T. The method's type constraint indicates that the method is applicable to any type T that implements the generic IComparable<T> interface. This ensures a compile-time error if the method is called with a type that does not support comparison. The interface provides the generic method CompareTo(T).
The above method could also be written without generic types, simply using the non-generic Array type. However, since arrays are covariant, the casting would not be type safe, and the compiler would be unable to find certain possible errors that would otherwise be caught when using generic types. In addition, the method would need to access the array items as objects instead, and would require casting to compare two elements. (For value types such as int this requires a boxing conversion, although this can be worked around using the Comparer<T> class, as is done in the standard collection classes.)
A notable behavior of static members in a generic .NET class is static member instantiation per run-time type (see example below).
For Pascal, generics were first implemented in 2006, in the Free Pascal implementation.
The Object Pascal dialect Delphi acquired generics in the 2007 Delphi 11 release by CodeGear, initially only with the .NET compiler (since discontinued) before being added to the native code in the 2009 Delphi 12 release. The semantics and abilities of Delphi generics are largely modelled on those of generics in .NET 2.0, though the implementation is by necessity quite different. Here is a more or less direct translation of the first C# example shown above:
As with C#, methods and whole types can have one or more type parameters. In the example, TArray is a generic type (defined by the language) and MakeAtLeast a generic method. The available constraints are very similar to the available constraints in C#: any value type, any class, a specific class or interface, and a class with a parameterless constructor. Multiple constraints act as an additive union.
Free Pascal implemented generics in 2006 in version 2.2.0, before Delphi and with different syntax and semantics. However, since FPC version 2.6.0, the Delphi-style syntax is available when using the language mode {$mode Delphi}. Thus, Free Pascal code supports generics in either style.
Delphi and Free Pascal example:
The type class mechanism of Haskell supports generic programming. Six of the predefined type classes in Haskell (including Eq, the types that can be compared for equality, and Show, the types whose values can be rendered as strings) have the special property of supporting derived instances. This means that a programmer defining a new type can state that this type is to be an instance of one of these special type classes, without providing implementations of the class methods as is usually necessary when declaring class instances. All the necessary methods will be "derived" – that is, constructed automatically – based on the structure of the type. For example, the following declaration of a type of binary trees states that it is to be an instance of the classes Eq and Show:
This results in an equality function (==) and a string representation function (show) being automatically defined for any type of the form BinTree T, provided that T itself supports those operations.
The support for derived instances of Eq and Show makes their methods == and show generic in a qualitatively different way from parametrically polymorphic functions: these "functions" (more accurately, type-indexed families of functions) can be applied to values of various types, and although they behave differently for every argument type, little work is needed to add support for a new type. Ralf Hinze (2004) has shown that a similar effect can be achieved for user-defined type classes by certain programming techniques. Other researchers have proposed approaches to this and other kinds of genericity in the context of Haskell and extensions to Haskell (discussed below).
PolyP was the first generic programming language extension to Haskell. In PolyP, generic functions are called polytypic. The language introduces a special construct in which such polytypic functions can be defined via structural induction over the structure of the pattern functor of a regular datatype. Regular datatypes in PolyP are a subset of Haskell datatypes. A regular datatype t must be of kind * → *, and if a is the formal type argument in the definition, then all recursive calls to t must have the form t a. These restrictions rule out higher-kinded datatypes and nested datatypes, where the recursive calls are of a different form.
The flatten function in PolyP is here provided as an example:
Generic Haskell is another extension to Haskell, developed at Utrecht University in the Netherlands. The extensions it provides are:
The resulting type-indexed value can be specialized to any type.
As an example, the equality function in Generic Haskell:[31]
Clean offers generic programming based on PolyP and Generic Haskell as supported by GHC ≥ 6.0. It parametrizes by kind, as those do, but offers overloading.
Languages in the ML family support generic programming through parametric polymorphism and generic modules called functors. Both Standard ML and OCaml provide functors, which are similar to class templates and to Ada's generic packages. Scheme syntactic abstractions also have a connection to genericity – these are in fact a superset of C++ templates.
A Verilog module may take one or more parameters, to which their actual values are assigned upon the instantiation of the module. One example is a generic register array where the array width is given via a parameter. Such an array, combined with a generic wire vector, can make a generic buffer or memory module with an arbitrary bit width out of a single module implementation.[32]
VHDL, being derived from Ada, also has generic abilities.[33]
C supports "type-generic expressions" using the _Generic keyword:[34]
|
https://en.wikipedia.org/wiki/Generic_programming
|
In the context of the C or C++ programming languages, a library is called header-only if the full definitions of all macros, functions and classes comprising the library are visible to the compiler in a header file form.[1] Header-only libraries do not need to be separately compiled, packaged and installed in order to be used. All that is required is to point the compiler at the location of the headers, and then #include the header files into the application source. Another advantage is that the compiler's optimizer can do a much better job when all the library's source code is available.
The disadvantages include:
Nonetheless, the header-only form is popular because it avoids the (often much more serious) problem of packaging.
For C++ templates, including the definitions in a header is the only way to compile, since the compiler needs to know the full definition of the templates in order to instantiate them.
|
https://en.wikipedia.org/wiki/Header-only
|
Substitution failure is not an error (SFINAE) is a principle in C++ where an invalid substitution of template parameters is not in itself an error. David Vandevoorde first introduced the acronym SFINAE to describe related programming techniques.[1]
Specifically, when creating a candidate set for overload resolution, some (or all) candidates of that set may be the result of instantiated templates with (potentially deduced) template arguments substituted for the corresponding template parameters. If an error occurs during the substitution of a set of arguments for any given template, the compiler removes the potential overload from the candidate set instead of stopping with a compilation error, provided the substitution error is one that the C++ standard permits to be discarded.[2] If one or more candidates remain and overload resolution succeeds, the invocation is well-formed.
The following example illustrates a basic instance of SFINAE:
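A sketch of the classic example (the names `f`, `foo`, and `Test` are conventional placeholders; the return values here exist only to make the chosen overload observable):

```cpp
#include <cassert>

struct Test {
    typedef int foo;  // nested type required by overload #1
};

// Overload #1: viable only when T has a nested type named foo.
template <typename T>
int f(typename T::foo) { return 1; }

// Overload #2: viable for any T.
template <typename T>
int f(T) { return 2; }
```

For `f<int>(10)`, substituting `int` into overload #1 produces the invalid type `int::foo`; that candidate is silently dropped and overload #2 is chosen.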
Here, attempting to use a non-class type in a qualified name (T::foo) results in a deduction failure for f<int> because int has no nested type named foo, but the program is well-formed because a valid function remains in the set of candidate functions.
Although SFINAE was initially introduced to avoid creating ill-formed programs when unrelated template declarations were visible (e.g., through the inclusion of a header file), many developers later found the behavior useful for compile-time introspection. Specifically, it allows a template to determine certain properties of its template arguments at instantiation time.
For example, SFINAE can be used to determine if a type contains a certain typedef:
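A pre-C++11 sketch of this technique (the trait name `HasFoobar` and the helper types `yes`/`no` are illustrative):

```cpp
#include <cassert>

typedef char yes;      // sizeof(yes) == 1
typedef char no[2];    // sizeof(no)  == 2

template <typename T>
struct HasFoobar {
    // Preferred overload: participates only if C::foobar names a type.
    template <typename C> static yes& test(typename C::foobar*);
    // Fallback: the ellipsis accepts anything but has the lowest rank.
    template <typename C> static no&  test(...);

    static const bool value = sizeof(test<T>(0)) == sizeof(yes);
};

struct WithFoobar    { typedef float foobar; };
struct WithoutFoobar {};
```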
When T has the nested type foobar defined, the instantiation of the first test works and the null pointer constant is successfully passed. (And the resulting type of the expression is yes.) If it does not work, the only available function is the second test, and the resulting type of the expression is no. An ellipsis is used not only because it will accept any argument, but also because its conversion rank is lowest, so a call to the first function will be preferred if it is possible; this removes ambiguity.
In C++11, the above code could be simplified to:
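One possible C++11 formulation, using `decltype` and `std::true_type`/`std::false_type` from `<type_traits>` (trait and struct names again illustrative):

```cpp
#include <cassert>
#include <type_traits>

template <typename T>
struct HasFoobar {
    // Chosen when C::foobar names a type; SFINAE removes it otherwise.
    template <typename C> static std::true_type  test(typename C::foobar*);
    template <typename C> static std::false_type test(...);

    static const bool value = decltype(test<T>(nullptr))::value;
};

struct WithFoobar    { typedef int foobar; };
struct WithoutFoobar {};
```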
With the standardisation of the detection idiom in the Library Fundamentals v2 (n4562) proposal, the above code could be re-written as follows:
The developers of Boost used SFINAE in boost::enable_if[3] and in other ways.
|
https://en.wikipedia.org/wiki/Substitution_failure_is_not_an_error
|
The curiously recurring template pattern (CRTP) is an idiom, originally in C++, in which a class X derives from a class template instantiation using X itself as a template argument.[1] More generally it is known as F-bound polymorphism, and it is a form of F-bounded quantification.
The technique was formalized in 1989 as "F-bounded quantification".[2] The name "CRTP" was independently coined by Jim Coplien in 1995,[3] who had observed it in some of the earliest C++ template code as well as in code examples that Timothy Budd created in his multiparadigm language Leda.[4] It is sometimes called "Upside-Down Inheritance"[5][6] due to the way it allows class hierarchies to be extended by substituting different base classes.
The Microsoft implementation of CRTP in the Active Template Library (ATL) was independently discovered, also in 1995, by Jan Falkin, who accidentally derived a base class from a derived class. Christian Beaumont first saw Jan's code and initially thought it could not possibly compile in the Microsoft compiler available at the time. Following the revelation that it did indeed work, Christian based the entire ATL and Windows Template Library (WTL) design on this mistake.[citation needed]
Some use cases for this pattern are static polymorphism and other metaprogramming techniques such as those described by Andrei Alexandrescu in Modern C++ Design.[7] It also figures prominently in the C++ implementation of the Data, Context, and Interaction paradigm.[8] In addition, CRTP is used by the C++ standard library to implement the std::enable_shared_from_this functionality.[9]
Typically, the base class template will take advantage of the fact that member function bodies (definitions) are not instantiated until long after their declarations, and will use members of the derived class within its own member functions, via the use of a cast; e.g.:
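A condensed version of the usual CRTP skeleton (`Base`, `Derived`, and the member names are conventional placeholders):

```cpp
#include <cassert>

// The base class is parameterized on the class that derives from it.
template <class Derived>
struct Base {
    int interface() {
        // Safe downcast: Derived is known to inherit from Base<Derived>.
        return static_cast<Derived*>(this)->implementation();
    }
};

struct Derived : Base<Derived> {
    int implementation() { return 42; }
};
```

Calling `interface()` on a `Derived` object dispatches to `Derived::implementation()` with no virtual call.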
In the above example, the function Base<Derived>::interface() is declared before the existence of struct Derived is known by the compiler (i.e., before Derived is declared). However, it is not actually instantiated by the compiler until it is actually called by some later code, which occurs after the declaration of Derived (not shown in the above example), so that at the time the function interface is instantiated, the declaration of Derived::implementation() is known.
This technique achieves a similar effect to the use of virtual functions, without the costs (and some flexibility) of dynamic polymorphism. This particular use of the CRTP has been called "simulated dynamic binding" by some.[10] This pattern is used extensively in the Windows ATL and WTL libraries.
To elaborate on the above example, consider a base class withno virtual functions. Whenever the base class calls another member function, it will always call its own base class functions. When we derive a class from this base class, we inherit all the member variables and member functions that were not overridden (no constructors or destructors). If the derived class calls an inherited function which then calls another member function, then that function will never call any derived or overridden member functions in the derived class.
However, if base class member functions use CRTP for all member function calls, the overridden functions in the derived class will be selected at compile time. This effectively emulates the virtual function call system at compile time without the costs in size or function call overhead (VTBL structures, method lookups, multiple-inheritance VTBL machinery), at the disadvantage of not being able to make this choice at runtime.
The main purpose of an object counter is retrieving statistics of object creation and destruction for a given class.[11] This can be easily solved using CRTP:
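A possible sketch of such a counter (member names follow the common presentation of this idiom):

```cpp
#include <cassert>

template <typename T>
struct counter {
    static int objects_created;
    static int objects_alive;

    counter() { ++objects_created; ++objects_alive; }
    counter(const counter&) { ++objects_created; ++objects_alive; }

protected:
    // Non-virtual on purpose: counter is never deleted polymorphically.
    ~counter() { --objects_alive; }
};

template <typename T> int counter<T>::objects_created = 0;
template <typename T> int counter<T>::objects_alive   = 0;

struct X : counter<X> {};
struct Y : counter<Y> {};
```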
Each time an object of classXis created, the constructor ofcounter<X>is called, incrementing both the created and alive count. Each time an object of classXis destroyed, the alive count is decremented. It is important to note thatcounter<X>andcounter<Y>are two separate classes and this is why they will keep separate counts ofXs andYs. In this example of CRTP, this distinction of classes is the only use of the template parameter (Tincounter<T>) and the reason why we cannot use a simple un-templated base class.
Method chaining, also known as named parameter idiom, is a common syntax for invoking multiple method calls in object-oriented programming languages. Each method returns an object, allowing the calls to be chained together in a single statement without requiring variables to store the intermediate results.
When the named parameter object pattern is applied to an object hierarchy, things can go wrong. Suppose we have such a base class:
Prints can be easily chained:
However, if we define the following derived class:
we "lose" the concrete class as soon as we invoke a function of the base:
This happens because print is a member of the base class Printer, so it returns a Printer instance rather than the derived type.
The CRTP can be used to avoid this problem and to implement "polymorphic chaining":[12]
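A compact sketch of the CRTP fix (`Printer`/`ColorPrinter` are illustrative names; writing into a `std::ostringstream` instead of standard output just makes the behaviour checkable):

```cpp
#include <cassert>
#include <sstream>
#include <string>

// CRTP base: print() returns the *derived* type, so chaining
// never loses access to members added by the derived class.
template <typename Derived>
class Printer {
public:
    explicit Printer(std::ostringstream& out) : out_(out) {}
    Derived& print(const std::string& s) {
        out_ << s;
        return static_cast<Derived&>(*this);
    }

protected:
    std::ostringstream& out_;
};

class ColorPrinter : public Printer<ColorPrinter> {
public:
    explicit ColorPrinter(std::ostringstream& out)
        : Printer<ColorPrinter>(out) {}
    ColorPrinter& set_color(const std::string& c) {
        out_ << "[" << c << "]";
        return *this;
    }
};
```

A chain such as `p.print("hello ").set_color("red").print("world")` now compiles; with a non-CRTP base, print() would return Printer& and set_color would be unreachable after the first call.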
When using polymorphism, one sometimes needs to create copies of objects by the base class pointer. A commonly used idiom for this is adding a virtual clone function that is defined in every derived class. The CRTP can be used to avoid having to duplicate that function or other similar functions in every derived class.
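A sketch of such a CRTP clone layer (`AbstractShape`, `Shape`, `Square`, and `Circle` mirror the names used in the surrounding discussion):

```cpp
#include <cassert>
#include <memory>

class AbstractShape {
public:
    virtual ~AbstractShape() = default;
    virtual std::unique_ptr<AbstractShape> clone() const = 0;
};

// CRTP middle layer: implements clone() once for every concrete shape.
template <typename Derived>
class Shape : public AbstractShape {
public:
    std::unique_ptr<AbstractShape> clone() const override {
        return std::make_unique<Derived>(static_cast<const Derived&>(*this));
    }
};

class Square : public Shape<Square> {};
class Circle : public Shape<Circle> {};
```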
This allows obtaining copies of squares, circles or any other shapes byshapePtr->clone().
One issue with static polymorphism is that without using a general base class likeAbstractShapefrom the above example, derived classes cannot be stored homogeneously – that is, putting different types derived from the same base class in the same container. For example, a container defined asstd::vector<Shape*>does not work becauseShapeis not a class, but a template needing specialization. A container defined asstd::vector<Shape<Circle>*>can only storeCircles, notSquares. This is because each of the classes derived from the CRTP base classShapeis a unique type. A common solution to this problem is to inherit from a shared base class with a virtual destructor, like theAbstractShapeexample above, allowing for the creation of astd::vector<AbstractShape*>.
The use of CRTP can be simplified using the C++23 feature deducing this.[13][14] For the function signature_dish to call a derived member function cook_signature_dish, ChefBase needs to be a templated type and CafeChef needs to inherit from ChefBase, passing its type as the template parameter.
If an explicit object parameter is used, ChefBase does not need to be templated and CafeChef can derive from ChefBase plainly. Since the self parameter is automatically deduced as the correct derived type, no casting is required.
|
https://en.wikipedia.org/wiki/Curiously_recurring_template_pattern
|
The following list of C++ template libraries details the various libraries of templates available for the C++ programming language.
The choice of a typical library depends on a diverse range of requirements such as: desired features (e.g. large-dimensional linear algebra, parallel computation, partial differential equations), commercial/open-source nature, readability of API, portability or platform/compiler dependence (e.g. Linux, Windows, Visual C++, GCC), speed, ease of use, continued support from developers, standards compliance, specialized optimization in code for specific application scenarios, or even the size of the code-base to be installed.
|
https://en.wikipedia.org/wiki/List_of_C%2B%2B_template_libraries
|
Application lifecycle management (ALM) is the product lifecycle management (governance, development, and maintenance) of computer programs. It encompasses requirements management, software architecture, computer programming, software testing, software maintenance, change management, continuous integration, project management, and release management.[1][2]
ALM is a broader perspective than the Software Development Life Cycle (SDLC), which is limited to the phases of software development such as requirements, design, coding, testing, configuration, project management, and change management. ALM continues after development until the application is no longer used, and may span many SDLCs.
Modern software development processes are not restricted to the discrete ALM/SDLC steps managed by different teams using multiple tools from different locations.[citation needed] Real-time collaboration, access to a centralized data repository, cross-tool and cross-project visibility, and better project monitoring and reporting are key to developing quality software in less time.[citation needed]
This has given rise to the practice of integrated application lifecycle management, or integrated ALM, where all the tools and the tools' users are synchronized with each other throughout the application development stages.[citation needed] This integration ensures that every team member knows the who, what, when, and why of any changes made during the development process, so there are no last-minute surprises causing delivery delays or project failure.[citation needed]
Today's application management vendors focus more on API management capabilities for third-party best-of-breed tool integration, which ensures that organizations are well equipped with an internal software development system that can easily integrate with any IT or ALM tools needed in a project.[citation needed]
A research director with the research firm Gartner proposed changing the term ALM to ADLM (Application Development Life-cycle Management) to include DevOps, the software engineering culture and practice that aims at unifying software development (Dev) and software operation (Ops).[3]
Some specialized software suites for ALM are:
|
https://en.wikipedia.org/wiki/Application_lifecycle_management
|
The input–process–output (IPO) model, or input-process-output pattern, is a widely used approach in systems analysis and software engineering for describing the structure of an information processing program or other process. Many introductory programming and systems analysis texts introduce this as the most basic structure for describing a process.[1][2][3][4]
A computer program, or any other sort of process using the input-process-output model, receives inputs from a user or other source, does some computations on the inputs, and returns the results of the computations.[1] In essence, the system separates itself from the environment, thus defining both inputs and outputs as one united mechanism.[5] The system divides the work into three categories:
In other words, such inputs may be materials, human resources, money or information, transformed into outputs, such as consumables, services, new information or money.
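The three categories can be sketched as a single pure function: the arguments are the input, the function body is the process, and the return value is the output (the averaging task itself is just an illustrative assumption):

```cpp
#include <cassert>
#include <numeric>
#include <vector>

// Input:   a list of numbers supplied by the caller.
// Process: sum them and divide by the count.
// Output:  the mean, returned to the caller.
double mean(const std::vector<double>& inputs) {
    if (inputs.empty()) return 0.0;
    double sum = std::accumulate(inputs.begin(), inputs.end(), 0.0);
    return sum / static_cast<double>(inputs.size());
}
```

Everything the function needs crosses the boundary as an explicit input, and everything it produces crosses back as an explicit output, which is the separation from the environment the model describes.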
As a consequence, an input-process-output system is vulnerable to misinterpretation. This is because, in theory, it contains all the data regarding the environment outside the system, yet in practice the environment contains a significant variety of objects that the system is unable to comprehend, since they exist outside the system's control. It is therefore very important to understand where the boundary lies between the system and the environment. Various analysts often set their own boundaries, favoring their own point of view, which creates much confusion.[6]
Views differ with regard to systems thinking.[4] One such definition outlines the input-process-output system as a structure:
"Systems thinking is the art and science of making reliable inferences about behaviour by developing an increasingly deep understanding of the underlying structure"[7]
Alternatively, it was also suggested that systems are not 'holistic' in the sense of bonding with remote objects (for example: trying to connect a crab, ozone layer and capital life cycle together).[8]
There are five major categories that are the most cited in information systems literature:[9][10]
A system which has not been created as a result of human interference. Examples would be the Solar System as well as the human body, evolving into its current form.[9]
A system which has been created as a result of human interference, and is physically identifiable. Examples of such would be various computing machines, created by human mind for some specific purpose.[9]
A system which has been created as a result of human interference, and is not physically identifiable. Examples of such would be mathematical and philosophical systems, which have been created by human minds, for some specific purpose.[9]
There are also some social systems, which allow humans to collectively achieve a specific purpose.
A system created by humans, and derived from intangible purposes. For example: a family, that is a hierarchy of human relationships, which in essence create the boundary between natural and human systems.[9]
An organisation with hierarchy, created by humans for a specific purpose. For example: a company, which organises humans together to collaborate and achieve a specific purpose. The result of this system is physically identifiable.[9] There are, however, some significant links with the previous types. A human activity system (HAS) consists of a variety of smaller social systems, each with its own development and organisation. Moreover, HASes can arguably include designed systems - computers and machinery. The majority of these system types overlap.[10]
There are several key characteristics when it comes to the fundamental behaviour of any system.
|
https://en.wikipedia.org/wiki/IPO_model
|
In software engineering, a software development process or software development life cycle (SDLC) is a process of planning and managing software development. It typically involves dividing software development work into smaller, parallel, or sequential steps or sub-processes to improve design and/or product management. The methodology may include the pre-definition of specific deliverables and artifacts that are created and completed by a project team to develop or maintain an application.[1]
Most modern development processes can be vaguely described as agile. Other methodologies include waterfall, prototyping, iterative and incremental development, spiral development, rapid application development, and extreme programming.
A life-cycle "model" is sometimes considered a more general term for a category of methodologies, and a software development "process" is a particular instance as adopted by a specific organization.[citation needed] For example, many specific software development processes fit the spiral life-cycle model. The field is often considered a subset of the systems development life cycle.
The software development methodology framework did not emerge until the 1960s. According to Elliott (2004), the systems development life cycle can be considered to be the oldest formalized methodology framework for building information systems. The main idea of the software development life cycle has been "to pursue the development of information systems in a very deliberate, structured and methodical way, requiring each stage of the life cycle––from the inception of the idea to delivery of the final system––to be carried out rigidly and sequentially"[2] within the context of the framework being applied. The main target of this methodology framework in the 1960s was "to develop large scale functional business systems in an age of large scale business conglomerates. Information systems activities revolved around heavy data processing and number crunching routines."[2]
Requirements gathering and analysis: The first phase of the custom software development process involves understanding the client's requirements and objectives. This stage typically involves engaging in thorough discussions and conducting interviews with stakeholders to identify the desired features, functionalities, and overall scope of the software. The development team works closely with the client to analyze existing systems and workflows, determine technical feasibility, and define project milestones.
Planning and design: Once the requirements are understood, the custom software development team proceeds to create a comprehensive project plan. This plan outlines the development roadmap, including timelines, resource allocation, and deliverables. The software architecture and design are also established during this phase. User interface (UI) and user experience (UX) design elements are considered to ensure the software's usability, intuitiveness, and visual appeal.
Development: With the planning and design in place, the development team begins the coding process. This phase involves writing, testing, and debugging the software code. Agile methodologies, such as scrum or kanban, are often employed to promote flexibility, collaboration, and iterative development. Regular communication between the development team and the client ensures transparency and enables quick feedback and adjustments.
Testing and quality assurance: To ensure the software's reliability, performance, and security, rigorous testing and quality assurance (QA) processes are carried out. Different testing techniques, including unit testing, integration testing, system testing, and user acceptance testing, are employed to identify and rectify any issues or bugs. QA activities aim to validate the software against the predefined requirements, ensuring that it functions as intended.
Deployment and implementation: Once the software passes the testing phase, it is ready for deployment and implementation. The development team assists the client in setting up the software environment, migrating data if necessary, and configuring the system. User training and documentation are also provided to ensure a smooth transition and enable users to maximize the software's potential.
Maintenance and support: After the software is deployed, ongoing maintenance and support become crucial to address any issues, enhance performance, and incorporate future enhancements. Regular updates, bug fixes, and security patches are released to keep the software up-to-date and secure. This phase also involves providing technical support to end users and addressing their queries or concerns.
Methodologies, processes, and frameworks range from specific prescriptive steps that can be used directly by an organization in day-to-day work, to flexible frameworks that an organization uses to generate a custom set of steps tailored to the needs of a specific project or group. In some cases, a "sponsor" or "maintenance" organization distributes an official set of documents that describe the process. Specific examples include:
Since DSDM in 1994, all of the methodologies on the above list except RUP have been agile methodologies - yet many organizations, especially governments, still use pre-agile processes (often waterfall or similar). Software process and software quality are closely interrelated; some unexpected facets and effects have been observed in practice.[3]
Among these, another software development process has been established in open source. The adoption of these best practices (known and established processes) within the confines of a company is called inner source.
Software prototyping is about creating prototypes, i.e. incomplete versions of the software program being developed.
The basic principles are:[1]
A basic understanding of the fundamental business problem is necessary to avoid solving the wrong problems, but this is true for all software methodologies.
"Agile software development" refers to a group of software development frameworks based on iterative development, where requirements and solutions evolve via collaboration between self-organizing cross-functional teams. The term was coined in the year 2001 when the Agile Manifesto was formulated.
Agile software development uses iterative development as a basis but advocates a lighter and more people-centric viewpoint than traditional approaches. Agile processes fundamentally incorporate iteration and the continuous feedback that it provides to successively refine and deliver a software system.
The Agile model also includes the following software development processes:
Continuous integration is the practice of merging all developer working copies to a shared mainline several times a day.[4] Grady Booch first named and proposed CI in his 1991 method,[5] although he did not advocate integrating several times a day. Extreme programming (XP) adopted the concept of CI and did advocate integrating more than once per day – perhaps as many as tens of times per day.
Various methods are acceptable for combining linear and iterative systems development methodologies, with the primary objective of each being to reduce inherent project risk by breaking a project into smaller segments and providing more ease-of-change during the development process.
There are three main variants of incremental development:[1]
Rapid application development (RAD) is a software development methodology which favors iterative development and the rapid construction of prototypes instead of large amounts of up-front planning. The "planning" of software developed using RAD is interleaved with writing the software itself. The lack of extensive pre-planning generally allows software to be written much faster and makes it easier to change requirements.
The rapid development process starts with the development of preliminary data models and business process models using structured techniques. In the next stage, requirements are verified using prototyping, eventually to refine the data and process models. These stages are repeated iteratively; further development results in "a combined business requirements and technical design statement to be used for constructing new systems".[6]
The term was first used to describe a software development process introduced by James Martin in 1991. According to Whitten (2003), it is a merger of various structured techniques, especially data-driven information technology engineering, with prototyping techniques to accelerate software systems development.[6]
The basic principles of rapid application development are:[1]
The waterfall model is a sequential development approach, in which development is seen as flowing steadily downwards (like a waterfall) through several phases, typically:
The first formal description of the method is often cited as an article published by Winston W. Royce[7] in 1970, although Royce did not use the term "waterfall" in this article. Royce presented this model as an example of a flawed, non-working model.[8]
The basic principles are:[1]
The waterfall model is a traditional engineering approach applied to software engineering. A strict waterfall approach discourages revisiting and revising any prior phase once it is complete.[according to whom?] This "inflexibility" in a pure waterfall model has been a source of criticism by supporters of other more "flexible" models. It has been widely blamed for several large-scale government projects running over budget, over time and sometimes failing to deliver on requirements due to the big design up front approach.[according to whom?] Except when contractually required, the waterfall model has been largely superseded by more flexible and versatile methodologies developed specifically for software development.[according to whom?] See Criticism of waterfall model.
In 1988, Barry Boehm published a formal software system development "spiral model," which combines some key aspects of the waterfall model and rapid prototyping methodologies, in an effort to combine advantages of top-down and bottom-up concepts. It provided emphasis on a key area many felt had been neglected by other methodologies: deliberate iterative risk analysis, particularly suited to large-scale complex systems.
The basic principles are:[1]
Shape Up is a software development approach introduced by Basecamp in 2018. It is a set of principles and techniques that Basecamp developed internally to overcome the problem of projects dragging on with no clear end. Its primary target audience is remote teams. Shape Up has no estimation and velocity tracking, backlogs, or sprints, unlike waterfall, agile, or scrum. Instead, those concepts are replaced with appetite, betting, and cycles. As of 2022, besides Basecamp, notable organizations that have adopted Shape Up include UserVoice and Block.[12][13]
Other high-level software project methodologies include:
Some "process models" are abstract descriptions for evaluating, comparing, and improving the specific process adopted by an organization.
|
https://en.wikipedia.org/wiki/Software_development_methodologies
|
Data modeling in software engineering is the process of creating a data model for an information system by applying certain formal techniques. It may be applied as part of the broader Model-driven engineering (MDE) concept.
Data modeling is a process used to define and analyze data requirements needed to support the business processes within the scope of corresponding information systems in organizations. Therefore, the process of data modeling involves professional data modelers working closely with business stakeholders, as well as potential users of the information system.
There are three different types of data models produced while progressing from requirements to the actual database to be used for the information system.[2] The data requirements are initially recorded as a conceptual data model, which is essentially a set of technology-independent specifications about the data and is used to discuss initial requirements with the business stakeholders. The conceptual model is then translated into a logical data model, which documents structures of the data that can be implemented in databases. Implementation of one conceptual data model may require multiple logical data models. The last step in data modeling is transforming the logical data model to a physical data model that organizes the data into tables and accounts for access, performance, and storage details. Data modeling defines not just data elements, but also their structures and the relationships between them.[3]
Data modeling techniques and methodologies are used to model data in a standard, consistent, predictable manner in order to manage it as a resource. The use of data modeling standards is strongly recommended for all projects requiring a standard means of defining and analyzing data within an organization, e.g., using data modeling:
Data modelling may be performed during various types of projects and in multiple phases of projects. Data models are progressive; there is no such thing as the final data model for a business or application. Instead, a data model should be considered a living document that will change in response to a changing business. The data models should ideally be stored in a repository so that they can be retrieved, expanded, and edited over time. Whitten et al. (2004) determined two types of data modelling:[4]
Data modelling is also used as a technique for detailing business requirements for specific databases. It is sometimes called database modelling because a data model is eventually implemented in a database.[4]
Data models provide a framework for data to be used within information systems by providing specific definitions and formats. If a data model is used consistently across systems then compatibility of data can be achieved. If the same data structures are used to store and access data then different applications can share data seamlessly. The results of this are indicated in the diagram. However, systems and interfaces are often expensive to build, operate, and maintain. They may also constrain the business rather than support it. This may occur when the quality of the data models implemented in systems and interfaces is poor.[1]
Some common problems found in data models are:
In 1975, ANSI described three kinds of data-model instance:[5]
According to ANSI, this approach allows the three perspectives to be relatively independent of each other. Storage technology can change without affecting either the logical or the conceptual schema. The table/column structure can change without (necessarily) affecting the conceptual schema. In each case, of course, the structures must remain consistent across all schemas of the same data model.
In the context of business process integration (see figure), data modeling complements business process modeling, and ultimately results in database generation.[6]
The process of designing a database involves producing the previously described three types of schemas – conceptual, logical, and physical. The database design documented in these schemas is converted through a Data Definition Language, which can then be used to generate a database. A fully attributed data model contains detailed attributes (descriptions) for every entity within it. The term "database design" can describe many different parts of the design of an overall database system. Principally, and most correctly, it can be thought of as the logical design of the base data structures used to store the data. In the relational model these are the tables and views. In an object database the entities and relationships map directly to object classes and named relationships. However, the term "database design" could also be used to apply to the overall process of designing, not just the base data structures, but also the forms and queries used as part of the overall database application within the Database Management System or DBMS.
In the process, system interfaces account for 25% to 70% of the development and support costs of current systems. The primary reason for this cost is that these systems do not share a common data model. If data models are developed on a system-by-system basis, then not only is the same analysis repeated in overlapping areas, but further analysis must be performed to create the interfaces between them. Most systems within an organization contain the same basic data, redeveloped for a specific purpose. Therefore, an efficiently designed basic data model can minimize rework with minimal modifications for the purposes of different systems within the organization.[1]
Data models represent information areas of interest. While there are many ways to create data models, according to Len Silverston (1997)[7] only two modeling methodologies stand out, top-down and bottom-up:
Sometimes models are created in a mixture of the two methods: by considering the data needs and structure of an application and by consistently referencing a subject-area model. In many environments, the distinction between a logical data model and a physical data model is blurred. In addition, some CASE tools don't make a distinction between logical and physical data models.[7]
There are several notations for data modeling. The actual model is frequently called an "entity–relationship model", because it depicts data in terms of the entities and relationships described in the data.[4] An entity–relationship model (ERM) is an abstract conceptual representation of structured data. Entity–relationship modeling is a relational schema database modeling method, used in software engineering to produce a type of conceptual data model (or semantic data model) of a system, often a relational database, and its requirements in a top-down fashion.
These models are used in the first stage of information system design during the requirements analysis to describe information needs or the type of information that is to be stored in a database. The data modeling technique can be used to describe any ontology (i.e. an overview and classifications of used terms and their relationships) for a certain universe of discourse, i.e. the area of interest.
Several techniques have been developed for the design of data models. While these methodologies guide data modelers in their work, two different people using the same methodology will often come up with very different results. Most notable are:
Generic data models are generalizations of conventional data models. They define standardized general relation types, together with the kinds of things that may be related by such a relation type.
The definition of a generic data model is similar to the definition of a natural language. For example, a generic data model may define relation types such as a 'classification relation', being a binary relation between an individual thing and a kind of thing (a class), and a 'part-whole relation', being a binary relation between two things, one with the role of part, the other with the role of whole, regardless of the kind of things that are related.
Given an extensible list of classes, this allows the classification of any individual thing and the specification of part-whole relations for any individual object. By standardization of an extensible list of relation types, a generic data model enables the expression of an unlimited number of kinds of facts and will approach the capabilities of natural languages. Conventional data models, on the other hand, have a fixed and limited domain scope, because the instantiation (usage) of such a model only allows expressions of kinds of facts that are predefined in the model.
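As a sketch, a generic data model can be represented by a single relation store whose relation types are data rather than schema, so adding a new kind of fact requires no structural change (all names here are illustrative assumptions, not from the original text):

```cpp
#include <cassert>
#include <string>
#include <vector>

// One generic, extensible relation store: each fact is a triple
// (subject, relation type, object), so relation types such as
// "classification" or "part-whole" are just values, not schema.
struct Fact {
    std::string subject;
    std::string relation;   // e.g. "is classified as", "is part of"
    std::string object;
};

class GenericModel {
public:
    void add(std::string s, std::string r, std::string o) {
        facts_.push_back({std::move(s), std::move(r), std::move(o)});
    }

    bool holds(const std::string& s, const std::string& r,
               const std::string& o) const {
        for (const Fact& f : facts_)
            if (f.subject == s && f.relation == r && f.object == o)
                return true;
        return false;
    }

private:
    std::vector<Fact> facts_;
};
```

A conventional model would instead hard-code each relation as a separate table or class; here, extending the list of relation types is a data change, which is the flexibility the paragraph above describes.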
The logical data structure of a DBMS, whether hierarchical, network, or relational, cannot totally satisfy the requirements for a conceptual definition of data, because it is limited in scope and biased toward the implementation strategy employed by the DBMS. That is, unless the semantic data model is implemented in the database on purpose, a choice which may slightly impact performance but generally vastly improves productivity.
Therefore, the need to define data from a conceptual view has led to the development of semantic data modeling techniques. That is, techniques to define the meaning of data within the context of its interrelationships with other data. As illustrated in the figure, the real world, in terms of resources, ideas, events, etc., is symbolically defined by its description within physical data stores. A semantic data model is an abstraction which defines how the stored symbols relate to the real world. Thus, the model must be a true representation of the real world.[8]
The purpose of semantic data modeling is to create a structural model of a piece of the real world, called "universe of discourse". For this, three fundamental structural relations are considered:
A semantic data model can be used to serve many purposes, such as:[8]
The overall goal of semantic data models is to capture more meaning of data by integrating relational concepts with more powerful abstraction concepts known from the artificial intelligence field. The idea is to provide high-level modeling primitives as integral parts of a data model in order to facilitate the representation of real-world situations.[10]
|
https://en.wikipedia.org/wiki/Data_modeling
|
Domain-specific modeling (DSM) is a software engineering methodology for designing and developing systems, such as computer software. It involves systematic use of a domain-specific language to represent the various facets of a system.
Domain-specific modeling languages tend to support higher-level abstractions than general-purpose modeling languages, so they require less effort and fewer low-level details to specify a given system.
Domain-specific modeling often also includes the idea of code generation: automating the creation of executable source code directly from the domain-specific language models. Being free from the manual creation and maintenance of source code means domain-specific modeling can significantly improve developer productivity.[1] The reliability of automatic generation compared to manual coding will also reduce the number of defects in the resulting programs, thus improving quality.
Domain-specific modeling differs from earlier code generation attempts in the CASE tools of the 1980s or UML tools of the 1990s. In both of these, the code generators and modeling languages were built by tool vendors.[citation needed] While it is possible for a tool vendor to create a domain-specific language and generators, it is more normal for domain-specific modeling to occur within one organization. One or a few expert developers create the modeling language and generators, and the rest of the developers use them.
Having the modeling language and generator built by the organization that will use them allows a tight fit with their exact domain and in response to changes in the domain.
Domain-specific languages can usually cover a range of abstraction levels for a particular domain. For example, a domain-specific modeling language for mobile phones could allow users to specify high-level abstractions for the user interface, as well as lower-level abstractions for storing data such as phone numbers or settings. Likewise, a domain-specific modeling language for financial services could permit users to specify high-level abstractions for clients, as well as lower-level abstractions for implementing stock and bond trading algorithms.
To define a language, one needs a language to write the definition in. The language of a model is often called a metamodel, hence the language for defining a modeling language is a meta-metamodel. Meta-metamodels can be divided into two groups: those that are derived from or customizations of existing languages, and those that have been developed specifically as meta-metamodels.
Derived meta-metamodels include entity–relationship diagrams, formal languages, extended Backus–Naur form (EBNF), ontology languages, XML schema, and Meta-Object Facility (MOF). The strengths of these languages tend to be in the familiarity and standardization of the original language.
The ethos of domain-specific modeling favors the creation of a new language for a specific task, and so there are unsurprisingly new languages designed as meta-metamodels. The most widely used family of such languages is that of OPRR,[2][3] GOPRR,[4] and GOPPRR, which focus on supporting things found in modeling languages with the minimum effort.
ManyGeneral-Purpose Modelinglanguages already have tool support available in the form ofCASEtools. Domain-specific language languages tend to have too small a market size to support the construction of a bespoke CASE tool from scratch. Instead, most tool support for domain-specific language languages is built based on existing domain-specific language frameworks or through domain-specific language environments.
A DSM environment may be thought of as a metamodeling tool, i.e., a modeling tool used to define a modeling tool or CASE tool. The resulting tool may either work within the DSM environment, or less commonly be produced as a separate stand-alone program. In the more common case, the DSM environment supports an additional layer of abstraction when compared to a traditional CASE tool.
Using a DSM environment can significantly lower the cost of obtaining tool support for a DSM language, since a well-designed DSM environment will automate the creation of program parts that are costly to build from scratch, such as domain-specific editors, browsers, and components. The domain expert only needs to specify the domain-specific constructs and rules, and the DSM environment provides a modeling tool tailored for the target domain.
Most existing domain-specific modeling takes place with DSM environments, either commercial such as MetaEdit+ or Actifsource, open source such as GEMS, or academic such as GME. The increasing popularity of DSM has led to DSM frameworks being added to existing IDEs, e.g. the Eclipse Modeling Project (EMP) with EMF and GMF, or Microsoft's DSL Tools for Software Factories.
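The division of labour described above — the domain expert specifies constructs and rules, the environment enforces them — can be sketched in a few lines; the metamodel, model instance, and rule below are all hypothetical:

```python
# A toy metamodel: the allowed concept types and the legal relations between them.
metamodel = {
    "concepts": {"State", "Event"},
    "relations": {("State", "triggers", "Event"), ("Event", "leads_to", "State")},
}

# A model instance expressed as edges, plus the concept type of each element.
model = [("Idle", "triggers", "ButtonPress"), ("ButtonPress", "leads_to", "Active")]
concept_of = {"Idle": "State", "Active": "State", "ButtonPress": "Event"}

def validate(model, metamodel, concept_of):
    """Check every edge of the model against the relation rules of the metamodel."""
    return all(
        (concept_of[src], rel, concept_of[dst]) in metamodel["relations"]
        for src, rel, dst in model
    )

print(validate(model, metamodel, concept_of))
```

A real DSM environment would additionally generate editors and browsers from such a specification; only the rule-checking core is shown here.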
The Unified Modeling Language (UML) is a general-purpose modeling language for software-intensive systems that is designed to support mostly object-oriented programming. Consequently, in contrast to DSM languages, UML is used for a wide variety of purposes across a broad range of domains. The primitives offered by UML are those of object-oriented programming, while domain-specific languages offer primitives whose semantics are familiar to all practitioners in that domain. For example, in the domain of automotive engineering, there will be software models to represent the properties of an anti-lock braking system, or a steering wheel, etc.
UML includes a profile mechanism that allows it to be constrained and customized for specific domains and platforms. UML profiles use stereotypes, stereotype attributes (known as tagged values before UML 2.0), and constraints to restrict and extend the scope of UML to a particular domain. Perhaps the best-known example of customizing UML for a specific domain is SysML, a domain-specific language for systems engineering.
UML is a popular choice for various model-driven development approaches whereby technical artifacts such as source code, documentation, tests, and more are generated algorithmically from a domain model. For instance, application profiles of the legal document standard Akoma Ntoso can be developed by representing legal concepts and ontologies in UML class objects.[5]
Source: https://en.wikipedia.org/wiki/Domain-specific_modeling
Method engineering in the field of information systems is "the discipline to construct new methods from existing methods".[2] It focuses on "the design, construction and evaluation of methods, techniques and support tools for information systems development".[3]
Furthermore, method engineering "wants to improve the usefulness of systems development methods by creating an adaptation framework whereby methods are created to match specific organisational situations".[4]
The meta-process modeling process is often supported through software tools, called computer-aided method engineering (CAME) tools or MetaCASE tools (meta-level computer-assisted software engineering tools). Often the instantiation technique "has been utilised to build the repository of Computer Aided Method Engineering environments".[5] There are many tools for meta-process modeling.[6][7][8][9][10]
In the literature, different terms refer to the notion of method adaptation, including 'method tailoring', 'method fragment adaptation' and 'situational method engineering'. Method tailoring is defined as:
A process or capability in which human agents through responsive changes in, and dynamic interplays between contexts, intentions, and method fragments determine a system development approach for a specific project situation.[11]
Potentially, almost all agile methods are suitable for method tailoring. Even the DSDM method is being used for this purpose and has been successfully tailored in a CMM context.[12] Situation-appropriateness can be considered a distinguishing characteristic between agile methods and traditional software development methods, with the latter being relatively much more rigid and prescriptive. The practical implication is that agile methods allow project teams to adapt working practices according to the needs of individual projects. Practices are concrete activities and products that are part of a method framework. At a more extreme level, the philosophy behind the method, consisting of a number of principles, could be adapted.[11]
Situational method engineering is the construction of methods which are tuned to specific situations of development projects.[13] It can be described as the creation of a new method by selecting and assembling existing method fragments.
This enables the creation of development methods suitable for any development situation. Each system development then starts with a method definition phase in which the development method is constructed on the spot.[4]
In the case of mobile business development, there are methods available for specific parts of the business model design process and ICT development. Situational method engineering can be used to combine these methods into one unified method that adopts the characteristics of mobile ICT services.
The developers of the IDEF modeling languages, Richard J. Mayer et al. (1995), developed an early approach to method engineering from studying common method engineering practice and experience in developing other analysis and design methods. The following figure provides a process-oriented view of this approach. It uses the IDEF3 Process Description Capture method, in which boxes with verb phrases represent activities, arrows represent precedence relationships, and "exclusive or" conditions among possible paths are represented by junction boxes labeled with an "X".[1]
According to this approach there are three basic strategies in method engineering:[1]
These basic strategies can be developed through a similar process of concept development.
A knowledge engineering approach is the predominant mechanism for method enhancement and new method development. In other words, with very few exceptions, method development involves isolating, documenting, and packaging existing practice for a given task in a form that promotes reliable success among practitioners. Expert attunements are first characterized in the form of basic intuitions and method concepts. These are often initially identified through analysis of the techniques, diagrams, and expressions used by experts. These discoveries aid in the search for existing methods that can be leveraged to support novice practitioners in acquiring the same attunements and skills.[1]
New method development is accomplished by establishing the scope of the method, refining characterizations of the method concepts and intuitions, designing a procedure that provides both task accomplishment and basic apprenticeship support to novice practitioners, and developing a language(s) of expression. Method application techniques are then developed outlining guidelines for use in a stand-alone mode and in concert with other methods. Each element of the method then undergoes iterative refinement through both laboratory and field testing.[1]
The method language design process is highly iterative and experimental in nature. Unlike procedure development, where a set of heuristics and techniques from existing practice can be identified, merged, and refined, language designers rarely encounter well-developed graphical display or textual information capture mechanisms. When potentially reusable language structures can be found, they are often poorly defined or only partially suited to the needs of the method.[1]
A critical factor in the design of a method language is clearly establishing the purpose and scope of the method. The purpose of the method establishes the needs the method must address. This is used to determine the expressive power required of the supporting language. The scope of the method establishes the range and depth of coverage which must also be established before one can design an appropriate language design strategy. Scope determination also involves deciding what cognitive activities will be supported through method application. For example, language design can be confined to only display the final results of method application (as in providing IDEF9 with graphical and textual language facilities that capture the logic and structure of constraints). Alternatively, there may be a need for in-process language support facilitating information collection and analysis. In those situations, specific language constructs may be designed to help method practitioners organize, classify, and represent information that will later be synthesized into additional representation structures intended for display.[1]
With this foundation, language designers begin the process of deciding what needs to be expressed in the language and how it should be expressed. Language design can begin by developing a textual language capable of representing the full range of information to be addressed. Graphical language structures designed to display select portions of the textual language can then be developed. Alternatively, graphical language structures may evolve prior to, or in parallel with, the development of the textual language. The sequence of these activities largely depends on the degree of understanding of the language requirements held among language developers. These may become clear only after several iterations of both graphical and textual language design.[1]
Graphical language design begins by identifying a preliminary set of schematics and the purpose or goals of each in terms of where and how they will support the method application process. The central item of focus is determined for each schematic. For example, in experimenting with alternative graphical language designs for IDEF9, a Context Schematic was envisioned as a mechanism to classify the varying environmental contexts in which constraints may apply. The central focus of this schematic was the context. After deciding on the central focus for the schematic, additional information (concepts and relations) that should be captured or conveyed is identified.[1]
Up to this point in the language design process, the primary focus has been on the information that should be displayed in a given schematic to achieve the goals of the schematic. This is where the language designer must determine which items identified for possible inclusion in the schematic are amenable to graphical representation and will serve to keep the user focused on the desired information content. With this general understanding, previously developed graphical language structures are explored to identify potential reuse opportunities. While exploring candidate graphical language designs for emerging IDEF methods, a wide range of diagrams were identified and explored. Quite often, even some of the central concepts of a method will have no graphical language element in the method.[1]
For example, the IDEF1 Information Modeling method includes the notion of an entity but has no syntactic element for an entity in the graphical language. When the language designer decides that a syntactic element should be included for a method concept, candidate symbols are designed and evaluated. Throughout the graphical language design process, the language designer applies a number of guiding principles to assist in developing high quality designs. Among these, the language designer avoids overlapping or poorly defined concept classes. They also seek to establish intuitive mechanisms to convey the direction for reading the schematics.[1]
For example, schematics may be designed to be read from left to right, in a bottom-up fashion, or center-out. The potential for clutter or overwhelmingly large amounts of information on a single schematic is also considered as either condition makes reading and understanding the schematic extremely difficult.[1]
Each candidate design is then tested by developing a wide range of examples to explore the utility of the designs relative to the purpose for each schematic. Initial attempts at method development, and the development of supporting language structures in particular, are usually complicated. With successive iterations on the design, unnecessary and complex language structures are eliminated.[1]
As the graphical language design approaches a level of maturity, attention turns to the textual language. The purposes served by textual languages range from providing a mechanism for expressing information that has explicitly been left out of the graphical language to providing a mechanism for standard data exchange and automated model interpretation. Thus, the textual language supporting the method may be simple and unstructured (in terms of computer interpretability), or it may emerge as a highly structured, and complex language. The purpose of the method largely determines what level of structure will be required of the textual language.[1]
As the method language begins to approach maturity, mathematical formalization techniques are employed so the emerging language has clear syntax and semantics. The method formalization process often helps uncover ambiguities, identify awkward language structures, and streamline the language.[1]
These general activities culminate in a language that helps focus user attention on the information that needs to be discovered, analyzed, transformed, or communicated in the course of accomplishing the task for which the method was designed. Both the procedure and language components of the method also help users develop the necessary skills and attunements required to achieve consistently high quality results for the targeted task.[1]
Once the method has been developed, application techniques will be designed to successfully apply the method in stand-alone mode as well as together with other methods. Application techniques constitute the "use" component of the method which continues to evolve and grow throughout the life of the method. The method procedure, language constructs, and application techniques are reviewed and tested to iteratively refine the method.[1]
This article incorporates text from US Air Force, Information Integration for Concurrent Engineering (IICE) Compendium of Methods Report by Richard J. Mayer et al., 1995, a publication now in the public domain.
Source: https://en.wikipedia.org/wiki/Method_engineering
Model-driven architecture (MDA) is a software design approach for the development of software systems. It provides a set of guidelines for the structuring of specifications, which are expressed as models. Model Driven Architecture is a kind of domain engineering, and supports model-driven engineering of software systems. It was launched by the Object Management Group (OMG) in 2001.[1]
Model Driven Architecture® (MDA®) "provides an approach for deriving value from models and architecture in support of the full life cycle of physical, organizational and I.T. systems". A model is a (representation of) an abstraction of a system. MDA® provides value by producing models at varying levels of abstraction, from a conceptual view down to the smallest implementation detail. OMG literature speaks of three such levels of abstraction, or architectural viewpoints: the Computation-Independent Model (CIM), the Platform-Independent Model (PIM), and the Platform-Specific Model (PSM). The CIM describes a system conceptually, the PIM describes the computational aspects of a system without reference to the technologies that may be used to implement it, and the PSM provides the technical details necessary to implement the system. The OMG Guide notes, though, that these three architectural viewpoints are useful, but are just three of many possible viewpoints.[2]
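The three viewpoints can be made concrete with a small sketch; the entities, type names, and PIM-to-PSM mapping below are invented for illustration and are not taken from the OMG Guide:

```python
# CIM: a purely conceptual statement about the system, with no computational detail.
cim = "A customer places orders; each order has a total amount."

# PIM: computational structure, still independent of any implementation platform.
pim = {"Order": {"attributes": {"total": "Decimal"},
                 "relations": {"customer": "Customer"}}}

def pim_to_relational_psm(pim):
    """Bind the PIM to a concrete platform: here, a relational schema (the PSM)."""
    type_map = {"Decimal": "NUMERIC(10,2)"}  # platform-specific type choices
    psm = {}
    for entity, spec in pim.items():
        columns = {name: type_map[t] for name, t in spec["attributes"].items()}
        columns.update({name + "_id": "INTEGER REFERENCES " + target
                        for name, target in spec["relations"].items()})
        psm[entity.lower()] = columns
    return psm

print(pim_to_relational_psm(pim))
```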
The OMG organization provides specifications rather than implementations, often as answers to Requests for Proposals (RFPs). Implementations come from private companies or open source groups.
The MDA model is related to multiple standards, including the Unified Modeling Language (UML), the Meta-Object Facility (MOF), XML Metadata Interchange (XMI), Enterprise Distributed Object Computing (EDOC), the Software Process Engineering Metamodel (SPEM), and the Common Warehouse Metamodel (CWM). Note that the term "architecture" in Model Driven Architecture does not refer to the architecture of the system being modeled, but rather to the architecture of the various standards and model forms that serve as the technology basis for MDA.[citation needed]
Executable UML was the UML profile used when MDA was born. Now, the OMG is promoting fUML instead. (The action language for fUML is ALF.)
The Object Management Group holds registered trademarks on the term Model Driven Architecture and its acronym MDA, as well as trademarks for terms such as Model Based Application Development, Model Driven Application Development, Model Based Programming, Model Driven Systems, and others.[3]
OMG focuses Model Driven Architecture® on forward engineering, i.e. producing code from abstract, human-elaborated modeling diagrams (e.g. class diagrams)[citation needed]. OMG's Analysis and Design Task Force (ADTF) group leads this effort. With some humour, the group chose ADM (MDA backwards) to name the study of reverse engineering; ADM decodes to Architecture-Driven Modernization. The objective of ADM is to produce standards for model-based reverse engineering of legacy systems.[4] The Knowledge Discovery Metamodel (KDM) is the furthest along of these efforts, and describes information systems in terms of various assets (programs, specifications, data, test files, database schemas, etc.).
As the concepts and technologies used to realize designs and the concepts and technologies used to realize architectures have changed at their own pace, decoupling them allows system developers to choose from the best and most fitting in both domains. The design addresses the functional (use case) requirements while architecture provides the infrastructure through which non-functional requirements like scalability, reliability and performance are realized. MDA envisages that the platform-independent model (PIM), which represents a conceptual design realizing the functional requirements, will survive changes in realization technologies and software architectures.
Of particular importance to Model Driven Architecture is the notion of model transformation. A specific standard language for model transformation, called QVT, has been defined by the OMG.
The OMG documents the overall MDA process in a document called the MDA Guide.
Basically, an MDA tool is a tool used to develop, interpret, compare, align, measure, verify, transform, etc. models or metamodels.[5] In the following section "model" is interpreted as meaning any kind of model (e.g. a UML model) or metamodel (e.g. the CWM metamodel). In any MDA approach we have essentially two kinds of models: initial models are created manually by human agents, while derived models are created automatically by programs. For example, an analyst may create a UML initial model from observation of some loose business situation, while a Java model may be automatically derived from this UML model by a model transformation operation.
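The distinction between initial and derived models can be sketched as a toy model-to-text transformation — the initial model is written by hand, the derived artifact is computed from it. The model shape and generated code below are illustrative only:

```python
# An "initial model" created manually: a minimal, UML-like class description.
initial_model = {"class": "Invoice",
                 "attributes": [("number", "int"), ("paid", "boolean")]}

def to_java_source(model):
    """Automatically derive a Java class skeleton from the model."""
    lines = ["public class %s {" % model["class"]]
    for name, jtype in model["attributes"]:
        lines.append("    private %s %s;" % (jtype, name))
    lines.append("}")
    return "\n".join(lines)

derived = to_java_source(initial_model)  # a "derived model" in the sense above
print(derived)
```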
An MDA tool may be a tool used to check models for completeness, inconsistencies, or error and warning conditions.
Some tools perform more than one of the functions listed above. For example, some creation tools may also have transformation and test capabilities. There are other tools that are solely for creation, solely for graphical presentation, solely for transformation, etc.
Implementations of the OMG specifications come from private companies or open source groups. One important source of implementations for OMG specifications is the Eclipse Foundation (EF). Many implementations of OMG modeling standards may be found in the Eclipse Modeling Framework (EMF) or Graphical Modeling Framework (GMF); the Eclipse Foundation also develops other tools of various profiles, such as GMT. Eclipse's compliance with OMG specifications is often not strict. This is true, for example, of OMG's EMOF standard, which EMF approximates with its Ecore implementation. More examples may be found in the M2M project implementing the QVT standard or in the M2T project implementing the MOF2Text standard.
One should be careful not to confuse the List of MDA tools with the List of UML tools, the former being much broader. This distinction can be made more general by distinguishing 'variable metamodel tools' and 'fixed metamodel tools'. A UML CASE tool is typically a 'fixed metamodel tool' since it has been hard-wired to work only with a given version of the UML metamodel (e.g. UML 2.1). By contrast, other tools have internal generic capabilities allowing them to adapt to arbitrary metamodels, or to a particular kind of metamodel.
Usually MDA tools focus on rudimentary architecture specification, although in some cases the tools are architecture-independent (or platform-independent).
Simple examples of architecture specifications include:
Some key concepts that underpin the MDA approach (launched in 2001) were first elucidated by the Shlaer–Mellor method during the late 1980s. Indeed, a key absent technical standard of the MDA approach (that of an action language syntax for Executable UML) has been bridged by some vendors by adapting the original Shlaer–Mellor Action Language (modified for UML)[citation needed]. However, during this period the MDA approach has not gained mainstream industry acceptance, with the Gartner Group still identifying MDA as an "on the rise" technology in its 2006 "Hype Cycle",[6] and Forrester Research declaring MDA to be "D.O.A." in 2006.[7] Potential concerns that have been raised with the OMG MDA approach include:
Source: https://en.wikipedia.org/wiki/Model-driven_architecture
A modeling language is any artificial language that can be used to express data, information, knowledge, or systems in a structure that is defined by a consistent set of rules. The rules are used for interpretation of the meaning of components in the structure.
A modeling language can be graphical or textual.[1]
An example of a graphical modeling language and a corresponding textual modeling language is EXPRESS.
Not all modeling languages are executable, and for those that are, using them does not necessarily mean that programmers are no longer required. On the contrary, executable modeling languages are intended to amplify the productivity of skilled programmers, so that they can address more challenging problems, such as parallel computing and distributed systems.
A large number of modeling languages appear in the literature.
Examples of graphical modeling languages in the fields of computer science, project management, and systems engineering:
Examples of graphical modeling languages in other fields of science:
Information models can also be expressed in formalized natural languages, such as Gellish.[4] Gellish has natural language variants such as Gellish Formal English and Gellish Formal Dutch (Gellish Formeel Nederlands), etc. Gellish Formal English is an information representation language, or semantic modeling language, that is defined in the Gellish English Dictionary-Taxonomy, which has the form of a taxonomy-ontology (similarly for Dutch). Gellish Formal English is suitable not only to express knowledge, requirements, dictionaries, taxonomies, and ontologies, but also information about individual things. All that information is expressed in one language and therefore it can all be integrated, independent of whether it is stored in central, distributed, or federated databases. Information models in Gellish Formal English consist of collections of Gellish Formal English expressions that use natural language terms and formalized phrases. For example, a geographic information model might consist of a number of Gellish Formal English expressions, such as:
whereas information requirements and knowledge can be expressed for example as follows:
Such Gellish Formal English expressions use names of concepts (such as "city") and phrases that represent relation types (such as ⟨is located in⟩ and ⟨is classified as a⟩) that should be selected from the Gellish English Dictionary-Taxonomy (or your own domain dictionary). The Gellish English Dictionary-Taxonomy enables the creation of semantically rich information models, because the dictionary contains more than 600 standard relation types and definitions of more than 40,000 concepts. An information model in Gellish can express facts or make statements, queries, and answers.
In the field of computer science, more specific types of modeling languages have recently emerged.
Algebraic modeling languages (AML) are high-level programming languages for describing and solving high-complexity problems for large-scale mathematical computation (i.e. large-scale optimization problems). One particular advantage of AMLs like AIMMS, AMPL, GAMS, Gekko, Mosel, OPL, MiniZinc, and OptimJ is the similarity of their syntax to the mathematical notation of optimization problems. This allows for a very concise and readable definition of problems in the domain of optimization, which is supported by certain language elements like sets, indices, algebraic expressions, powerful sparse index and data handling variables, and constraints with arbitrary names. The algebraic formulation of a model does not contain any hints as to how to process it.
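The separation the last sentence describes — an algebraic formulation that carries no solution procedure — can be imitated in plain Python. This is a brute-force sketch of a tiny invented problem, not a real AML:

```python
from itertools import product

# Declarative part: variables, objective, and constraints, stated algebraically.
domain = range(0, 11)                        # x, y are integers in 0..10
def objective(x, y): return 3 * x + 2 * y    # maximize 3x + 2y
constraints = [lambda x, y: x + y <= 10]     # subject to x + y <= 10

# Procedural part, kept entirely separate: naive enumeration stands in for the
# solver that an AML would hand the formulation to.
best = max(
    (p for p in product(domain, domain) if all(c(*p) for c in constraints)),
    key=lambda p: objective(*p),
)
print(best, objective(*best))
```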
Behavioral languages are designed to describe the observable behavior of complex systems consisting of components that execute concurrently. These languages focus on the description of key concepts such as concurrency, nondeterminism, synchronization, and communication. The semantic foundations of behavioral languages are process calculus or process algebra.
A discipline-specific modeling (DspM) language is focused on deliverables affiliated with a specific software development life cycle stage. Therefore, such a language offers a distinct vocabulary, syntax, and notation for each stage, such as discovery, analysis, design, architecture, construction, etc. For example, for the analysis phase of a project, the modeler employs specific analysis notation to deliver an analysis proposition diagram. During the design phase, however, logical design notation is used to depict the relationships between software entities. In addition, discipline-specific modeling language best practices do not preclude practitioners from combining the various notations in a single diagram.
Domain-specific modeling (DSM) is a software engineering methodology for designing and developing systems, most often IT systems such as computer software. It involves the systematic use of a graphical domain-specific language (DSL) to represent the various facets of a system. DSM languages tend to support higher-level abstractions than general-purpose modeling languages, so they require less effort and fewer low-level details to specify a given system.
A framework-specific modeling language (FSML) is a kind of domain-specific modeling language designed for an object-oriented application framework. FSMLs define framework-provided abstractions as FSML concepts and decompose the abstractions into features. The features represent implementation steps or choices.
An FSML concept can be configured by selecting features and providing values for them. Such a concept configuration represents how the concept should be implemented in the code. In other words, a concept configuration describes how the framework should be completed in order to create the implementation of the concept.
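A concept configuration of this kind can be sketched as data plus a mapping to implementation steps; the `Dialog` concept, its features, and the framework-completion steps below are all hypothetical:

```python
# An FSML-style concept configuration: features of a "Dialog" abstraction,
# each feature representing an implementation choice.
dialog_features = {
    "modal": True,                 # choice: modal vs. modeless
    "buttons": ["ok", "cancel"],   # choice: which standard buttons to support
}

def implementation_steps(features):
    """Translate the configuration into the steps needed to complete the framework."""
    steps = ["subclass FrameworkDialog"]
    if features["modal"]:
        steps.append("override showModal()")
    steps += ["add %s button handler" % b for b in features["buttons"]]
    return steps

print(implementation_steps(dialog_features))
```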
Linked data and ontology engineering require 'host languages' to represent entities and the relations between them, constraints between the properties of entities and relations, and metadata attributes. JSON-LD and RDF are two major (and semantically almost equivalent) languages in this context, primarily because they support statement reification and contextualisation, which are essential properties for supporting the higher-order logic needed to reason about models. Model transformation is a common example of such reasoning.
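Reification can be sketched as a JSON-LD-style document (shown here as a Python dict): the statement itself becomes a node, so metadata such as a confidence score can be attached to it. The `ex:` vocabulary and the property names are invented for illustration:

```python
import json

# A node that reifies the statement "Paris is located in France" so that the
# claim itself can carry metadata (here, a hypothetical confidence attribute).
doc = {
    "@context": {"ex": "http://example.org/"},
    "@id": "ex:claim-1",
    "ex:subject": {"@id": "ex:Paris"},
    "ex:predicate": {"@id": "ex:isLocatedIn"},
    "ex:object": {"@id": "ex:France"},
    "ex:confidence": 0.98,
}
print(json.dumps(doc, indent=2))
```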
Object modeling languages are modeling languages based on a standardized set of symbols and ways of arranging them to model (part of) an object-oriented software design or system design.
Some organizations use them extensively in combination with a software development methodology to progress from initial specification to an implementation plan and to communicate that plan to an entire team of developers and stakeholders. Because a modeling language is visual and at a higher-level of abstraction than code, using models encourages the generation of a shared vision that may prevent problems of differing interpretation later in development. Often software modeling tools are used to construct these models, which may then be capable of automatic translation to code.
Virtual Reality Modeling Language (VRML), before 1995 known as the Virtual Reality Markup Language, is a standard file format for representing 3-dimensional (3D) interactive vector graphics, designed particularly with the World Wide Web in mind.
Various kinds of modeling languages are applied in different disciplines, including computer science, information management, business process modeling, software engineering, and systems engineering. Modeling languages can be used to specify:
Modeling languages are intended to be used to precisely specify systems so that stakeholders (e.g., customers, operators, analysts, designers) can better understand the system being modeled.
The more mature modeling languages are precise, consistent, and executable. Informal diagramming techniques applied with drawing tools are expected to produce useful pictorial representations of system requirements, structures, and behaviors, which can be useful for communication, design, and problem solving but cannot be used programmatically.[5]: 539 Executable modeling languages applied with proper tool support, however, are expected to automate system verification and validation, simulation, and code generation from the same representations.
A review of modelling languages is essential in order to determine which languages are appropriate for different modelling settings, where "settings" includes the stakeholders, the domain, and the knowledge connected to them. Assessing language quality is a means to achieve better models.
Here language quality is stated in accordance with the SEQUAL framework for quality of models developed by Krogstie, Sindre and Lindland (2003), since this is a framework that connects language quality to a framework for general model quality. Five areas are used in this framework to describe language quality, and these are supposed to express both the conceptual as well as the visual notation of the language. We will not go into a thorough explanation of the underlying quality framework of models but concentrate on the areas used to explain the language quality framework.
The framework describes the ability to represent the domain as domain appropriateness. The term appropriateness can be a bit vague, but in this particular context it means able to express. Ideally, the language should only be able to express things that are in the domain, yet be powerful enough to express everything that is in the domain. This requirement might seem strict, but the aim is a visually expressed model which includes everything relevant to the domain and excludes everything inappropriate to it. To achieve this, the language must distinguish clearly which notations and syntaxes are advantageous to present.
To evaluate the participant appropriateness we try to identify how well the language expresses the knowledge held by the stakeholders. This involves challenges since a stakeholder's knowledge is subjective. The knowledge of the stakeholder is both tacit and explicit. Both types of knowledge are of dynamic character. In this framework only the explicit type of knowledge is taken into account. The language should to a large extent express all the explicit knowledge of the stakeholders relevant to the domain.
The previous paragraph stated that the knowledge of the stakeholders should be presented in a good way. In addition, it is imperative that the language be able to express all possible explicit knowledge of the stakeholders; no knowledge should be left unexpressed due to deficiencies in the language.
Comprehensibility appropriateness makes sure that the social actors understand the model through a consistent use of the language. To achieve this, the framework includes a set of criteria. In general, these require that the language be flexible, easy to organize, and easy to distinguish into its different parts, internally as well as from other languages. In addition, the language should be kept as simple as possible, and each symbol in it should have a unique representation.
This is also connected to the structure of the development requirements.
To ensure that the domain actually modelled is usable for analysis and further processing, the language has to make it possible to reason in an automatic way. To achieve this, it has to include formal syntax and semantics. Another advantage of formalizing is the ability to discover errors at an early stage. The language best fitted for the technical actors is not always the same as the one best fitted for the social actors.
Organizational appropriateness means that the language used is appropriate for the organizational context, e.g. that the language is standardized within the organization, or that it is supported by tools that are chosen as standard in the organization.
|
https://en.wikipedia.org/wiki/Modeling_language
|
Rapid application development (RAD), also called rapid application building (RAB), is both a general term for adaptive software development approaches, and the name for James Martin's method of rapid development. In general, RAD approaches to software development put less emphasis on planning and more emphasis on an adaptive process. Prototypes are often used in addition to or sometimes even instead of design specifications.
RAD is especially well suited for (although not limited to) developing software that is driven by user interface requirements. Graphical user interface builders are often called rapid application development tools. Other approaches to rapid development include the adaptive, agile, spiral, and unified models.
Rapid application development was a response to plan-driven waterfall processes, developed in the 1970s and 1980s, such as the Structured Systems Analysis and Design Method (SSADM). One of the problems with these methods is that they were based on a traditional engineering model used to design and build things like bridges and buildings. Software is an inherently different kind of artifact. Software can radically change the entire process used to solve a problem. As a result, knowledge gained from the development process itself can feed back to the requirements and design of the solution.[1] Plan-driven approaches attempt to rigidly define the requirements, the solution, and the plan to implement it, and have a process that discourages changes. RAD approaches, on the other hand, recognize that software development is a knowledge intensive process and provide flexible processes that help take advantage of knowledge gained during the project to improve or adapt the solution.
The first such RAD alternative was developed by Barry Boehm and was known as the spiral model. Boehm and other subsequent RAD approaches emphasized developing prototypes as well as or instead of rigorous design specifications. Prototypes had several advantages over traditional specifications:
Starting with the ideas of Barry Boehm and others, James Martin developed the rapid application development approach during the 1980s at IBM and finally formalized it by publishing a book in 1991, Rapid Application Development. This has resulted in some confusion over the term RAD even among IT professionals. It is important to distinguish between RAD as a general alternative to the waterfall model and RAD as the specific method created by Martin. The Martin method was tailored toward knowledge intensive and UI intensive business systems.
These ideas were further developed and improved upon by RAD pioneers like James Kerr and Richard Hunter, who together wrote the seminal book on the subject, Inside RAD,[3]which followed the journey of a RAD project manager as he drove and refined the RAD Methodology in real-time on an actual RAD project. These practitioners, and those like them, helped RAD gain popularity as an alternative to traditional systems project life cycle approaches.
The RAD approach also matured during the period of peak interest in business re-engineering. The idea of business process re-engineering was to radically rethink core business processes such as sales and customer support with the new capabilities of Information Technology in mind. RAD was often an essential part of larger business re-engineering programs. The rapid prototyping approach of RAD was a key tool to help users and analysts "think out of the box" about innovative ways that technology might radically reinvent a core business process.[4][5]
Much of James Martin's comfort with RAD stemmed from DuPont's Information Engineering division and its leader Scott Schultz, and their respective relationships with John Underwood, who headed up a bespoke RAD development company that pioneered many successful RAD projects in Australia and Hong Kong.
Successful projects included ANZ Bank, Lend Lease, BHP, Coca-Cola Amatil, Alcan, the Hong Kong Jockey Club and numerous others.
This success led to both Scott Schultz and James Martin spending time in Australia with John Underwood to understand the methods and the reasons why Australia was disproportionately successful in implementing significant mission-critical RAD projects.
The James Martin approach to RAD divides the process into four distinct phases:
In modern Information Technology environments, many systems are now built using some degree of Rapid Application Development[7] (not necessarily the James Martin approach). In addition to Martin's method, agile methods and the Rational Unified Process are often used for RAD development.
The purported advantages of RAD include:
The purported disadvantages of RAD include:
Practical concepts to implement RAD:
Other similar concepts:
|
https://en.wikipedia.org/wiki/Rapid_application_development
|
In computer science, automatic programming[1] is a type of computer programming in which some mechanism generates a computer program, to allow human programmers to write the code at a higher abstraction level.
There has been little agreement on the precise definition of automatic programming, mostly because its meaning has changed over time. David Parnas, tracing the history of "automatic programming" in published research, noted that in the 1940s it described automation of the manual process of punching paper tape. Later it referred to translation of high-level programming languages like Fortran and ALGOL. In fact, one of the earliest programs identifiable as a compiler was called Autocode. Parnas concluded that "automatic programming has always been a euphemism for programming in a higher-level language than was then available to the programmer."[2]
Program synthesis is one type of automatic programming where a procedure is created from scratch, based on mathematical requirements.
Mildred Koss, an early UNIVAC programmer, explains: "Writing machine code involved several tedious steps—breaking down a process into discrete instructions, assigning specific memory locations to all the commands, and managing the I/O buffers. After following these steps to implement mathematical routines, a sub-routine library, and sorting programs, our task was to look at the larger programming process. We needed to understand how we might reuse tested code and have the machine help in programming. As we programmed, we examined the process and tried to think of ways to abstract these steps to incorporate them into higher-level language. This led to the development of interpreters, assemblers, compilers, and generators—programs designed to operate on or produce other programs, that is, automatic programming."[3]
Generative programming and the related term meta-programming[4] are concepts whereby programs can be written "to manufacture software components in an automated way"[5] just as automation has improved "production of traditional commodities such as garments, automobiles, chemicals, and electronics."[6][7]
The goal is to improve programmer productivity.[8] It is often related to code-reuse topics such as component-based software engineering.
Source-code generation is the process of generating source code based on a description of the problem[9] or an ontological model such as a template, and is accomplished with a programming tool such as a template processor or an integrated development environment (IDE). These tools allow the generation of source code through any of various means.
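As a minimal sketch of the template-processor idea, the following Python snippet renders source code from a template and then executes the generated code. The template, class name, and field name are invented for this illustration; they are not the API of any particular generation tool.

```python
from string import Template

# A toy "getter class" template. A real template processor works on the
# same substitution principle, just at a much larger scale.
CLASS_TEMPLATE = Template(
    "class ${name}:\n"
    "    def __init__(self, ${field}):\n"
    "        self._${field} = ${field}\n"
    "\n"
    "    @property\n"
    "    def ${field}(self):\n"
    "        return self._${field}\n"
)

def generate_class(name: str, field: str) -> str:
    """Render Python source for a simple one-field class."""
    return CLASS_TEMPLATE.substitute(name=name, field=field)

source = generate_class("Point", "x")
namespace = {}
exec(source, namespace)        # compile and run the generated source
p = namespace["Point"](42)
print(p.x)                     # -> 42
```

The generated text is ordinary source code: it can be written to a file, inspected, or compiled like hand-written code.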
Modern programming languages are well served by code-generation tools such as Json4Swift (Swift) and Json2Kotlin (Kotlin), which generate data-model source code from JSON samples.
Programs that could generateCOBOLcode include:
These application generators supported COBOL inserts and overrides.
A macro processor, such as the C preprocessor, which replaces patterns in source code according to relatively simple rules, is a simple form of source-code generator. Source-to-source code generation tools also exist.[11][12]
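The pattern-replacement idea can be illustrated with a toy macro expander in Python. This is a deliberately simplified sketch in the spirit of the C preprocessor, not a faithful model of it: it handles only object-like macros, with no function-like macros, stringizing, or rescanning rules.

```python
import re

def expand(source: str, macros: dict[str, str]) -> str:
    """Replace each macro name with its expansion wherever it
    appears as a whole identifier."""
    for name, replacement in macros.items():
        # \b keeps MAX from matching inside MAXIMUM
        source = re.sub(rf"\b{re.escape(name)}\b", replacement, source)
    return source

code = "buf = alloc(MAX); if (n > MAX) n = MAX;"
print(expand(code, {"MAX": "1024"}))
# -> buf = alloc(1024); if (n > 1024) n = 1024;
```

Even this simple text-to-text rewriting qualifies as source-code generation: the output is new source produced mechanically from the input.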
Large language models such as ChatGPT are capable of generating a program's source code from a description of the program given in a natural language.[13]
Many relational database systems provide a function that will export the content of the database as SQL data definition queries, which may then be executed to re-import the tables and their data, or migrate them to another RDBMS.
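SQLite, through Python's built-in sqlite3 module, exposes exactly this kind of export via Connection.iterdump(), which yields the SQL statements needed to recreate the schema and data. The table and rows below are illustrative.

```python
import sqlite3

# Build a small source database in memory.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
src.execute("INSERT INTO users (name) VALUES ('Ada')")
src.commit()

# Export: iterdump() yields CREATE TABLE / INSERT statements.
dump = "\n".join(src.iterdump())

# Re-import the dump into a fresh database (or, with dialect
# adjustments, into another RDBMS).
dst = sqlite3.connect(":memory:")
dst.executescript(dump)
print(dst.execute("SELECT name FROM users").fetchone()[0])  # -> Ada
```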
A low-code development platform (LCDP) is software that provides an environment programmers use to create application software through graphical user interfaces and configuration instead of traditional computer programming.
|
https://en.wikipedia.org/wiki/Automatic_programming
|
Build automation is the practice of building software systems in a relatively unattended fashion. The build is configured to run with minimized or no software developer interaction and without using a developer's personal computer. Build automation encompasses the act of configuring the build system as well as the resulting system itself.
Build automation encompasses both sequencing build operations via non-interactive interface tools and running builds on a shared server.[1]
Build automation tools allow for sequencing the tasks of building software via a non-interactive interface. Existing tools such as Make can be used via a custom configuration file or the command line. Custom tools such as shell scripts can also be used, although they become increasingly cumbersome as the codebase grows more complex.[2]
Some tools, such as shell scripts, are task-oriented: they encode sequences of commands to perform, usually with minimal conditional logic.
Some tools, such as Make, are product-oriented. They build a product, a.k.a. target, based on configured dependencies.[3]
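The product-oriented rule can be sketched in a few lines of Python: rebuild a target only when it is missing or older than one of its dependencies. The file names and recipe here are illustrative stand-ins for a real compile step.

```python
import os
import tempfile

def needs_rebuild(target: str, deps: list[str]) -> bool:
    """True when the target is missing or older than any dependency."""
    if not os.path.exists(target):
        return True
    target_mtime = os.path.getmtime(target)
    return any(os.path.getmtime(d) > target_mtime for d in deps)

def build(target: str, deps: list[str], recipe) -> bool:
    """Run the recipe only when the target is out of date."""
    if needs_rebuild(target, deps):
        recipe()
        return True
    return False

# Demo with throwaway files standing in for main.c -> main.o.
workdir = tempfile.mkdtemp()
dep = os.path.join(workdir, "main.c")
target = os.path.join(workdir, "main.o")
with open(dep, "w") as f:
    f.write("int main(void) { return 0; }\n")

def recipe():
    with open(target, "w") as f:
        f.write("object code\n")

print(build(target, [dep], recipe))   # True: target does not exist yet
print(build(target, [dep], recipe))   # False: target is now up to date
```

Make applies the same timestamp comparison across a whole graph of targets and prerequisites; this sketch shows only a single rule.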
A build server is a server set up to run builds. As opposed to a personal computer, a server allows for a more consistent and available build environment.
Traditionally, a build server was a local computer dedicated as a shared resource instead of being used as a personal computer. Today, there are many cloud computing, software as a service (SaaS) websites for building.
Without a build server, developers typically rely on their personal computers for building, leading to several drawbacks, such as (but not limited to):
A continuous integration server is a build server that is set up to build in a relatively frequent way – often on each code commit. A build server may also be incorporated into an ARA tool or ALM tool.
Typical build triggering options include:
Automating the build process is a required step for implementing continuous integration and continuous delivery (CI/CD) – both of which are considered best practice for software development.[4]
Advantages of build automation include:[5]
|
https://en.wikipedia.org/wiki/Build_automation
|
An integrated development environment (IDE) is a software application that provides comprehensive facilities for software development. An IDE normally consists of at least a source-code editor, build automation tools, and a debugger. Some IDEs, such as IntelliJ IDEA, Eclipse and Lazarus, contain the necessary compiler, interpreter or both; others, such as SharpDevelop and NetBeans, do not.
The boundary between an IDE and other parts of the broader software development environment is not well-defined; sometimes a version control system or various tools to simplify the construction of a graphical user interface (GUI) are integrated. Many modern IDEs also have a class browser, an object browser, and a class hierarchy diagram for use in object-oriented software development.
Integrated development environments are designed to maximize programmer productivity by providing tight-knit components with similar user interfaces. IDEs present a single program in which all development is done. This program typically provides many features for authoring, modifying, compiling, deploying and debugging software. This contrasts with software development using unrelated tools, such as vi, GDB, GNU Compiler Collection, or make.
One aim of the IDE is to reduce the configuration necessary to piece together multiple development utilities. Instead, it provides the same set of capabilities as one cohesive unit. Reducing setup time can increase developer productivity, especially in cases where learning to use the IDE is faster than manually integrating and learning all of the individual tools. Tighter integration of all development tasks has the potential to improve overall productivity beyond just helping with setup tasks. For example, code can be continuously parsed while it is being edited, providing instant feedback when syntax errors are introduced, thus allowing developers to debug code much faster and more easily with an IDE.
Some IDEs are dedicated to a specific programming language, allowing a feature set that most closely matches the programming paradigms of the language. However, there are many multiple-language IDEs.
While most modern IDEs are graphical, text-based IDEs such as Turbo Pascal were in popular use before the availability of windowing systems like Microsoft Windows and the X Window System (X11). They commonly use function keys or hotkeys to execute frequently used commands or macros.
IDEs initially became possible when developing via a console or terminal. Early systems could not support one, since programs were submitted to a compiler or assembler via punched cards, paper tape, etc. Dartmouth BASIC was the first language to be created with an IDE (and was also the first to be designed for use while sitting in front of a console or terminal).[citation needed] Its IDE (part of the Dartmouth Time-Sharing System) was command-based, and therefore did not look much like the menu-driven, graphical IDEs popular after the advent of the graphical user interface. However, it integrated editing, file management, compilation, debugging and execution in a manner consistent with a modern IDE.
Maestro I, a product from Softlab Munich, was the world's first integrated development environment[1] for software. Maestro I was installed for 22,000 programmers worldwide. Until 1989, 6,000 installations existed in the Federal Republic of Germany. Maestro was arguably the world leader in this field during the 1970s and 1980s. Today, one of the last Maestro I systems can be found in the Museum of Information Technology at Arlington in Texas.
One of the first IDEs with a plug-in concept was Softbench. In 1995, Computerwoche commented that the use of an IDE was not well received by developers since it would fence in their creativity.
As of August 2023[update], the most commonly searched-for IDEs on Google Search were Visual Studio, Visual Studio Code, and Eclipse.[2]
The IDE editor usually provides syntax highlighting: it can show the structures, the language keywords and the syntax errors with visually distinct colors and font effects.[3]
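The classification step behind syntax highlighting can be sketched in a few lines of Python: language keywords are matched with a regular expression and wrapped in markers that an editor would render as colors. The `<kw>` markers are invented for this example; real IDEs use full lexers rather than a single regex.

```python
import keyword
import re

# Build one alternation over all Python keywords; \b restricts
# matches to whole identifiers.
KEYWORD_RE = re.compile(r"\b(" + "|".join(keyword.kwlist) + r")\b")

def highlight(line: str) -> str:
    """Wrap each keyword in <kw>...</kw> markers."""
    return KEYWORD_RE.sub(r"<kw>\1</kw>", line)

print(highlight("if x is None: return x"))
# -> <kw>if</kw> x <kw>is</kw> <kw>None</kw>: <kw>return</kw> x
```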
Code completion is an important IDE feature, intended to speed up programming. Modern IDEs even have intelligent code completion.
Code completion is an autocompletion feature in many integrated development environments (IDEs) that speeds up the process of coding applications by fixing common mistakes and suggesting lines of code. This usually happens through popups while typing, querying parameters of functions, and query hints related to syntax errors. Modern code completion software typically uses generative artificial intelligence systems to predict lines of code.[citation needed] Code completion and related tools serve as documentation and disambiguation for variable names, functions, and methods, using static analysis.[4][5]
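The core lookup behind prefix-based completion can be sketched with a sorted symbol table and binary search. The ranking, type information, and AI-based prediction in modern IDEs sit on top of a lookup step like this one; the symbol names below are illustrative.

```python
import bisect

class Completer:
    """Suggest all known symbols that start with a given prefix."""

    def __init__(self, symbols):
        self.symbols = sorted(symbols)

    def complete(self, prefix: str) -> list[str]:
        # Every string starting with `prefix` sorts between `prefix`
        # and `prefix` followed by the highest code point.
        lo = bisect.bisect_left(self.symbols, prefix)
        hi = bisect.bisect_right(self.symbols, prefix + "\uffff")
        return self.symbols[lo:hi]

c = Completer(["print", "property", "pow", "len", "list"])
print(c.complete("pr"))   # -> ['print', 'property']
```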
Advanced IDEs provide support for automated refactoring.[3]
An IDE is expected to provide integrated version control, in order to interact with source repositories.[3]
IDEs are also used for debugging, using an integrated debugger, with support for setting breakpoints in the editor, visual rendering of steps, etc.[9]
IDEs may provide support for code search. Code search has two different meanings. First, it means searching for class and function declarations, usages, variable and field read/write, etc. IDEs can use different kinds of user interface for code search, for example form-based widgets[10] and natural-language based interfaces.
Second, it means searching for a concrete implementation of some specified functionality.[11]
Visual programming is a usage scenario in which an IDE is generally required. Visual Basic allows users to create new applications by moving programming building blocks or code nodes to create flowcharts or structure diagrams that are then compiled or interpreted. These flowcharts are often based on the Unified Modeling Language.
This interface has been popularized with the Lego Mindstorms system and is being actively pursued by a number of companies wishing to capitalize on the power of custom browsers like those found at Mozilla. KTechlab supports flowcode and is a popular open-source IDE and simulator for developing software for microcontrollers. Visual programming is also responsible for the power of distributed programming (cf. LabVIEW and EICASLAB software). An early visual programming system, Max, was modeled after an analog synthesizer design and has been used to develop real-time music performance software since the 1980s. Another early example was Prograph, a dataflow-based system originally developed for the Macintosh. The graphical programming environment "Grape" is used to program qfix robot kits.
This approach is also used in specialist software such as Openlab, where the end-users want the flexibility of a full programming language, without the traditional learning curve associated with one.
Some IDEs support multiple languages, such as GNU Emacs, IntelliJ IDEA, Eclipse, MyEclipse, NetBeans, MonoDevelop, JDoodle or PlayCode.
Support for alternative languages is often provided by plugins, allowing them to be installed on the same IDE at the same time. For example, Flycheck is a modern on-the-fly syntax checking extension for GNU Emacs 24 with support for 39 languages.[12] Another example is JDoodle, an online cloud-based IDE that supports 88 languages.[1] Eclipse and NetBeans have plugins for C/C++, Ada, GNAT (for example AdaGIDE), Perl, Python, Ruby, and PHP, which are selected automatically based on file extension, environment or project settings.
IDEs can be implemented in various languages, for example:
Unix programmers can combine command-line POSIX tools into a complete development environment, capable of developing large programs such as the Linux kernel and its environment.[13] In this sense, the entire Unix system functions as an IDE.[14] The free software GNU toolchain (including the GNU Compiler Collection (GCC), GNU Debugger (GDB), and GNU make) is available on many platforms, including Windows.[15] The pervasive Unix philosophy of "everything is a text stream" enables developers who favor command-line oriented tools to use editors with support for many of the standard Unix and GNU build tools, building an IDE with programs like Emacs[16][17][18] or Vim. Data Display Debugger is intended to be an advanced graphical front-end for many standard text-based debugger tools. Some programmers prefer managing makefiles and their derivatives to the similar code-building tools included in a full IDE. For example, most contributors to the PostgreSQL database use make and GDB directly to develop new features.[19] Even when building PostgreSQL for Microsoft Windows using Visual C++, Perl scripts are used as a replacement for make rather than relying on any IDE features.[20] Some Linux IDEs such as Geany attempt to provide a graphical front end to traditional build operations.
On the various Microsoft Windows platforms, command-line tools for development are seldom used. Accordingly, there are many commercial and non-commercial products. However, each has a different design, commonly creating incompatibilities. Most major compiler vendors for Windows still provide free copies of their command-line tools, including Microsoft (Visual C++, Platform SDK, .NET Framework SDK, nmake utility).
IDEs have always been popular on the Apple Macintosh's classic Mac OS and macOS, dating back to the Macintosh Programmer's Workshop, Turbo Pascal, THINK Pascal and THINK C environments of the mid-1980s. Currently, macOS programmers can choose between native IDEs like Xcode and open-source tools such as Eclipse and NetBeans. ActiveState Komodo is a proprietary multilanguage IDE supported on macOS.
An online integrated development environment, also known as a web IDE or cloud IDE, is a browser-based IDE that allows for software development or web development.[21] An online IDE can be accessed from a web browser, allowing for a portable work environment. An online IDE does not usually contain all of the same features as a traditional or desktop IDE, although all of the basic IDE features, such as syntax highlighting, are typically present.
A Mobile-Based Integrated Development Environment (IDE) is a software application that provides a comprehensive suite of tools for software development on mobile platforms. Unlike traditional desktop IDEs, mobile-based IDEs are designed to run on smartphones and tablets, allowing developers to write, debug, and deploy code directly from their mobile devices.
|
https://en.wikipedia.org/wiki/Integrated_development_environment
|
An anti-pattern in software engineering, project management, and business processes is a common response to a recurring problem that is usually ineffective and risks being highly counterproductive.[1][2] The term, coined in 1995 by computer programmer Andrew Koenig, was inspired by the book Design Patterns (which highlights a number of design patterns in software development that its authors considered to be highly reliable and effective) and first published in his article in the Journal of Object-Oriented Programming.[3] A further paper in 1996 presented by Michael Ackroyd at the Object World West Conference also documented anti-patterns.[3]
It was, however, the 1998 book AntiPatterns that both popularized the idea and extended its scope beyond the field of software design to include software architecture and project management.[3] Other authors have extended it further since to encompass environmental, organizational, and cultural anti-patterns.[4]
According to the authors ofDesign Patterns, there are two key elements to an anti-pattern that distinguish it from a bad habit, bad practice, or bad idea:
A guide to what is commonly used is a "rule-of-three" similar to that for patterns: to be an anti-pattern it must have been witnessed occurring at least three times.[5]
Documenting anti-patterns can be an effective way to analyze a problem space and to capture expert knowledge.[6]
While some anti-pattern descriptions merely document the adverse consequences of the pattern, good anti-pattern documentation also provides an alternative, or a means to ameliorate the anti-pattern.[7]
In software engineering, anti-patterns include the big ball of mud (lack of) design, the god object (where a single class handles all control in a program rather than control being distributed across multiple classes), magic numbers (unique values with an unexplained meaning or multiple occurrences which could be replaced with a named constant), and poltergeists (ephemeral controller classes that only exist to invoke other methods on classes).[7]
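The magic-number anti-pattern and its usual remedy, a named constant, can be shown side by side. The function names, the constant, and the values are illustrative.

```python
# Anti-pattern: 86400 appears with no explanation of its meaning.
def seconds_until_expiry_bad(days: int) -> int:
    return days * 86400

# Remedy: give the value a name once, then reuse the name.
SECONDS_PER_DAY = 24 * 60 * 60   # 86400

def seconds_until_expiry(days: int) -> int:
    return days * SECONDS_PER_DAY

print(seconds_until_expiry(2))   # -> 172800
```

Both functions compute the same result; the named version documents intent and gives the value a single point of change.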
This indicates a software system that lacks a perceivable architecture. Although undesirable from a software engineering point of view, such systems are common in practice due to business pressures, developer turnover and code entropy.
The term was popularized in Brian Foote and Joseph Yoder's 1997 paper of the same name, which defines the term:
A Big Ball of Mud is a haphazardly structured, sprawling, sloppy, duct-tape-and-baling-wire, spaghetti-code jungle. These systems show unmistakable signs of unregulated growth, and repeated, expedient repair. Information is shared promiscuously among distant elements of the system, often to the point where nearly all the important information becomes global or duplicated.
The overall structure of the system may never have been well defined.
If it was, it may have eroded beyond recognition. Programmers with a shred of architectural sensibility shun these quagmires. Only those who are unconcerned about architecture, and, perhaps, are comfortable with the inertia of the day-to-day chore of patching the holes in these failing dikes, are content to work on such systems.
Foote and Yoder have credited Brian Marick as the originator of the "big ball of mud" term for this sort of architecture.[8]
Project management anti-patterns included in theAntipatternsbook include:
|
https://en.wikipedia.org/wiki/Anti-pattern
|
In software engineering, a software design pattern or design pattern is a general, reusable solution to a commonly occurring problem in many contexts in software design.[1] A design pattern is not a rigid structure to be transplanted directly into source code. Rather, it is a description or a template for solving a particular type of problem that can be deployed in many different situations.[2] Design patterns can be viewed as formalized best practices that the programmer may use to solve common problems when designing a software application or system.
Object-oriented design patterns typically show relationships and interactions between classes or objects, without specifying the final application classes or objects that are involved.[citation needed] Patterns that imply mutable state may be unsuited for functional programming languages. Some patterns can be rendered unnecessary in languages that have built-in support for solving the problem they are trying to solve, and object-oriented patterns are not necessarily suitable for non-object-oriented languages.[citation needed]
Design patterns may be viewed as a structured approach to computer programming intermediate between the levels of a programming paradigm and a concrete algorithm.[citation needed]
Patterns originated as an architectural concept by Christopher Alexander as early as 1977 in A Pattern Language (cf. his article, "The Pattern of Streets", Journal of the AIP, September 1966, Vol. 32, No. 5, pp. 273–278). In 1987, Kent Beck and Ward Cunningham began experimenting with the idea of applying patterns to programming – specifically pattern languages – and presented their results at the OOPSLA conference that year.[3][4] In the following years, Beck, Cunningham and others followed up on this work.
Design patterns gained popularity in computer science after the book Design Patterns: Elements of Reusable Object-Oriented Software was published in 1994 by the so-called "Gang of Four" (Erich Gamma, Richard Helm, Ralph Johnson and John Vlissides), which is frequently abbreviated as "GoF". That same year, the first Pattern Languages of Programming Conference was held, and the following year the Portland Pattern Repository was set up for documentation of design patterns. The scope of the term remains a matter of dispute. Notable books in the design pattern genre include:
Although design patterns have been applied practically for a long time, formalization of the concept of design patterns languished for several years.[5]
Design patterns can speed up the development process by providing proven development paradigms.[6]Effective software design requires considering issues that may not become apparent until later in the implementation. Freshly written code can often have hidden, subtle issues that take time to be detected; issues that sometimes can cause major problems down the road. Reusing design patterns can help to prevent such issues,[7]and enhance code readability for those familiar with the patterns.
Software design techniques are difficult to apply to a broader range of problems.[citation needed] Design patterns provide general solutions, documented in a format that does not require specifics tied to a particular problem.
In 1996, Christopher Alexander was invited to give a keynote speech at the 1996 OOPSLA convention. There he reflected on how his work on patterns in architecture had developed, and on his hopes that the software design community could help architecture extend patterns to create living structures that use generative schemes more like computer code.
A pattern describes a design motif, a.k.a. prototypical micro-architecture, as a set of program constituents (e.g., classes, methods...) and their relationships. A developer adapts the motif to their codebase to solve the problem described by the pattern. The resulting code has structure and organization similar to the chosen motif.
Efforts have also been made to codify design patterns in particular domains, including the use of existing design patterns as well as domain-specific design patterns. Examples include user interface design patterns,[8] information visualization,[9] secure design,[10] "secure usability",[11] Web design[12] and business model design.[13]
The annual Pattern Languages of Programming Conference proceedings[14] include many examples of domain-specific patterns.
Design patterns can be organized into groups based on what kind of problem they solve. Creational patterns create objects. Structural patterns organize classes and objects to form larger structures that provide new functionality. Behavioral patterns describe collaboration between objects.
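As one concrete behavioral example, a minimal Observer in Python: a subject notifies registered callables of events, so objects collaborate without the subject knowing anything about its observers. The class and method names are illustrative, not taken from any particular catalog's code.

```python
class Subject:
    """Holds observers and broadcasts events to them."""

    def __init__(self):
        self._observers = []

    def attach(self, observer):
        self._observers.append(observer)

    def notify(self, event):
        for observer in self._observers:
            observer(event)

log = []
subject = Subject()
subject.attach(log.append)                    # any callable can observe
subject.attach(lambda e: log.append(e.upper()))
subject.notify("saved")
print(log)   # -> ['saved', 'SAVED']
```

The subject depends only on the callable interface, which is the decoupling the pattern is meant to provide.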
The documentation for a design pattern describes the context in which the pattern is used, the forces within the context that the pattern seeks to resolve, and the suggested solution.[27] There is no single, standard format for documenting design patterns. Rather, a variety of different formats have been used by different pattern authors. However, according to Martin Fowler, certain pattern forms have become more well-known than others, and consequently become common starting points for new pattern-writing efforts.[28] One example of a commonly used documentation format is the one used by Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides in their book Design Patterns. It contains the following sections:
Some suggest that design patterns may be a sign that features are missing in a given programming language (JavaorC++for instance).Peter Norvigdemonstrates that 16 out of the 23 patterns in theDesign Patternsbook (which is primarily focused on C++) are simplified or eliminated (via direct language support) inLisporDylan.[29]Related observations were made by Hannemann and Kiczales who implemented several of the 23 design patterns using anaspect-oriented programming language(AspectJ) and showed that code-level dependencies were removed from the implementations of 17 of the 23 design patterns and that aspect-oriented programming could simplify the implementations of design patterns.[30]See alsoPaul Graham'sessay "Revenge of the Nerds".[31]
Inappropriate use of patterns may unnecessarily increase complexity.[32]FizzBuzzEnterpriseEditionoffers a humorous example of over-complexity introduced by design patterns.[33]
By definition, a pattern must be programmed anew into each application that uses it. Since some authors see this as a step backward fromsoftware reuseas provided bycomponents, researchers have worked to turn patterns into components. Meyer and Arnout were able to provide full or partial componentization of two-thirds of the patterns they attempted.[34]
In order to achieve flexibility, design patterns may introduce additional levels ofindirection, which may complicate the resulting design and decreaseruntimeperformance.
Software design patterns offer finer granularity compared to software architecture patterns and software architecture styles, as design patterns focus on solving detailed, low-level design problems within individual components or subsystems. Examples include Singleton, Factory Method, and Observer.[35][36][37]
Software Architecture Pattern refers to a reusable, proven solution to a recurring problem at the system level, addressing concerns related to the overall structure, component interactions, and quality attributes of the system.[citation needed] Software architecture patterns operate at a higher level of abstraction than design patterns, solving broader system-level challenges. While these patterns typically affect system-level concerns, the distinction between architectural patterns and architectural styles can sometimes be blurry. Examples include Circuit Breaker.[35][36][37]
Software Architecture Style refers to a high-level structural organization that defines the overall system organization, specifying how components are organized, how they interact, and the constraints on those interactions.[citation needed] Architecture styles typically include a vocabulary of component and connector types, as well as semantic models for interpreting the system's properties. These styles represent the most coarse-grained level of system organization. Examples include Layered Architecture, Microservices, and Event-Driven Architecture.[35][36][37]
|
https://en.wikipedia.org/wiki/Design_pattern_(computer_science)
|
In software engineering, a software development process or software development life cycle (SDLC) is a process of planning and managing software development. It typically involves dividing software development work into smaller, parallel, or sequential steps or sub-processes to improve design and/or product management. The methodology may include the pre-definition of specific deliverables and artifacts that are created and completed by a project team to develop or maintain an application.[1]
Most modern development processes can be vaguely described as agile. Other methodologies include waterfall, prototyping, iterative and incremental development, spiral development, rapid application development, and extreme programming.
A life-cycle "model" is sometimes considered a more general term for a category of methodologies, and a software development "process" is a particular instance as adopted by a specific organization.[citation needed] For example, many specific software development processes fit the spiral life-cycle model. The field is often considered a subset of the systems development life cycle.
The software development methodology framework did not emerge until the 1960s. According to Elliott (2004), the systems development life cycle can be considered to be the oldest formalized methodology framework for building information systems. The main idea of the software development life cycle has been "to pursue the development of information systems in a very deliberate, structured and methodical way, requiring each stage of the life cycle––from the inception of the idea to delivery of the final system––to be carried out rigidly and sequentially"[2] within the context of the framework being applied. The main target of this methodology framework in the 1960s was "to develop large scale functional business systems in an age of large scale business conglomerates. Information systems activities revolved around heavy data processing and number crunching routines."[2]
Requirements gathering and analysis:The first phase of the custom software development process involves understanding the client's requirements and objectives. This stage typically involves engaging in thorough discussions and conducting interviews with stakeholders to identify the desired features, functionalities, and overall scope of the software. The development team works closely with the client to analyze existing systems and workflows, determine technical feasibility, and define project milestones.
Planning and design:Once the requirements are understood, the custom software development team proceeds to create a comprehensive project plan. This plan outlines the development roadmap, including timelines, resource allocation, and deliverables. The software architecture and design are also established during this phase. User interface (UI) and user experience (UX) design elements are considered to ensure the software's usability, intuitiveness, and visual appeal.
Development:With the planning and design in place, the development team begins the coding process. This phase involves writing, testing, and debugging the software code. Agile methodologies, such as scrum or kanban, are often employed to promote flexibility, collaboration, and iterative development. Regular communication between the development team and the client ensures transparency and enables quick feedback and adjustments.
Testing and quality assurance:To ensure the software's reliability, performance, and security, rigorous testing and quality assurance (QA) processes are carried out. Different testing techniques, including unit testing, integration testing, system testing, and user acceptance testing, are employed to identify and rectify any issues or bugs. QA activities aim to validate the software against the predefined requirements, ensuring that it functions as intended.
Deployment and implementation:Once the software passes the testing phase, it is ready for deployment and implementation. The development team assists the client in setting up the software environment, migrating data if necessary, and configuring the system. User training and documentation are also provided to ensure a smooth transition and enable users to maximize the software's potential.
Maintenance and support:After the software is deployed, ongoing maintenance and support become crucial to address any issues, enhance performance, and incorporate future enhancements. Regular updates, bug fixes, and security patches are released to keep the software up-to-date and secure. This phase also involves providing technical support to end users and addressing their queries or concerns.
Methodologies, processes, and frameworks range from specific prescriptive steps that can be used directly by an organization in day-to-day work, to flexible frameworks that an organization uses to generate a custom set of steps tailored to the needs of a specific project or group. In some cases, a "sponsor" or "maintenance" organization distributes an official set of documents that describe the process. Specific examples include:
Since DSDM in 1994, all of the methodologies on the above list except RUP have been agile methodologies – yet many organizations, especially governments, still use pre-agile processes (often waterfall or similar). Software process and software quality are closely interrelated; some unexpected facets and effects have been observed in practice.[3]
Among these, another software development process has been established in open source. The adoption of these open best practices and established processes within the confines of a company is called inner source.
Software prototyping is about creating prototypes, i.e. incomplete versions of the software program being developed.
The basic principles are:[1]
A basic understanding of the fundamental business problem is necessary to avoid solving the wrong problems, but this is true for all software methodologies.
"Agile software development" refers to a group of software development frameworks based on iterative development, where requirements and solutions evolve via collaboration between self-organizing cross-functional teams. The term was coined in 2001, when the Agile Manifesto was formulated.
Agile software development uses iterative development as a basis but advocates a lighter and more people-centric viewpoint than traditional approaches. Agile processes fundamentally incorporate iteration and the continuous feedback that it provides to successively refine and deliver a software system.
The Agile model also includes the following software development processes:
Continuous integration is the practice of merging all developer working copies to a shared mainline several times a day.[4] Grady Booch first named and proposed CI in his 1991 method,[5] although he did not advocate integrating several times a day. Extreme programming (XP) adopted the concept of CI and did advocate integrating more than once per day – perhaps as many as tens of times per day.
Various methods are acceptable for combining linear and iterative systems development methodologies, with the primary objective of each being to reduce inherent project risk by breaking a project into smaller segments and providing more ease-of-change during the development process.
There are three main variants of incremental development:[1]
Rapid application development (RAD) is a software development methodology which favors iterative development and the rapid construction of prototypes instead of large amounts of up-front planning. The "planning" of software developed using RAD is interleaved with writing the software itself. The lack of extensive pre-planning generally allows software to be written much faster and makes it easier to change requirements.
The rapid development process starts with the development of preliminary data models and business process models using structured techniques. In the next stage, requirements are verified using prototyping, eventually to refine the data and process models. These stages are repeated iteratively; further development results in "a combined business requirements and technical design statement to be used for constructing new systems".[6]
The term was first used to describe a software development process introduced by James Martin in 1991. According to Whitten (2003), it is a merger of various structured techniques, especially data-driven information technology engineering, with prototyping techniques to accelerate software systems development.[6]
The basic principles of rapid application development are:[1]
The waterfall model is a sequential development approach, in which development is seen as flowing steadily downwards (like a waterfall) through several phases, typically:
The first formal description of the method is often cited as an article published by Winston W. Royce[7] in 1970, although Royce did not use the term "waterfall" in this article. Royce presented this model as an example of a flawed, non-working model.[8]
The basic principles are:[1]
The waterfall model is a traditional engineering approach applied to software engineering. A strict waterfall approach discourages revisiting and revising any prior phase once it is complete.[according to whom?] This "inflexibility" in a pure waterfall model has been a source of criticism by supporters of other more "flexible" models. It has been widely blamed for several large-scale government projects running over budget, over time and sometimes failing to deliver on requirements due to the big design up front approach.[according to whom?] Except when contractually required, the waterfall model has been largely superseded by more flexible and versatile methodologies developed specifically for software development.[according to whom?] See Criticism of waterfall model.
In 1988, Barry Boehm published a formal software system development "spiral model," which combines some key aspects of the waterfall model and rapid prototyping methodologies, in an effort to combine advantages of top-down and bottom-up concepts. It provided emphasis on a key area many felt had been neglected by other methodologies: deliberate iterative risk analysis, particularly suited to large-scale complex systems.
The basic principles are:[1]
Shape Up is a software development approach introduced by Basecamp in 2018. It is a set of principles and techniques that Basecamp developed internally to overcome the problem of projects dragging on with no clear end. Its primary target audience is remote teams. Shape Up has no estimation and velocity tracking, backlogs, or sprints, unlike waterfall, agile, or scrum. Instead, those concepts are replaced with appetite, betting, and cycles. As of 2022, besides Basecamp, notable organizations that have adopted Shape Up include UserVoice and Block.[12][13]
Other high-level software project methodologies include:
Some "process models" are abstract descriptions for evaluating, comparing, and improving the specific process adopted by an organization.
|
https://en.wikipedia.org/wiki/Software_development_methodology
|
The following outline is provided as an overview of and topical guide to computer engineering:
Computer engineering – discipline that integrates several fields of electrical engineering and computer science required to develop computer hardware and software.[1] Computer engineers usually have training in electronic engineering (or electrical engineering), software design, and hardware–software integration, instead of only software engineering or electronic engineering. Computer engineers are involved in many hardware and software aspects of computing, from the design of individual microcontrollers, microprocessors, personal computers, and supercomputers, to circuit design. This field of engineering not only focuses on how computer systems themselves work, but also how they integrate into the larger picture.[2]
|
https://en.wikipedia.org/wiki/Outline_of_computer_engineering
|
The following outline is provided as an overview of and topical guide to computer programming:
Computer programming – process that leads from an original formulation of a computing problem to executable computer programs. Programming involves activities such as analysis, developing understanding, generating algorithms, verification of requirements of algorithms including their correctness and resource consumption, and implementation (commonly referred to as coding[1][2]) of algorithms in a target programming language. Source code is written in one or more programming languages. The purpose of programming is to find a sequence of instructions that will automate performing a specific task or solving a given problem.
Programming language – formal constructed language designed to communicate instructions to a machine, particularly a computer. Programming languages can be used to create programs to control the behavior of a machine or to express algorithms.
The top 20 most popular programming languages as of December 2022:[3]
Programming language comparisons
Software engineering–
|
https://en.wikipedia.org/wiki/Outline_of_computer_programming
|
The following outline is provided as an overview of and topical guide to software development:
Software development – development of a software product, which entails computer programming (the process of writing and maintaining the source code), and encompasses a planned and structured process from the conception of the desired software to its final manifestation.[1] Therefore, software development may include research, new development, prototyping, modification, reuse, re-engineering, maintenance, or any other activities that result in software products.[2]
Software development can be described as all of the following:
While the information technology (IT) industry undergoes changes faster than any other field, most technical experts agree that one must have a community to consult, learn from, or share experiences with. Here is a list of well-known software development organizations.
|
https://en.wikipedia.org/wiki/Outline_of_software_development
|
The following outline is provided as an overview of and topical guide to web design and web development, two closely related fields:
Web design – field that encompasses many different skills and disciplines in the production and maintenance of websites. The different areas of web design include web graphic design; interface design; authoring, including standardized code and proprietary software; user experience design; and search engine optimization. Often many individuals will work in teams covering different aspects of the design process, although some designers will cover them all.[1] The term web design is normally used to describe the design process relating to the front-end (client side) design of a website, including writing markup. Web design partially overlaps web engineering in the broader scope of web development. Web designers are expected to have an awareness of usability and, if their role involves creating markup, are also expected to be up to date with web accessibility guidelines.
Web development – work involved in developing a web site for the Internet (World Wide Web) or an intranet (a private network).[2] Web development can range from developing a simple single static page of plain text to complex web-based internet applications (web apps), electronic businesses, and social network services. A more comprehensive list of tasks to which web development commonly refers may include web engineering, web design, web content development, client liaison, client-side/server-side scripting, web server and network security configuration, and e-commerce development.
Among web professionals, "web development" usually refers to the main non-design aspects of building web sites: writing markup and coding.[3] Web development may use content management systems (CMS) to make content changes easier and available with basic technical skills.
For larger organizations and businesses, web development teams can consist of hundreds of people (web developers) and follow standard methods like Agile methodologies while developing websites. Smaller organizations may only require a single permanent or contracting developer, or secondary assignment to related job positions such as a graphic designer or information systems technician. Web development may be a collaborative effort between departments rather than the domain of a designated department. There are three kinds of web developer specialization: front-end developer, back-end developer, and full-stack developer. Front-end developers are responsible for behaviour and visuals that run in the user's browser, back-end developers deal with the servers, and full-stack developers are responsible for both. Demand for React and Node.js developers is currently high worldwide.
|
https://en.wikipedia.org/wiki/Outline_of_web_design_and_web_development
|
The following outline is provided as an overview of and topical guide to computers:
Computers– programmable machines designed to automatically carry out sequences of arithmetic or logical operations. The sequences of operations can be changed readily, allowing computers to solve more than one kind of problem.
Computers can be described as all of the following:
Computer architecture–
History of computing hardware
Software development–
Computer magazines – See List of computer magazines
|
https://en.wikipedia.org/wiki/Outline_of_computers
|
This is an alphabetical list of articles pertaining specifically to software engineering.
2D computer graphics—3D computer graphics
Abstract syntax tree—Abstraction—Accounting software—Ada—Addressing mode—Agile software development—Algorithm—Anti-pattern—Application framework—Application software—Artificial intelligence—Artificial neural network—ASCII—Aspect-oriented programming—Assembler—Assembly language—Assertion—Automata theory—Automotive software—Avionics software
Backward compatibility—BASIC—BCPL—Berkeley Software Distribution—Beta test—Boolean logic—Business software
C—C++—C#—CAD—Canonical model—Capability Maturity Model—Capability Maturity Model Integration—COBOL—Code coverage—Cohesion—Compilers—Complexity—Computation—Computational complexity theory—Computer—Computer-aided design—Computer-aided manufacturing—Computer architecture—Computer bug—Computer file—Computer graphics—Computer model—Computer multitasking—Computer programming—Computer science—Computer software—Computer term etymologies—Concurrent programming—Configuration management—Coupling—Cyclomatic complexity
Data structure—Data-structured language—Database—Dead code—Decision table—Declarative programming—Design pattern—Development stage—Device driver—Disassembler—Disk image—Domain-specific language
EEPROM—Electronic design automation—Embedded system—Engineering—Engineering model—EPROM—Even-odd rule—Expert system—Extreme programming
FIFO (computing and electronics)—File system—Filename extension—Finite-state machine—Firmware—Formal methods—Forth—Fortran—Forward compatibility—Functional decomposition—Functional design—Functional programming
Game development—Game programming—Game tester—GIMP Toolkit—Graphical user interface
Hierarchical database—High-level language—Hoare logic—Human–computer interaction—Hyperlink—Hyper-threading
IEEE Software—Imperative programming—Information technology engineering—Information systems—Information technology—Instruction set—Interactive programming—Interface description language—Intermediate language—Interpreter—Invariant—ISO—ISO 9000—ISO 9001—ISO 9660—ISO/IEC 12207—ISO image—Iterative development
Java—Java Modeling Language—Java virtual machine
Kernel—Knowledge management
Level design—Level designer—LIFO—Linux—List of programming languages—Literate programming
Machine code—Machine language—Mainframe—Medical informatics—Medical software—Mesh networking—Metadata (computing)—Microcode—Microprogram—Microsoft Windows—Minicomputer—MIPS architecture—Multi-paradigm programming language
Neural network software—Numerical analysis
Object code—Object database—Object-oriented programming—Ontology—Opcode—Open implementation—Open-source software—Operating system
Packet writing—Pair programming—Parallax scrolling—Pascal—p-code machine—Perl—PHP—Post-object programming—Privacy Engineering—Procedural programming—Processor register—Program specification—Programming language—Programming paradigm—Programming tool—Project lifecycle—Proprietary software—Python
Qt (toolkit)—Query optimizer—Queueing theory
Rapid application development—Rational Unified Process—Real-time operating system—Refactoring—Reflection—Regression testing—Relational database—Release to manufacturing—Reliability engineering—Requirement—Requirements analysis—Revision control—Robotics
Scripting language—Second-system effect—Signal analysis—Simulation—Software—Software architecture—Software bloat—Software brittleness—Software componentry—Software configuration management—Software development cycle—Software development process—Software engineering—Software framework—Software maintenance—Software metric—Source code—Source lines of code—Specification language—Sprite—SQL—Standard data model—SCAMPI—Stack (abstract data type)—Static code analysis—Static single-assignment form—Statistical package—String—Structured programming—Structured Query Language—Subroutine—Supercomputer—Systems architect—Systems development life cycle—Systems design—SPICE (ISO15504)
Tcl—Texture mapping—Theory of computation—Think aloud protocol—Thread—Threaded code—Three-address code—Timeboxing—TinyOS
UCSD p-System—Unix—Usability—Usability testing—User interface
Video games—Virtual finite-state machine—Visual Basic (classic)—Visual Basic .NET
Waterfall model—Wiki—Windows—Windows Vista
Xerox PARC—
YouTube—
Z notation—
|
https://en.wikipedia.org/wiki/Index_of_software_engineering_articles
|
Search-based software engineering (SBSE) applies metaheuristic search techniques such as genetic algorithms, simulated annealing, and tabu search to software engineering problems. Many activities in software engineering can be stated as optimization problems. Optimization techniques of operations research such as linear programming or dynamic programming are often impractical for large-scale software engineering problems because of their computational complexity or their assumptions on the problem structure. Researchers and practitioners use metaheuristic search techniques, which impose few assumptions on the problem structure, to find near-optimal or "good-enough" solutions.[1]
SBSE problems can be divided into two types:
SBSE converts a software engineering problem into a computational search problem that can be tackled with a metaheuristic. This involves defining a search space, or the set of possible solutions. This space is typically too large to be explored exhaustively, suggesting a metaheuristic approach. A metric[3] (also called a fitness function, cost function, objective function, or quality measure) is then used to measure the quality of potential solutions. Many software engineering problems can be reformulated as a computational search problem.[4]
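This recipe — a search space, a fitness function, and a metaheuristic to explore it — can be sketched with a simple hill climber. The concrete problem below (choosing a subset of requirements to maximize value under a cost budget) and all of its data are invented for illustration:

```python
import random

# Illustrative search space: bit vectors selecting from 8 candidate requirements.
VALUES = [9, 4, 7, 1, 8, 3, 6, 2]   # value of each requirement (assumed data)
COSTS  = [5, 2, 4, 1, 6, 2, 3, 1]   # cost of each requirement (assumed data)
BUDGET = 12

def fitness(selection):
    """Quality measure: total value, with over-budget selections scored as worthless."""
    cost = sum(c for c, s in zip(COSTS, selection) if s)
    value = sum(v for v, s in zip(VALUES, selection) if s)
    return value if cost <= BUDGET else 0

def hill_climb(steps=1000, seed=0):
    """Flip one bit at a time, keeping the move only if fitness does not decrease."""
    rng = random.Random(seed)
    current = [0] * len(VALUES)
    for _ in range(steps):
        candidate = current[:]
        i = rng.randrange(len(candidate))
        candidate[i] ^= 1  # step to a neighbor in the search space
        if fitness(candidate) >= fitness(current):
            current = candidate
    return current, fitness(current)

selection, score = hill_climb()
print(selection, score)
```

Real SBSE tools use the same structure with richer representations and metaheuristics (genetic algorithms, simulated annealing); only the encoding and the fitness function change per problem.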
The term "search-based application", in contrast, refers to using search-engine technology, rather than search techniques, in another industrial application.
One of the earliest attempts to apply optimization to a software engineering problem was reported by Webb Miller and David Spooner in 1976 in the area of software testing.[5] In 1992, S. Xanthakis and his colleagues applied a search technique to a software engineering problem for the first time.[6] The term SBSE was first used in 2001 by Harman and Jones.[7] The research community grew to include more than 800 authors by 2013, spanning approximately 270 institutions in 40 countries.[8]
Search-based software engineering is applicable to almost all phases of the software development process. Software testing has been one of the major applications.[9] Search techniques have been applied to other software engineering activities, for instance, requirements analysis,[10][11] design,[12][13] refactoring,[14] development,[15] and maintenance.[16]
Requirements engineering is the process by which the needs of a software's users and environment are determined and managed. Search-based methods have been used for requirements selection and optimisation with the goal of finding the best possible subset of requirements that matches user requests amid constraints such as limited resources and interdependencies between requirements. This problem is often tackled as a multiple-criteria decision-making problem and generally involves presenting the decision maker with a set of good compromises between cost and user satisfaction, as well as the requirements risk.[17][18][19][20]
Identifying a software bug (or a code smell) and then debugging (or refactoring) the software is largely a manual and labor-intensive endeavor, though the process is tool-supported. One objective of SBSE is to automatically identify and fix bugs (for example via mutation testing).
Genetic programming, a biologically inspired technique that involves evolving programs through the use of crossover and mutation, has been used to search for repairs to programs by altering a few lines of source code. The GenProg Evolutionary Program Repair software repaired 55 out of 105 bugs for approximately $8 each in one test.[21]
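At toy scale, the idea behind search-based repair — score candidate edits against a test suite and keep the fittest variant — can be shown directly. The buggy function, edit space, and unit tests below are all invented for illustration; a real tool such as GenProg searches a vastly larger space of source-level edits with genetic operators rather than enumerating it:

```python
# A deliberately buggy "program": clamp(x) should compute min(max(x, 0), 10),
# but it shipped with the wrong upper bound. (Toy example, invented here.)
def make_clamp(lo, hi):
    return lambda x: min(max(x, lo), hi)

BUGGY = (0, 7)                                  # the faulty constants as shipped
TESTS = [(-5, 0), (3, 3), (12, 10), (10, 10)]   # (input, expected) unit tests

def passed(program):
    """Fitness: how many unit tests a candidate variant passes."""
    return sum(program(x) == want for x, want in TESTS)

def repair():
    """Score every variant in a small edit space around the constants.
    Real repair tools use genetic search instead, because the space of
    source-level edits is far too large to enumerate."""
    candidates = ((lo, hi) for lo in range(-3, 4) for hi in range(0, 16))
    return max(candidates, key=lambda c: passed(make_clamp(*c)))

print(passed(make_clamp(*BUGGY)))  # → 2 (the shipped program fails two tests)
print(repair())                    # → (0, 10): the variant passing all four tests
```

The test suite acts as the fitness function, which is why search-based repair is only as trustworthy as the tests that guide it.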
Coevolution adopts a "predator and prey" metaphor in which a suite of programs and a suite of unit tests evolve together and influence each other.[22]
Search-based software engineering has been applied to software testing, including the automatic generation of test cases (test data), test case minimization, and test case prioritization.[23] Regression testing has also received some attention.
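Test case prioritization is often approached greedily: repeatedly schedule the test that covers the most not-yet-covered items. The coverage data below is hypothetical, standing in for branch or statement coverage a real tool would measure:

```python
# Hypothetical coverage data: test name -> set of covered code branches.
COVERAGE = {
    "t1": {1, 2, 3},
    "t2": {3, 4},
    "t3": {5},
    "t4": {1, 2, 3, 4},
}

def prioritize(coverage):
    """Greedy 'additional coverage' strategy: order tests by how much new
    coverage each one adds, so faults are likely to surface early in the run."""
    remaining = dict(coverage)
    covered = set()
    order = []
    while remaining:
        # Pick the test adding the most uncovered branches (ties broken by name).
        name = max(sorted(remaining), key=lambda t: len(remaining[t] - covered))
        order.append(name)
        covered |= remaining.pop(name)
    return order

print(prioritize(COVERAGE))  # → ['t4', 't3', 't1', 't2']
```

Metaheuristic approaches generalize this by searching over whole orderings with fitness functions such as average percentage of faults detected, where the greedy heuristic is only one starting point.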
The use of SBSE in program optimization, or modifying a piece of software to make it more efficient in terms of speed and resource use, has been the object of successful research.[24] In one instance, a 50,000-line program was genetically improved, resulting in a program 70 times faster on average.[25] A recent work by Basios et al. shows that by optimising its data structures, Google Guava gained a 9% improvement in execution time, a 13% improvement in memory consumption, and a 4% improvement in CPU usage, measured separately.[26]
A number of decisions that are normally made by a project manager can be done automatically, for example, project scheduling.[27]
Tools available for SBSE include OpenPAT,[28] EvoSuite,[29] and Coverage, a code coverage measurement tool for Python.[30]
A number of methods and techniques are available, including:
As a relatively new area of research, SBSE does not yet experience broad industry acceptance.
Successful applications of SBSE in the industry can mostly be found within software testing, where the capability to automatically generate random test inputs for uncovering bugs at scale is attractive to companies. In 2017, Facebook acquired the software startup Majicke Limited, which developed Sapienz, a search-based bug-finding app.[32]
In other application scenarios, software engineers may be reluctant to adopt tools over which they have little control or that generate solutions that are unlike those that humans produce.[33]In the context of SBSE use in fixing or improving programs, developers need to be confident that any automatically produced modification does not generate unexpected behavior outside the scope of a system's requirements and testing environment. Considering that fully automated programming has yet to be achieved, a desirable property of such modifications would be that they need to be easily understood by humans to support maintenance activities.[34]
Another concern is that SBSE might make the software engineer redundant. Supporters claim that the motivation for SBSE is to enhance the relationship between the engineer and the program.[35]
|
https://en.wikipedia.org/wiki/Search-based_software_engineering
|
The Software Engineering Body of Knowledge (SWEBOK (/ˈswiːˌbɒk/ SWEE-bok)) refers to the collective knowledge, skills, techniques, methodologies, best practices, and experiences accumulated within the field of software engineering over time. A baseline for this body of knowledge is presented in the Guide to the Software Engineering Body of Knowledge,[1] also known as the SWEBOK Guide, an ISO/IEC standard originally recognized as ISO/IEC TR 19759:2005[2] and later revised as ISO/IEC TR 19759:2015.[3] The SWEBOK Guide serves as a compendium and guide to the body of knowledge that has been developing and evolving over the past decades.
The SWEBOK Guide has been created through cooperation among several professional bodies and members of industry and is published by the IEEE Computer Society,[4] from which it can be accessed for free. In late 2013, SWEBOK V3 was approved for publication and released.[5] In 2016, the IEEE Computer Society began the SWEBOK Evolution effort to develop future iterations of the body of knowledge.[6] The SWEBOK Evolution project resulted in the publication of SWEBOK Guide version 4 in October 2024.[7]
The published version of SWEBOK V3 has the following 15 knowledge areas (KAs) within the field of software engineering:
It also recognized, but did not define, these related disciplines:
The 2004 edition of the SWEBOK Guide, known as SWEBOK 2004, defined ten knowledge areas (KAs) within the field of software engineering:
The following disciplines are also defined as being related to software engineering:
A similar effort to define a body of knowledge for software engineering is the "Computing Curriculum Software Engineering (CCSE)," officially named Software Engineering 2004 (SE2004). The curriculum largely overlaps with SWEBOK 2004, since the latter was used as one of its sources, although it is more directed towards academia. Whereas the SWEBOK Guide defines the software engineering knowledge that practitioners should have after four years of practice, SE2004 defines the knowledge that an undergraduate software engineering student should possess upon graduation (including knowledge of mathematics, general engineering principles, and other related areas). SWEBOK V3 aims to address these intersections.
|
https://en.wikipedia.org/wiki/SWEBOK
|
Software Engineering 2004 (SE2004), formerly known as Computing Curriculum Software Engineering (CCSE), is a document that provides recommendations for undergraduate education in software engineering. SE2004 was initially developed by a steering committee between 2001 and 2004. Its development was sponsored by the Association for Computing Machinery and the IEEE Computer Society. Important components of SE2004 include the Software Engineering Education Knowledge, a list of topics that all graduates should know, as well as a set of guidelines for implementing curricula and a set of proposed courses.[citation needed]
|
https://en.wikipedia.org/wiki/CCSE
|
Complexity characterizes the behavior of a system or model whose components interact in multiple ways and follow local rules, leading to non-linearity, randomness, collective dynamics, hierarchy, and emergence.[1][2]
The term is generally used to characterize something with many parts where those parts interact with each other in multiple ways, culminating in a higher order of emergence greater than the sum of its parts. The study of these complex linkages at various scales is the main goal ofcomplex systems theory.
The intuitive criterion of complexity can be formulated as follows: a system would be more complex if more parts could be distinguished, and if more connections between them existed.[3]
As of 2010[update], a number of approaches to characterizing complexity have been used in science; Zayed et al.[4] reflect many of these. Neil Johnson states that "even among scientists, there is no unique definition of complexity – and the scientific notion has traditionally been conveyed using particular examples..." Ultimately Johnson adopts the definition of "complexity science" as "the study of the phenomena which emerge from a collection of interacting objects".[5]
Definitions of complexity often depend on the concept of a "system" – a set of parts or elements that have relationships among them differentiated from relationships with other elements outside the relational regime. Many definitions tend to postulate or assume that complexity expresses a condition of numerous elements in a system and numerous forms of relationships among the elements. However, what one sees as complex and what one sees as simple is relative and changes with time.
Warren Weaver posited in 1948 two forms of complexity: disorganized complexity and organized complexity.[6] Phenomena of 'disorganized complexity' are treated using probability theory and statistical mechanics, while 'organized complexity' deals with phenomena that escape such approaches and confront "dealing simultaneously with a sizable number of factors which are interrelated into an organic whole".[6] Weaver's 1948 paper has influenced subsequent thinking about complexity.[7]
The approaches that embody concepts of systems, multiple elements, multiple relational regimes, and state spaces might be summarized as implying that complexity arises from the number of distinguishable relational regimes (and their associated state spaces) in a defined system.
Some definitions relate to the algorithmic basis for the expression of a complex phenomenon or model or mathematical expression, as later set out herein.
One of the problems in addressing complexity issues has been formalizing the intuitive conceptual distinction between the large number of variances in relationships extant in random collections, and the sometimes large, but smaller, number of relationships between elements in systems where constraints (related to correlation of otherwise independent elements) simultaneously reduce the variations from element independence and create distinguishable regimes of more-uniform, or correlated, relationships, or interactions.
Weaver perceived and addressed this problem, in at least a preliminary way, in drawing a distinction between "disorganized complexity" and "organized complexity".
In Weaver's view, disorganized complexity results from the particular system having a very large number of parts, say millions of parts, or many more. Though the interactions of the parts in a "disorganized complexity" situation can be seen as largely random, the properties of the system as a whole can be understood by using probability and statistical methods.
A prime example of disorganized complexity is a gas in a container, with the gas molecules as the parts. Some would suggest that a system of disorganized complexity may be compared with the (relative) simplicity of planetary orbits – the latter can be predicted by applying Newton's laws of motion. Of course, most real-world systems, including planetary orbits, eventually become theoretically unpredictable even using Newtonian dynamics, as discovered by modern chaos theory.[8]
Organized complexity, in Weaver's view, resides in nothing other than the non-random, or correlated, interaction between the parts. These correlated relationships create a differentiated structure that can, as a system, interact with other systems. The coordinated system manifests properties not carried or dictated by individual parts. The organized aspect of this form of complexity, with regard to other systems rather than the subject system, can be said to "emerge" without any "guiding hand".
The number of parts does not have to be very large for a particular system to have emergent properties. A system of organized complexity may be understood in its properties (behavior among the properties) through modeling and simulation, particularly modeling and simulation with computers. An example of organized complexity is a city neighborhood as a living mechanism, with the neighborhood people among the system's parts.[9]
There are generally rules which can be invoked to explain the origin of complexity in a given system.
The source of disorganized complexity is the large number of parts in the system of interest, and the lack of correlation between elements in the system.
In the case of self-organizing living systems, usefully organized complexity comes from beneficially mutated organisms being selected to survive by their environment for their differential reproductive ability, or at least success over inanimate matter or less organized complex organisms. See e.g. Robert Ulanowicz's treatment of ecosystems.[10]
Complexity of an object or system is a relative property. For instance, for many functions (problems), a computational complexity such as computation time is smaller when multitape Turing machines are used than when Turing machines with one tape are used. Random-access machines allow time complexity to be decreased even further (Greenlaw and Hoover 1998: 226), while inductive Turing machines can decrease even the complexity class of a function, language, or set (Burgin 2005). This shows that the tools of activity can be an important factor of complexity.
In several scientific fields, "complexity" has a precise meaning:
Other fields introduce less precisely defined notions of complexity:
Complexity has always been a part of our environment, and therefore many scientific fields have dealt with complex systems and phenomena. From one perspective, that which is somehow complex – displaying variation without being random – is most worthy of interest given the rewards found in the depths of exploration.
The use of the term complex is often confused with the term complicated. In today's systems, this is the difference between myriad connecting "stovepipes" and effective "integrated" solutions.[17] This means that complex is the opposite of independent, while complicated is the opposite of simple.
While this has led some fields to come up with specific definitions of complexity, there is a more recent movement to regroup observations from different fields to study complexity in itself, whether it appears in anthills, human brains or social systems.[18] One such interdisciplinary group of fields is relational order theories.
The behavior of a complex system is often said to be due to emergence and self-organization. Chaos theory has investigated the sensitivity of systems to variations in initial conditions as one cause of complex behaviour.
Recent developments inartificial life,evolutionary computationandgenetic algorithmshave led to an increasing emphasis on complexity and complex adaptive systems.
In social science, the study of the emergence of macro-properties from micro-properties is known as the macro-micro view in sociology. The topic is commonly recognized as social complexity, which is often related to the use of computer simulation in social science, i.e. computational sociology.
Systems theory has long been concerned with the study of complex systems (in recent times, complexity theory and complex systems have also been used as names of the field). These systems are present in the research of a variety of disciplines, including biology, economics, social studies and technology. Recently, complexity has become a natural domain of interest of real-world socio-cognitive systems and emerging systemics research. Complex systems tend to be high-dimensional, non-linear, and difficult to model. In specific circumstances, they may exhibit low-dimensional behaviour.
In information theory, algorithmic information theory is concerned with the complexity of strings of data.
Complex strings are harder to compress. While intuition tells us that this may depend on the codec used to compress a string (a codec could be theoretically created in any arbitrary language, including one in which the very small command "X" could cause the computer to output a very complicated string like "18995316"), any two Turing-complete languages can be implemented in each other, meaning that the length of two encodings in different languages will vary by at most the length of the "translation" language – which will end up being negligible for sufficiently large data strings.
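As a rough illustration, a general-purpose compressor can serve as a crude, codec-dependent proxy for this notion of string complexity (Kolmogorov complexity itself is uncomputable). The sketch below uses Python's zlib; the particular strings are invented for the example:

```python
import random
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size / original size: a crude stand-in for the
    algorithmic complexity of a string (higher = harder to compress)."""
    return len(zlib.compress(data, 9)) / len(data)

random.seed(0)
regular = b"ab" * 500                                       # highly patterned
noisy = bytes(random.randrange(256) for _ in range(1000))   # random-looking

# The patterned string compresses dramatically; the noisy one barely at all.
assert compression_ratio(regular) < 0.1
assert compression_ratio(noisy) > 0.9
```

Note that, as the passage above observes, such a measure assigns its highest values to random noise, which many would regard as not complex at all.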
These algorithmic measures of complexity tend to assign high values to random noise. However, under a certain understanding of complexity, arguably the most intuitive one, random noise is meaningless and so not complex at all.
Information entropy is also sometimes used in information theory as indicative of complexity, but entropy is also high for randomness. In the case of complex systems, information fluctuation complexity was designed so as not to measure randomness as complex, and it has been useful in many applications. More recently, a complexity metric was developed for images that can avoid measuring noise as complex by using the minimum description length principle.[19]
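The entropy of a string's empirical symbol distribution can be computed directly, as a small sketch makes clear; it measures spread, not structure, which is why it is high for randomness:

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Shannon entropy, in bits per symbol, of the empirical
    symbol distribution of the string s."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A constant string has zero entropy; a string using two symbols
# equally often has exactly one bit per symbol.
assert shannon_entropy("aaaa") == 0.0
assert abs(shannon_entropy("abab") - 1.0) < 1e-9
```

A uniformly random string over a large alphabet maximizes this quantity, illustrating the passage's point that entropy alone conflates randomness with complexity.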
There has also been interest in measuring the complexity of classification problems in supervised machine learning. This can be useful in meta-learning to determine for which data sets filtering (removing suspected noisy instances from the training set) is the most beneficial[20] and could be expanded to other areas. For binary classification, such measures can consider the overlaps in feature values from differing classes, the separability of the classes, and measures of geometry, topology, and density of manifolds.[21]
For non-binary classification problems, instance hardness[22] is a bottom-up approach that first seeks to identify instances that are likely to be misclassified (assumed to be the most complex). The characteristics of such instances are then measured using supervised measures such as the number of disagreeing neighbors or the likelihood of the assigned class label given the input features.
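The "disagreeing neighbors" idea can be sketched with a plain k-nearest-neighbors pass; this is an illustrative toy, not any particular published hardness measure, and the point set and labels are invented for the example:

```python
def disagreeing_neighbors(points, labels, index, k=3):
    """Fraction of the k nearest neighbors (squared Euclidean distance,
    excluding the instance itself) whose label differs from the
    instance's own label: a simple proxy for instance hardness."""
    yi = labels[index]
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    order = sorted((j for j in range(len(points)) if j != index),
                   key=lambda j: dist(points[index], points[j]))
    return sum(labels[j] != yi for j in order[:k]) / k

# Two clusters, plus one class-1 point planted inside the class-0 cluster.
pts = [(0, 0), (0, 1), (1, 0), (0.5, 0.5), (5, 5), (5, 6), (6, 5), (6, 6)]
lbl = [0, 0, 0, 1, 1, 1, 1, 1]

assert disagreeing_neighbors(pts, lbl, 4) == 0.0   # easy: deep in its cluster
assert disagreeing_neighbors(pts, lbl, 3) == 1.0   # hard: surrounded by class 0
```

Instances scoring near 1.0 are the ones a nearest-neighbor-style classifier would likely misclassify, matching the intuition described above.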
A recent study based on molecular simulations and compliance constants describes molecular recognition as a phenomenon of organisation.[23] Even for small molecules like carbohydrates, the recognition process cannot be predicted or designed, even assuming that each individual hydrogen bond's strength is exactly known.
Deriving from the law of requisite variety, Boisot and McKelvey formulated the "Law of Requisite Complexity", which holds that, in order to be efficaciously adaptive, the internal complexity of a system must match the external complexity it confronts.[24]
The application of the Law of Requisite Complexity in project management, as proposed by Stefan Morcov, is the analysis of positive, appropriate and negative complexity.[25][26]
Project complexity is the property of a project which makes it difficult to understand, foresee, and keep its overall behavior under control, even when given reasonably complete information about the project system.[27][28]
Maik Maurer considers complexity as a reality in engineering. He proposed a methodology for managing complexity in systems engineering:[29]
1. Define the system.
2. Identify the type of complexity.
3. Determine the strategy.
4. Determine the method.
5. Model the system.
6. Implement the method.
Computational complexity theory is the study of the complexity of problems – that is, the difficulty of solving them. Problems can be classified by complexity class according to the time it takes for an algorithm – usually a computer program – to solve them as a function of the problem size. Some problems are difficult to solve, while others are easy. For example, some difficult problems need algorithms that take an exponential amount of time in terms of the size of the problem to solve. Take the travelling salesman problem, for example. It can be solved, as denoted in Big O notation, in time O(n²2ⁿ), where n is the size of the network to visit – the number of cities the travelling salesman must visit exactly once. As the size of the network of cities grows, the time needed to find the route grows (more than) exponentially.
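The O(n²2ⁿ) bound quoted above is achieved by the Held–Karp dynamic program, which is exponentially better than checking all (n−1)! tours. A minimal sketch, with a distance matrix invented for illustration:

```python
from itertools import combinations

def held_karp(dist):
    """Exact minimum travelling-salesman tour cost starting and ending
    at city 0, via the Held-Karp dynamic program (O(n^2 * 2^n) time)."""
    n = len(dist)
    # best[(S, j)]: cheapest cost to leave city 0, visit exactly the
    # set S of non-zero cities, and end at city j (j in S).
    best = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = frozenset(subset)
            for j in subset:
                best[(S, j)] = min(best[(S - {j}, k)] + dist[k][j]
                                   for k in subset if k != j)
    full = frozenset(range(1, n))
    return min(best[(full, j)] + dist[j][0] for j in range(1, n))

# Four cities with unit distances along a square and 2 across diagonals:
# the optimal tour walks the perimeter, cost 4.
square = [[0, 1, 2, 1],
          [1, 0, 1, 2],
          [2, 1, 0, 1],
          [1, 2, 1, 0]]
assert held_karp(square) == 4
```

Even with this improvement, the 2ⁿ factor means the method is only practical for a few dozen cities, which is exactly the scaling behaviour the paragraph describes.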
Even though a problem may be computationally solvable in principle, in actual practice it may not be that simple. These problems might require large amounts of time or an inordinate amount of space. Computational complexity may be approached from many different aspects. Computational complexity can be investigated on the basis of time, memory or other resources used to solve the problem. Time and space are two of the most important and popular considerations when problems of complexity are analyzed.
There exists a certain class of problems that, although solvable in principle, require so much time or space that attempting to solve them is impractical. These problems are called intractable.
There is another form of complexity called hierarchical complexity. It is orthogonal to the forms of complexity discussed so far, which are called horizontal complexity.
The concept of complexity is being increasingly used in the study of cosmology, big history, and cultural evolution with increasing granularity, as well as increasing quantification.
Eric Chaisson has advanced a cosmological complexity[30] metric which he terms Energy Rate Density.[31] This approach has been expanded in various works, most recently applied to measuring the evolving complexity of nation-states and their growing cities.[32]
|
https://en.wikipedia.org/wiki/Complexity
|
The Mythical Man-Month: Essays on Software Engineering is a book on software engineering and project management by Fred Brooks, first published in 1975, with subsequent editions in 1982 and 1995. Its central theme is that adding manpower to a software project that is behind schedule delays it even longer. This idea is known as Brooks's law, and is presented along with the second-system effect and advocacy of prototyping.
Brooks's observations are based on his experiences at IBM while managing the development of OS/360. He had added more programmers to a project falling behind schedule, a decision that he would later conclude had, counter-intuitively, delayed the project even further. He also made the mistake of asserting that one project – involved in writing an ALGOL compiler – would require six months, regardless of the number of workers involved (it required longer). The tendency for managers to repeat such errors in project development led Brooks to quip that his book is called "The Bible of Software Engineering", because "everybody quotes it, some people read it, and a few people go by it".[1]
The work was first published in 1975 (ISBN 0-201-00650-2),[2] reprinted with corrections in 1982, and republished in an anniversary edition with four extra chapters in 1995 (ISBN 0-201-83595-9), including a reprint of the essay "No Silver Bullet" with commentary by the author.
Brooks discusses several causes of scheduling failures. The most enduring is his discussion of Brooks's law: adding manpower to a late software project makes it later. A man-month is a hypothetical unit of work representing the work done by one person in one month; Brooks's law says that the possibility of measuring useful work in man-months is a myth, and it is hence the centerpiece of the book.
Complex programming projects cannot be perfectly partitioned into discrete tasks that can be worked on without communication between the workers and without establishing a set of complex interrelationships between tasks and the workers performing them.
Therefore, assigning more programmers to a project running behind schedule will make it even later. This is because the time required for the new programmers to learn about the project, together with the increased communication overhead, will consume an ever-increasing share of the calendar time available. When n people have to communicate among themselves, the number of communication channels grows quadratically with n; as n increases, each person's net output decreases, and once the marginal contribution of an added person turns negative, the project is delayed further with every person added.
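The quadratic growth of coordination overhead follows from simple counting: among n people there are n(n−1)/2 pairwise communication channels. A trivial sketch:

```python
def communication_channels(n: int) -> int:
    """Number of pairwise communication channels among n people:
    n choose 2 = n * (n - 1) / 2, which grows quadratically in n."""
    return n * (n - 1) // 2

# Tripling a team from 4 to 12 people multiplies the number of
# coordination paths elevenfold (6 -> 66), not threefold.
assert communication_channels(4) == 6
assert communication_channels(12) == 66
```

This is why, in Brooks's account, headcount and schedule are not interchangeable: each added person brings new channels that every existing member must service.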
Brooks added the chapter "No Silver Bullet – Essence and Accidents in Software Engineering" and further reflections on it in the chapter "'No Silver Bullet' Refired" to the anniversary edition of The Mythical Man-Month.
Brooks insists that there is no one silver bullet: "there is no single development, in either technology or management technique, which by itself promises even one order of magnitude [tenfold] improvement within a decade in productivity, in reliability, in simplicity."
The argument relies on the distinction between accidental complexity and essential complexity, similar to the way Amdahl's law relies on the distinction between "parallelizable" and "strictly serial".
The second-system effect proposes that, when an architect designs a second system, it is the most dangerous system they will ever design, because they will tend to incorporate all of the additions they originally did not add to the first system due to inherent time constraints. Thus, when embarking on a second system, an engineer should be mindful that they are susceptible to over-engineering it.
The author makes the observation that in a suitably complex system there is a certain irreducible number of errors. Any attempt to fix observed errors tends to result in the introduction of other errors.
Brooks wrote "Question: How does a large software project get to be one year late? Answer: One day at a time!" Incremental slippages on many fronts eventually accumulate to produce a large overall delay. Continued attention to meeting small individual milestones is required at each level of management.
To make a user-friendly system, the system must have conceptual integrity, which can only be achieved by separating architecture from implementation. A single chief architect (or a small number of architects), acting on the user's behalf, decides what goes in the system and what stays out. The architect or team of architects should develop an idea of what the system should do and make sure that this vision is understood by the rest of the team. A novel idea by someone may not be included if it does not fit seamlessly with the overall system design. In fact, to ensure a user-friendly system, a system may deliberately provide fewer features than it is capable of. The point is that if a system is too complicated to use, many features will go unused because no one has time to learn them.
The chief architect produces a manual of system specifications. It should describe the external specifications of the system in detail, that is everything that the user sees. The manual should be altered as feedback comes in from the implementation teams and the users.
When designing a new kind of system, a team will design a throw-away system (whether it intends to or not). This system acts as a "pilot plan" that reveals techniques that will subsequently cause a complete redesign of the system. This second, smarter system should be the one delivered to the customer, since delivery of the pilot system would cause nothing but agony to the customer, and possibly ruin the system's reputation and maybe even the company.
Every project manager should create a small core set of formal documents defining the project objectives, how they are to be achieved, who is going to achieve them, when they are going to be achieved, and how much they are going to cost. These documents may also reveal inconsistencies that are otherwise hard to see.
When estimating project times, it should be remembered that programming products (which can be sold to paying customers) and programming systems are both three times as hard to write as simple independent in-house programs.[3] It should be kept in mind how much of the work week will actually be spent on technical issues, as opposed to administrative or other non-technical tasks, such as meetings, and especially "stand-up" or "all-hands" meetings.
To avoid disaster, all the teams working on a project should remain in contact with each other in as many ways as possible (e-mail, phone, meetings, memos, etc.). Instead of assuming something, implementers should ask the architect(s) to clarify their intent on a feature they are implementing, before proceeding with an assumption that might very well be completely incorrect. The architect(s) are responsible for formulating a group picture of the project and communicating it to others.
Much as a surgical team during surgery is led by one surgeon performing the most critical work, while directing the team to assist with less critical parts, it seems reasonable to have a "good" programmer develop critical system components while the rest of a team provides what is needed at the right time. Additionally, Brooks muses that "good" programmers are generally five to ten times as productive as mediocre ones.
Software is invisible. Therefore, many things only become apparent once a certain amount of work has been done on a new system, allowing a user to experience it. This experience will yield insights, which will change a user's needs or the perception of the user's needs. The system should, therefore, be changed to fulfill the changed requirements of the user. This can only occur up to a certain point, otherwise the system may never be completed. At a certain date, no more changes should be allowed to the system and the code should be frozen. All requests for changes should be delayed until the next version of the system.
Instead of every programmer having their own special set of tools, each team should have a designated tool-maker who may create tools that are highly customized for the job that team is doing (e.g. a code generator tool that creates code based on a specification). In addition, system-wide tools should be built by a common tools team, overseen by the project manager.
There are two techniques for lowering software development costs that Brooks writes about:
|
https://en.wikipedia.org/wiki/Second_system_syndrome
|
In computer science, program optimization, code optimization, or software optimization is the process of modifying a software system to make some aspect of it work more efficiently or use fewer resources.[1] In general, a computer program may be optimized so that it executes more rapidly, or to make it capable of operating with less memory storage or other resources, or to draw less power.
Although the term "optimization" is derived from "optimum",[2] achieving a truly optimal system is rare in practice; the pursuit of such a system is referred to as superoptimization. Optimization typically focuses on improving a system with respect to a specific quality metric rather than making it universally optimal. This often leads to trade-offs, where enhancing one metric may come at the expense of another. One popular example is the space-time tradeoff: reducing a program's execution time by increasing its memory consumption. Conversely, in scenarios where memory is limited, engineers might prioritize a slower algorithm to conserve space. There is rarely a single design that can excel in all situations, requiring engineers to prioritize the attributes most relevant to the application at hand.
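A minimal sketch of the space-time tradeoff, using bit-counting as a stand-in workload; the table size is an assumption of the example, not a recommendation:

```python
TABLE_SIZE = 1000  # assumed upper bound on inputs, for illustration only

def popcount_slow(x: int) -> int:
    """Recompute on every call: no extra memory, more CPU per query."""
    return bin(x).count("1")

# One-time precomputation: O(TABLE_SIZE) memory spent up front...
_TABLE = [popcount_slow(i) for i in range(TABLE_SIZE)]

def popcount_fast(x: int) -> int:
    """...so that each query becomes a constant-time table lookup."""
    return _TABLE[x]

# Both give the same answers; they differ only in where the cost is paid.
assert all(popcount_fast(i) == popcount_slow(i) for i in range(TABLE_SIZE))
```

In a memory-constrained setting the slow variant would be the right choice, which is exactly the trade-off described above.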
Furthermore, achieving absolute optimization often demands disproportionate effort relative to the benefits gained. Consequently, optimization processes usually stop once sufficient improvements are achieved, without striving for perfection. Fortunately, significant gains often occur early in the optimization process, making it practical to stop before reaching diminishing returns.
Optimization can occur at a number of levels. Typically the higher levels have greater impact, and are harder to change later on in a project, requiring significant changes or a complete rewrite if they need to be changed. Thus optimization can typically proceed via refinement from higher to lower, with initial gains being larger and achieved with less work, and later gains being smaller and requiring more work. However, in some cases overall performance depends on performance of very low-level portions of a program, and small changes at a late stage or early consideration of low-level details can have outsized impact. Typically some consideration is given to efficiency throughout a project – though this varies significantly – but major optimization is often considered a refinement to be done late, if ever. On longer-running projects there are typically cycles of optimization, where improving one area reveals limitations in another, and these are typically curtailed when performance is acceptable or gains become too small or costly.
Performance is part of the specification of a program: a program that is unusably slow is not fit for purpose. A video game running at 60 frames per second is acceptable, but 6 frames per second is unacceptably choppy. Performance is therefore a consideration from the start, to ensure that the system is able to deliver sufficient performance, and early prototypes need to have roughly acceptable performance for there to be confidence that the final system will (with optimization) achieve acceptable performance. This is sometimes omitted in the belief that optimization can always be done later, resulting in prototype systems that are far too slow – often by an order of magnitude or more – and systems that ultimately are failures because they architecturally cannot achieve their performance goals, such as the Intel 432 (1981), or that take years of work to achieve acceptable performance, such as Java (1995), which only achieved acceptable performance with HotSpot (1999). The degree to which performance changes between prototype and production system, and how amenable it is to optimization, can be a significant source of uncertainty and risk.
At the highest level, the design may be optimized to make best use of the available resources, given goals, constraints, and expected use/load. The architectural design of a system overwhelmingly affects its performance. For example, a system that is network latency-bound (where network latency is the main constraint on overall performance) would be optimized to minimize network trips, ideally making a single request (or no requests, as in a push protocol) rather than multiple roundtrips. Choice of design depends on the goals: when designing a compiler, if fast compilation is the key priority, a one-pass compiler is faster than a multi-pass compiler (assuming the same work), but if speed of the output code is the goal, a slower multi-pass compiler fulfills the goal better, even though it takes longer itself. Choice of platform and programming language occur at this level, and changing them frequently requires a complete rewrite, though a modular system may allow rewrite of only some component – for example, for a Python program one may rewrite performance-critical sections in C. In a distributed system, choice of architecture (client-server, peer-to-peer, etc.) occurs at the design level, and may be difficult to change, particularly if all components cannot be replaced in sync (e.g., old clients).
Given an overall design, a good choice of efficient algorithms and data structures, and efficient implementation of these algorithms and data structures, comes next. After design, the choice of algorithms and data structures affects efficiency more than any other aspect of the program. Generally data structures are more difficult to change than algorithms, as a data-structure assumption and its performance assumptions are used throughout the program, though this can be minimized by the use of abstract data types in function definitions, and keeping the concrete data structure definitions restricted to a few places.
For algorithms, this primarily consists of ensuring that algorithms are constant O(1), logarithmic O(log n), linear O(n), or in some cases log-linear O(n log n) in the input (both in space and time). Algorithms with quadratic complexity O(n²) fail to scale, and even linear algorithms cause problems if repeatedly called; they are typically replaced with constant or logarithmic algorithms if possible.
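As an illustration of replacing a quadratic algorithm with a linear one, consider duplicate detection; the hash-set version is the standard idiom:

```python
def has_duplicate_quadratic(items):
    """O(n^2): compares every pair; fine only for tiny inputs."""
    return any(items[i] == items[j]
               for i in range(len(items))
               for j in range(i + 1, len(items)))

def has_duplicate_linear(items):
    """O(n) expected time: one pass with a hash set of values seen so far."""
    seen = set()
    for x in items:
        if x in seen:
            return True
        seen.add(x)
    return False

assert has_duplicate_quadratic([1, 2, 3, 2]) and has_duplicate_linear([1, 2, 3, 2])
assert not has_duplicate_quadratic([1, 2, 3]) and not has_duplicate_linear([1, 2, 3])
```

At n = 10⁶ the quadratic version performs on the order of 5·10¹¹ comparisons while the linear one performs about 10⁶ set operations, which is why the quadratic form "fails to scale".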
Beyond asymptotic order of growth, the constant factors matter: an asymptotically slower algorithm may be faster or smaller (because simpler) than an asymptotically faster algorithm when they are both faced with small input, which may be the case that occurs in reality. Often a hybrid algorithm will provide the best performance, due to this tradeoff changing with size.
A general technique to improve performance is to avoid work. A good example is the use of a fast path for common cases, improving performance by avoiding unnecessary work – for example, using a simple text layout algorithm for Latin text and only switching to a complex layout algorithm for complex scripts, such as Devanagari. Another important technique is caching, particularly memoization, which avoids redundant computations. Because of the importance of caching, there are often many levels of caching in a system, which can cause problems from memory use, and correctness issues from stale caches.
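Both techniques can be sketched in a few lines. The layout functions below are invented stand-ins (the "width" rule is purely illustrative), not a real text-layout API; the memoization uses the standard-library lru_cache:

```python
from functools import lru_cache

def layout_ascii(text: str) -> int:
    """Cheap path: assume one display column per character."""
    return len(text)

def layout_complex(text: str) -> int:
    """Stand-in for an expensive general layout routine; here it simply
    counts non-ASCII characters as two columns, purely for illustration."""
    return sum(2 if ord(c) > 127 else 1 for c in text)

def display_width(text: str) -> int:
    # Fast path: in many workloads most strings are plain ASCII,
    # so the expensive routine runs only when actually needed.
    if text.isascii():
        return layout_ascii(text)
    return layout_complex(text)

@lru_cache(maxsize=None)  # memoization: repeated calls hit the cache
def fib(n: int) -> int:
    return n if n < 2 else fib(n - 1) + fib(n - 2)

assert display_width("hello") == 5
assert display_width("héllo") == 6
assert fib(100) == 354224848179261915075  # instant with the cache
```

Without the cache, the naive recursive fib would take exponential time; with it, each value is computed once, illustrating how memoization avoids redundant computation. The stale-cache caveat above applies whenever the cached function's inputs do not fully determine its output.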
Beyond general algorithms and their implementation on an abstract machine, concrete source-code-level choices can make a significant difference. For example, on early C compilers, while(1) was slower than for(;;) for an unconditional loop, because while(1) evaluated 1 and then had a conditional jump which tested if it was true, while for(;;) had an unconditional jump. Some optimizations (such as this one) can nowadays be performed by optimizing compilers. This depends on the source language, the target machine language, and the compiler, and can be both difficult to understand or predict and changes over time; this is a key place where understanding of compilers and machine code can improve performance. Loop-invariant code motion and return value optimization are examples of optimizations that reduce the need for auxiliary variables and can even result in faster performance by avoiding roundabout computations.
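Loop-invariant code motion can be illustrated in Python, where the interpreter (unlike an optimizing C compiler) will not hoist the invariant call automatically; math.sqrt here is a stand-in for any computation that does not change across iterations:

```python
import math

def scale_all(values, factor):
    """The invariant math.sqrt(factor) is re-evaluated on every
    iteration, including the repeated attribute lookup of math.sqrt."""
    out = []
    for v in values:
        out.append(v * math.sqrt(factor))
    return out

def scale_all_hoisted(values, factor):
    """Loop-invariant code motion done by hand: compute the constant
    once before the loop, then reuse it."""
    s = math.sqrt(factor)
    return [v * s for v in values]

# Same results; the hoisted version simply does less work per element.
assert scale_all([1.0, 4.0], 9.0) == scale_all_hoisted([1.0, 4.0], 9.0) == [3.0, 12.0]
```

In compiled languages an optimizing compiler typically performs this motion itself, which is why such hand transformations matter most in interpreted or dynamically dispatched code.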
Between the source and compile level, directives and build flags can be used to tune performance options in the source code and compiler respectively, such as using preprocessor defines to disable unneeded software features, optimizing for specific processor models or hardware capabilities, or predicting branching, for instance. Source-based software distribution systems such as BSD's Ports and Gentoo's Portage can take advantage of this form of optimization.
Use of an optimizing compiler tends to ensure that the executable program is optimized at least as much as the compiler can predict.
At the lowest level, writing code using an assembly language designed for a particular hardware platform can produce the most efficient and compact code if the programmer takes advantage of the full repertoire of machine instructions. Many operating systems used on embedded systems have been traditionally written in assembler code for this reason. Programs (other than very small programs) are seldom written from start to finish in assembly due to the time and cost involved. Most are compiled down from a high-level language to assembly and hand-optimized from there. When efficiency and size are less important, large parts may be written in a high-level language.
With more modern optimizing compilers and the greater complexity of recent CPUs, it is harder to write more efficient code than what the compiler generates, and few projects need this "ultimate" optimization step.
Much of the code written today is intended to run on as many machines as possible. As a consequence, programmers and compilers don't always take advantage of the more efficient instructions provided by newer CPUs or quirks of older models. Additionally, assembly code tuned for a particular processor without using such instructions might still be suboptimal on a different processor that expects a different tuning of the code.
Typically today, rather than writing in assembly language, programmers will use a disassembler to analyze the output of a compiler and change the high-level source code so that it can be compiled more efficiently, or to understand why it is inefficient.
Just-in-time compilers can produce customized machine code based on run-time data, at the cost of compilation overhead. This technique dates to the earliest regular expression engines, and has become widespread with Java HotSpot and V8 for JavaScript. In some cases adaptive optimization may be able to perform run-time optimization exceeding the capability of static compilers by dynamically adjusting parameters according to the actual input or other factors.
Profile-guided optimization is an ahead-of-time (AOT) compilation optimization technique based on run-time profiles, and is similar to a static "average case" analog of the dynamic technique of adaptive optimization.
Self-modifying code can alter itself in response to run-time conditions in order to optimize code; this was more common in assembly language programs.
Some CPU designs can perform some optimizations at run time. Some examples include out-of-order execution, speculative execution, instruction pipelines, and branch predictors. Compilers can help the program take advantage of these CPU features, for example through instruction scheduling.
Code optimization can also be broadly categorized into platform-dependent and platform-independent techniques. While the latter are effective on most or all platforms, platform-dependent techniques use specific properties of one platform, or rely on parameters depending on the single platform or even on the single processor. Writing or producing different versions of the same code for different processors might therefore be needed. For instance, in the case of compile-level optimization, platform-independent techniques are generic techniques (such as loop unrolling, reduction in function calls, memory-efficient routines, reduction in conditions, etc.) that impact most CPU architectures in a similar way. A great example of platform-independent optimization has been shown with the inner for loop, where it was observed that a loop with an inner for loop performs more computations per unit time than a loop without it or one with an inner while loop.[3] Generally, these serve to reduce the total instruction path length required to complete the program and/or reduce total memory usage during the process. On the other hand, platform-dependent techniques involve instruction scheduling, instruction-level parallelism, data-level parallelism, and cache optimization techniques (i.e., parameters that differ among various platforms), and the optimal instruction scheduling might be different even on different processors of the same architecture.
Computational tasks can be performed in several different ways with varying efficiency. A more efficient version with equivalent functionality is known as a strength reduction. For example, consider the following C code snippet whose intention is to obtain the sum of all integers from 1 to N:
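A sketch of the classic loop form of this computation (the function wrapper and its name are added for illustration):

```c
/* Sum the integers 1..n with an explicit loop: O(n) additions. */
int sum_to_n(int n) {
    int sum = 0;
    for (int i = 1; i <= n; ++i)
        sum += i;
    return sum;
}
```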
This code can (assuming no arithmetic overflow) be rewritten using a mathematical formula like:
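A sketch of that rewrite, under the stated no-overflow assumption (again with an illustrative name):

```c
/* Closed-form version: the same result in O(1), using Gauss's formula.
 * Assumes n * (n + 1) does not overflow int. */
int sum_to_n_closed(int n) {
    return n * (n + 1) / 2;
}
```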
The optimization, sometimes performed automatically by an optimizing compiler, is to select a method (algorithm) that is more computationally efficient, while retaining the same functionality. See algorithmic efficiency for a discussion of some of these techniques. However, a significant improvement in performance can often be achieved by removing extraneous functionality.
Optimization is not always an obvious or intuitive process. In the example above, the "optimized" version might actually be slower than the original version if N were sufficiently small and the particular hardware happens to be much faster at performing addition and looping operations than multiplication and division.
In some cases, however, optimization relies on using more elaborate algorithms, making use of "special cases" and special "tricks" and performing complex trade-offs. A "fully optimized" program might be more difficult to comprehend and hence may contain more faults than unoptimized versions. Beyond eliminating obvious antipatterns, some code-level optimizations decrease maintainability.
Optimization will generally focus on improving just one or two aspects of performance: execution time, memory usage, disk space, bandwidth, power consumption, or some other resource. This will usually require a trade-off, where one factor is optimized at the expense of others. For example, increasing the size of a cache improves run-time performance, but also increases memory consumption. Other common trade-offs include code clarity and conciseness.
There are instances where the programmer performing the optimization must decide to make the software better for some operations but at the cost of making other operations less efficient. These trade-offs may sometimes be of a non-technical nature – such as when a competitor has published a benchmark result that must be beaten in order to improve commercial success but comes perhaps with the burden of making normal usage of the software less efficient. Such changes are sometimes jokingly referred to as pessimizations.
Optimization may include finding a bottleneck in a system – a component that is the limiting factor on performance. In terms of code, this will often be a hot spot – a critical part of the code that is the primary consumer of the needed resource – though it can be another factor, such as I/O latency or network bandwidth.
In computer science, resource consumption often follows a form of power law distribution, and the Pareto principle can be applied to resource optimization by observing that 80% of the resources are typically used by 20% of the operations.[4] In software engineering, it is often a better approximation that 90% of the execution time of a computer program is spent executing 10% of the code (known as the 90/10 law in this context).
More complex algorithms and data structures perform well with many items, while simple algorithms are more suitable for small amounts of data – the setup, initialization time, and constant factors of the more complex algorithm can outweigh the benefit, and thus a hybrid algorithm or adaptive algorithm may be faster than any single algorithm. A performance profiler can be used to narrow down decisions about which functionality fits which conditions.[5]
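A common concrete instance is a sort that recurses like quicksort on large ranges but hands small subranges to insertion sort, whose low constant factors win at that size. The cutoff of 16 below is an illustrative guess – in practice it would be tuned by profiling:

```c
/* Simple algorithm: O(n^2) in general, but very low overhead on tiny ranges. */
static void insertion_sort(int *a, long lo, long hi) {
    for (long i = lo + 1; i <= hi; ++i) {
        int key = a[i];
        long j = i - 1;
        while (j >= lo && a[j] > key) {
            a[j + 1] = a[j];
            --j;
        }
        a[j + 1] = key;
    }
}

/* Hybrid sort over a[lo..hi] inclusive: quicksort partitioning for large
 * ranges, insertion sort below an (illustrative) cutoff of 16 elements. */
void hybrid_sort(int *a, long lo, long hi) {
    while (lo < hi) {
        if (hi - lo < 16) {
            insertion_sort(a, lo, hi);
            return;
        }
        int pivot = a[lo + (hi - lo) / 2];
        long i = lo, j = hi;
        while (i <= j) {            /* Hoare-style partition */
            while (a[i] < pivot) ++i;
            while (a[j] > pivot) --j;
            if (i <= j) {
                int t = a[i]; a[i] = a[j]; a[j] = t;
                ++i; --j;
            }
        }
        hybrid_sort(a, lo, j);      /* recurse on the left part... */
        lo = i;                     /* ...and loop on the right part */
    }
}
```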
In some cases, adding more memory can help to make a program run faster. For example, a filtering program will commonly read each line, filter, and output that line immediately. This only uses enough memory for one line, but performance is typically poor, due to the latency of each disk read. Caching the result is similarly effective, though also requiring larger memory use.
Optimization can reduce readability and add code that is used only to improve performance. This may complicate programs or systems, making them harder to maintain and debug. As a result, optimization or performance tuning is often performed at the end of the development stage.
Donald Knuth made the following two statements on optimization:
"We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%"[6]
(He also attributed the quote to Tony Hoare several years later,[7] although this might have been an error as Hoare disclaims having coined the phrase.[8])
"In established engineering disciplines a 12% improvement, easily obtained, is never considered marginal and I believe the same viewpoint should prevail in software engineering"[6]
"Premature optimization" is a phrase used to describe a situation where a programmer lets performance considerations affect the design of a piece of code. This can result in a design that is not as clean as it could have been or code that is incorrect, because the code is complicated by the optimization and the programmer is distracted by optimizing.
When deciding whether to optimize a specific part of the program, Amdahl's Law should always be considered: the impact on the overall program depends very much on how much time is actually spent in that specific part, which is not always clear from looking at the code without a performance analysis.
A better approach is therefore to design first, code from the design, and then profile/benchmark the resulting code to see which parts should be optimized. A simple and elegant design is often easier to optimize at this stage, and profiling may reveal unexpected performance problems that would not have been addressed by premature optimization.
In practice, it is often necessary to keep performance goals in mind when first designing software, but the programmer balances the goals of design and optimization.
Modern compilers and operating systems are so efficient that the intended performance increases often fail to materialize. As an example, caching data at the application level that is again cached at the operating system level does not yield improvements in execution. Even so, it is a rare case when the programmer will remove failed optimizations from production code. It is also true that advances in hardware will more often than not obviate any potential improvements, yet the obscuring code will persist into the future long after its purpose has been negated.
Optimization during code development using macros takes on different forms in different languages.
In some procedural languages, such as C and C++, macros are implemented using token substitution. Nowadays, inline functions can be used as a type-safe alternative in many cases. In both cases, the inlined function body can then undergo further compile-time optimizations by the compiler, including constant folding, which may move some computations to compile time.
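A minimal illustration of the difference (the names are illustrative): a token-substitution macro pastes its argument textually, so an argument with side effects would be evaluated twice, while the inline function evaluates its argument exactly once and is type-checked:

```c
/* Token substitution: SQUARE(i++) would expand to ((i++) * (i++)),
 * evaluating the side effect twice (and invoking undefined behavior). */
#define SQUARE(x) ((x) * (x))

/* Type-safe alternative: the argument is evaluated exactly once, and the
 * compiler can still inline the call and constant-fold square(5). */
static inline int square(int x) {
    return x * x;
}
```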
In many functional programming languages, macros are implemented using parse-time substitution of parse trees/abstract syntax trees, which it is claimed makes them safer to use. Since in many cases interpretation is used, that is one way to ensure that such computations are only performed at parse time, and sometimes the only way.
Lisp originated this style of macro,[citation needed] and such macros are often called "Lisp-like macros". A similar effect can be achieved by using template metaprogramming in C++.
In both cases, work is moved to compile time. The difference between C macros on one side, and Lisp-like macros and C++ template metaprogramming on the other side, is that the latter tools allow performing arbitrary computations at compile time/parse time, while expansion of C macros does not perform any computation, and relies on the optimizer's ability to perform it. Additionally, C macros do not directly support recursion or iteration, so are not Turing complete.
As with any optimization, however, it is often difficult to predict where such tools will have the most impact before a project is complete.
See also Category:Compiler optimizations
Optimization can be automated by compilers or performed by programmers. Gains are usually limited for local optimization, and larger for global optimizations. Usually, the most powerful optimization is to find a superior algorithm.
Optimizing a whole system is usually undertaken by programmers because it is too complex for automated optimizers. In this situation, programmers or system administrators explicitly change code so that the overall system performs better. Although it can produce better efficiency, it is far more expensive than automated optimizations. Since many parameters influence the program performance, the program optimization space is large. Meta-heuristics and machine learning are used to address the complexity of program optimization.[9]
Use a profiler (or performance analyzer) to find the sections of the program that are taking the most resources – the bottleneck. Programmers sometimes believe they have a clear idea of where the bottleneck is, but intuition is frequently wrong.[citation needed] Optimizing an unimportant piece of code will typically do little to help the overall performance.
When the bottleneck is localized, optimization usually starts with a rethinking of the algorithm used in the program. More often than not, a particular algorithm can be specifically tailored to a particular problem, yielding better performance than a generic algorithm. For example, the task of sorting a huge list of items is usually done with a quicksort routine, which is one of the most efficient generic algorithms. But if some characteristic of the items is exploitable (for example, they are already arranged in some particular order), a different method can be used, or even a custom-made sort routine.
After the programmer is reasonably sure that the best algorithm is selected, code optimization can start. Loops can be unrolled (for lower loop overhead, although this can often lead to lower speed if it overloads the CPU cache), data types as small as possible can be used, integer arithmetic can be used instead of floating-point, and so on. (See the algorithmic efficiency article for these and other techniques.)
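Manual loop unrolling can be sketched as follows; modern compilers often perform this transformation themselves, so the example is only meant to make the trade-off concrete:

```c
#include <stddef.h>

/* Plain loop: one loop test and branch per element. */
long sum_array(const int *a, size_t n) {
    long sum = 0;
    for (size_t i = 0; i < n; ++i)
        sum += a[i];
    return sum;
}

/* Unrolled by four: a quarter of the loop tests, at the cost of larger
 * code – which is exactly the cache pressure the text above warns about. */
long sum_array_unrolled(const int *a, size_t n) {
    long sum = 0;
    size_t i = 0;
    for (; i + 4 <= n; i += 4)
        sum += (long)a[i] + a[i + 1] + a[i + 2] + a[i + 3];
    for (; i < n; ++i)   /* handle the leftover elements */
        sum += a[i];
    return sum;
}
```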
Performance bottlenecks can be due to language limitations rather than the algorithms or data structures used in the program. Sometimes, a critical part of the program can be re-written in a different programming language that gives more direct access to the underlying machine. For example, it is common for very high-level languages like Python to have modules written in C for greater speed. Programs already written in C can have modules written in assembly. Programs written in D can use the inline assembler.
Rewriting sections "pays off" in these circumstances because of a general "rule of thumb" known as the 90/10 law, which states that 90% of the time is spent in 10% of the code, and only 10% of the time in the remaining 90% of the code. So, putting intellectual effort into optimizing just a small part of the program can have a huge effect on the overall speed – if the correct part(s) can be located.
Manual optimization sometimes has the side effect of undermining readability. Thus code optimizations should be carefully documented (preferably using in-line comments), and their effect on future development evaluated.
The program that performs an automated optimization is called an optimizer. Most optimizers are embedded in compilers and operate during compilation. Optimizers can often tailor the generated code to specific processors.
Today, automated optimizations are almost exclusively limited to compiler optimization. However, because compiler optimizations are usually limited to a fixed set of rather general optimizations, there is considerable demand for optimizers which can accept descriptions of problem- and language-specific optimizations, allowing an engineer to specify custom optimizations. Tools that accept descriptions of optimizations are called program transformation systems and are beginning to be applied to real software systems such as C++.
Some high-level languages (Eiffel, Esterel) optimize their programs by using an intermediate language.
Grid computing or distributed computing aims to optimize the whole system, by moving tasks from computers with high usage to computers with idle time.
Sometimes, the time taken to perform the optimization itself may be an issue.
Optimizing existing code usually does not add new features, and worse, it might add new bugs in previously working code (as any change might). Because manually optimized code might sometimes have less "readability" than unoptimized code, optimization might impact its maintainability as well. Optimization comes at a price and it is important to be sure that the investment is worthwhile.
An automatic optimizer (or optimizing compiler, a program that performs code optimization) may itself have to be optimized, either to further improve the efficiency of its target programs or else to speed up its own operation. A compilation performed with optimization "turned on" usually takes longer, although this is usually only a problem when programs are quite large.
In particular, for just-in-time compilers the performance of the run-time compile component, executing together with its target code, is the key to improving overall execution speed.
https://en.wikipedia.org/wiki/Optimization_(computer_science)
Source code escrow is the deposit of the source code of software with a third-party escrow agent. Escrow is typically requested by a party licensing software (the licensee), to ensure maintenance of the software instead of abandonment or orphaning. The software's source code is released to the licensee if the licensor files for bankruptcy or otherwise fails to maintain and update the software as promised in the software license agreement.
As the continued operation and maintenance of custom software is critical to many companies, they usually desire to make sure that it continues even if the licensor becomes unable to do so, such as because of bankruptcy. This is most easily achieved by obtaining a copy of the up-to-date source code. The licensor, however, will often be unwilling to agree to this, as the source code will generally represent one of their most closely guarded trade secrets.[1]
As a solution to this conflict of interest, source code escrow ensures that the licensee obtains access to the source code only when the maintenance of the software cannot otherwise be assured, as defined in contractually agreed-upon conditions.[2]
Source code escrow takes place in a contractual relationship, formalized in a source code escrow agreement, between at least three parties:
The service provided by the escrow agent – generally a business dedicated to that purpose and independent from either party – consists principally in taking custody of the source code from the licensor and releasing it to the licensee only if the conditions specified in the escrow agreement are met.[2]
Source code escrow agreements provide for the following:
Whether a source code escrow agreement is entered into at all, and who bears its costs, is subject to agreement between the licensor and the licensee. Software license agreements often provide for a right of the licensee to demand that the source code be put into escrow, or to join an existing escrow agreement.[4]
Bankruptcy laws may interfere with the execution of a source code escrow agreement, if the bankrupt licensor's creditors are legally entitled to seize the licensor's assets – including the code in escrow – upon bankruptcy, preventing the release of the code to the licensee.[6]
Museums, archives, and other GLAM organizations have begun to act as independent escrow agents due to growing digital obsolescence. Notable examples are the Internet Archive in 2007,[7][8] the Library of Congress in 2006,[9][10] ICHEG,[11] the Computer History Museum,[12][13] or the MOMA.[14]
There are also some cases where software communities act as escrow agent, for instance for the Wing Commander video game series[15][16][17] or Ultima 9 of the Ultima series.[18]
The escrow agreements described above are most applicable to custom-developed software which is not available to the general public. In some cases, source code for commercial off-the-shelf software may be deposited into escrow to be released as free and open-source software under an open source license when the original developer ceases development and/or when certain fundraising conditions are met (the threshold pledge system).
For instance, the Blender graphics suite was released in this way following the bankruptcy of Not a Number Technologies; the widely used Qt toolkit is covered by a source code escrow agreement secured by the "KDE Free Qt Foundation".[19]
There are many cases of end-of-life open-sourcing which allow the community continued self-support; see List of commercial video games with later released source code.
https://en.wikipedia.org/wiki/Source_code_escrow
Feature interaction is a software engineering concept. It occurs when the integration of two features would modify the behavior of one or both features.
The term feature is used to denote a unit of functionality of a software application. Similar to many concepts in computer science, the term can be used at different levels of abstraction. For example, the plain old telephone service (POTS) is a telephony application feature at one level, but itself is composed of originating features and terminating features. The originating features may in turn include the provide-dial-tone feature, the digit collection feature, and so on.
This definition of feature interaction allows one to focus on certain behavior of the interacting features, such as how their response time may be changed given the integration. Many researchers in the field consider problems that arise due to change in the execution behavior of the interacting features. Under that context, the behavior of a feature is defined by its execution flow and output for a given input. In other words, the interaction changes the execution flow and output of the interacting features for a given input.
In the context of telephony, a telephone line (the system) typically offers a set of features that include call forwarding and call waiting. Call waiting allows one call to be suspended while a second call is answered, while call forwarding enables a customer to specify a secondary phone number to which additional calls will be forwarded in the event that the customer is already using the phone.
To illustrate the example, we consider a telephone line provided to a customer, and we assume that both call forwarding and call waiting are enabled on the line. When a first call arrives on the line, the phone rings and is answered. Since neither feature is activated by the first call, there is no noticeable problem. When a second call arrives before the first has terminated, the telephone system has a decision to make: whether the call should be forwarded to the secondary number (call forwarding) or the person who answered the first call should be notified that another call has arrived (call waiting). Since this decision has no obvious correct answer, the optimal answer depends on the needs of the customer. This feature interaction is a specific example of a general and common problem that has become prevalent due to increasing system complexity.
In this situation, it is possible that the system's decision will be made in a non-deterministic fashion due to race conditions and other design factors. The consequences of feature interactions can range from minor irritations to life-threatening software failures, and therefore there is ongoing research that aims to find ways of detecting as well as resolving feature interactions.
https://en.wikipedia.org/wiki/Feature_interaction_problem
Professional certification, trade certification, or professional designation, often called simply certification or qualification, is a designation earned by a person to assure qualification to perform a job or task. Not all certifications that use post-nominal letters are an acknowledgement of educational achievement, or granted by an agency appointed to safeguard the public interest.
A certification is a third-party attestation of an individual's level of knowledge or proficiency in a certain industry or profession. They are granted by authorities in the field, such as professional societies and universities, or by private certificate-granting agencies. Most certifications are time-limited; some expire after a period of time (e.g., the lifetime of a product that requires certification for use), while others can be renewed indefinitely as long as certain requirements are met. Renewal usually requires ongoing education to remain up to date on advancements in the field, evidenced by earning the specified number of continuing education credits (CECs), or continuing education units (CEUs), from approved professional development courses.
Many certification programs are affiliated with professional associations, trade organizations, or private vendors interested in raising industry standards. Certification programs are often created or endorsed by professional associations, but are typically completely independent from membership organizations. Certifications are very common in fields such as aviation, construction, technology, environment, and other industrial sectors, as well as healthcare, business, real estate, and finance.
According to The Guide to National Professional Certification Programs (1997) by Phillip Barnhart, "certifications are portable, since they do not depend on one company's definition of a certain job" and they provide potential employers with "an impartial, third-party endorsement of an individual's professional knowledge and experience".[1]
Certification is different from professional licensure. In the United States, licenses are typically issued by state agencies, whereas certifications are usually awarded by professional societies or educational institutes. Obtaining a certificate is voluntary in some fields, but in others, certification from a government-accredited agency may be legally required to perform certain jobs or tasks. In other countries, licenses are typically granted by professional societies or universities and require a certificate after about three to five years and so on thereafter. The assessment process for certification may be more comprehensive than that of licensure, though sometimes the assessment process is very similar or even the same, despite differing in terms of legal status.
The American National Standards Institute (ANSI) defines the standard for being a certifying agency as meeting the following two requirements:
The Institute for Credentialing Excellence (ICE) is a U.S.-based organization that sets standards for the accreditation of personnel certification and certificate programs based on the Standards for Educational and Psychological Testing, a joint publication of the American Educational Research Association (AERA), the American Psychological Association (APA), and the National Council on Measurement in Education (NCME). Many members of the Association of Test Publishers (ATP) are also certification organizations.
There are three general types of certification. Listed in order of development level and portability, they are: corporate (internal), product-specific, and profession-wide.
Corporate, or "internal", certifications are made by a corporation or low-stakes organization for internal purposes. For example, a corporation might require a one-day training course for all sales personnel, after which they receive a certificate. While this certificate has limited portability – to other corporations, for example – it is the simplest to develop.
Product-specific certifications are more involved, and are intended to be referenced to a product across all applications. This approach is very prevalent in the information technology (IT) industry, where personnel are certified on a version of software or hardware. This type of certification is portable across locations (for example, different corporations that use that software), but not across other products. Another example could be the certifications issued for shipping personnel, which are under international standards even for the recognition of the certification body, under the International Maritime Organization (IMO).
The most general type of certification is profession-wide. Certification in the medical profession is often offered by particular specialties. In order to apply professional standards, increase the level of practice, and protect the public, a professional organization might establish a certification. This is intended to be portable to all places a certified professional might work. Of course, this generalization increases the cost of such a program; the process to establish a legally defensible assessment of an entire profession is very extensive. An example of this is a certified public accountant (CPA), who would not be certified for just one corporation or one piece of accountancy software but for general work in the profession.
Many tertiary education providers grant professional certificates as an award for the completion of an educational program. The curriculum of a professional certificate is most often in a focused subject matter. Many professional certificates have the same curriculum as master's degrees in the same subject. Many other professional certificates offer the same courses as master's degrees in the same subject, but require the student to take fewer total courses to complete the program. Some professional certificates have a curriculum that more closely resembles a baccalaureate major in the same field. The typical professional certificate program is between 200 and 300 class-hours in size. It is uncommon for a program to be larger or smaller than that. Most professional certificate programs are open enrollment, but some have admissions processes. A few universities put some of their professional certificates into a subclass they refer to as advanced professional certificates.
Advanced professional certificates are professional credentials designed to help professionals enhance their job performance and marketability in their respective fields. In many other countries, certificates are qualifications in higher education. In the United States, a certificate may be offered by an institute of higher education. These certificates usually signify that a student has reached a standard of knowledge of a certain vocational subject. Certificate programs can be completed more quickly than associate degrees and often do not have general education requirements.
An advanced professional certificate is the result of an educational process designed for individuals. Certificates are designed for newcomers to the industry as well as seasoned professionals, and are awarded by an educational program or academic institution. Completion of a certificate program indicates completion of a course or series of courses with a specific concentration that is different from an educational degree program. Course content for an advanced certificate is set forth by a variety of sources, such as faculty, committees, instructors, and other subject matter experts in a related field. The end goal of an advanced professional certificate is for professionals to demonstrate knowledge of the course content at the end of a set period of time.
There are many professional bodies for accountants and auditors throughout the world; some of them are legally recognized in their jurisdictions. Public accountants are the accountancy and control experts who are legally certified in different jurisdictions to work in public practices, certifying accounts as statutory auditors, and who may also sell advice and services to other individuals and businesses. Today, however, many work within private corporations, the financial industry, and government bodies.
Cf. Accountancy qualifications and regulation
Aviators are certified through theoretical and in-flight examinations. Requirements for certification are broadly similar in most countries and are regulated by each National Aviation Authority. The existing certificates or pilot licenses are:
Licensing in these categories requires not only examinations but also a minimum number of flight hours. All categories are available for fixed-wing aircraft (airplanes) and rotary-wing aircraft (helicopters). Within each category, aviators may also obtain certifications in:
Usually, aviators must also be certified in their log books for the type and model of aircraft they are allowed to fly. Currency checks, as well as regular medical check-ups at a frequency of 6, 12, or 36 months depending on the type of flying permitted, are obligatory. An aviator can fly only if holding:
In Europe, ANSP, ATCO, and ANSP technicians are certified according to EUROCONTROL Safety Regulatory Requirements (ESARRs) (per EU regulation 2096/2005, "Common Requirements").
In the United States, several communications certifications are conferred by the Electronics Technicians Association.
Certification is often used in the professions of software engineering and information technology.
Conferred by the International Dance Council CID[10] at UNESCO, the International Certification of Dance Studies[11] is awarded to students who have completed 150 hours of classes in a specific form of dance for Level 1. Another 150 hours are required for Level 2, and so on up to Level 10. This is the only international certification for dance, since the International Dance Council CID[10] is the official body for all forms of dance; it is usually given in addition to local or national certificates, which is why it is colloquially called "the dancer's passport". Students cannot apply for this certification directly; they have to ask their school to apply on their behalf. This certification is awarded free of charge; there is no cost other than membership fees.
In the United States, several electronics certifications are provided by the Electronics Technicians Association.
The Federal Emergency Management Agency's EMI offers credentials and training opportunities for United States citizens. Students do not have to be employed by FEMA or be federal employees for some of the programs.[13]
Professional engineering is any act of planning, designing, composing, measuring, evaluating, inspecting, advising, reporting, directing or supervising, or managing any of the foregoing, that requires the application of engineering principles and that concerns the safeguarding of life, health, property, economic interests, the public interest or the environment.
Event planning includes budgeting, scheduling, site selection, acquiring necessary permits, coordinating transportation and parking, arranging for speakers or entertainers, arranging decor, event security, catering, coordinating with third-party vendors, and emergency plans.
A warehouse management system (WMS) is a part of the supply chain and primarily aims to control the movement and storage of materials within a warehouse and process the associated transactions, including shipping, receiving, putaway and picking. The systems also direct and optimize stock putaway based on real-time information about the status of bin utilization. A WMS monitors the progress of products through the warehouse. It involves the physical warehouse infrastructure, tracking systems, and communication between product stations.[14]
More precisely, warehouse management involves the receipt, storage and movement of goods (normally finished goods) to intermediate storage locations or to a final customer. In the multi-echelon model for distribution, there may be multiple levels of warehouses: a central warehouse, regional warehouses (serviced by the central warehouse), and potentially retail warehouses (serviced by the regional warehouses).[15]
IECEx[16] covers the specialized field of explosion protection associated with the use of equipment in areas where flammable gases, liquids and combustible dusts may be present. This system provides assurance that equipment is manufactured to meet safety standards, and that services such as installation, repair and overhaul also comply with the IEC 60079 series of International Standards. The UNECE (United Nations Economic Commission for Europe) cited IECEx as one example of a practice model for the verification of conformity to IEC Standards, for smaller European countries with no certification schemes for such equipment. It published a "Common Regulatory Framework" as a suggestion for those countries implementing a certification program for the explosive-atmospheres segment.[17]
In the United States, insurance professionals are licensed separately by each state. Many individuals seek one or more certifications to distinguish themselves from their peers.
TESOL is a large field of employment with widely varying degrees of regulation. Most provision worldwide is through the state school system of each individual country, and as such, the instructors tend to be trained primary- or secondary school teachers who are native speakers of the language of their pupils, and not of English. Though native speakers of English have been working in non-English speaking countries in this capacity for years, it was not until the last twenty-five years or so that there was any widespread focus on training particularly for this field. Previously, workers in this sort of job were people engaging in backpacker tourism hoping to earn some extra travel money, well-educated professionals in other fields volunteering, or retired people. These sorts of people are certainly still to be found, but there are many who consider TESOL their main profession.
One of the problems[according to whom?] facing these full-time teachers is the absence of an international governing body for the certification or licensure of English language teachers. However, Cambridge University and its subsidiary body UCLES are pioneers in trying to bring some degree of accountability and quality control to consumers of English courses, through their CELTA and DELTA programs. Trinity College London has equivalent programs, the CertTESOL and the LTCL DipTESOL. They offer initial certificates in teaching, in which candidates are trained in language awareness and classroom techniques and given a chance to practice teaching, after which feedback is reported. Both institutions offer as a follow-up a professional diploma, usually taken after a year or two in the field. Although the initial certificate is available to anyone with a high school education, the diploma is meant to be a post-graduate qualification and can in fact be incorporated into a master's degree program.
An increasing number of attorneys are choosing to be recognized as having special expertise in certain fields of law. According to the American Bar Association, a lawyer who is a certified specialist has been recognized by an independent professional certifying organization as having an enhanced level of skill and expertise, as well as substantial involvement in an established legal specialty. These organizations require a lawyer to demonstrate special training, experience and knowledge to ensure that the lawyer's recognition is meaningful and reliable. Lawyer conduct with regard to specialty certification is regulated by the states.
Legal administrators vary in their day-to-day responsibilities and job requirements. The Association of Legal Administrators (ALA) is the credentialing body of the Certified Legal Manager (CLM) certification program.[19] CLMs are recognized as administrators who have passed a comprehensive examination and have met other eligibility requirements.[20]
Logistician is the profession in the logistics and transport sectors, including sea, air, land and rail modes. Professional qualification for logisticians usually carries post-nominal letters.
Certification granting bodies include, but are not limited to, the Institute for Supply Management (ISM), Association for Operations Management (APICS), Chartered Institute of Logistics and Transport (CILT), International Society of Logistics (SOLE), Canadian Institute of Traffic and Transportation (CITT), and Allied Council for Commerce and Logistics (ACCL).
Management consulting is the practice of providing consulting services to organizations to improve their performance or to assist in achieving organizational objectives.
The profession's primary certification is the Certified Management Consultant (CMC) designation.[21]
Certification granting bodies are the approximately 50 Institutes of Management Consulting belonging to the International Council of Management Consulting Institutes (ICMCI).[22]
Churches have their own processes for determining who may use various religious titles. Protestant churches typically require a Master of Divinity, accreditation by the denomination, and ordination by the local church in order for a minister to become a "Reverend". Those qualifications may or may not also confer government authorization to solemnize marriages.
Board certification is the process by which a physician in the United States demonstrates, through written, practical or computer-based testing, mastery of the knowledge and skills that define a particular area of medical specialization. The American Board of Medical Specialties, a not-for-profit organization, assists 24 approved medical specialty boards in the development and use of standards in the ongoing evaluation and certification of physicians.
Medical specialty certification in the United States is a voluntary process. While medical licensure sets the minimum competency requirements to diagnose and treat patients, it is not specialty specific.[23] Board certification demonstrates a physician's exceptional expertise in a particular specialty or sub-specialty of medical practice.
Patients, physicians, health care providers, insurers and quality organizations regard certification as an important measure of a physician's knowledge, experience and skills to provide quality health care within a given specialty.
Other professional certifications include certifications such as medical licenses, Membership of the Royal College of Physicians, Fellowship of the Royal College of Physicians and Surgeons of Canada, nursing board certification, and diplomas in social work. The Commission for Certification in Geriatric Pharmacy certifies pharmacists who are knowledgeable about principles of geriatric pharmacotherapy and the provision of pharmaceutical care to the elderly. Additional certifying bodies relating to the medical field include:
NCPRP stands for "National Certified Peer Recovery Professional". The NCPRP credential and exam were developed in collaboration with the International Certification Board of Recovery Professionals (ICBRP) and are currently administered by PARfessionals.
PARfessionals is a professional organization and all of the available courses are professional development and pre-certification courses.
The NCPRP credential and exam focus primarily on the concept of peer recovery through mental health and addiction recovery. Their main purpose is to train student-candidates to become peer recovery professionals who can provide guidance, knowledge or assistance for individuals who have had similar experiences.[24]
Each student-candidate must complete several key steps, including initial registration, the pre-certification review course, and all applicable sections of the official application, in order to become eligible to complete the final step, the NCPRP certification exam.[25]
The NCPRP credential is obtained once a participant successfully passes the NCPRP certification exam by the second attempt and is valid for five years.[26]
Organizations that offer various certifications include:
In the US, the Universal Accreditation Board, an organization composed of the Public Relations Society of America, the Agricultural Relations Council, the National School Public Relations Association, the Religious Communicators Council and other public relations professional societies, administers the Accreditation in Public Relations (APR), a voluntary certification program for public relations practitioners.
The Building Owners and Managers Association and the International Facility Management Association offer professional certifications for the operation and management of commercial properties.[28][29]
Organizations offering certification include:
Political commentators have criticized professional or occupational licensing, especially medical and legal licensing, for restricting the supply of services and therefore making them more expensive, often putting them out of reach of the poor.[31][32]
|
https://en.wikipedia.org/wiki/Certification_(software_engineering)
|
Engineering disasters often arise from shortcuts in the design process. Engineering is the science and technology used to meet the needs and demands of society.[1] These demands include buildings, aircraft, vessels, and computer software. To meet society's demands, new technology and infrastructure must be created efficiently and cost-effectively. To accomplish this, managers and engineers need a mutual approach to the demand at hand. This can lead to shortcuts in engineering design that reduce the costs of construction and fabrication. Occasionally, these shortcuts lead to unexpected design failures.
Failure occurs when a structure or device is used beyond its design limits and can no longer perform its function properly.[2] If a structure is designed to support only a certain amount of stress, strain, or loading and the user applies more, the structure will begin to deform and eventually fail. Several factors contribute to failure, including a flawed design, improper use, financial constraints, and miscommunication.
In the field of engineering, the importance of safety is emphasized. Learning from past engineering failures and infamous disasters such as the Challenger explosion brings a sense of reality to what can happen when appropriate safety precautions are not taken. Safety tests such as tensile testing, finite element analysis (FEA), and failure theories help provide design engineers with information about the maximum forces and stresses that can be applied to a certain region of a design. These precautionary measures help prevent failures due to overloading and deformation.[3]
Static loading is when a force is applied slowly to an object or structure. Static load tests such as tensile testing, bending tests, and torsion tests help determine the maximum loads that a design can withstand without permanent deformation or failure. Tensile testing is common when calculating a stress-strain curve, which can determine the yield strength and ultimate strength of a specific test specimen.
The specimen is stretched slowly in tension until it breaks, while the load and the distance across the gage length are continuously monitored. A sample subjected to a tensile test can typically withstand stresses higher than its yield stress without breaking. At a certain point, however, the sample will break into two pieces. This happens because the microscopic cracks that resulted from yielding spread to large scales. The stress at the point of complete breakage is called the material's ultimate tensile strength.[4] The result is a stress–strain curve of the material's behavior under static loading. Through this tensile testing, the yield strength is found at the point where the material begins to yield more readily to the applied stress and its rate of deformation increases.[5]
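The procedure above can be sketched numerically. This is a minimal illustration, not a real test analysis: the strain/stress table is invented, and the standard 0.2%-offset yield construction is simplified to "first data point falling below the offset elastic line".

```python
# Illustrative tensile-test data: strain (dimensionless) and stress (MPa).
# These values are made up for demonstration, not measurements of a real material.
strain = [0.000, 0.001, 0.002, 0.003, 0.004, 0.006, 0.010, 0.020]
stress = [0,     200,   400,   500,   540,   560,   570,   520]

E = stress[1] / strain[1]   # elastic modulus from the initial linear slope
ultimate = max(stress)      # ultimate tensile strength = peak of the curve

# Simplified 0.2%-offset yield: first point where the measured stress
# falls below the offset elastic line  sigma = E * (strain - 0.002).
yield_strength = next(s for e, s in zip(strain, stress)
                      if e > 0.002 and s < E * (e - 0.002))

print(E, ultimate, yield_strength)
```

With real test data, the same idea is applied to thousands of sampled points, and the offset intersection is interpolated rather than taken at a discrete point.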
When a material undergoes permanent deformation from exposure to radical temperatures or constant loading, the functionality of the material can become impaired.[6][7] This time-dependent plastic distortion of material is known as creep. Stress and temperature are both major factors in the rate of creep. In order for a design to be considered safe, the deformation due to creep must be much less than the strain at which failure occurs. Once the static loading causes the specimen to surpass this point, the specimen will begin permanent, or plastic, deformation.[7]
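The roles of stress and temperature in creep can be sketched with the Norton power law for steady-state creep. The constants `A`, `n`, and `Q` below are placeholders, not data for any real material:

```python
import math

def norton_creep_rate(sigma_mpa, temp_k, A=1e-10, n=5.0, Q=300e3, R=8.314):
    """Steady-state (secondary) creep rate via the Norton power law:
    rate = A * sigma^n * exp(-Q / (R*T)).
    A, n, and Q are material constants; the defaults here are
    illustrative values only."""
    return A * sigma_mpa**n * math.exp(-Q / (R * temp_k))

# Creep accelerates with both stress and temperature:
low = norton_creep_rate(50, 800)
high = norton_creep_rate(100, 900)
print(low < high)  # True
```

The strong stress exponent and the Arrhenius temperature term are why creep-limited designs keep operating stresses and temperatures well below the failure regime.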
In mechanical design, most failures are due to time-varying, or dynamic, loads that are applied to a system. This phenomenon is known as fatigue failure. Fatigue is the weakening of a material due to variations of stress that are repeatedly applied to it.[8] For example, when stretching a rubber band to a certain length without breaking it (i.e. not surpassing the yield stress of the rubber band), the rubber band will return to its original form after release; however, repeatedly stretching the rubber band with the same amount of force thousands of times would create micro-cracks in the band, which would eventually lead to the band snapping. The same principle applies to mechanical materials such as metals.[5]
Fatigue failure always begins at a crack that may form over time or due to the manufacturing process used. The three stages of fatigue failure are:
Note that fatigue does not imply that the strength of the material is lessened after failure. The term originally referred to the notion of a material becoming "tired" after cyclic loading.[5]
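The relationship between stress amplitude and cycles to failure is commonly described by the Basquin S-N relation. Below is a minimal sketch; the fatigue strength coefficient and exponent are illustrative values, not data for a real alloy:

```python
def basquin_cycles(stress_amplitude, sigma_f=900.0, b=-0.09):
    """Cycles to failure N from the Basquin S-N relation
    sigma_a = sigma_f * (2N)^b, solved for N.
    sigma_f (fatigue strength coefficient, MPa) and b (fatigue
    strength exponent) are placeholder values for illustration."""
    return 0.5 * (stress_amplitude / sigma_f) ** (1.0 / b)

# A lower stress amplitude permits far more load cycles before failure:
print(basquin_cycles(400) < basquin_cycles(200))  # True
```

This is the quantitative form of the rubber-band intuition above: halving the stress amplitude does not double the life, it multiplies it by orders of magnitude, because of the steep exponent.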
Engineering is a precise discipline, requiring communication among project developers. Several forms of miscommunication can lead to a flawed design. Various fields of engineering must intercommunicate, including civil, electrical, mechanical, industrial, chemical, biological, and environmental engineering. For example, a modern automobile design requires electrical engineers, mechanical engineers, and environmental engineers to work together to produce a fuel-efficient, durable product for consumers. If engineers do not communicate adequately with one another, a potential design could have flaws and be unsafe for consumer purchase. Engineering disasters that resulted from such miscommunication include the 2005 levee failures in Greater New Orleans, Louisiana during Hurricane Katrina, the Space Shuttle Columbia disaster, and the Hyatt Regency walkway collapse.[9][10][11]
An exceptional example of this is the Mars Climate Orbiter. "The primary cause of the orbiter's violent demise was that one piece of ground software supplied by Lockheed Martin produced results in a United States customary unit, contrary to its Software Interface Specification (SIS), while a second system, supplied by NASA, expected those results to be in SI units, in accordance with the SIS." NASA and its prime contractor, Lockheed Martin, had spectacularly failed to communicate.
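The class of bug behind the orbiter's loss — one component emitting pound-force seconds where the interface specification called for newton-seconds — can be sketched as follows. The function names and numbers are illustrative, not the actual mission software:

```python
LBF_S_TO_N_S = 4.448222  # 1 pound-force second expressed in newton-seconds

def ground_software_impulse_lbf_s(burn):
    # Stand-in for the component that reported thruster impulse in
    # US customary units (lbf*s), contrary to the interface spec.
    return burn  # value is in lbf*s

def navigation_expects_si(impulse):
    # Stand-in for the consumer that assumed the value was in N*s.
    return impulse

raw = ground_software_impulse_lbf_s(10.0)
wrong = navigation_expects_si(raw)                 # silently treated as N*s
right = navigation_expects_si(raw * LBF_S_TO_N_S)  # explicit conversion
print(right / wrong)  # every thruster-firing estimate was off by ~4.45x
```

Because plain floats carry no unit information, nothing in the code path can catch the mismatch; this is why unit-aware types or explicit conversion at every interface boundary are now standard practice.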
Software has played a role in many high-profile disasters:
When larger projects such as infrastructure or aircraft fail, many people can be affected, which leads to an engineering disaster. A disaster is defined as a calamity that results in significant damage and may include the loss of life.[13] In-depth observations and post-disaster analyses have been documented extensively to help prevent similar disasters from occurring.
The Ashtabula River railroad disaster occurred on December 29, 1876, when a bridge over the Ashtabula River near Ashtabula, Ohio failed as a Lake Shore and Michigan Southern Railway train passed over it, killing at least 92 people. Modern analyses blame failure of an angle block lug, thrust stress and low temperatures.
On December 28, 1879, the Tay Bridge disaster occurred when the first Tay Rail Bridge collapsed as a North British Railway passenger train on the Edinburgh–Dundee line passed over it, killing at least 59 people. The major cause was failure to allow for wind loadings.
The Johnstown Flood occurred on May 31, 1889, when the South Fork Dam located on the Little Conemaugh River upstream of the town of Johnstown, Pennsylvania failed after days of heavy rainfall, killing at least 2,209 people. A 2016 hydraulic analysis confirmed that changes made to the dam had severely reduced its ability to withstand major storms.
The road, rail and pedestrian Quebec Bridge in Quebec, Canada, failed twice during construction, in 1907 and 1916, at the cost of 88 lives. The first failure was improper design of the chords. The second failure occurred when the central span was being raised into position and fell into the river.
The St. Francis Dam was a concrete gravity dam located in San Francisquito Canyon in Los Angeles County, California, built from 1924 to 1926 to serve Los Angeles's growing water needs. It failed in 1928 due to a defective soil foundation and design flaws, triggering a flood that claimed the lives of at least 431 people.
The first Tacoma Narrows Bridge was a suspension bridge in Washington that spanned the Tacoma Narrows strait of Puget Sound. It dramatically collapsed on November 7, 1940. The proximate cause was moderate winds which produced aeroelastic flutter that was self-exciting and unbounded, the opposite of damping.
On July 17, 1981, two overhead walkways loaded with partygoers at the Hyatt Regency Hotel in Kansas City, Missouri, collapsed. The concrete and glass platforms fell onto a tea dance in the lobby, killing 114 and injuring 216. Investigations concluded the walkway would have failed under one-third the weight it held that night because of a revised design.
Levees and floodwalls protecting New Orleans, Louisiana, and its suburbs failed in 50 locations on August 29, 2005, following the passage of Hurricane Katrina, killing 1,577 people. Four major investigations all concurred that the primary cause of the flooding was inadequate design and construction by the Army Corps of Engineers.
Ponte Morandi was a road viaduct in Genoa, Liguria, Italy. On August 14, 2018, a section of the viaduct collapsed during a rainstorm, killing forty-three people. The remains of the original bridge were demolished in August 2019.
On June 24, 2021, at 1:22 a.m., Champlain Towers South, a 12-story beachfront condominium in the Miami suburb of Surfside, Florida, partially collapsed, killing ninety-eight people. The investigations are currently ongoing.
The Space Shuttle Challenger disaster occurred on January 28, 1986, when the NASA Space Shuttle orbiter Challenger (OV-099) (mission STS-51-L) broke apart 73 seconds into its flight, leading to the deaths of its seven crew members. Disintegration of the vehicle began after an O-ring seal in its right solid rocket booster (SRB) failed at liftoff.
The Space Shuttle Columbia (OV-102) disaster occurred on February 1, 2003, during the final leg of STS-107. While re-entering Earth's atmosphere over Louisiana and Texas, the shuttle unexpectedly disintegrated, resulting in the deaths of all seven astronauts on board. The cause was damage to thermal shielding tiles from impact with a falling piece of foam insulation from an external tank during the January 16 launch.
Early Liberty ships suffered hull and deck cracks, and a few were lost to such structural defects. During World War II, there were nearly 1,500 instances of significant brittle fractures. Three of the 2,710 Liberties built broke in half without warning. In cold temperatures the steel hulls cracked, and later ships were consequently constructed using more suitable steel.
On the night of April 26, 1865, the passenger steamboat Sultana exploded on the Mississippi River seven miles (11 km) north of Memphis, Tennessee, resulting in the loss of 1,547 lives. The cause was believed to be an incorrectly repaired boiler, whose explosion set off two of the three other boilers.
On 18 June 2023, the submersible Titan imploded during an expedition to the wreck of the Titanic, killing all five persons on board. Flaws in the design of the submersible, and of the carbon fibre pressure hull in particular, were discussed as a possible cause of the implosion, with Titan's operator OceanGate having ignored multiple previous warnings about the potential for accidents.
|
https://en.wikipedia.org/wiki/Engineering_disasters#Failure_due_to_software
|
In project management, the cone of uncertainty describes the evolution of the amount of best-case uncertainty during a project.[1] At the beginning of a project, comparatively little is known about the product or work results, and so estimates are subject to large uncertainty. As more research and development is done, more information is learned about the project, and the uncertainty then tends to decrease, reaching 0% when all residual risk has been terminated or transferred. This usually happens by the end of the project, i.e. by transferring the responsibilities to a separate maintenance group.
The term cone of uncertainty is used in software development, where the technical and business environments change very rapidly. However, the concept, under different names, is a well-established basic principle of cost engineering. Most[citation needed] environments change so slowly that they can be considered static for the duration of a typical project, and traditional project management methods therefore focus on achieving a full understanding of the environment through careful analysis and planning. Well before any significant investments are made, the uncertainty is reduced to a level where the risk can be carried comfortably. In this kind of environment the uncertainty level decreases rapidly in the beginning, and the cone shape is less obvious. The software business, however, is very volatile, and there is external pressure to decrease the uncertainty level over time. The project must actively and continuously work to reduce the uncertainty level.
The cone of uncertainty is narrowed both by research and by decisions that remove the sources of variability from the project. These decisions are about scope, what is included and not included in the project. If these decisions change later in the project then the cone will widen.
Original research for engineering and construction in the chemical industry demonstrated that actual final costs often exceeded the earliest "base" estimate by as much as 100% (or underran by as much as 50%[2]). Research in the software industry on the cone of uncertainty stated that at the beginning of the project life cycle (i.e. before gathering of requirements) estimates have in general an uncertainty of factor 4 on both the high side and the low side.[3] This means that the actual effort or scope can be 4 times or 1/4 of the first estimates. This uncertainty tends to decrease over the course of a project, although that decrease is not guaranteed.[4]
One way to account for the cone of uncertainty in the project estimate is to first determine a 'most likely' single-point estimate and then calculate the high-low range using predefined multipliers (dependent on the level of uncertainty at that time). This can be done with formulas applied to spreadsheets, or by using a project management tool that allows the task owner to enter a low/high ranged estimate and will then create a schedule that includes this level of uncertainty.
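The multiplier approach can be sketched as follows. The phase names and multiplier pairs roughly follow the ranges commonly quoted in the cone-of-uncertainty literature (including the factor-of-4 bound at project inception mentioned above), but treat them as rough guides rather than exact values:

```python
# Low/high multipliers per project phase; values are the commonly
# quoted cone-of-uncertainty ranges, used here as rough guides.
CONE = {
    "initial concept":       (0.25, 4.0),
    "approved definition":   (0.50, 2.0),
    "requirements complete": (0.67, 1.5),
    "design complete":       (0.80, 1.25),
}

def estimate_range(most_likely, phase):
    """High-low range around a 'most likely' single-point estimate."""
    low_mult, high_mult = CONE[phase]
    return most_likely * low_mult, most_likely * high_mult

low, high = estimate_range(1000, "initial concept")  # e.g. person-hours
print(low, high)  # 250.0 4000.0
```

Note that at the initial-concept phase the high estimate is sixteen times the low one, which is why single-point estimates quoted early in a project are so unreliable.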
The cone of uncertainty is also used extensively as a graphic in hurricane forecasting, where its most iconic usage is more formally known as the NHC Track Forecast Cone,[5] and more colloquially known as the Error Cone, Cone of Probability, or the Cone of Death. (Note that the usage in hurricane forecasting is essentially the opposite of the usage in software development. In software development, the uncertainty surrounds the current state of the project, and in the future the uncertainty decreases, whereas in hurricane forecasting the current location of the storm is certain, and the future path of the storm becomes increasingly uncertain.)[6] Over the past decade, storms have traveled within their projected areas two-thirds of the time,[7] and the cones themselves have shrunk due to improvements in methodology. The NHC first began in-house five-day projections in 2001, and began issuing them to the public in 2003. It is currently working in-house on seven-day forecasts, but the resultant cone of uncertainty is so large that the possible benefits for disaster management are problematic.[8]
The original conceptual basis of the cone of uncertainty was developed for engineering and construction in the chemical industry by the founders of the American Association of Cost Engineers (now AACE International). They published a proposed standard estimate type classification system with uncertainty ranges in 1958[9] and presented "cone" illustrations in the industry literature at that time.[2] In the software field, the concept was picked up by Barry Boehm.[10] Boehm referred to the concept as the "Funnel Curve".[11] Boehm's initial quantification of the effects of the Funnel Curve was subjective.[10] Later work by Boehm and his colleagues at USC applied data from a set of software projects from the U.S. Air Force and other sources to validate the model. The basic model was further validated based on work at NASA's Software Engineering Lab.[12][13]
The first time the name "cone of uncertainty" was used to describe this concept was in Software Project Survival Guide.[14]
Footnotes
|
https://en.wikipedia.org/wiki/Cone_of_uncertainty
|
Cost estimation in software engineering is typically concerned with the financial spend on the effort to develop and test the software; it can also include requirements review, maintenance, training, management, and the purchase of extra equipment, servers and software. Many methods have been developed for estimating software costs for a given project.
Methods for estimation in software engineering include these principles:
Most software development cost estimation techniques involve estimating or measuring software size first and then applying some historical knowledge of cost per unit of size. Software size is typically measured in SLOC, function points or Agile story points.
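The "measure size, then apply a historical cost per unit" step can be sketched in a few lines. The rates below are invented for illustration; in practice they would be calibrated from an organization's own completed projects:

```python
def estimate_cost(size, unit, history_rate):
    """Cost estimate = measured software size x historical cost per unit.
    `history_rate` maps a size unit (SLOC, function point, story point)
    to a cost per unit derived from past projects."""
    return size * history_rate[unit]

# Placeholder rates (currency per unit), not real industry figures:
rates = {"sloc": 15.0, "function_point": 1200.0, "story_point": 800.0}

print(estimate_cost(10_000, "sloc", rates))         # 150000.0
print(estimate_cost(120, "function_point", rates))  # 144000.0
```

The hard part in practice is not the multiplication but obtaining a trustworthy size measure and a rate from genuinely comparable historical projects.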
|
https://en.wikipedia.org/wiki/Cost_estimation_in_software_engineering
|
Cost estimation models are mathematical algorithms or parametric equations used to estimate the costs of a product or project. The results of the models are typically necessary to obtain approval to proceed, and are factored into business plans, budgets, and other financial planning and tracking mechanisms.
These algorithms were originally performed manually but are now almost universally computerized. They may be standardized (available in published texts or purchased commercially) or proprietary, depending on the type of business, product, or project in question. Simple models may use standard spreadsheet products.
Models typically function through the input of parameters that describe the attributes of the product or project in question, and possibly physical resource requirements. The model then provides as output various resource requirements in cost and time. Some models concentrate only on estimating project costs (often a single monetary value). Little attention has been given to the development of models for estimating the amount of resources needed for the different elements that comprise a project.[1]
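A classic example of such a parametric equation is Basic COCOMO, which maps a single input parameter (estimated size in thousands of lines of code) to effort in person-months via published coefficients per project mode. The sketch below uses the standard Basic COCOMO coefficients; real estimation would use a calibrated, more detailed model:

```python
# Basic COCOMO: effort (person-months) = a * KLOC^b,
# with published (a, b) coefficients per project mode.
COEFFS = {
    "organic":       (2.4, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded":      (3.6, 1.20),
}

def cocomo_effort(kloc, mode="organic"):
    """Estimated effort in person-months for a project of `kloc`
    thousand lines of code in the given development mode."""
    a, b = COEFFS[mode]
    return a * kloc ** b

print(cocomo_effort(32, "organic"))  # person-months for a 32 KLOC project
```

The exponent above 1 encodes the diseconomy of scale: doubling the code size more than doubles the estimated effort, and the effect is strongest in the embedded mode.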
Cost modeling practitioners often have the titles of cost estimators, cost engineers, or parametric analysts.
Typical applications include:
|
https://en.wikipedia.org/wiki/Cost_estimation_models
|
Acost overrun, also known as acost increaseorbudget overrun, involves unexpected incurredcosts. When these costs are in excess of budgeted amounts due to avalue engineeringunderestimation of the actual cost during budgeting, they are known by these terms.
Cost overruns are common in infrastructure, building, and technology projects. For IT projects, a 2004 industry study by the Standish Group found an average cost overrun of 43 percent; 71 percent of projects came in over budget, exceeded time estimates, and delivered too narrow a scope; and total waste was estimated at $55 billion per year in the US alone.[1]
Many major construction projects have incurred cost overruns; cost estimates used to decide whether important transportation infrastructure should be built can mislead grossly and systematically.[2]
Cost overrun is distinguished from cost escalation, which is an anticipated growth in a budgeted cost due to factors such as inflation.
Recent work by Ahiaga-Dagbui and Smith suggests an alternative to what is traditionally seen as an overrun in the construction field.[3] They attempt to make a distinction between the often conflated causes of construction cost underestimation and eventual cost overruns. Critical to their argument is the point of reference for measuring cost overruns. Whereas some measure the size of cost overruns as the difference between cost at the time of the decision to build and final completion cost, others measure it as the difference between cost at contract award and final completion cost. This leads to a wide range in the size of overruns reported in different studies.
Four types of explanation for cost overrun exist: technical, psychological, political-economic, and value engineering. Technical explanations account for cost overrun in terms of imperfect forecasting techniques, inadequate data, etc. Psychological explanations account for overrun in terms of optimism bias among forecasters. Scope creep, where the requirements or targets rise during the project, is common. Finally, political-economic explanations see overrun as the result of strategic misrepresentation of scope or budgets. Historically, political explanations for cost overrun have been seen as the most dominant.[4] In the USA, the architectural firm Home Architects has attributed this to a human trait they call the "Psychology of Construction Cost Denial", regarding the cost inflation of custom homes.[5]
A less explored possible cause of cost overruns on construction projects is the escalation of commitment to a course of action. This theory, grounded in social psychology and organisation behaviour, suggests the tendency of people and organisations to become locked in and entrapped in a particular course of action and thereby 'throw good money after bad' to make the venture succeed. This defies the conventional rationality behind subjective expected utility theory. Ahiaga-Dagbui and Smith explore the effects of escalation of commitment on project delivery in construction using the case of the Scottish Parliament project.[6] Also, a recent study has suggested that principles of chaos theory can be employed to understand how cost overruns emerge in megaprojects.[7] That study reclassifies megaprojects as chaotic systems that are nonlinear and therefore difficult to predict. Using cases of cost overruns in oil and gas megaprojects, it makes a strong argument that chaos theory can help explain and address the recurring problem of cost overruns in megaprojects.
A more recently identified possible cause of cost overruns is value engineering; an approach intended to correct value engineering cost overruns is known as value-driven design.
In response to the problem of cost overruns on major projects, the UK Government set up a Major Projects Authority to provide project assurance to HM Treasury and other government departments undertaking major projects.[8] An independent review of the financial effectiveness of project assurance in reducing cost overruns found the process to be effective and recommended expanding it to cover most of the Government's project portfolio.[9] Project assurance is now also being used by private sector companies undertaking major projects.
Cost overrun can be described in multiple ways.
For example, consider a bridge with a construction budget of $100 million where the actual cost was $150 million. This scenario could be truthfully represented by any of the following statements: the project cost 150% of its budget; the project cost 1.5 times its budget; the cost overrun was 50%.
The final example is the most commonly used, as it describes the cost overrun exclusively, whereas the other two describe the overrun as an aspect of the total expense. In any case, care should be taken to state precisely what is meant by the chosen percentage so as to avoid ambiguity.
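The different framings can be computed directly from the bridge example's numbers; this short sketch only restates the arithmetic above:

```python
budget, actual = 100, 150  # $ millions, from the bridge example

ratio_of_budget = actual / budget * 100        # cost as a % of budget: 150.0
multiple_of_budget = actual / budget           # cost as a multiple: 1.5
overrun_percent = (actual - budget) / budget * 100  # overrun alone: 50.0
```

Only the last quantity isolates the overrun itself, which is why it is the form most often quoted.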
|
https://en.wikipedia.org/wiki/Cost_overrun
|
The function point is a "unit of measurement" to express the amount of business functionality an information system (as a product) provides to a user. Function points are used to compute a functional size measurement (FSM) of software. The cost (in dollars or hours) of a single unit is calculated from past projects.[1]
There are several recognized standards and public specifications for sizing software based on function points.
1. ISO Standards
The first five standards are implementations of the over-arching standard for Functional Size Measurement, ISO/IEC 14143.[2] The OMG Automated Function Point (AFP) specification, led by the Consortium for IT Software Quality, provides a standard for automating the Function Point count according to the guidelines of the International Function Point Users Group (IFPUG). However, the current implementations of this standard have a limitation in being able to distinguish External Outputs (EO) from External Inquiries (EQ) out of the box, without some upfront configuration.[3]
Function points were defined in 1979 in Measuring Application Development Productivity by Allan J. Albrecht at IBM.[4] The functional user requirements of the software are identified and each one is categorized into one of five types: outputs, inquiries, inputs, internal files, and external interfaces. Once the function is identified and categorized into a type, it is then assessed for complexity and assigned a number of function points. Each of these functional user requirements maps to an end-user business function, such as a data entry for an Input or a user query for an Inquiry. This distinction is important because it tends to make the functions measured in function points map easily into user-oriented requirements, but it also tends to hide internal functions (e.g. algorithms), which also require resources to implement.
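The identify-categorize-weight procedure can be sketched as an unadjusted function point count. The weight table below uses the values commonly published for the IFPUG method; treat both the table and the example counts as illustrative:

```python
# Commonly published IFPUG-style weights per function type,
# keyed by assessed complexity (low / average / high).
WEIGHTS = {
    "external_input":     {"low": 3, "average": 4,  "high": 6},
    "external_output":    {"low": 4, "average": 5,  "high": 7},
    "external_inquiry":   {"low": 3, "average": 4,  "high": 6},
    "internal_file":      {"low": 7, "average": 10, "high": 15},
    "external_interface": {"low": 5, "average": 7,  "high": 10},
}

def unadjusted_function_points(counts):
    """Sum weights over (function_type, complexity) pairs."""
    return sum(WEIGHTS[ftype][cx] for ftype, cx in counts)

# e.g. one average input, one low output, one average internal file:
ufp = unadjusted_function_points([
    ("external_input", "average"),
    ("external_output", "low"),
    ("internal_file", "average"),
])  # 4 + 4 + 10 = 18
```

Full IFPUG counting additionally applies a value adjustment factor derived from general system characteristics, which is omitted here.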
There is currently no ISO recognized FSM Method that includes algorithmic complexity in the sizing result. Recently there have been different approaches proposed to deal with this perceived weakness, implemented in several commercial software products. The variations of the Albrecht-based IFPUG method designed to make up for this (and other weaknesses) include:
The use of function points in favor of lines of code seeks to address several additional issues:
Albrecht observed in his research that Function Points were highly correlated to lines of code,[9]which has resulted in a questioning of the value of such a measure if a more objective measure, namely counting lines of code, is available. In addition, there have been multiple attempts to address perceived shortcomings with the measure by augmenting the counting regimen.[10][11][12][13][14][15]Others have offered solutions to circumvent the challenges by developing alternative methods which create a proxy for the amount of functionality delivered.[16]
|
https://en.wikipedia.org/wiki/Function_points
|
The planning fallacy is a phenomenon in which predictions about how much time will be needed to complete a future task display an optimism bias and underestimate the time needed. This phenomenon sometimes occurs regardless of the individual's knowledge that past tasks of a similar nature have taken longer to complete than generally planned.[1][2][3] The bias affects predictions only about one's own tasks. On the other hand, when outside observers predict task completion times, they tend to exhibit a pessimistic bias, overestimating the time needed.[4][5] The planning fallacy involves estimates of task completion times more optimistic than those encountered in similar projects in the past.
The planning fallacy was first proposed by Daniel Kahneman and Amos Tversky in 1979.[6][7] In 2003, Lovallo and Kahneman proposed an expanded definition as the tendency to underestimate the time, costs, and risks of future actions and at the same time overestimate the benefits of the same actions. According to this definition, the planning fallacy results in not only time overruns, but also cost overruns and benefit shortfalls.[8]
In a 1994 study, 37 psychology students were asked to estimate how long it would take to finish their senior theses. The average estimate was 33.9 days. They also estimated how long it would take "if everything went as well as it possibly could" (averaging 27.4 days) and "if everything went as poorly as it possibly could" (averaging 48.6 days). The average actual completion time was 55.5 days, with about 30% of the students completing their thesis in the amount of time they predicted.[1]
Another study asked students to estimate when they would complete their personal academic projects. Specifically, the researchers asked for estimated times by which the students thought it was 50%, 75%, and 99% probable their personal projects would be done.[5]
A survey of Canadian taxpayers, published in 1997, found that they mailed in their tax forms about a week later than they predicted. They had no misconceptions about their past record of getting forms mailed in, but expected that they would get it done more quickly next time.[9] This illustrates a defining feature of the planning fallacy: people recognize that their past predictions have been over-optimistic, while insisting that their current predictions are realistic.[4]
Carter and colleagues conducted three studies in 2005 that demonstrate empirical support that the planning fallacy also affects predictions concerning group tasks. This research emphasizes the importance of how temporal frames and thoughts of successful completion contribute to the planning fallacy.[10]
The segmentation effect is defined as the time allocated for a task being significantly smaller than the sum of the time allocated to individual smaller sub-tasks of that task. In a study performed by Forsyth in 2008, this effect was tested to determine if it could be used to reduce the planning fallacy. In three experiments, the segmentation effect was shown to be influential. However, the segmentation effect demands a great deal of cognitive resources and is not very feasible to use in everyday situations.[17]
Implementation intentions are concrete plans that accurately show how, when, and where one will act. It has been shown through various experiments that implementation intentions help people become more aware of the overall task and see all possible outcomes. Initially, this actually causes predictions to become even more optimistic. However, it is believed that forming implementation intentions "explicitly recruits willpower" by having the person commit to completing the task. Those who had formed implementation intentions during the experiments began work on the task sooner and experienced fewer interruptions, and their later predictions showed less optimistic bias than those of participants who had not. It was also found that the reduction in optimistic bias was mediated by the reduction in interruptions.[3]
Reference class forecasting predicts the outcome of a planned action based on actual outcomes in a reference class of actions similar to the one being forecast.
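One simplified way to implement this idea is to uplift a base estimate by a percentile of the distribution of actual-to-estimated cost ratios observed in the reference class. The function, the ratios, and the chosen percentile below are all illustrative assumptions, not a prescribed procedure:

```python
def reference_class_forecast(estimate, past_ratios, percentile=0.8):
    """Uplift a base estimate using the empirical distribution of
    actual/estimated cost ratios from similar past projects.

    percentile sets the acceptable risk of overrun: 0.8 means the
    uplifted figure covers 80% of historical outcomes.
    """
    ratios = sorted(past_ratios)
    idx = min(int(percentile * len(ratios)), len(ratios) - 1)
    return estimate * ratios[idx]

# Hypothetical reference class: projects finished at 1.0x to 2.0x
# of their original estimates.
forecast = reference_class_forecast(100, [1.0, 1.1, 1.2, 1.4, 2.0], 0.8)
```

The key point, and the reason the method counters the planning fallacy, is that the uplift comes from the outcomes of other projects rather than from the planner's own inside view.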
The Sydney Opera House was expected to be completed in 1963. A scaled-down version opened in 1973, a decade later. The original cost was estimated at $7 million, but its delayed completion led to a cost of $102 million.[10]
The Eurofighter Typhoon defense project took six years longer than expected, with an overrun cost of 8 billion euros.[10]
The Big Dig, which undergrounded the Boston Central Artery, was completed seven years later than planned,[18] for $8.08 billion on a budget of $2.8 billion (in 1988 dollars).[19]
The Denver International Airport opened sixteen months later than scheduled, with a total cost of $4.8 billion, over $2 billion more than expected.[20]
The Berlin Brandenburg Airport is another case. After 15 years of planning, construction began in 2006, with the opening planned for October 2011. There were numerous delays. It was finally opened on October 31, 2020. The original budget was €2.83 billion; current projections are close to €10.0 billion.
Olkiluoto Nuclear Power Plant Unit 3 faced severe delays and a cost overrun. Construction started in 2005 and was expected to be completed by 2009, but the plant was completed only in 2023.[21][22] Initially, the estimated cost of the project was around 3 billion euros, but the cost escalated to approximately 10 billion euros.[23]
California High-Speed Rail is still under construction, with tens of billions of dollars in overruns expected, and connections to major cities postponed until after completion of the rural segment.
The James Webb Space Telescope went over budget by approximately 9 billion dollars, and was sent into orbit 14 years later than its originally planned launch date.
|
https://en.wikipedia.org/wiki/Planning_fallacy
|
Proxy-Based Estimating (PROBE) is an estimating process used in the Personal Software Process (PSP) to estimate size and effort.
Proxy-Based Estimating (PROBE) is the estimation method introduced by Watts Humphrey (of the Software Engineering Institute at Carnegie Mellon University) as part of the Personal Software Process (a discipline that helps individual software engineers monitor, test, and improve their own work).
PROBE is based on the idea that if an engineer is building a component similar to one they built previously, then it will take about the same effort as it did in the past.
In the PROBE method, individual engineers use a database to keep track of the size and effort of all of the work that they do, developing a history of the effort they have put into their past projects, broken into individual components. Each component in the database is assigned a type (“calculation,” “data,” “logic,” etc.) and a size (from “very small” to “very large”).
When a new project must be estimated, it is broken down into tasks that correspond to these types and sizes. A formula based on linear regression is used to calculate the estimate for each task.
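The regression step can be sketched as follows. The ordinary-least-squares fit and the sample data are illustrative; Humphrey's text defines the exact PROBE procedure and the prediction-interval calculations that accompany it:

```python
def fit_linear_regression(xs, ys):
    """Ordinary least squares for y = b0 + b1 * x, relating a
    proxy-based size estimate to the actual outcome (size or hours)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    b1 = sxy / sxx
    return mean_y - b1 * mean_x, b1

# Hypothetical history: estimated proxy size vs. actual hours spent.
est_sizes = [100, 150, 200, 250]
actual_hours = [20, 31, 40, 51]
b0, b1 = fit_linear_regression(est_sizes, actual_hours)

# Estimate for a new component with an estimated proxy size of 180.
predicted = b0 + b1 * 180
```

Because the coefficients come from the engineer's own historical data, the estimate automatically reflects that individual's actual productivity rather than an organizational average.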
Additional information on PROBE can be found in A Discipline for Software Engineering by Watts Humphrey (Addison Wesley, 1994).[1]
|
https://en.wikipedia.org/wiki/Proxy-based_estimating
|
The Putnam model is an empirical software effort estimation model[1] created by Lawrence H. Putnam in 1978. Measurements of a software project are collected (e.g., effort in man-years, elapsed time, and lines of code) and an equation is fitted to the data using regression analysis. Future effort estimates are made by providing size and calculating the associated effort using the equation that fit the original data (usually with some error).
SLIM (Software LIfecycle Management) is the name given by Putnam to the proprietary suite of tools his company QSM, Inc. developed based on his model. It is one of the earliest of these types of models developed. Closely related software parametric models are the Constructive Cost Model (COCOMO), Parametric Review of Information for Costing and Evaluation – Software (PRICE-S), and Software Evaluation and Estimation of Resources – Software Estimating Model (SEER-SEM).
A claimed advantage to this model is the simplicity of calibration.
While managing R&D projects for the Army and later at GE, Putnam noticed that software staffing profiles followed the Rayleigh distribution.[2]
Putnam used his observations about productivity levels to derive the software equation:

B^(1/3) · Size / Productivity = Effort^(1/3) · Time^(4/3)

where Size is the product size (typically in source lines of code), B is a scaling factor that is a function of project size, Productivity is the process productivity, Effort is the total effort applied to the project in person-years, and Time is the total schedule in years.

In practical use, when making an estimate for a software task, the software equation is solved for effort:

Effort = [Size / (Productivity · Time^(4/3))]^3 · B
An estimated software size at project completion and the organizational process productivity are used as inputs. Plotting effort as a function of time yields the Time-Effort Curve. The points along the curve represent the estimated total effort to complete the project at some time. One of the distinguishing features of the Putnam model is that total effort decreases as the time to complete the project is extended. This is normally represented in other parametric models with a schedule relaxation parameter.
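The effort form of the software equation makes this schedule trade-off easy to see. The sketch below uses the commonly cited form Effort = (Size / (Productivity · Time^(4/3)))^3 · B; the value of B and all the inputs are placeholders for illustration:

```python
def putnam_effort(size, productivity, time_years, b=0.34):
    """Effort (person-years) from the Putnam software equation,
    solved for effort. b is a size-dependent scaling factor;
    0.34 is only a placeholder value here."""
    return (size / (productivity * time_years ** (4 / 3))) ** 3 * b

# Stretching the schedule from 2 to 3 years lowers estimated effort,
# illustrating the model's time-effort trade-off:
e2 = putnam_effort(50_000, 5_000, 2.0)
e3 = putnam_effort(50_000, 5_000, 3.0)
```

Because effort varies with the inverse fourth power of time (Time^(-4)), even a modest schedule extension produces a large drop in estimated effort, which is why the model is so sensitive to the schedule input.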
This estimating method is fairly sensitive to uncertainty in both size and process productivity.
Putnam advocates obtaining process productivity by calibration.[1]
Putnam makes a sharp distinction between 'conventional productivity' (size/effort) and process productivity.
|
https://en.wikipedia.org/wiki/Putnam_model
|
A parametric model is a set of related mathematical equations that incorporates variable parameters. A scenario is defined by selecting a value for each parameter. Software project managers use software parametric models and parametric estimation tools to estimate their projects' duration, staffing, and cost.
In the early 1980s, refinements to earlier models, such as PRICE S and SLIM, and new models, such as SPQR, Checkpoint, ESTIMACS, SEER-SEM, and COCOMO and its commercial implementations PCOC, Costimator, GECOMO, COSTAR, and Before You Leap, emerged.
The prime advantage of these models is that they are objective, repeatable, calibrated and easy to use, although calibration to previous experience may be a disadvantage when applied to a significantly different project.
These models were highly effective for the waterfall-model, version 1 software projects of the 1980s and highlighted the early achievements of parametrics. As systems became more complex and new languages emerged, different software parametric models emerged that employed new cost estimating relationships, risk analyzers, software sizing, nonlinear software reuse, and personnel continuity.
|
https://en.wikipedia.org/wiki/Software_parametric_models
|
An API writer is a technical writer who writes documents that describe an application programming interface (API). The primary audience includes programmers, developers, system architects, and system designers.
An API is a library consisting of interfaces, functions, classes, structures, enumerations, etc. for building a software application. It is used by developers to interact with and extend the software. An API for a given programming language or system may consist of system-defined and user-defined constructs. As the number and complexity of these constructs increases, it becomes very tedious for developers to remember all of the functions and parameters defined. Hence, API writers play a key role in building software applications.
Due to the technical subject matter, API writers must understand application source code enough to extract the information that API documents require. API writers often use tooling that extracts software documentation placed by programmers in the source code in a structured manner, preserving the relationships between the comments and the programming constructs they document.
API writers must also understand the software product and document the new features or changes as part of the new software release. The schedule of software releases varies from organization to organization. API writers need to understand the software life cycle well and integrate themselves into the systems development life cycle (SDLC).
API writers in the United States generally follow The Chicago Manual of Style for grammar and punctuation.[citation needed]
API writers typically possess a mix of programming and language skills; many API writers have backgrounds in programming or technical writing.
Expert API/software development kit (SDK) writers can easily become programming writers.
The API writing process is typically split between analyzing and understanding the source code, planning, writing, and reviewing. It is often the case that the analytical, planning, and writing stages do not occur in a strictly linear fashion.
The writing and evaluation criteria vary between organizations. Some of the most effective API documents are written by those who are adequately capable of understanding the workings of a particular application, so that they can relate the software to the users or the various component constructs to the overall purpose of the program. API writers may also be responsible for authoring end-user product documentation.
While reference documentation may be auto-generated to ensure completeness, documentation that helps developers get started should be written by a professional API writer and reviewed by subject matter experts.[1]This helps ensure that developers understand key concepts and can get started quickly.
API writers produce documents that include:
|
https://en.wikipedia.org/wiki/API_Writer
|
The following tables compare general and technical information for a number ofdocumentation generators. Please see the individual products' articles for further information. Unless otherwise specified in footnotes, comparisons are based on the stable versions without any add-ons, extensions or external programs. Note that many of the generators listed are no longer maintained.
Basic general information about the generators, including: creator or company, license, and price.
The output formats the generators can write.
|
https://en.wikipedia.org/wiki/Comparison_of_documentation_generators
|
A software design description (a.k.a. software design document or SDD; just design document; also software design specification) is a representation of a software design that is to be used for recording design information, addressing various design concerns, and communicating that information to the design’s stakeholders.[1] An SDD usually accompanies an architecture diagram with pointers to detailed feature specifications of smaller pieces of the design. Practically, the description is required to coordinate a large team under a single vision, needs to be a stable reference, and outlines all parts of the software and how they will work.
The SDD usually contains the following information:
These design media enable the designer to represent procedural detail that facilitates translation to code. This blueprint for implementation forms the basis for all subsequent software engineering work.
IEEE 1016-2009, titled IEEE Standard for Information Technology—Systems Design—Software Design Descriptions,[2] is an IEEE standard that specifies "the required information content and organization" for an SDD.[3] IEEE 1016 does not specify the medium of an SDD; it is "applicable to automated databases and design description languages but can be used for paper documents and other means of descriptions."[4]
The 2009 edition was a major revision to IEEE 1016-1998, elevating it from recommended practice to full standard. This revision was modeled after IEEE Std 1471-2000, Recommended Practice for Architectural Description of Software-intensive Systems, extending the concepts of view, viewpoint, stakeholder, and concern from architecture description to support documentation of high-level and detailed design and construction of software. [IEEE 1016, Introduction]
Following the IEEE 1016 conceptual model, an SDD is organized into one or more design views. Each design view follows the conventions of its design viewpoint. IEEE 1016 defines the following design viewpoints for use:[5]
In addition, users of the standard are not limited to these viewpoints but may define their own.[6]
IEEE 1016-2009 is currently listed as 'Inactive - Reserved'.[7]
|
https://en.wikipedia.org/wiki/Design_document
|
In programming, a docstring is a string literal specified in source code that is used, like a comment, to document a specific segment of code. Unlike conventional source code comments, or even specifically formatted comments like docblocks, docstrings are not stripped from the source tree when it is parsed and are retained throughout the runtime of the program. This allows the programmer to inspect these comments at run time, for instance as an interactive help system, or as metadata.
Languages that support docstrings include Python, Lisp, Elixir, Clojure,[1] Gherkin,[2] Julia,[3] and Haskell.[4]
In Elixir, documentation is supported at the language level in the form of docstrings. Markdown is Elixir's de facto markup language of choice for use in docstrings:
In Lisp, docstrings are known as documentation strings. The Common Lisp standard states that a particular implementation may choose to discard docstrings at any time, for any reason. When they are kept, docstrings may be viewed and changed using the DOCUMENTATION function.[5] For instance:
The common practice of documenting a code object at the head of its definition is captured by the addition of docstring syntax in the Python language.
The docstring for a Python code object (a module, class, or function) is the first statement of that code object, immediately following the definition (the 'def' or 'class' statement). The statement must be a bare string literal, not any other kind of expression. The docstring for the code object is available on that code object's __doc__ attribute and through the help function.
The following Python file shows the declaration of docstrings within a Python source file:
Assuming that the above code was saved as mymodule.py, the following is an interactive session showing how the docstrings may be accessed:
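The declaration and runtime access described above can be illustrated with a short, self-contained sketch; the function name and its docstring here are invented for illustration:

```python
def add(a, b):
    """Return the sum of a and b.

    This string is the function's docstring: because it is the first
    statement in the function body, Python keeps it at runtime and
    exposes it on the __doc__ attribute and via help().
    """
    return a + b

# The docstring survives parsing and is available as metadata:
first_line = add.__doc__.splitlines()[0]
print(first_line)  # prints "Return the sum of a and b."
```

Calling help(add) in an interactive session renders the same string as formatted help text, which is the basis of Python's built-in interactive help system.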
|
https://en.wikipedia.org/wiki/Docstring
|