The Document Object Model (DOM) is a cross-platform and language-independent interface that treats an HTML or XML document as a tree structure wherein each node is an object representing a part of the document. The DOM represents a document with a logical tree. Each branch of the tree ends in a node, and each node contains objects. DOM methods allow programmatic access to the tree; with them one can change the structure, style or content of a document.[2] Nodes can have event handlers (also known as event listeners) attached to them. Once an event is triggered, the event handlers get executed.[3] The principal standardization of the DOM was handled by the World Wide Web Consortium (W3C), which last developed a recommendation in 2004. WHATWG took over the development of the standard, publishing it as a living document. The W3C now publishes stable snapshots of the WHATWG standard. In the HTML DOM, every element is a node.[4] The history of the Document Object Model is intertwined with the history of the "browser wars" of the late 1990s between Netscape Navigator and Microsoft Internet Explorer, as well as with that of JavaScript and JScript, the first scripting languages to be widely implemented in the JavaScript engines of web browsers. JavaScript was released by Netscape Communications in 1995 within Netscape Navigator 2.0. Netscape's competitor, Microsoft, released Internet Explorer 3.0 the following year with a reimplementation of JavaScript called JScript. JavaScript and JScript let web developers create web pages with client-side interactivity. The limited facilities for detecting user-generated events and modifying the HTML document in the first generation of these languages eventually became known as "DOM Level 0" or "Legacy DOM". No independent standard was developed for DOM Level 0, but it was partly described in the specifications for HTML 4.
Legacy DOM was limited in the kinds of elements that could be accessed. Form, link and image elements could be referenced with a hierarchical name that began with the root document object. A hierarchical name could make use of either the names or the sequential index of the traversed elements. For example, a form input element could be accessed as either document.myForm.myInput or document.forms[0].elements[0]. The Legacy DOM enabled client-side form validation and simple interface interactivity like creating tooltips. In 1997, Netscape and Microsoft released version 4.0 of Netscape Navigator and Internet Explorer respectively, adding support for Dynamic HTML (DHTML) functionality enabling changes to a loaded HTML document. DHTML required extensions to the rudimentary document object that was available in the Legacy DOM implementations. Although the Legacy DOM implementations were largely compatible, since JScript was based on JavaScript, the DHTML DOM extensions were developed in parallel by each browser maker and remained incompatible. These versions of the DOM became known as the "Intermediate DOM". After the standardization of ECMAScript, the W3C DOM Working Group began drafting a standard DOM specification. The completed specification, known as "DOM Level 1", became a W3C Recommendation in late 1998. By 2005, large parts of the W3C DOM were well supported by common ECMAScript-enabled browsers, including Internet Explorer 6 (from 2001), Opera, Safari and Gecko-based browsers (like Mozilla, Firefox, SeaMonkey and Camino). The W3C DOM Working Group published its final recommendation and subsequently disbanded in 2004.
Development efforts migrated to the WHATWG, which continues to maintain a living standard.[5] In 2009, the Web Applications group reorganized DOM activities at the W3C.[6] In 2013, due to a lack of progress and the impending release of HTML5, the DOM Level 4 specification was reassigned to the HTML Working Group to expedite its completion.[7] Meanwhile, in 2015, the Web Applications group was disbanded and DOM stewardship passed to the Web Platform group.[8] Beginning with the publication of DOM Level 4 in 2015, the W3C creates new recommendations based on snapshots of the WHATWG standard. To render a document such as an HTML page, most web browsers use an internal model similar to the DOM. The nodes of every document are organized in a tree structure, called the DOM tree, with the topmost node named the "Document object". When an HTML page is rendered in browsers, the browser downloads the HTML into local memory and automatically parses it to display the page on screen. However, the DOM does not necessarily need to be represented as a tree,[11] and some browsers have used other internal models.[12] When a web page is loaded, the browser creates a Document Object Model of the page, which is an object-oriented representation of the HTML document that acts as an interface between JavaScript and the document itself. This allows the creation of dynamic web pages,[13] because within a page JavaScript can read and modify the document's structure, style and content. A Document Object Model (DOM) tree is a hierarchical representation of an HTML or XML document. It consists of a root node, which is the document itself, and a series of child nodes that represent the elements, attributes, and text content of the document. Each node in the tree has a parent node, except for the root node, and can have multiple child nodes. Elements in an HTML or XML document are represented as nodes in the DOM tree. Each element node has a tag name and attributes, and can contain other element nodes or text nodes as children.
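Because the DOM is language-independent, the tree can be built and inspected from any language with a DOM implementation. As a sketch only, the following uses Python's standard-library xml.dom.minidom (the document content is invented for illustration) to show the node concepts described above:

```python
from xml.dom import minidom

# Parse a small document into a DOM tree.
doc = minidom.parseString(
    "<html><head><title>My Website</title></head>"
    "<body><h1>Welcome</h1></body></html>"
)

# The document node is the root of the tree.
root = doc.documentElement          # the <html> element
print(root.tagName)                 # html

# Child element nodes mirror the document's structure.
head, body = root.childNodes
print(head.tagName, body.tagName)   # head body

# Text content is held in leaf text nodes.
title = head.firstChild
print(title.firstChild.nodeValue)   # My Website

# Every node except the document itself knows its parent.
print(title.parentNode is head)     # True
```

The same tree could be produced by any other DOM implementation; only the host-language syntax differs.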
For example, consider an HTML document whose title element contains the text "My Website" and whose body contains an h1 element reading "Welcome". Text content within an element is represented as a text node in the DOM tree. Text nodes do not have attributes or child nodes, and are always leaf nodes in the tree. In this example, the text content "My Website" in the title element and "Welcome" in the h1 element are both represented as text nodes. Attributes of an element are represented as properties of the element node in the DOM tree. The DOM tree can be manipulated using JavaScript or other programming languages. Common tasks include navigating the tree; adding, removing, and modifying nodes; and getting and setting the properties of nodes. The DOM API provides a set of methods and properties to perform these operations, such as getElementById, createElement, appendChild, and innerHTML. Another way to create a DOM structure is to use the innerHTML property to insert HTML code as a string, creating the elements and children in the process. Another method is to use a JavaScript library or framework such as jQuery, AngularJS, React, or Vue.js. These libraries provide a more convenient and efficient way to create, manipulate and interact with the DOM. It is also possible to create a DOM structure from XML or JSON data, using JavaScript methods to parse the data and create the nodes accordingly. Creating a DOM structure does not necessarily mean that it will be displayed in the web page; it exists only in memory and must be appended to the document body or a specific container to be rendered. In summary, creating a DOM structure involves creating individual nodes and organizing them in a hierarchical structure, and it can be done using several methods depending on the use case and the developer's preference.
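The node-by-node creation steps above can be sketched with any DOM implementation. Here, purely for illustration, Python's standard-library xml.dom.minidom is used; the element names and the "class" attribute are invented, and browser-only features such as getElementById and innerHTML are not part of this implementation:

```python
from xml.dom import minidom

# Build a DOM structure in memory, node by node.
doc = minidom.Document()

html = doc.createElement("html")
doc.appendChild(html)

body = doc.createElement("body")
html.appendChild(body)

# An element node carries a tag name and attributes...
p = doc.createElement("p")
p.setAttribute("class", "greeting")

# ...while its text content lives in a separate child text node.
p.appendChild(doc.createTextNode("Hello, world"))
body.appendChild(p)

# Attributes are exposed as properties of the element node.
print(p.getAttribute("class"))   # greeting

# The structure exists only in memory until it is serialized or rendered;
# the serialization contains <p class="greeting">Hello, world</p>.
print(doc.toxml())
```

Appending the subtree to a rendered document (in a browser, to document.body) is what would actually display it.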
Because the DOM supports navigation in any direction (e.g., parent and previous sibling) and allows for arbitrary modifications, implementations typically buffer the entire document.[14] However, a DOM need not originate in a serialized document at all: it can be created in place with the DOM API. Even before the idea of the DOM originated, there were implementations of equivalent structures with persistent disk representation and rapid access, for example DynaText's model disclosed in [15] and various database approaches. Web browsers rely on layout engines to parse HTML into a DOM. Some layout engines, such as Trident/MSHTML, are associated primarily or exclusively with a particular browser, such as Internet Explorer. Others, including Blink, WebKit, and Gecko, are shared by a number of browsers, such as Google Chrome, Opera, Safari, and Firefox. The different layout engines implement the DOM standards to varying degrees of compliance.
https://en.wikipedia.org/wiki/Document_Object_Model
In software engineering, a double-chance function is a software design pattern with a strong application in cross-platform and scalable development. Consider a graphics API with functions to DrawPoint, DrawLine, and DrawSquare. It is easy to see that DrawLine can be implemented solely in terms of DrawPoint, and DrawSquare can in turn be implemented through four calls to DrawLine. If you were porting this API to a new architecture, you would have a choice: implement three different functions natively (taking more time to implement, but likely resulting in faster code), or write DrawPoint natively and implement the others as described above using common, cross-platform code. An important example of this approach is the X11 graphics system, which can be ported to new graphics hardware by providing a very small number of device-dependent primitives, leaving higher-level functions to a hardware-independent layer.[1][2] The double-chance function is an optimal method of creating such an implementation, whereby the first draft of the port can use the "fast to market, slow to run" version with a common DrawPoint function, while later versions can be modified as "slow to market, fast to run". Where the double-chance pattern scores high is that the base API includes the self-supporting implementation given here as part of the null driver, and all other implementations are extensions of it. Consequently, the first port is, in fact, the first usable implementation. In a typical C++ implementation, the CBaseGfxAPI::DrawPoint function is never used, per se, as any graphics call goes through one of its derived classes. So a call to CNewGfxAPI::DrawSquare would have its first chance to render a square by the CNewGfxAPI class. If no native implementation exists, then the base class is called, at which point the virtualization takes over and means that CNewGfxAPI::DrawLine is called. This gives the CNewGfxAPI class a "second chance" to use native code, if any is available.
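The C++ listing that the paragraph above discusses is not reproduced in this copy. The following is a reconstruction sketch consistent with the class names the text mentions; the axis-stepping line loop and the pointsDrawn counter are illustrative additions, not part of the original:

```cpp
#include <algorithm>
#include <cstdlib>

// Null driver: every routine is ultimately expressible via DrawPoint,
// so the base class is a complete, if slow, implementation.
class CBaseGfxAPI {
public:
    virtual ~CBaseGfxAPI() {}

    virtual void DrawPoint(int /*x*/, int /*y*/) {
        /* null driver: no output device */
    }

    // Portable fallback: step along the longer axis, plotting points.
    // Each DrawPoint call dispatches virtually, giving a derived class
    // its "second chance" to supply native code.
    virtual void DrawLine(int x1, int y1, int x2, int y2) {
        int dx = x2 - x1, dy = y2 - y1;
        int steps = std::max(std::abs(dx), std::abs(dy));
        if (steps == 0) { DrawPoint(x1, y1); return; }
        for (int i = 0; i <= steps; ++i)
            DrawPoint(x1 + dx * i / steps, y1 + dy * i / steps);
    }

    // Portable fallback: a square is four lines.
    virtual void DrawSquare(int x1, int y1, int x2, int y2) {
        DrawLine(x1, y1, x2, y1);
        DrawLine(x2, y1, x2, y2);
        DrawLine(x2, y2, x1, y2);
        DrawLine(x1, y2, x1, y1);
    }
};

// First draft of a port: only DrawPoint is native; DrawLine and
// DrawSquare fall through to the cross-platform base versions.
class CNewGfxAPI : public CBaseGfxAPI {
public:
    int pointsDrawn = 0;   // illustrative counter, not in the original

    virtual void DrawPoint(int /*x*/, int /*y*/) {
        ++pointsDrawn;     // a real port would write a pixel natively here
    }
};
```

Calling DrawSquare on a CNewGfxAPI instance runs the base-class fallbacks, and every plotted point is dispatched back to the native CNewGfxAPI::DrawPoint through the virtual call.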
With this method it is, theoretically, possible to build an entire 3D engine (applying software rasterizing) using only one native function in the form of DrawPoint, with other functions being implemented as and when time permits. In practice this would be hopelessly slow, but it does demonstrate the possibilities for double-chance functions.
https://en.wikipedia.org/wiki/Double-chance_function
A foreign function interface (FFI) is a mechanism by which a program written in one programming language can call routines or make use of services written or compiled in another one. An FFI is often used in contexts where calls are made into a binary dynamic-link library. The term comes from the specification for Common Lisp, which explicitly refers to the language feature enabling inter-language calls as such;[citation needed] the term is also used officially in the interpreter and compiler documentation for Haskell,[1] Rust,[2] PHP,[3] Python, and LuaJIT (Lua).[4][5][6] Other languages use other terminology: Ada has language bindings, while Java has the Java Native Interface (JNI) or Java Native Access (JNA). "Foreign function interface" has nevertheless become generic terminology for mechanisms which provide such services. The primary function of a foreign function interface is to mate the semantics and calling conventions of one programming language (the host language, or the language which defines the FFI) with the semantics and conventions of another (the guest language). This process must also take into consideration the runtime environments and application binary interfaces of both; it can be done in several ways, and FFIs may be complicated by a number of considerations. Many FFIs can also be generated automatically: for example, SWIG. However, in the case of an extension language, a semantic inversion of the relationship of guest and host can occur, when a smaller body of extension-language code is the guest invoking services in the larger body of host-language code, such as writing a small plugin[26] for GIMP.[27] Some FFIs are restricted to free-standing functions, while others also allow calls of functions embedded in an object or class (often called method calls); some even permit migration of complex datatypes or objects across the language boundary.
In most cases, an FFI is defined by a higher-level language so that it may employ services defined and implemented in a lower-level language, typically a system programming language like C or C++. This is typically done either to access operating system (OS) services in the language in which the OS API is defined, or for performance reasons. Many FFIs also provide the means for the called language to invoke services in the host language. The term foreign function interface is generally not used to describe multi-lingual runtimes such as the Microsoft Common Language Runtime, where a common substrate is provided which enables any CLR-compliant language to use services defined in any other. (However, in this case the CLR does include an FFI, P/Invoke, to call outside the runtime.) In addition, many distributed computing architectures, such as Java remote method invocation (RMI), RPC, CORBA, SOAP and D-Bus, permit different services to be written in different languages; such architectures are generally not considered FFIs. There are also special cases in which languages compile into the same bytecode VM, like Clojure and Java, or Elixir and Erlang. Since there is no foreign interface involved, this is not an FFI, strictly speaking, even though it offers the same functionality to the user.
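As a concrete sketch of a higher-level language calling into C, Python's ctypes FFI can invoke the C library's strlen. The example assumes a Unix-like system where the C library can be located and loaded:

```python
import ctypes
import ctypes.util

# Load the C standard library (assumes a Unix-like system; on Linux,
# CDLL(None) falls back to the symbols of the running process).
libc = ctypes.CDLL(ctypes.util.find_library("c") or None)

# The FFI must mate the two languages' semantics: declare that the
# foreign strlen takes a C char * and returns a C size_t.
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

# Call across the language boundary.
print(libc.strlen(b"foreign function"))   # 16
```

Declaring argtypes and restype is exactly the "mating of calling conventions" described above; without it, the FFI would guess at the C types.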
https://en.wikipedia.org/wiki/Foreign_function_interface
In software development, frontend refers to the presentation layer that users interact with, while backend involves the data management and processing behind the scenes, and full-stack development refers to mastering both. In the client–server model, the client is usually considered the frontend, handling user-facing tasks, and the server is the backend, managing data and logic. Some presentation tasks may also be performed by the server. In software architecture, there may be many layers between the hardware and end user. The frontend is an abstraction, simplifying the underlying component by providing a user-friendly interface, while the backend usually handles data storage and business logic. E-commerce website: the frontend is the user interface (e.g., product pages, search bar), while the backend processes payments and updates inventory. Banking app: the frontend displays account balances, while the backend handles secure transactions and updates records. Social media platform: the frontend shows the news feed, while the backend stores posts and manages notifications. In telecommunication, the frontend can be considered a device or service, while the backend is the infrastructure that supports provision of the service. A rule of thumb is that the client side (or "frontend") is any component manipulated by the user. The server-side (or "backend") code usually resides on the server, often far removed physically from the user. In content management systems, the terms frontend and backend may refer to the end-user-facing views of the CMS and the administrative views, respectively.[1][2] In speech synthesis, the frontend refers to the part of the synthesis system that converts the input text into a symbolic phonetic representation, and the backend converts the symbolic phonetic representation into actual sounds.[3] In compilers, the frontend translates computer programming source code into an intermediate representation, and the backend works with the intermediate representation to produce code in a computer output language.
The backend usually optimizes the intermediate representation to produce code that runs faster. The frontend/backend distinction can separate the parser section that deals with source code from the backend that generates code and optimizes it. Some designs, such as GCC, offer choices between multiple frontends (parsing different source languages) or backends (generating code for different target processors).[4] Some graphical user interface (GUI) applications running in a desktop environment are implemented as a thin frontend for underlying command-line interface (CLI) programs, to save the user from learning the special terminology and memorizing the commands. Another way to understand the difference between the two is to compare the knowledge required of a frontend versus a backend software developer; in web development, for example, the two positions, despite possibly working on one product, require very distinct sets of skills. The frontend communicates with the backend through an API. In the case of web and mobile frontends, the API is often based on HTTP request/response. The API is sometimes designed using the "Backend for Frontend" (BFF) pattern, which serves responses tailored to ease processing on the frontend side.[5] In network computing, frontend can refer to any hardware that optimizes or protects network traffic.[6] It is called application front-end hardware because it is placed on the network's outward-facing frontend or boundary. Network traffic passes through the front-end hardware before entering the network. In processor design, frontend design would be the initial description of the behavior of a circuit in a hardware description language such as Verilog, while backend design would be the process of mapping that behavior to physical transistors on a die.[7]
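The frontend/backend split over an HTTP request/response API can be sketched in a few lines of standard-library Python; the inventory endpoint and its data are invented for the example:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# --- Backend: manages data and logic, exposed over an HTTP API ---
INVENTORY = {"widget": 3}

class ApiHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(INVENTORY).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):   # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), ApiHandler)   # port 0: any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# --- Frontend: presentation layer, fetching data through the API ---
url = f"http://127.0.0.1:{server.server_port}/inventory"
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)
print(f"{data['widget']} widgets in stock")

server.shutdown()
```

A BFF would sit between the two, reshaping the backend's response into exactly what one particular frontend needs to display.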
https://en.wikipedia.org/wiki/Front_and_back_ends
In computing, an interface is a shared boundary across which two or more separate components of a computer system exchange information. The exchange can be between software, computer hardware, peripheral devices, humans, and combinations of these.[1] Some computer hardware devices, such as a touchscreen, can both send and receive data through the interface, while others, such as a mouse or microphone, may only provide an interface to send data to a given system.[2] Hardware interfaces exist in many components, such as the various buses, storage devices, other I/O devices, etc. A hardware interface is described by the mechanical, electrical, and logical signals at the interface and the protocol for sequencing them (sometimes called signaling).[3] A standard interface, such as SCSI, decouples the design and introduction of computing hardware, such as I/O devices, from the design and introduction of other components of a computing system, thereby allowing users and manufacturers great flexibility in the implementation of computing systems.[3] Hardware interfaces can be parallel, with several electrical connections carrying parts of the data simultaneously, or serial, where data are sent one bit at a time.[4] A software interface may refer to a wide range of different types of interfaces at different "levels". For example, an operating system may interface with pieces of hardware. Applications or programs running on the operating system may need to interact via data streams, filters, and pipelines.[5] In object-oriented programs, objects within an application may need to interact via methods.[6] A key principle of design is to prohibit access to all resources by default, allowing access only through well-defined entry points, i.e., interfaces.[7] Software interfaces provide access to computer resources (such as memory, CPU, storage, etc.)
of the underlying computer system; direct access (i.e., not through well-designed interfaces) to such resources by software can have major ramifications, sometimes disastrous ones, for functionality and stability.[citation needed] Interfaces between software components can provide constants, data types, types of procedures, exception specifications, and method signatures. Sometimes, public variables are also defined as part of an interface.[8] The interface of a software module A is deliberately defined separately from the implementation of that module. The latter contains the actual code of the procedures and methods described in the interface, as well as other "private" variables, procedures, etc. Another software module B, for example the client to A, that interacts with A is forced to do so only through the published interface. One practical advantage of this arrangement is that replacing the implementation of A with another implementation of the same interface should not cause B to fail: how A internally meets the requirements of the interface is not relevant to B, which is only concerned with the specifications of the interface. (See also the Liskov substitution principle.)[citation needed] In some object-oriented languages, especially those without full multiple inheritance, the term interface is used to define an abstract type that acts as an abstraction of a class. It contains no data, but defines behaviours as method signatures. A class having code and data for all the methods corresponding to that interface, and declaring so, is said to implement that interface.[9] Furthermore, even in single-inheritance languages, one can implement multiple interfaces and hence can be of different types at the same time.[10] An interface is thus a type definition; anywhere an object can be exchanged (for example, in a function or method call) the type of the object to be exchanged can be defined in terms of one of its implemented interfaces or base classes rather than specifying the specific class.
This approach means that any class that implements that interface can be used.[citation needed] For example, a dummy implementation may be used to allow development to progress before the final implementation is available. In another case, a fake or mock implementation may be substituted during testing. Such stub implementations are replaced by real code later in the development process. Usually, a method defined in an interface contains no code and thus cannot itself be called; it must be implemented by non-abstract code to be run when it is invoked.[citation needed] An interface called "Stack" might define two methods: push() and pop(). It can be implemented in different ways, for example, FastStack and GenericStack: the first being fast, working with a data structure of fixed size, and the second using a data structure that can be resized, but at the cost of somewhat lower speed. Though interfaces can contain many methods, they may contain only one or even none at all. For example, the Java language defines the interface Readable that has the single read() method; various implementations are used for different purposes, including BufferedReader, FileReader, InputStreamReader, PipedReader, and StringReader. Marker interfaces like Serializable contain no methods at all and serve to provide run-time information to generic processing using reflection.[11] The use of interfaces allows for a programming style called programming to the interface. The idea behind this approach is to base programming logic on the interfaces of the objects used, rather than on internal implementation details. Programming to the interface reduces dependency on implementation specifics and makes code more reusable.[12] Pushing this idea to the extreme, inversion of control leaves the context to inject the code with the specific implementations of the interface that will be used to perform the work.
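The Stack example above can be sketched as follows. FastStack and GenericStack are the names the text uses; the Python rendering with an abstract base class, and the reverse helper that programs to the interface, are illustrative:

```python
from abc import ABC, abstractmethod

# The interface: method signatures only, no data, no implementation.
class Stack(ABC):
    @abstractmethod
    def push(self, item): ...
    @abstractmethod
    def pop(self): ...

# One implementation: fixed-size storage, trading flexibility for speed.
class FastStack(Stack):
    def __init__(self, capacity):
        self._items = [None] * capacity
        self._top = 0
    def push(self, item):
        self._items[self._top] = item
        self._top += 1
    def pop(self):
        self._top -= 1
        return self._items[self._top]

# Another: resizable storage, unbounded but somewhat slower.
class GenericStack(Stack):
    def __init__(self):
        self._items = []
    def push(self, item):
        self._items.append(item)
    def pop(self):
        return self._items.pop()

# Programming to the interface: this works with either implementation.
def reverse(stack: Stack, values):
    for v in values:
        stack.push(v)
    return [stack.pop() for _ in values]

print(reverse(FastStack(8), [1, 2, 3]))     # [3, 2, 1]
print(reverse(GenericStack(), [1, 2, 3]))   # [3, 2, 1]
```

Because reverse depends only on the Stack interface, swapping one implementation for the other cannot break it, which is the practical advantage described above.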
A user interface is a point of interaction between a computer and humans; it includes any number of modalities of interaction (such as graphics, sound, position, movement, etc.) where data is transferred between the user and the computer system.
https://en.wikipedia.org/wiki/Interface_(computing)
An interface control document (ICD) in systems engineering[1] and software engineering provides a record of all interface information (such as drawings, diagrams, tables, and textual information) generated for a project.[2] The underlying interface documents provide the details and describe the interface or interfaces between subsystems or to a system or subsystem. An ICD is the umbrella document over the system interfaces, and the interface specifications beneath it describe each interface in detail. The purpose of the ICD is to control and maintain a record of system interface information for a given project. This includes all possible inputs to and all potential outputs from a system for some potential or actual user of the system. The internal interfaces of a system or subsystem are documented in their respective interface requirements specifications, while human–machine interfaces might be in a system design document (such as a software design document).[citation needed] Interface control documents are a key element of systems engineering, as they control the documented interface(s) of a system, as well as specify a set of interface versions that work together, and thereby bound the requirements. An application programming interface is a form of interface for a software system, in that it describes how to access the functions and services provided by a system via an interface. If a system producer wants others to be able to use the system, an ICD and interface specifications (or their equivalent) are a worthwhile investment. An ICD should only describe the detailed interface documentation itself, and not the characteristics of the systems which use it to connect. The function and logic of those systems should be described in their own requirements and design documents as needed. In this way, independent teams can develop the connecting systems which use the specified interface, without regard to how other systems will react to data and signals which are sent over the interface.
For example, the ICD and associated interface documentation must include information about the size and format of the data and what it measures, but not any ultimate meaning of the data in its intended use by any user. An adequately defined interface will allow one team to test its implementation of the interface by simulating the opposing side with a simple communications simulator. Not knowing the business logic of the system on the far side of an interface makes it more likely that one will develop a system that does not break when the other system changes its business rules and logic. (Provision for limits or sanity checking should be pointedly avoided in an interface requirements specification.) Thus, good modularity and abstraction, leading to easy maintenance and extensibility, are achieved.
https://en.wikipedia.org/wiki/Interface_control_document
3D graphics have become so popular, particularly in video games, that specialized APIs (application programming interfaces) have been created to ease the processes in all stages of computer graphics generation. These APIs have also proved vital to computer graphics hardware manufacturers, as they provide a way for programmers to access the hardware in an abstract way, while still taking advantage of the special hardware of any specific graphics card. The first 3D graphics framework was probably Core, published by the ACM in 1977. A number of APIs for 3D computer graphics are particularly popular. There are also higher-level 3D scene-graph APIs which provide additional functionality on top of the lower-level rendering API, several of which are under active development, and there is growing interest in web-browser-based high-level APIs for 3D graphics engines.
https://en.wikipedia.org/wiki/List_of_3D_graphics_APIs
In software engineering, a microservice architecture is an architectural pattern that organizes an application into a collection of loosely coupled, fine-grained services that communicate through lightweight protocols. This pattern is characterized by the ability to develop and deploy services independently, improving modularity, scalability, and adaptability. However, it introduces additional complexity, particularly in managing distributed systems and inter-service communication, making the initial implementation more challenging compared to a monolithic architecture.[1] There is no single, universally agreed-upon definition of microservices. However, they are generally characterized by a focus on modularity, with each service designed around a specific business capability. These services are loosely coupled, independently deployable, and often developed and scaled separately, enabling greater flexibility and agility in managing complex systems. Microservices architecture is closely associated with principles such as domain-driven design, decentralization of data and governance, and the flexibility to use different technologies for individual services to best meet their requirements.[2][3][4] It is common for microservices architectures to be adopted for cloud-native applications, serverless computing, and applications using lightweight container deployment. According to Fowler, because of the large number of services (when compared to monolithic application implementations), decentralized continuous delivery and DevOps with holistic service monitoring are necessary to effectively develop, maintain, and operate such applications.[5] A consequence of (and rationale for) following this approach is that the individual microservices can be individually scaled.
In the monolithic approach, an application supporting three functions would have to be scaled in its entirety even if only one of these functions had a resource constraint.[6] With microservices, only the microservice supporting the function with resource constraints needs to be scaled out, thus providing resource and cost optimization benefits.[7] In February 2020, the Cloud Microservices Market Research Report predicted that the global microservice architecture market size will increase at a CAGR of 21.37% from 2019 to 2026 and reach $3.1 billion by 2026.[8] Cell-based architecture is a distributed computing design in which computational resources are organized into self-contained units called cells. Each cell operates independently, handling a subset of requests while maintaining scalability, fault isolation, and availability.[2][9][10] A cell typically consists of multiple microservices and functions as an autonomous unit. In some implementations, entire sets of microservices are replicated across multiple cells, enabling requests to be rerouted to another operational cell if one experiences a failure. This approach is intended to improve system-wide resilience by limiting the impact of localized failures.[2][9][10] Some implementations incorporate circuit breakers within and between cells. Within a cell, circuit breakers may be used to mitigate cascading failures among microservices, while inter-cell circuit breakers can isolate failing cells and redirect traffic to those that remain operational.[2][9][10] Cell-based architecture has been adopted in certain large-scale distributed systems where fault isolation and redundancy are design priorities.
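The circuit-breaker idea can be sketched in a few lines of Python; the class, the failure threshold, and the failing service below are illustrative inventions, not taken from any particular framework:

```python
class CircuitBreaker:
    """Stops calling a failing service after `threshold` consecutive errors."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = 0

    @property
    def open(self):
        return self.failures >= self.threshold

    def call(self, service, *args):
        if self.open:
            # Fail fast instead of piling more load onto a broken service.
            raise RuntimeError("circuit open: failing fast")
        try:
            result = service(*args)
        except Exception:
            self.failures += 1
            raise
        self.failures = 0   # a success closes the circuit again
        return result

# Simulate a cell (or microservice) that has become unavailable.
def flaky():
    raise IOError("cell unavailable")

breaker = CircuitBreaker(threshold=2)
for _ in range(2):
    try:
        breaker.call(flaky)
    except IOError:
        pass

print(breaker.open)   # True: further calls fail fast without touching the cell
```

A cell router could hold one such breaker per cell and, once a breaker opens, redirect traffic to the cells that remain operational.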
Its implementation varies based on system requirements, infrastructure constraints, and specific operational goals.[2][9][10] In 1999, software developer Peter Rodgers had been working on the Dexter research project at Hewlett Packard Labs, whose aim was to make code less brittle and to make large-scale, complex software systems robust to change.[11] Ultimately this path of research led to the development of resource-oriented computing (ROC), a generalized computation abstraction in which REST is a special subset. In 2005, during a presentation at the Web Services Edge conference, Rodgers argued for "REST-services" and stated that "Software components are Micro-Web-Services... Micro-Services are composed using Unix-like pipelines (the Web meets Unix = true loose-coupling). Services can call services (+multiple language run-times). Complex service assemblies are abstracted behind simple URI interfaces. Any service, at any granularity, can be exposed." He described how a well-designed microservices platform "applies the underlying architectural principles of the Web and REST services together with Unix-like scheduling and pipelines to provide radical flexibility and improved simplicity in service-oriented architectures".[12] Also in 2005, Alistair Cockburn wrote about hexagonal architecture, a software design pattern that is used along with microservices. This pattern makes the design of a microservice possible, since it isolates the business logic in layers from the auxiliary services needed to deploy and run the microservice completely independently of others. Determining the appropriate level of (micro)service granularity in a microservices architecture often requires iterative collaboration between architects and developers. This process involves evaluating user requirements, service responsibilities, and architectural characteristics, such as non-functional requirements. Neal Ford highlights the role of integrator and disintegrator factors in this context.
Integrator factors, such as shared transactions or tightly coupled processes, favor combining services, while disintegrator factors, such as fault tolerance or independent scalability, encourage splitting services to meet operational and architectural goals. Additionally, fitness functions, as proposed by Neal Ford, can be used to validate architectural decisions and service granularity by continuously measuring system qualities or behaviors that are critical to stakeholders, ensuring alignment with overall architectural objectives.[13][14]

A bounded context, a fundamental concept in domain-driven design (DDD), defines a specific area within which a domain model is consistent and valid, ensuring clarity and separation of concerns.[15] In microservices architecture, a bounded context often maps to a microservice, but this relationship can vary depending on the design approach. A one-to-one relationship, where each bounded context is implemented as a single microservice, is typically ideal as it maintains clear boundaries, reduces coupling, and enables independent deployment and scaling. However, other mappings may also be appropriate: a one-to-many relationship can arise when a bounded context is divided into multiple microservices to address varying scalability or other operational needs, while a many-to-one relationship may consolidate multiple bounded contexts into a single microservice for simplicity or to minimize operational overhead.
The choice of relationship should balance the principles of DDD with the system's business goals, technical constraints, and operational requirements.[16]

The benefits of decomposing an application into different smaller services are numerous. The microservices approach is nevertheless subject to criticism for a number of issues. The architecture introduces additional complexity and new problems to deal with, such as latency, message format design,[34] backup/availability/consistency (BAC),[35] load balancing and fault tolerance.[36] All of these problems have to be addressed at scale. The complexity of a monolithic application does not disappear if it is re-implemented as a set of microservices: some of it is translated into operational complexity,[37] and the complexity also manifests itself as increased network traffic and slower performance. Also, an application made up of any number of microservices has a larger number of interface points to access its respective ecosystem, which increases the architectural complexity.[38] Various organizing principles (such as hypermedia as the engine of application state (HATEOAS), interface and data model documentation captured via Swagger, etc.) have been applied to reduce the impact of such additional complexity.

Microservices are susceptible to the fallacies of distributed computing, a series of misconceptions that can lead to significant issues in software development and deployment.[13]

Ideally, microservices follow a "share-nothing" architecture. In practice, however, microservices architectures often encounter situations where code must be shared across services. Common approaches to addressing this challenge include utilizing separate shared libraries for reusable components (e.g., a security library), replicating stable modules with minimal changes across services, or, in certain cases, consolidating multiple microservices into a single service to reduce complexity.
Each approach has its advantages and trade-offs, depending on the specific context and requirements.[39] According to O'Reilly, each microservice should have its own architectural characteristics (a.k.a. non-functional requirements), and architects should not define uniform characteristics for the entire distributed system.[13]

Microservices can be implemented in different programming languages and might use different infrastructures. Therefore, the most important technology choices are the way microservices communicate with each other (synchronous, asynchronous, UI integration) and the protocols used for the communication (RESTful HTTP, messaging, GraphQL...). In a traditional system, most technology choices, like the programming language, impact the whole system, so the approach to choosing technologies is quite different.[40] The Eclipse Foundation has published a specification for developing microservices, Eclipse MicroProfile.[41][42]

In a service mesh, each service instance is paired with an instance of a reverse proxy server, called a service proxy, sidecar proxy, or sidecar. The service instance and sidecar proxy share a container, and the containers are managed by a container orchestration tool such as Kubernetes, Nomad, Docker Swarm, or DC/OS. The service proxies are responsible for communication with other service instances and can support capabilities such as service (instance) discovery, load balancing, authentication and authorization, secure communications, and others.
https://en.wikipedia.org/wiki/Microservices
In compiler construction, name mangling (also called name decoration) is a technique used to solve various problems caused by the need to resolve unique names for programming entities in many modern programming languages. It provides a means to encode added information in the name of a function, structure, class or another data type, to pass more semantic information from the compiler to the linker.

The need for name mangling arises where a language allows different entities to be named with the same identifier as long as they occupy a different namespace (typically defined by a module, class, or explicit namespace directive) or have different type signatures (such as in function overloading). It is required in these uses because each signature might require a different, specialized calling convention in the machine code.

Any object code produced by compilers is usually linked with other pieces of object code (produced by the same or another compiler) by a type of program called a linker. The linker needs a great deal of information on each program entity. For example, to correctly link a function it needs its name, the number of arguments and their types, and so on.

The simple programming languages of the 1970s, like C, only distinguished subroutines by their name, ignoring other information including parameter and return types. Later languages, like C++, defined stricter requirements for routines to be considered "equal", such as the parameter types, return type, and calling convention of a function. These requirements enable method overloading and detection of some bugs (such as using different definitions of a function when compiling different source code files). These stricter requirements needed to work with extant programming tools and conventions. Thus, added requirements were encoded in the name of the symbol, since that was the only information a traditional linker had about a symbol.
Although name mangling is not generally required or used by languages that do not support function overloading, like C and classic Pascal, they use it in some cases to provide added information about a function. For example, compilers targeted at Microsoft Windows platforms support a variety of calling conventions, which determine the manner in which parameters are sent to subroutines and results are returned. Because the different calling conventions are incompatible with one another, compilers mangle symbols with codes detailing which convention should be used to call the specific routine.

The mangling scheme for Windows was established by Microsoft and has been informally followed by other compilers including Digital Mars, Borland, and GNU Compiler Collection (GCC) when compiling code for the Windows platforms. The scheme even applies to other languages, such as Pascal, D, Delphi, Fortran, and C#. This allows subroutines written in those languages to call, or be called by, extant Windows libraries using a calling convention different from their default.

In the stdcall and fastcall mangling schemes, the function is encoded as _name@X and @name@X respectively, where X is the number of bytes, in decimal, of the argument(s) in the parameter list (including those passed in registers, for fastcall). In the case of cdecl, the function name is merely prefixed by an underscore. For example, a 32-bit compiler emits a cdecl function f(int x) as _f, a stdcall function g(int y) as _g@4, and a fastcall function h(int z) as @h@4.

The 64-bit convention on Windows (Microsoft C) has no leading underscore. This difference may in some rare cases lead to unresolved externals when porting such code to 64 bits. For example, Fortran code can use 'alias' to link against a C method by name; this will compile and link fine under 32 bits, but generate an unresolved external _fun under 64 bits. One workaround for this is not to use 'alias' at all (in which case the method names typically need to be capitalized in C and Fortran).
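The three 32-bit decoration rules just described are mechanical enough to express as a small helper. The following Python sketch merely restates those rules; the function name and the byte-count convention (4 bytes per int on a 32-bit target) are assumptions made for the illustration.

```python
def decorate_win32(name, arg_bytes, convention="cdecl"):
    """Return the decorated (mangled) symbol for a 32-bit Windows
    function, following the schemes described in the text:
      cdecl    -> _name
      stdcall  -> _name@X   (X = total argument bytes, decimal)
      fastcall -> @name@X
    `arg_bytes` is the total size of the parameter list in bytes.
    """
    if convention == "cdecl":
        return f"_{name}"
    if convention == "stdcall":
        return f"_{name}@{arg_bytes}"
    if convention == "fastcall":
        return f"@{name}@{arg_bytes}"
    raise ValueError(f"unknown calling convention: {convention}")

# One int argument occupies 4 bytes on a 32-bit target:
print(decorate_win32("f", 4, "cdecl"))     # _f
print(decorate_win32("g", 4, "stdcall"))   # _g@4
print(decorate_win32("h", 4, "fastcall"))  # @h@4
```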
Another is to use the BIND option.

In C, most compilers also mangle static functions and variables (and, in C++, functions and variables declared static or put in the anonymous namespace) in translation units, using the same mangling rules as for their non-static versions. If functions with the same name (and, for C++, the same parameters) are defined and used in different translation units, each will mangle to the same name, potentially leading to a clash. However, they are not equivalent if called in their respective translation units. Compilers are usually free to emit arbitrary mangling for these functions, because it is illegal to access them from other translation units directly, so they never need linking between different object code. To prevent linking conflicts, compilers use standard mangling, but mark the symbols as so-called 'local' symbols. When linking many such translation units there might be multiple definitions of a function with the same name, but the resulting code will only call one or another depending on which translation unit it came from. This is usually done using the relocation mechanism.

C++ compilers are the most widespread users of name mangling. The first C++ compilers were implemented as translators to C source code, which would then be compiled by a C compiler to object code; because of this, symbol names had to conform to C identifier rules. Even later, with the emergence of compilers that produced machine code or assembly directly, the system's linker generally did not support C++ symbols, and mangling was still required.

The C++ language does not define a standard decoration scheme, so each compiler uses its own. C++ also has complex language features, such as classes, templates, namespaces, and operator overloading, that alter the meaning of specific symbols based on context or usage. Metadata about these features can be disambiguated by mangling (decorating) the name of a symbol.
Because the name-mangling systems for such features are not standardized across compilers, few linkers can link object code that was produced by different compilers.

A single C++ translation unit might define two functions named f(), for instance one taking no arguments and one taking an int. These are distinct functions, with no relation to each other apart from the name. The C++ compiler therefore encodes the type information in the symbol name; under the GNU GCC 3.x compilers, following the IA-64 (Itanium) ABI, f() becomes _Z1fv and f(int) becomes _Z1fi. Even a function g() whose name is unique is still mangled (to _Z1gv): name mangling applies to all C++ symbols (except for those in an extern "C" {} block).

All mangled symbols begin with _Z (note that an identifier beginning with an underscore followed by a capital letter is a reserved identifier in C, so conflict with user identifiers is avoided); for nested names (including both namespaces and classes), this is followed by N, then a series of <length, id> pairs (the length being the length of the next identifier), and finally E. For example, wikipedia::article::format becomes _ZN9wikipedia7article6formatE. For functions, this is then followed by the type information; as format() is a void function, this is simply v, hence _ZN9wikipedia7article6formatEv. For print_to, the standard type std::ostream (which is a typedef for std::basic_ostream<char, std::char_traits<char> >) has the special alias So; a reference to this type is therefore RSo, with the complete name for the function being _ZN9wikipedia7article8print_toERSo.

There isn't a standardized scheme by which even trivial C++ identifiers are mangled, and consequently different compilers (or even different versions of the same compiler, or the same compiler on different platforms) mangle public symbols in radically different (and thus totally incompatible) ways.
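The <length, id> encoding just described can be reproduced with a few lines of Python. This is a deliberately tiny sketch covering only nested names of functions, with the parameter encoding passed in by hand ('v' for void, 'RSo' for a std::ostream reference); the real Itanium ABI grammar is far larger.

```python
def mangle_itanium(qualified_name, params="v"):
    """Mangle a C++ qualified function name per the simplified
    Itanium rules described above: _Z, then N <len><id>... E for
    a nested name, then the parameter encoding ('v' = void)."""
    parts = qualified_name.split("::")
    if len(parts) == 1:
        # Non-nested name: no N...E wrapper, just <len><id>.
        return f"_Z{len(parts[0])}{parts[0]}{params}"
    nested = "".join(f"{len(p)}{p}" for p in parts)
    return f"_ZN{nested}E{params}"

print(mangle_itanium("f"))                           # _Z1fv
print(mangle_itanium("wikipedia::article::format"))  # _ZN9wikipedia7article6formatEv
print(mangle_itanium("wikipedia::article::print_to", "RSo"))
# _ZN9wikipedia7article8print_toERSo
```

The output can be checked against a real toolchain with `echo _ZN9wikipedia7article6formatEv | c++filt`.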
Different C++ compilers mangle the same functions in entirely different, mutually incompatible ways.

The job of the common C++ idiom extern "C" is to ensure that the symbols within are "unmangled" – that the compiler emits a binary file with their names undecorated, as a C compiler would do. As C language definitions are unmangled, the C++ compiler needs to avoid mangling references to these identifiers. For example, the standard strings library, <string.h>, usually wraps its declarations in an extern "C" block, so that code calling strcmp and memset links against the correct, unmangled symbols. If the extern "C" had not been used, a (SunPro) C++ compiler would instead emit references to mangled names; since those symbols do not exist in the C runtime library (e.g. libc), link errors would result.

It would seem that standardized name mangling in the C++ language would lead to greater interoperability between compiler implementations. However, such a standardization by itself would not suffice to guarantee C++ compiler interoperability, and it might even create a false impression that interoperability is possible and safe when it isn't. Name mangling is only one of several application binary interface (ABI) details that need to be decided and observed by a C++ implementation. Other ABI aspects, like exception handling, virtual table layout, and structure and stack frame padding, also cause differing C++ implementations to be incompatible. Further, requiring a particular form of mangling would cause issues for systems where implementation limits (e.g., length of symbols) dictate a particular mangling scheme. A standardized requirement for name mangling would also prevent an implementation where mangling was not required at all – for example, a linker that understood the C++ language. The C++ standard therefore does not attempt to standardize name mangling.
On the contrary, the Annotated C++ Reference Manual (also known as ARM, ISBN 0-201-51459-1, section 7.2.1c) actively encourages the use of different mangling schemes to prevent linking when other aspects of the ABI are incompatible. Nevertheless, as detailed in the section above, on some platforms[4] the full C++ ABI has been standardized, including name mangling.

Because C++ symbols are routinely exported from DLL and shared object files, the name mangling scheme is not merely a compiler-internal matter. Different compilers (or different versions of the same compiler, in many cases) produce such binaries under different name decoration schemes, meaning that symbols are frequently unresolved if the compilers used to create the library and the program using it employed different schemes. For example, if a system with multiple C++ compilers installed (e.g., GNU GCC and the OS vendor's compiler) wished to install the Boost C++ Libraries, it would have to be compiled multiple times (once for GCC and once for the vendor compiler).

It is good for safety purposes that compilers producing incompatible object code (code based on different ABIs regarding, e.g., classes and exceptions) use different name mangling schemes. This guarantees that these incompatibilities are detected at the linking phase, not when executing the software (which could lead to obscure bugs and serious stability issues). For this reason, name decoration is an important aspect of any C++-related ABI.

There are instances, particularly in large, complex code bases, where it can be difficult or impractical to map the mangled name emitted within a linker error message back to the particular corresponding token/variable-name in the source. This problem can make identifying the relevant source file(s) very difficult for build or test engineers, even if only one compiler and linker are in use.
Demanglers (including those within the linker error reporting mechanisms) sometimes help, but the mangling mechanism itself may discard critical disambiguating information.

In Java, the signature of a method or a class contains its name and the types of its method arguments and return value, where applicable. The format of signatures is documented, as the language, compiler, and .class file format were all designed together (and had object-orientation and universal interoperability in mind from the start).

The scope of anonymous classes is confined to their parent class, so the compiler must produce a "qualified" public name for the inner class, to avoid conflict where other classes with the same name (inner or not) exist in the same namespace. Similarly, anonymous classes must have "fake" public names generated for them (as the concept of anonymous classes only exists in the compiler, not the runtime). So, compiling a Java class foo that contains a named inner class bar and one anonymous class will produce three .class files: foo.class, foo$bar.class, and foo$1.class. All of these class names are valid (as $ symbols are permitted in the JVM specification) and these names are "safe" for the compiler to generate, as the Java language definition advises not to use $ symbols in normal Java class definitions.

Name resolution in Java is further complicated at runtime, as fully qualified names for classes are unique only inside a specific classloader instance. Classloaders are ordered hierarchically and each Thread in the JVM has a so-called context class loader, so in cases where two different classloader instances contain classes with the same name, the system first tries to load the class using the root (or system) classloader and then goes down the hierarchy to the context class loader.

Java Native Interface, Java's native method support, allows Java language programs to call out to programs written in another language (usually C or C++).
There are two name-resolution concerns here, neither of which is implemented in a standardized manner.

In Python, mangling is used for class attributes that one does not want subclasses to use,[6] which are designated as such by giving them a name with two or more leading underscores and no more than one trailing underscore. For example, __thing will be mangled, as will ___thing and __thing_, but __thing__ and __thing___ will not. Python's runtime does not restrict access to such attributes; the mangling only prevents name collisions if a derived class defines an attribute with the same name. On encountering name-mangled attributes, Python transforms these names by prepending a single underscore and the name of the enclosing class: for example, __thing inside a class Test becomes _Test__thing.

In Pascal, name mangling can be avoided by exporting a symbol under an explicit name in an exports clause. Free Pascal supports function and operator overloading, thus it also uses name mangling to support these features. On the other hand, Free Pascal is capable of calling symbols defined in external modules created with another language and of exporting its own symbols to be called by another language. For further information, consult Chapters 6.2 and 7.1 of the Free Pascal Programmer's Guide.

Name mangling is also necessary in Fortran compilers, originally because the language is case insensitive. Further mangling requirements were imposed later in the evolution of the language because of the addition of modules and other features in the Fortran 90 standard. The case mangling, especially, is a common issue that must be dealt with to call Fortran libraries, such as LAPACK, from other languages, such as C.

Because of the case insensitivity, the name of a subroutine or function FOO must be converted to a standardized case and format by the compiler so that it will be linked in the same way regardless of case. Different compilers have implemented this in various ways, and no standardization has occurred.
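The Python attribute-mangling rules described above can be demonstrated directly in the interpreter. A minimal sketch (the class and attribute names are invented for the example):

```python
class Base:
    def __init__(self):
        self.__secret = "base"       # stored as _Base__secret
        self.__ok__ = "not mangled"  # two trailing underscores: left alone

class Derived(Base):
    def __init__(self):
        super().__init__()
        self.__secret = "derived"    # stored as _Derived__secret

d = Derived()
# The two __secret attributes do not collide, because each was
# mangled with the name of the class whose body assigned it:
print(sorted(k for k in vars(d) if "secret" in k))
# ['_Base__secret', '_Derived__secret']
print(d._Base__secret, d._Derived__secret)  # base derived
print(d.__ok__)                             # not mangled
```

Note that access is not restricted: the mangled names remain reachable from outside the class, exactly as the text states.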
The AIX and HP-UX Fortran compilers convert all identifiers to lower case (foo), while the Cray and Unicos Fortran compilers converted identifiers to all upper case (FOO). The GNU g77 compiler converts identifiers to lower case plus an underscore (foo_), except that identifiers already containing an underscore (FOO_BAR) have two underscores appended (foo_bar__), following a convention established by f2c. Many other compilers, including Silicon Graphics's (SGI) IRIX compilers, GNU Fortran, and Intel's Fortran compiler (except on Microsoft Windows), convert all identifiers to lower case plus an underscore (foo_ and foo_bar_, respectively). On Microsoft Windows, the Intel Fortran compiler defaults to uppercase without an underscore.[7]

Identifiers in Fortran 90 modules must be further mangled, because the same procedure name may occur in different modules. Since the Fortran 2003 Standard requires that module procedure names not conflict with other external symbols,[8] compilers tend to use the module name and the procedure name, with a distinct marker in between. For example, for a function five in a module m, the name of the function will be mangled as __m_MOD_five (e.g., GNU Fortran), m_MP_five_ (e.g., Intel's ifort), m.five_ (e.g., Oracle's sun95), etc. Since Fortran does not allow overloading the name of a procedure, but uses generic interface blocks and generic type-bound procedures instead, the mangled names do not need to incorporate clues about the arguments. The Fortran 2003 BIND option overrides any name mangling done by the compiler, as shown above.

Function names are mangled by default in Rust. However, this can be disabled by the #[no_mangle] function attribute. This attribute can be used to export functions to C, C++, or Objective-C.[9] Further, along with the #[start] function attribute or the #[no_main] crate attribute, it allows the user to define a C-style entry point for the program.[10]

Rust has used many versions of symbol mangling schemes that can be selected at compile time with a -Z symbol-mangling-version option.
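The g77/f2c case-mangling rule described above is easy to state in code. The following Python helper simply restates that rule (the helper's name is invented):

```python
def mangle_g77(identifier):
    """Mangle a Fortran identifier the way g77 does, following the
    f2c convention: lowercase everything, append one underscore,
    and append a second underscore if the identifier itself
    already contains an underscore."""
    mangled = identifier.lower() + "_"
    if "_" in identifier:
        mangled += "_"
    return mangled

print(mangle_g77("FOO"))      # foo_
print(mangle_g77("FOO_BAR"))  # foo_bar__
```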
The defined manglers include the legacy hash-based scheme and the newer v0 scheme; examples are provided in the Rust symbol-names tests.[13]

Essentially two forms of method exist in Objective-C: the class ("static") method and the instance method. A method declaration in Objective-C takes the form of a + or - sign, a parenthesized return type, and then the selector parts, each followed by a colon and a parameter; class methods are signified by +, instance methods by -.

Each of these method declarations has a specific internal representation. When compiled, each method is named according to the scheme _c_Class_name0_name1_ for class methods and _i_Class_name0_name1_ for instance methods: the colons in the Objective-C syntax are translated to underscores. So, the Objective-C class method +(id)initWithX:(int)number andY:(int)number;, if belonging to the Point class, would translate as _c_Point_initWithX_andY_, and the instance method (belonging to the same class) -(id)value; would translate to _i_Point_value.

Each of the methods of a class is labeled in this way. However, looking up a method that a class may respond to would be tedious if all methods were represented in this fashion. Each of the methods is therefore assigned a unique symbol (such as an integer), known as a selector. In Objective-C, one can manage selectors directly – they have a specific type in Objective-C, SEL. During compiling, a table is built that maps the textual representation, such as _i_Point_value, to selectors (which are given the type SEL). Managing selectors is more efficient than manipulating the text representation of a method. Note that a selector only matches a method's name, not the class it belongs to: different classes can have different implementations of a method with the same name. Because of this, implementations of a method are given a specific identifier too; these are known as implementation pointers and are likewise given a type, IMP.
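The Objective-C method-name encoding just described is a simple textual transformation, restated here as a Python sketch (the helper's name is invented):

```python
def mangle_objc(class_name, selector, is_class_method):
    """Encode an Objective-C method name as described above:
    _c_Class_sel for class methods, _i_Class_sel for instance
    methods, with the selector's colons turned into underscores."""
    prefix = "_c_" if is_class_method else "_i_"
    return prefix + class_name + "_" + selector.replace(":", "_")

print(mangle_objc("Point", "initWithX:andY:", True))  # _c_Point_initWithX_andY_
print(mangle_objc("Point", "value", False))           # _i_Point_value
```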
Message sends are encoded by the compiler as calls to the id objc_msgSend(id receiver, SEL selector, ...) function, or one of its cousins, where receiver is the receiver of the message and SEL determines the method to call. Each class has its own table that maps selectors to their implementations – the implementation pointer specifies where in memory the implementation of the method resides. There are separate tables for class and instance methods. Apart from being stored in the SEL-to-IMP lookup tables, the functions are essentially anonymous.

The SEL value for a selector does not vary between classes. This enables polymorphism.

The Objective-C runtime maintains information about the argument and return types of methods. However, this information is not part of the name of the method and can vary from class to class. Since Objective-C does not support namespaces, there is no need for the mangling of class names (which do appear as symbols in generated binaries).

Swift keeps metadata about functions (and more) in the mangled symbols referring to them. This metadata includes the function's name, attributes, module name, parameter types, return type, and more. For example, the mangled name for a method func calculate(x: int) -> int of a MyClass class in module test is _TFC4test7MyClass9calculatefS0_FT1xSi_Si, for 2014 Swift. Each component of this name encodes one piece of that metadata.[14] Mangling for versions since Swift 4.0 is documented officially. It retains some similarity to Itanium.[15]
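The selector-to-implementation dispatch described above can be modeled with dictionaries in Python. This is a toy model (all names are invented), showing only the key property: a selector is shared across classes, while each class keeps its own SEL-to-IMP table.

```python
# Interned selectors: one symbol per method *name*, shared by all classes.
selectors = {}
def sel_register(name):
    return selectors.setdefault(name, len(selectors))

# Per-class dispatch tables mapping SEL -> IMP (here, plain functions).
dispatch = {
    "Point":  {sel_register("value"): lambda self: self["x"]},
    "Circle": {sel_register("value"): lambda self: self["r"] * 2},
}

def msg_send(cls, receiver, sel):
    """Toy objc_msgSend: look up the IMP in the receiver's class table."""
    return dispatch[cls][sel](receiver)

v = sel_register("value")  # the same SEL value, whatever the class
print(msg_send("Point", {"x": 7}, v))    # 7
print(msg_send("Circle", {"r": 5}, v))   # 10
```

Because the same SEL indexes different IMPs in different class tables, the model exhibits the polymorphism the text describes.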
https://en.wikipedia.org/wiki/Name_mangling
An open API (often referred to as a public API) is a publicly available application programming interface that provides developers with programmatic access to a (possibly proprietary) software application or web service.[1] Open APIs are APIs that are published on the internet and are free to access by consumers.[2] There is no universally accepted definition of the term "Open API" and it may be used to mean a variety of things in different contexts.[3]

A private API is an interface that opens parts of an organization's backend data and application functionality for use by developers working within (or contractors working for) that organization. Private APIs are only exposed to internal developers; the API publishers therefore have total control over what applications are developed and how. Private APIs offer substantial benefits with regard to internal collaboration. Using a private API across an organization allows for greater shared awareness of the internal data models. As the developers are working for (or contracted by) one organization, communication will be more direct, and therefore they should be able to work more cohesively as a group. Private APIs can significantly diminish the development time needed to manipulate and build internal systems that maximise productivity and to create customer-facing applications that improve market reach and add value to existing offerings.[4]

Private APIs can be made "private" in a number of ways.
Most commonly, the organization simply chooses not to document such an interface, as in the case of undocumented functions of Microsoft Windows, which can be found by inspection of the symbol tables.[5] Some Web-based APIs may be authenticated by keys, both discoverable by analysis of application traffic.[6] macOS furthermore uses an "entitlement", granted only by digital signature, to control access to private APIs in the system.[7]

Private APIs are by definition without any guarantee to the third-party developer choosing to uncover and use them. Nevertheless, the use of undocumented functions on Microsoft Windows has become so widespread that the system needs to preserve old behaviors for specific programs using the "AppCompat" database.[8]

In contrast to a private API, an open API is publicly available for all developers to access. Open APIs allow developers, outside of an organization's workforce, to access backend data that can then be used to enhance their own applications. Open APIs can significantly increase revenue without the business having to invest in hiring new developers, making them a very profitable software asset.[9][10] However, opening backend information to the public can create a range of security and management challenges.[11] For example, publishing open APIs can make it harder for organisations to control the experience end users have with their information assets. Open API publishers cannot assume client apps built on their APIs will offer a good user experience. Furthermore, they cannot fully ensure that client apps maintain the look and feel of their corporate branding.

Open APIs can be used by businesses seeking to leverage the ever-growing community of freelance developers who have the ability to create innovative applications that add value to their core business. Open APIs are favoured in the business sphere as they increase the production of new ideas without the business investing directly in development efforts.
Businesses often tailor their APIs to target specific developer audiences that they feel will be most effective in creating valuable new applications. However, an API can significantly diminish an application's functionality if it is overloaded with features. For example,[12] Yahoo's open search API allows developers to integrate Yahoo search into their own software applications. The addition of this API provides search functionality to the developer's application whilst also increasing search traffic for Yahoo's search engine, hence benefitting both parties. Facebook and Twitter likewise show how third parties can enrich these services with their own code: for example, the ability to create an account on an external site or app using Facebook credentials is made possible by Facebook's open API. Many large technology firms, such as Twitter, LinkedIn and Facebook, allow the use of their service by third parties and competitors.[13][14][15]

With the rise in prominence of HTML5 and Web 2.0, the modern browsing experience has become interactive and dynamic, and this has, in part, been accelerated through the use of open APIs. Some open APIs fetch data from the database behind a website; these are called Web APIs. For example, Google's YouTube API allows developers to integrate YouTube into their applications by providing the capability to search for videos, retrieve standard feeds, and see related content.

Web APIs are used for exchanging information with a website, either by receiving or by sending data. When a web API fetches data from a website, the application makes an HTTP request to the server the site is stored on. The server then sends data back in a format the application expects (if data was requested) or incorporates the submitted changes into the website (if data was sent).
https://en.wikipedia.org/wiki/Open_API
An open service interface definition (OSID) is a programmatic interface specification describing a service. These interfaces are specified by the Open Knowledge Initiative (OKI) to implement a service-oriented architecture (SOA) and achieve interoperability among applications across a varied base of underlying and changing technologies.

To preserve the investment in software engineering, program logic is separated from underlying technologies through the use of software interfaces, each of which defines a contract between a service consumer and a service provider. This separation is the basis of any valid SOA. While some methods define the service interface boundary at a protocol or server level, OSIDs place the boundary at the application level, effectively insulating the consumer from protocols, server identities, and utility libraries that are in the domain of a service provider. The result is software that is easier to develop, longer-lasting, and usable across a wider array of computing environments.

OSIDs assist in software design and development by breaking up the problem space across service interface boundaries. Because network communication issues are addressed within a service provider and below the interface, there is no assumption that every service provider implements a remote communications protocol (though many do). OSIDs are also used for communication and coordination among the various components of complex software, which provides a means of organizing design and development activities for simplified project management.

OSID providers (implementations) are often reused across a varied set of applications. Once software is made to understand the interface contract for a service, other compliant implementations may be used in its place. This achieves reusability at a high level (a service level) and also serves to easily scale software written for smaller, more dedicated purposes.
An OSID provider implementation may be composed of an arbitrary number of other OSID providers. This layering technique is an obvious means of abstraction. When all the OSID providers implement the same service, this is called an adapter pattern. Adapter patterns are powerful techniques to federate, multiplex, or bridge different services behind the same interface contract, without modification to the application.
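The adapter pattern described above can be sketched as follows. The service and class names (`SearchService`, `LocalIndex`, `FederatingSearch`) are illustrative, not part of any OKI specification; the point is that a provider composed of other providers still honors the same interface contract, so the application is unchanged.

```python
class SearchService:
    """The interface contract a consumer programs against."""
    def search(self, query):
        raise NotImplementedError


class LocalIndex(SearchService):
    """A simple provider backed by an in-memory list of documents."""
    def __init__(self, docs):
        self.docs = docs

    def search(self, query):
        return [d for d in self.docs if query in d]


class FederatingSearch(SearchService):
    """An adapter: implements the same interface by composing other providers."""
    def __init__(self, providers):
        self.providers = providers

    def search(self, query):
        results = []
        for provider in self.providers:
            results.extend(provider.search(query))
        return results


# The consumer only sees SearchService; it cannot tell one provider
# from a federation of several.
a = LocalIndex(["red fox", "red car"])
b = LocalIndex(["blue fox"])
federated = FederatingSearch([a, b])
print(federated.search("fox"))
```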
https://en.wikipedia.org/wiki/Open_Service_Interface_Definitions
Parsing, syntax analysis, or syntactic analysis is the process of analyzing a string of symbols, either in natural language, computer languages or data structures, conforming to the rules of a formal grammar, by breaking it into parts. The term parsing comes from Latin pars (orationis), meaning part (of speech).[1] The term has slightly different meanings in different branches of linguistics and computer science. Traditional sentence parsing is often performed as a method of understanding the exact meaning of a sentence or word, sometimes with the aid of devices such as sentence diagrams. It usually emphasizes the importance of grammatical divisions such as subject and predicate. Within computational linguistics the term is used to refer to the formal analysis by a computer of a sentence or other string of words into its constituents, resulting in a parse tree showing their syntactic relation to each other, which may also contain semantic information.[citation needed] Some parsing algorithms generate a parse forest or list of parse trees from a string that is syntactically ambiguous.[2] The term is also used in psycholinguistics when describing language comprehension. In this context, parsing refers to the way that human beings analyze a sentence or phrase (in spoken language or text) "in terms of grammatical constituents, identifying the parts of speech, syntactic relations, etc."[1] This term is especially common when discussing which linguistic cues help speakers interpret garden-path sentences. Within computer science, the term is used in the analysis of computer languages, referring to the syntactic analysis of the input code into its component parts in order to facilitate the writing of compilers and interpreters. The term may also be used to describe a split or separation. In data analysis, the term is often used to refer to a process of extracting desired information from data, e.g., creating a time series signal from an XML document. 
The traditional grammatical exercise of parsing, sometimes known as clause analysis, involves breaking down a text into its component parts of speech with an explanation of the form, function, and syntactic relationship of each part.[3] This is determined in large part from study of the language's conjugations and declensions, which can be quite intricate for heavily inflected languages. To parse a phrase such as "man bites dog" involves noting that the singular noun "man" is the subject of the sentence, the verb "bites" is the third person singular of the present tense of the verb "to bite", and the singular noun "dog" is the object of the sentence. Techniques such as sentence diagrams are sometimes used to indicate the relation between elements in the sentence. Parsing was formerly central to the teaching of grammar throughout the English-speaking world, and widely regarded as basic to the use and understanding of written language.[citation needed] In some machine translation and natural language processing systems, written texts in human languages are parsed by computer programs.[4] Human sentences are not easily parsed by programs, as there is substantial ambiguity in the structure of human language, whose usage is to convey meaning (or semantics) amongst a potentially unlimited range of possibilities, only some of which are germane to the particular case.[5] So an utterance "Man bites dog" versus "Dog bites man" is definite on one detail but in another language might appear as "Man dog bites" with a reliance on the larger context to distinguish between those two possibilities, if indeed that difference was of concern. It is difficult to prepare formal rules to describe informal behaviour even though it is clear that some rules are being followed.[citation needed] In order to parse natural language data, researchers must first agree on the grammar to be used. 
The choice of syntax is affected by both linguistic and computational concerns; for instance, some parsing systems use lexical functional grammar, but in general, parsing for grammars of this type is known to be NP-complete. Head-driven phrase structure grammar is another linguistic formalism which has been popular in the parsing community, but other research efforts have focused on less complex formalisms such as the one used in the Penn Treebank. Shallow parsing aims to find only the boundaries of major constituents such as noun phrases. Another popular strategy for avoiding linguistic controversy is dependency grammar parsing. Most modern parsers are at least partly statistical; that is, they rely on a corpus of training data which has already been annotated (parsed by hand). This approach allows the system to gather information about the frequency with which various constructions occur in specific contexts. (See machine learning.) Approaches which have been used include straightforward PCFGs (probabilistic context-free grammars),[6] maximum entropy,[7] and neural nets.[8] Most of the more successful systems use lexical statistics (that is, they consider the identities of the words involved, as well as their part of speech). However, such systems are vulnerable to overfitting and require some kind of smoothing to be effective.[citation needed] Parsing algorithms for natural language cannot rely on the grammar having 'nice' properties as with manually designed grammars for programming languages. As mentioned earlier, some grammar formalisms are very difficult to parse computationally; in general, even if the desired structure is not context-free, some kind of context-free approximation to the grammar is used to perform a first pass. 
Algorithms which use context-free grammars often rely on some variant of the CYK algorithm, usually with some heuristic to prune away unlikely analyses to save time. (See chart parsing.) However, some systems trade speed for accuracy using, e.g., linear-time versions of the shift-reduce algorithm. A somewhat recent development has been parse reranking, in which the parser proposes some large number of analyses and a more complex system selects the best option.[citation needed] In natural language understanding applications, semantic parsers convert the text into a representation of its meaning.[9] In psycholinguistics, parsing involves not just the assignment of words to categories (formation of ontological insights), but the evaluation of the meaning of a sentence according to the rules of syntax drawn by inferences made from each word in the sentence (known as connotation). This normally occurs as words are being heard or read. Neurolinguistics generally understands parsing to be a function of working memory, meaning that parsing is used to keep several parts of one sentence at play in the mind at one time, all readily accessible to be analyzed as needed. Because the human working memory has limitations, so does the function of sentence parsing.[10] This is evidenced by several different types of syntactically complex sentences that pose potential issues for the mental parsing of sentences. The first, and perhaps most well-known, type of sentence that challenges parsing ability is the garden-path sentence. These sentences are designed so that the most common interpretation of the sentence appears grammatically faulty, but upon further inspection, these sentences are grammatically sound. 
Garden-path sentences are difficult to parse because they contain a phrase or a word with more than one meaning, often with the most typical meaning being a different part of speech.[11] For example, in the sentence "the horse raced past the barn fell", "raced" is initially interpreted as a past tense verb, but in this sentence it functions as part of an adjective phrase.[12] Since parsing is used to identify parts of speech, these sentences challenge the parsing ability of the reader. Another type of sentence that is difficult to parse is an attachment ambiguity, which includes a phrase that could potentially modify different parts of a sentence and therefore presents a challenge in identifying syntactic relationship (e.g. "The boy saw the lady with the telescope", in which the ambiguous phrase "with the telescope" could modify "the boy saw" or "the lady").[11] A third type of sentence that challenges parsing ability is center embedding, in which phrases are placed in the center of other similarly formed phrases (e.g. "The rat the cat the man hit chased ran into the trap"). Sentences with two, or in the most extreme cases three, center embeddings are challenging for mental parsing, again because of ambiguity of syntactic relationship.[13] Within neurolinguistics there are multiple theories that aim to describe how parsing takes place in the brain. One such model is a more traditional generative model of sentence processing, which theorizes that within the brain there is a distinct module designed for sentence parsing, which is preceded by access to lexical recognition and retrieval, and then followed by syntactic processing that considers a single syntactic result of the parsing, only returning to revise that syntactic interpretation if a potential problem is detected.[14] The opposing, more contemporary model theorizes that within the mind, the processing of a sentence is not modular, nor does it happen in strict sequence. 
Rather, it posits that several different syntactic possibilities can be considered at the same time, because lexical access, syntactic processing, and determination of meaning occur in parallel in the brain. In this way these processes are integrated.[15] Although there is still much to learn about the neurology of parsing, studies have shown evidence that several areas of the brain might play a role in parsing. These include the left anterior temporal pole, the left inferior frontal gyrus, the left superior temporal gyrus, the left superior frontal gyrus, the right posterior cingulate cortex, and the left angular gyrus. Although it has not been absolutely proven, it has been suggested that these different structures might favor either phrase-structure parsing or dependency-structure parsing, meaning different types of parsing could be processed in different ways which have yet to be understood.[16] Discourse analysis examines ways to analyze language use and semiotic events. Persuasive language may be called rhetoric. A parser is a software component that takes input data (typically text) and builds a data structure, often some kind of parse tree, abstract syntax tree or other hierarchical structure, giving a structural representation of the input while checking for correct syntax. The parsing may be preceded or followed by other steps, or these may be combined into a single step. The parser is often preceded by a separate lexical analyser, which creates tokens from the sequence of input characters; alternatively, these can be combined in scannerless parsing. Parsers may be programmed by hand or may be automatically or semi-automatically generated by a parser generator. Parsing is complementary to templating, which produces formatted output. These may be applied to different domains, but often appear together, such as the scanf/printf pair, or the input (front end parsing) and output (back end code generation) stages of a compiler. 
The input to a parser is typically text in some computer language, but may also be text in a natural language or less structured textual data, in which case generally only certain parts of the text are extracted, rather than a parse tree being constructed. Parsers range from very simple functions such as scanf to complex programs such as the frontend of a C++ compiler or the HTML parser of a web browser. An important class of simple parsing is done using regular expressions, in which a group of regular expressions defines a regular language and a regular expression engine automatically generates a parser for that language, allowing pattern matching and extraction of text. In other contexts regular expressions are instead used prior to parsing, as the lexing step whose output is then used by the parser. The use of parsers varies by input. In the case of data languages, a parser is often found as the file reading facility of a program, such as reading in HTML or XML text; these examples are markup languages. In the case of programming languages, a parser is a component of a compiler or interpreter, which parses the source code of a programming language to create some form of internal representation; the parser is a key step in the compiler frontend. Programming languages tend to be specified in terms of a deterministic context-free grammar because fast and efficient parsers can be written for them. For compilers, the parsing itself can be done in one pass or multiple passes; see one-pass compiler and multi-pass compiler. The implied disadvantages of a one-pass compiler can largely be overcome by adding fix-ups, where provision is made for code relocation during the forward pass, and the fix-ups are applied backwards when the current program segment has been recognized as having been completed. An example where such a fix-up mechanism would be useful would be a forward GOTO statement, where the target of the GOTO is unknown until the program segment is completed. 
In this case, the application of the fix-up would be delayed until the target of the GOTO was recognized. Conversely, a backward GOTO does not require a fix-up, as the location will already be known. Context-free grammars are limited in the extent to which they can express all of the requirements of a language. Informally, the reason is that the memory of such a language is limited. The grammar cannot remember the presence of a construct over an arbitrarily long input; this is necessary for a language in which, for example, a name must be declared before it may be referenced. More powerful grammars that can express this constraint, however, cannot be parsed efficiently. Thus, it is a common strategy to create a relaxed parser for a context-free grammar which accepts a superset of the desired language constructs (that is, it accepts some invalid constructs); later, the unwanted constructs can be filtered out at the semantic analysis (contextual analysis) step. For example, in Python the following is syntactically valid code: The following code, however, is syntactically valid in terms of the context-free grammar, yielding a syntax tree with the same structure as the previous, but violates the semantic rule requiring variables to be initialized before use: The following example demonstrates the common case of parsing a computer language with two levels of grammar: lexical and syntactic. The first stage is the token generation, or lexical analysis, by which the input character stream is split into meaningful symbols defined by a grammar of regular expressions. For example, a calculator program would look at an input such as "12 * (3 + 4)^2" and split it into the tokens 12, *, (, 3, +, 4, ), ^, 2, each of which is a meaningful symbol in the context of an arithmetic expression. The lexer would contain rules to tell it that the characters *, +, ^, ( and ) mark the start of a new token, so meaningless tokens like "12*" or "(3" will not be generated. 
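The token-generation step described above can be sketched with a small hand-written lexer driven by a regular expression. The regular expression, token handling, and function name below are illustrative, a minimal sketch rather than how any particular calculator is implemented.

```python
import re

# One alternative per token class: integers, or single-character operators
# and parentheses. Leading whitespace is skipped by the \s* prefix.
TOKEN_RE = re.compile(r"\s*(?:(\d+)|([*+^()]))")

def tokenize(text):
    """Split an arithmetic expression into its meaningful symbols."""
    tokens, pos = [], 0
    while pos < len(text):
        m = TOKEN_RE.match(text, pos)
        if not m:
            raise SyntaxError(f"unexpected character at position {pos}")
        number, op = m.groups()
        tokens.append(number if number is not None else op)
        pos = m.end()
    return tokens

print(tokenize("12 * (3 + 4)^2"))
```

Because the operator characters each terminate the preceding number, malformed tokens like "12*" or "(3" can never be produced.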
The next stage is parsing or syntactic analysis, which is checking that the tokens form an allowable expression. This is usually done with reference to a context-free grammar which recursively defines components that can make up an expression and the order in which they must appear. However, not all rules defining programming languages can be expressed by context-free grammars alone, for example type validity and proper declaration of identifiers. These rules can be formally expressed with attribute grammars. The final phase is semantic parsing or analysis, which is working out the implications of the expression just validated and taking the appropriate action.[17] In the case of a calculator or interpreter, the action is to evaluate the expression or program; a compiler, on the other hand, would generate some kind of code. Attribute grammars can also be used to define these actions. The task of the parser is essentially to determine if and how the input can be derived from the start symbol of the grammar. This can be done in essentially two ways: top-down or bottom-up. LL parsers and recursive-descent parsers are examples of top-down parsers that cannot accommodate left-recursive production rules. Although it has been believed that simple implementations of top-down parsing cannot accommodate direct and indirect left-recursion and may require exponential time and space complexity while parsing ambiguous context-free grammars, more sophisticated algorithms for top-down parsing have been created by Frost, Hafiz, and Callaghan[20][21] which accommodate ambiguity and left recursion in polynomial time and which generate polynomial-size representations of the potentially exponential number of parse trees. Their algorithm is able to produce both left-most and right-most derivations of an input with regard to a given context-free grammar. An important distinction with regard to parsers is whether a parser generates a leftmost derivation or a rightmost derivation (see context-free grammar). 
LL parsers will generate a leftmost derivation and LR parsers will generate a rightmost derivation (although usually in reverse).[18] Some graphical parsing algorithms have been designed for visual programming languages.[22][23] Parsers for visual languages are sometimes based on graph grammars.[24] Adaptive parsing algorithms have been used to construct "self-extending" natural language user interfaces.[25] A simple parser implementation reads the entire input file, performs an intermediate computation or translation, and then writes the entire output file, such as in-memory multi-pass compilers. Lookahead establishes the maximum number of incoming tokens that a parser can use to decide which rule it should use. Lookahead is especially relevant to LL, LR, and LALR parsers, where it is often explicitly indicated by affixing the lookahead to the algorithm name in parentheses, such as LALR(1). Most programming languages, the primary target of parsers, are carefully defined in such a way that a parser with limited lookahead, typically one token, can parse them, because parsers with limited lookahead are often more efficient. One important change[citation needed] to this trend came in 1990 when Terence Parr created ANTLR for his Ph.D. thesis, a parser generator for efficient LL(k) parsers, where k is any fixed value. LR parsers typically have only a few actions after seeing each token. They are shift (add this token to the stack for later reduction), reduce (pop tokens from the stack and form a syntactic construct), end, error (no known rule applies) or conflict (does not know whether to shift or reduce). 
Lookahead has two advantages.[clarification needed] Example: parsing the expression 1 + 2 * 3.[dubious–discuss] Most programming languages (except for a few such as APL and Smalltalk) and algebraic formulas give higher precedence to multiplication than addition, in which case the correct interpretation of the example above is 1 + (2 * 3). Note that Rule 4 above is a semantic rule. It is possible to rewrite the grammar to incorporate this into the syntax. However, not all such rules can be translated into syntax. Initially, Input = [1, +, 2, *, 3]. The parse tree and the resulting code from it are not correct according to the language semantics. To correctly parse without lookahead, there are three solutions: The parse tree generated is correct and simply more efficient[clarify][citation needed] than with non-lookahead parsers. This is the strategy followed in LALR parsers.
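The precedence problem above, producing 1 + (2 * 3) rather than (1 + 2) * 3, can be sketched with a hand-written recursive-descent evaluator that uses one token of lookahead. The function names are illustrative, and this is a top-down sketch of the precedence idea, not the LALR strategy the text refers to.

```python
def evaluate(tokens):
    """Evaluate a token list of integers, '+' and '*', giving '*' higher
    precedence than '+', so "1 + 2 * 3" is read as 1 + (2 * 3)."""
    pos = 0

    def peek():
        # One-token lookahead: inspect the next token without consuming it.
        return tokens[pos] if pos < len(tokens) else None

    def parse_term():
        # A term is a product of one or more numbers: binds tighter than '+'.
        nonlocal pos
        value = int(tokens[pos]); pos += 1
        while peek() == "*":
            pos += 1
            value *= int(tokens[pos]); pos += 1
        return value

    def parse_expr():
        # An expression is a sum of terms.
        nonlocal pos
        value = parse_term()
        while peek() == "+":
            pos += 1
            value += parse_term()
        return value

    return parse_expr()

print(evaluate(["1", "+", "2", "*", "3"]))  # 7, i.e. 1 + (2 * 3)
```

Because multiplication is handled inside `parse_term`, the grammar itself encodes the precedence, one of the approaches the text mentions for incorporating the semantic rule into the syntax.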
https://en.wikipedia.org/wiki/Parsing
In computing, a plug-in (or plugin, add-in, addin, add-on, or addon) is a software component that extends the functionality of an existing software system without requiring the system to be re-built. A plug-in feature is one way that a system can be made customizable.[1] Applications support plug-ins for a variety of reasons. The host application provides services which the plug-in can use, including a way for plug-ins to register themselves with the host application and a protocol for the exchange of data with plug-ins. Plug-ins depend on the services provided by the host application and do not usually work by themselves. Conversely, the host application operates independently of the plug-ins, making it possible for end-users to add and update plug-ins dynamically without needing to make changes to the host application.[11][12] Programmers typically implement plug-ins as shared libraries, which get dynamically loaded at run time. HyperCard supported a similar facility, but more commonly included the plug-in code in the HyperCard documents (called stacks) themselves. Thus the HyperCard stack became a self-contained application in its own right, distributable as a single entity that end-users could run without the need for additional installation steps. Programs may also implement plug-ins by loading a directory of simple script files written in a scripting language like Python or Lua. In the context of a web browser, a helper application is a separate program, like IrfanView or Adobe Reader, that extends the functionality of a browser.[13][14] A helper application extends the functionality of an application, but unlike the typical plug-in that is loaded into the host application's address space, a helper application is a separate application. 
With a separate address space, the extension cannot crash the host application, as is possible if they share an address space.[15] In the mid-1970s, the EDT text editor ran on the Unisys VS/9 operating system for the UNIVAC Series 90 mainframe computer. It allowed a program to be run from the editor, which could access the in-memory edit buffer.[16] The plug-in executable could call the editor to inspect and change the text. The University of Waterloo Fortran compiler used this to allow interactive compilation of Fortran programs. Early personal computer software with plug-in capability included HyperCard and QuarkXPress on the Apple Macintosh, both released in 1987. In 1988, Silicon Beach Software included plug-in capability in Digital Darkroom and SuperPaint.
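The host/plug-in relationship described earlier, the host exposing a registration service, plug-ins registering themselves, and the host operating whether or not any plug-in is present, can be sketched as follows. The class and method names are illustrative, not drawn from any real plug-in framework.

```python
class Host:
    """A minimal host application exposing a plug-in registration service."""

    def __init__(self):
        self._plugins = {}

    def register(self, name, hook):
        """Called by plug-ins to make themselves known to the host.
        A hook is any callable that transforms the host's data."""
        self._plugins[name] = hook

    def process(self, text):
        # The host works on its own; plug-ins, if any, extend the result.
        for hook in self._plugins.values():
            text = hook(text)
        return text


host = Host()
# Two trivial plug-ins registered dynamically, without rebuilding the host.
host.register("trim", str.strip)
host.register("shout", str.upper)
print(host.process("  hello, plug-ins  "))
```

Real systems load such hooks from shared libraries or a directory of script files, as the text notes, but the registration contract is the same.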
https://en.wikipedia.org/wiki/Plug-in_(computing)
RESTful API Modeling Language (RAML) is a YAML-based language for describing static APIs (but not REST APIs).[2] It provides all the information necessary to describe APIs at level 2 of the Richardson Maturity Model. Although designed with RESTful APIs in mind, RAML is not capable of describing APIs that obey all constraints of REST (it cannot describe an API obeying HATEOAS, in particular). It encourages reuse, enables discovery and pattern-sharing, and aims for merit-based emergence of best practices.[3] RAML was first proposed in 2013. The initial RAML specification was authored by Uri Sarid, Emiliano Lesende, Santiago Vacas and Damian Martinez, and garnered support from technology leaders like MuleSoft, AngularJS, Intuit, Box, PayPal, Programmable Web and API Web Science, Kin Lane, SOA Software, and Cisco.[4] Development is managed by the RAML Workgroup.[5] The current workgroup signatories include technology leaders from MuleSoft (Uri Sarid, CTO), AngularJS (Misko Hevery, Project Founder), Intuit (Ivan Lazarov, Chief Enterprise Architect), Airware (Peter Rexer, Director of Product - Developer Platform), Programmable Web and API Science (John Musser, Founder), SOA Software (Tony Gullotta, Director of Development), Cisco (Jaideep Subedar, Senior Manager, Product Management - Application Integration Solutions Group), VMware (Kevin Duffey, Senior MTS Engineer), Akamai Technologies (Rob Daigneau, Director of Architecture for Akamai's OPEN API Platform) and Restlet (Jerome Louvel, CTO and Founder). RAML is a trademark of MuleSoft.[6] Very few existing APIs meet the precise criteria to be classified as RESTful APIs. 
Consequently, like most API initiatives in the 2010s, RAML has initially focused on the basics of APIs, including resources, methods, parameters, and response bodies that need not be hypermedia. There are plans to move towards more strictly RESTful APIs as the evolution of technology and the market permits.[citation needed] There are a number of reasons why RAML has broken out from being a proprietary vendor language and has proven interesting to the broader API community.[7] A new organization, under the sponsorship of the Linux Foundation, called the Open API Initiative was set up in 2015 to standardize the description of HTTP APIs. A number of companies including SmartBear, Google, IBM and Microsoft were founding members.[11][12] SmartBear donated the Swagger specification to the new group. RAML and API Blueprint are also under consideration by the group.[13][14] In a RAML file, as in YAML generally, indentation shows nesting. Furthermore, a RAML specification can be converted to either OpenAPI or API Blueprint using APIMATIC, thus enabling the use of further API gateways.
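A minimal, hypothetical RAML file illustrates the YAML-style nesting described above. The API title, base URI, resource paths, and parameter names here are invented for this sketch, not taken from any real specification.

```yaml
#%RAML 1.0
title: Example Music API
version: v1
baseUri: https://api.example.com/{version}
mediaType: application/json

/songs:
  get:
    description: List songs, optionally filtered by genre.
    queryParameters:
      genre:
        type: string
        required: false
    responses:
      200:
        body:
          application/json:
            type: array
  /{songId}:
    get:
      description: Retrieve a single song by its identifier.
      responses:
        200:
          body:
            application/json:
              type: object
        404:
          description: No song with that identifier exists.
```

Resources nest under their parent paths, and methods, parameters, and responses nest under resources, which is what makes the indentation structurally significant.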
https://en.wikipedia.org/wiki/RAML_(software)
A software development kit (SDK) is a collection of software development tools in one installable package. They facilitate the creation of applications by providing a compiler, debugger and sometimes a software framework. They are normally specific to a hardware platform and operating system combination. To create applications with advanced functionalities such as advertisements, push notifications, etc., most application software developers use specific software development kits. Some SDKs are required for developing a platform-specific app. For example, the development of an Android app on the Java platform requires a Java Development Kit. For iOS applications (apps) the iOS SDK is required. For the Universal Windows Platform the .NET Framework SDK might be used. There are also SDKs that add additional features and can be installed in apps to provide analytics, data about application activity, and monetization options. Some prominent creators of these types of SDKs include Google, Smaato, InMobi, and Facebook. An SDK can take the form of application programming interfaces[1] in the form of on-device libraries of reusable functions used to interface to a particular programming language, or it may be as complex as hardware-specific tools that can communicate with a particular embedded system.[2] Common tools include debugging facilities and other utilities, often presented in an integrated development environment.[3] SDKs may include sample software and/or technical notes along with documentation and tutorials to help clarify points made by the primary reference material.[4][5] SDKs often include licenses that make them unsuitable for building software intended to be developed under an incompatible license. 
For example, a proprietary SDK is generally incompatible with free software development, while an SDK licensed under the GNU General Public License could be incompatible with proprietary software development, for legal reasons.[6][7] However, SDKs built under the GNU Lesser General Public License are typically usable for proprietary development.[8][9] In cases where the underlying technology is new, SDKs may include hardware. For example, AirTag's 2012 near-field communication SDK included both the paying and the reading halves of the necessary hardware stack.[10] The average Android mobile app implements 15.6 separate SDKs, with gaming apps implementing on average 17.5 different SDKs.[11][12] The most popular SDK categories for Android mobile apps are analytics and advertising.[12] SDKs can be unsafe (because they are implemented within apps yet run separate code). Malicious SDKs (with honest intentions or not) can violate users' data privacy, damage app performance, or even cause apps to be banned from Google Play or the App Store.[13] New technologies allow app developers to control and monitor client SDKs in real time. Providers of SDKs for specific systems or subsystems sometimes substitute a more specific term instead of software. For instance, both Microsoft[14] and Citrix[15] provide a driver development kit for developing device drivers. Software development kits exist for a wide range of platforms.
https://en.wikipedia.org/wiki/Software_development_kit
Web design encompasses many different skills and disciplines in the production and maintenance of websites. The different areas of web design include web graphic design; user interface design (UI design); authoring, including standardised code and proprietary software; user experience design (UX design); and search engine optimization. Often many individuals will work in teams covering different aspects of the design process, although some designers will cover them all.[1] The term "web design" is normally used to describe the design process relating to the front-end (client side) design of a website, including writing markup. Web design partially overlaps web engineering in the broader scope of web development. Web designers are expected to have an awareness of usability and to be up to date with web accessibility guidelines. Although web design has a fairly recent history, it can be linked to other areas such as graphic design, user experience, and multimedia arts, but is more aptly seen from a technological standpoint. It has become a large part of people's everyday lives. It is hard to imagine the Internet without animated graphics, different styles of typography, backgrounds, videos and music. The web was announced on August 6, 1991; in November 1992, CERN was the first website to go live on the World Wide Web. During this period, websites were structured by using the <table> tag, which created numbers on the website. Eventually, web designers were able to find their way around it to create more structures and formats. In early history, the structure of websites was fragile and hard to contain, so it became very difficult to use them. In November 1993, ALIWEB (Archie Like Indexing for the WEB) became the first ever search engine to be created.[2] In 1989, whilst working at CERN in Switzerland, British scientist Tim Berners-Lee proposed to create a global hypertext project, which later became known as the World Wide Web. 
From 1991 to 1993 the World Wide Web was born. Text-only HTML pages could be viewed using a simple line-mode web browser.[3] In 1993, Marc Andreessen and Eric Bina created the Mosaic browser. At the time there were multiple browsers; however, the majority of them were Unix-based and naturally text-heavy. There had been no integrated approach to graphic design elements such as images or sounds. The Mosaic browser broke this mould.[4] The W3C was created in October 1994 to "lead the World Wide Web to its full potential by developing common protocols that promote its evolution and ensure its interoperability."[5] This discouraged any one company from monopolizing a proprietary browser and programming language, which could have altered the effect of the World Wide Web as a whole. The W3C continues to set standards, which can today be seen with JavaScript and other languages. In 1994 Andreessen formed Mosaic Communications Corp., which later became known as Netscape Communications and released the Netscape 0.9 browser. Netscape created its own HTML tags without regard to the traditional standards process. For example, Netscape 1.1 included tags for changing background colours and formatting text with tables on web pages. From 1996 to 1999 the browser wars began, as Microsoft and Netscape fought for ultimate browser dominance. During this time there were many new technologies in the field, notably Cascading Style Sheets, JavaScript, and Dynamic HTML. On the whole, the browser competition did lead to many positive creations and helped web design evolve at a rapid pace.[6] In 1996, Microsoft released its first competitive browser, which was complete with its own features and HTML tags. It was also the first browser to support style sheets, which at the time was seen as an obscure authoring technique and is today an important aspect of web design.[6] The HTML markup for tables was originally intended for displaying tabular data. 
However, designers quickly realized the potential of using HTML tables for creating complex, multi-column layouts that were otherwise not possible. At this time, as design and good aesthetics seemed to take precedence over good markup structure, little attention was paid to semantics and web accessibility. HTML sites were limited in their design options, even more so with earlier versions of HTML. To create complex designs, many web designers had to use complicated table structures or even blank spacer .GIF images to stop empty table cells from collapsing.[7] CSS was introduced in December 1996 by the W3C to support presentation and layout. This allowed HTML code to be semantic rather than both semantic and presentational, and improved web accessibility; see tableless web design. In 1996, Flash (originally known as FutureSplash) was developed. At the time, the Flash content development tool was relatively simple compared to now, using basic layout and drawing tools, a limited precursor to ActionScript, and a timeline, but it enabled web designers to go beyond HTML, animated GIFs and JavaScript. However, because Flash required a plug-in, many web developers avoided using it for fear of limiting their market share due to lack of compatibility. Instead, designers reverted to GIF animations (if they did not forego using motion graphics altogether) and JavaScript for widgets. But the benefits of Flash made it popular enough among specific target markets to eventually work its way to the vast majority of browsers, and powerful enough to be used to develop entire sites.[7] In 1998, Netscape released the Netscape Communicator code under an open-source licence, enabling thousands of developers to participate in improving the software.
However, these developers decided to start a standard for the web from scratch, which guided the development of the open-source browser and soon expanded to a complete application platform.[6] The Web Standards Project was formed and promoted browser compliance with HTML and CSS standards. Tests such as Acid1, Acid2, and Acid3 were created in order to check browsers for compliance with web standards. In 2000, Internet Explorer was released for Mac, the first browser to fully support HTML 4.01 and CSS 1. It was also the first browser to fully support the PNG image format.[6] By 2001, after a campaign by Microsoft to popularize Internet Explorer, Internet Explorer had reached 96% of web browser usage share, which signified the end of the first browser wars, as Internet Explorer had no real competition.[8] Since the start of the 21st century, the web has become more and more integrated into people's lives, and the technology of the web has moved on with it. There have also been significant changes in the way people use and access the web, and this has changed how sites are designed. Since the end of the browser wars, new browsers have been released. Many of these are open source, meaning that they tend to have faster development and to be more supportive of new standards. Many consider the new options better than Microsoft's Internet Explorer. The W3C has released new standards for HTML (HTML5) and CSS (CSS3), as well as new JavaScript APIs, each as a new but individual standard. While the term HTML5 is only used to refer to the new version of HTML and some of the JavaScript APIs, it has become common to use it to refer to the entire suite of new standards (HTML5, CSS3 and JavaScript). With the advancements in 3G and LTE internet coverage, a significant portion of website traffic shifted to mobile devices. This shift influenced the web design industry, steering it towards a minimalist, lighter, and more simplistic style.
The "mobile first" approach emerged as a result, emphasizing the creation of website designs that prioritize mobile-oriented layouts first, before adapting them to larger screen dimensions. Web designers use a variety of different tools depending on what part of the production process they are involved in. These tools are updated over time by newer standards and software, but the principles behind them remain the same. Web designers use both vector and raster graphics editors to create web-formatted imagery or design prototypes. A website can be created using WYSIWYG website builder software or a content management system, or the individual web pages can be hand-coded in just the same manner as the first web pages were created. Other tools web designers might use include markup validators[9] and other testing tools for usability and accessibility, to ensure their websites meet web accessibility guidelines.[10] UX design is a popular modality of modern web design, emphasizing user-friendly interfaces and appropriate presentation.[11] Marketing and communication design on a website may identify what works for its target market. This can be an age group or a particular strand of culture; thus the designer may understand the trends of the audience. Designers may also understand the type of website they are designing, meaning, for example, that business-to-business (B2B) website design considerations might differ greatly from those for a consumer-targeted website such as a retail or entertainment website. Careful consideration might be made to ensure that the aesthetics or overall design of a site do not clash with the clarity and accuracy of the content or the ease of web navigation,[12] especially on a B2B website. Designers may also consider the reputation of the owner or business the site is representing to make sure they are portrayed favorably. Web designers normally oversee the websites they build, monitoring how they work and operate.
They are constantly updating and changing websites behind the scenes. The elements they work with include the text, photos, graphics, and layout of the web pages. Before beginning work on a website, web designers normally set an appointment with their clients to discuss layout, colour, graphics, and design. Web designers spend the majority of their time designing websites and making sure the loading speed is right. They also typically engage in testing, marketing, and communicating with other designers about laying out the websites and finding the right elements for them.[13] User understanding of the content of a website often depends on user understanding of how the website works. This is part of user experience design. User experience is related to layout, clear instructions, and labeling on a website. How well a user understands how they can interact on a site may also depend on the interactive design of the site. If a user perceives the usefulness of the website, they are more likely to continue using it. Users who are skilled and well versed in website use may find a more distinctive, yet less intuitive or less user-friendly website interface useful nonetheless. However, users with less experience are less likely to see the advantages or usefulness of a less intuitive website interface. This drives the trend for a more universal user experience and ease of access to accommodate as many users as possible regardless of skill.[14] Much of the user experience design and interactive design are considered in the user interface design. Advanced interactive functions may require plug-ins, if not advanced coding language skills. Choosing whether or not to use interactivity that requires plug-ins is a critical decision in user experience design. If the plug-in doesn't come pre-installed with most browsers, there's a risk that the user will have neither the know-how nor the patience to install a plug-in just to access the content.
If the function requires advanced coding language skills, it may be too costly in either time or money to code compared to the amount of enhancement the function will add to the user experience. There's also a risk that advanced interactivity may be incompatible with older browsers or hardware configurations. Publishing a function that doesn't work reliably is potentially worse for the user experience than making no attempt. Whether it is likely to be needed or worth any risks depends on the target audience. Progressive enhancement is a strategy in web design that puts emphasis on web content first, allowing everyone to access the basic content and functionality of a web page, whilst users with additional browser features or faster Internet access receive the enhanced version instead. In practice, this means serving content through HTML and applying styling and animation through CSS to the technically possible extent, then applying further enhancements through JavaScript. Pages' text is loaded immediately through the HTML source code rather than having to wait for JavaScript to initiate and load the content subsequently, which allows content to be readable with minimum loading time and bandwidth, and through text-based browsers, and maximizes backwards compatibility.[15] As an example, MediaWiki-based sites including Wikipedia use progressive enhancement, as they remain usable while JavaScript and even CSS are deactivated, because pages' content is included in the page's HTML source code; as a counter-example, Everipedia relies on JavaScript to load pages' content subsequently, so a blank page appears with JavaScript deactivated. Part of the user interface design is affected by the quality of the page layout. For example, a designer may consider whether the site's page layout should remain consistent on different pages when designing the layout. Page pixel width may also be considered vital for aligning objects in the layout design.
The most popular fixed-width websites generally have the same set width to match the current most popular browser window, at the current most popular screen resolution, on the current most popular monitor size. Most pages are also center-aligned for concerns of aesthetics on larger screens. Fluid layouts increased in popularity around 2000 to allow the browser to make user-specific layout adjustments based on the details of the reader's screen (window size, font size relative to window, etc.). They grew as an alternative to HTML-table-based layouts and grid-based design, in both page layout design principles and coding technique, but were very slow to be adopted.[note 1] This was due to considerations of screen reading devices and the varying window sizes over which designers have no control. Accordingly, a design may be broken down into units (sidebars, content blocks, embedded advertising areas, navigation areas) that are sent to the browser and fitted into the display window by the browser as best it can. Although such a display may often change the relative position of major content units (sidebars may be displaced below body text rather than to the side of it), this is a more flexible display than a hard-coded grid-based layout that doesn't fit the device window. In particular, the relative position of content blocks may change while leaving the content within the block unaffected. This also minimizes the user's need to horizontally scroll the page. Responsive web design is a newer approach, based on CSS3, and a deeper level of per-device specification within the page's style sheet through an enhanced use of the CSS @media rule. In March 2018 Google announced they would be rolling out mobile-first indexing.[16] Sites using responsive design are well placed to ensure they meet this new approach.
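The CSS @media rule mentioned above is the mechanism that lets a single style sheet carry per-device layout specifications. A minimal illustrative fragment (the class name and the 768px breakpoint are hypothetical choices, not taken from any particular site):

```css
/* Mobile-first base rule: the sidebar spans the full fluid width. */
.sidebar {
  width: 100%;
}

/* Applied only when the viewport is at least 768px wide: the sidebar
   becomes a quarter-width column beside the body text. */
@media (min-width: 768px) {
  .sidebar {
    width: 25%;
    float: left;
  }
}
```

This is the responsive pattern described above: one page and one style sheet, with the layout adjusted per device class rather than fixed to a single pixel width.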
Web designers may choose to limit the variety of website typefaces to only a few of a similar style, instead of using a wide range of typefaces or type styles. Most browsers recognize a specific number of safe fonts, which designers mainly use in order to avoid complications. Font downloading was later included in the CSS3 fonts module and has since been implemented in Safari 3.1, Opera 10, and Mozilla Firefox 3.5. This has subsequently increased interest in web typography, as well as the usage of font downloading. Most site layouts incorporate negative space to break the text up into paragraphs and also avoid center-aligned text.[17] The page layout and user interface may also be affected by the use of motion graphics. The choice of whether or not to use motion graphics may depend on the target market for the website. Motion graphics may be expected, or at least better received, on an entertainment-oriented website. However, a website target audience with a more serious or formal interest (such as business, community, or government) might find animations unnecessary and distracting if they exist only for entertainment or decoration. This doesn't mean that more serious content couldn't be enhanced with animated or video presentations that are relevant to the content. In either case, motion graphic design may make the difference between more effective visuals and distracting visuals. Motion graphics that are not initiated by the site visitor can produce accessibility issues. The World Wide Web Consortium's accessibility standards require that site visitors be able to disable the animations.[18] Website designers may consider it to be good practice to conform to standards. This is usually done via a description specifying what the element is doing. Failure to conform to standards may not make a website unusable or error-prone, but standards can relate to the correct layout of pages for readability as well as making sure coded elements are closed appropriately.
This includes errors in code, a more organized layout for code, and making sure IDs and classes are identified properly. Poorly coded pages are sometimes colloquially called tag soup. Validating via the W3C[9] can only be done when a correct DOCTYPE declaration is made, which is used to highlight errors in code. The system identifies the errors and areas that do not conform to web design standards, and this information can then be corrected by the user.[19] There are two ways websites are generated: statically or dynamically. A static website stores a unique file for every page. Each time a page is requested, the same content is returned. This content is created once, during the design of the website. It is usually manually authored, although some sites use an automated creation process, similar to a dynamic website, whose results are stored long-term as completed pages. These automatically created static sites became more popular around 2015, with generators such as Jekyll and Adobe Muse.[20] The benefits of a static website are that it is simpler to host, as the server only needs to serve static content rather than execute server-side scripts. This requires less server administration and is less likely to expose security holes. Static sites can also serve pages more quickly, on low-cost server hardware. This advantage became less important as cheap web hosting expanded to also offer dynamic features, and virtual servers offered high performance for short intervals at low cost. Almost all websites have some static content, as supporting assets such as images and style sheets are usually static, even on a website with highly dynamic pages. Dynamic websites are generated on the fly and use server-side technology to generate web pages.
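The static model described above, in which content is authored once and the same bytes are served on every request, can be sketched in a few lines. This is only an illustrative sketch of the idea behind generators like Jekyll; the page data and template are invented for the example:

```python
# Minimal static-site-generator sketch: every page is rendered exactly once,
# at build time; a web server would later return the same stored bytes on
# every request, with no server-side script execution.
PAGES = {
    "index.html": {"title": "Home", "body": "Welcome."},
    "about.html": {"title": "About", "body": "A static site."},
}

TEMPLATE = "<html><head><title>{title}</title></head><body>{body}</body></html>"

def build_site(pages, template):
    """Render every page up front, as a static generator would at build time."""
    return {path: template.format(**data) for path, data in pages.items()}

site = build_site(PAGES, TEMPLATE)
print(site["index.html"])
```

The output dictionary stands in for the directory of completed files a real generator would write to disk for the web server to serve directly.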
They typically extract their content from one or more back-end databases: some issue queries across a relational database to query a catalog or to summarise numeric information, and others may use a document database such as MongoDB, or another NoSQL store, to hold larger units of content, such as blog posts or wiki articles. In the design process, dynamic pages are often mocked-up or wireframed using static pages. The skill set needed to develop dynamic web pages is much broader than for a static page, involving server-side and database coding as well as client-side interface design. Even medium-sized dynamic projects are thus almost always a team effort. When dynamic web pages first developed, they were typically coded directly in languages such as Perl, PHP or ASP. Some of these, notably PHP and ASP, used a 'template' approach where a server-side page resembled the structure of the completed client-side page, and data was inserted into places defined by 'tags'. This was a quicker means of development than coding in a purely procedural language such as Perl. Both of these approaches have now been supplanted for many websites by higher-level application-focused tools such as content management systems. These build on top of general-purpose coding platforms and assume that a website exists to offer content according to one of several well-recognised models, such as a time-sequenced blog, a thematic magazine or news site, a wiki, or a user forum. These tools make the implementation of such a site very easy, and a purely organizational and design-based task, without requiring any coding. Editing the content itself (as well as the template page) can be done both by means of the site itself and with the use of third-party software. The ability to edit all pages is provided only to a specific category of users (for example, administrators, or registered users).
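The 'template' approach described above, a server-side page resembling the finished client-side page with data inserted at marked places, can be sketched as follows. The tag syntax and data values are hypothetical, chosen only to resemble the PHP/ASP style the text describes:

```python
import re

# A server-side template resembling the completed page; data is inserted
# wherever a <%= name %> tag appears, in the style described above.
TEMPLATE = "<h1><%= title %></h1><p>Posted by <%= author %></p>"

def render(template, data):
    """Replace each tag with the corresponding value at request time."""
    return re.sub(r"<%=\s*(\w+)\s*%>", lambda m: str(data[m.group(1)]), template)

# Each request can substitute fresh data, e.g. pulled from a database:
print(render(TEMPLATE, {"title": "Hello", "author": "Alice"}))
```

The contrast with the static sketch is that rendering happens per request, so the same template can produce a different page for every visitor.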
In some cases, anonymous users are allowed to edit certain web content, which is less frequent (for example, on forums, adding messages). An example of a site allowing anonymous changes is Wikipedia. Usability experts, including Jakob Nielsen and Kyle Soucy, have often emphasised homepage design for website success and asserted that the homepage is the most important page on a website (Nielsen, Jakob; Tahir, Marie (October 2001), Homepage Usability: 50 Websites Deconstructed, New Riders Publishing, ISBN 978-0-7357-1102-0).[21][22][23] However, practitioners into the 2000s were starting to find that a growing share of website traffic was bypassing the homepage, going directly to internal content pages through search engines, e-newsletters and RSS feeds.[24] This led many practitioners to argue that homepages are less important than most people think.[25][26][27][28] Jared Spool argued in 2007 that a site's homepage was actually the least important page on a website.[29] In 2012 and 2013, carousels (also called 'sliders' and 'rotating banners') became an extremely popular design element on homepages, often used to showcase featured or recent content in a confined space.[30] Many practitioners argue that carousels are an ineffective design element and hurt a website's search engine optimisation and usability.[30][31][32] There are two primary jobs involved in creating a website: the web designer and the web developer, who often work closely together.[33] Web designers are responsible for the visual aspect, which includes the layout, colouring, and typography of a web page. Web designers will also have a working knowledge of markup languages such as HTML and CSS, although the extent of their knowledge will differ from one web designer to another. Particularly in smaller organizations, one person will need the necessary skills for designing and programming the full web page, while larger organizations may have a web designer responsible for the visual aspect alone.
Further jobs which may become involved in the creation of a website include: ChatGPT and other AI models are being used to write and code websites, making it faster and easier to create them. There are still discussions about the ethical implications of using artificial intelligence for design as the world becomes more familiar with using AI for time-consuming tasks in design processes.[34]
https://en.wikipedia.org/wiki/Web_content_vendor
Cross Platform Component Object Model (XPCOM) is a cross-platform component model from Mozilla. It is similar to the Component Object Model (COM), Common Object Request Broker Architecture (CORBA) and System Object Model (SOM). It features multiple language bindings and interface description language (IDL) descriptions, which allow programmers to plug their custom functions into the framework and connect them with other components. The most notable use of XPCOM is within the Firefox web browser, where many internal components interact through XPCOM interfaces. Furthermore, Firefox used to allow add-ons extensive XPCOM access, but this was removed in 2017 and replaced with the less-permissive WebExtensions API.[1][2] Two forks of Firefox still support XPCOM add-on capability: Pale Moon[3] and Basilisk.[4] XPCOM is one of the main things making the Mozilla application environment an actual framework: it is a development environment that provides a set of cross-platform services for the software developer. This component object model makes virtually all of the functionality of Gecko available as a series of components, or reusable cross-platform libraries, that can be accessed from the web browser or scripted from any Mozilla application. Applications that must access the various Mozilla XPCOM libraries (networking, security, DOM, etc.) use a special layer of XPCOM called XPConnect, which reflects the library interfaces into JavaScript or other languages. XPConnect glues the front end to the components in XPCOM written in the C++, C, or Rust programming languages, and it can be extended to include scripting support for other languages: PyXPCOM[5] already offers support for Python, wxWidgets[6] provide support for Perl, and there are efforts underway to add Common Language Infrastructure (CLI) and Ruby language support for XPConnect.
For developers, XPCOM allows writing components in C++, C, JavaScript, Python, or other languages for which special bindings have been created, and compiling and running those components on dozens of different platforms, including these and others where Mozilla is supported. The flexibility to reuse the XPCOM components from the Gecko library and develop new components that run on different platforms facilitates rapid application development and results in an application that is more productive and easier to maintain. The networking library, for example, is a set of XPCOM components that can be accessed and used by any Mozilla application. File I/O, security, password management, and profiles are also separate XPCOM components that programmers can use in their own application development. XPCOM adds a lot of code for marshalling objects, and in the Netscape era XPCOM was overused for internal interfaces where it wasn't truly necessary, resulting in software bloat.[7] This was a key reason why in 2001 Apple forked KHTML, not Gecko, to create the WebKit engine for its Safari browser.[8] Mozilla has since cleaned up some of the XPCOM bloat.[9] By 2008, this combined with other efforts resulted in big performance improvements for Gecko.[10]
https://en.wikipedia.org/wiki/XPCOM
A binary recompiler is a compiler that takes executable binary files as input, analyzes their structure, applies transformations and optimizations, and outputs new optimized executable binaries.[1] The foundations of binary recompilation were laid by Gary Kildall[2][3][4][5][6][7][8] with the development of the optimizing assembly code translator XLT86 in 1981.[4][9][10][11]
https://en.wikipedia.org/wiki/Binary_recompiler
In computing, binary translation is a form of binary recompilation where sequences of instructions are translated from a source instruction set (ISA) to the target instruction set, with respect to the operating system for which the binary was compiled. In some cases, such as instruction set simulation, the target instruction set may be the same as the source instruction set, providing testing and debugging features such as instruction trace, conditional breakpoints and hot spot detection. The two main types are static and dynamic binary translation. Translation can be done in hardware (for example, by circuits in a CPU) or in software (e.g. run-time engines, static recompilers, emulators; all are typically slow). Binary translation is motivated by a lack of a binary for a target platform, a lack of source code to compile for the target platform, or the difficulty of compiling the source for the target platform. Statically recompiled binaries potentially run faster than their emulated counterparts, as the emulation overhead is removed. This is similar to the difference in performance between interpreted and compiled programs in general. A translator using static binary translation aims to convert all of the code of an executable file into code that runs on the target architecture and platform without having to run the code first, as is done in dynamic binary translation. This is very difficult to do correctly, since not all the code can be discovered by the translator. For example, some parts of the executable may be reachable only through indirect branches, whose values are known only at run-time. One such static binary translator uses universal superoptimizer peephole technology (developed by Sorav Bansal and Alex Aiken of Stanford University) to perform efficient translation between possibly many source and target pairs, with considerably low development costs and high performance of the target binary.
In experiments of PowerPC-to-x86 translation, some binaries even outperformed native versions, but on average they ran at two-thirds of native speed.[1] In the 1960s Honeywell provided a program called the Liberator for their Honeywell 200 series of computers; it could translate programs for the IBM 1400 series of computers into programs for the Honeywell 200 series.[2] In 1995 Norman Ramsey at Bell Communications Research and Mary F. Fernandez at the Department of Computer Science, Princeton University, developed The New Jersey Machine-Code Toolkit, which had the basic tools for static assembly translation.[3] In 2004 Scott Elliott and Phillip R. Hutchinson at Nintendo developed a tool to generate "C" code from Game Boy binaries that could then be compiled for a new platform and linked against a hardware library for use in airline entertainment systems.[4] In 2014, an ARM architecture version of the 1998 video game StarCraft was generated by static recompilation and additional reverse engineering of the original x86 version.[5][6] The Pandora handheld community was capable of developing the required tools[7] on their own and achieving such translations successfully several times.[8][9] Another example is the NES-to-x86 statically recompiled version of the video game Super Mario Bros., generated using LLVM in 2013.[10] Similarly, a successful x86-to-x64 static recompilation was generated for the procedural terrain generator of the video game Cube World in 2014.[11] Dynamic binary translation (DBT) looks at a short sequence of code, typically on the order of a single basic block, then translates it and caches the resulting sequence. Code is only translated as it is discovered and when possible, and branch instructions are made to point to already translated and saved code (memoization). Dynamic binary translation differs from simple emulation (eliminating the emulator's main read-decode-execute loop, a major performance bottleneck), paying for this with a large overhead at translation time.
This overhead is hopefully amortized as translated code sequences are executed multiple times. More advanced dynamic translators employ dynamic recompilation, where the translated code is instrumented to find out which portions are executed a large number of times, and these portions are optimized aggressively. This technique is reminiscent of a JIT compiler, and in fact such compilers (e.g. Sun's HotSpot technology) can be viewed as dynamic translators from a virtual instruction set (the bytecode) to a real one. Transmeta described its Crusoe processor in exactly these terms: "The smart microprocessor consists of a hardware VLIW core as its engine and a software layer called Code Morphing software. The Code Morphing software acts as a shell […] morphing or translating x86 instructions to native Crusoe instructions. In addition, the Code Morphing software contains a dynamic compiler and code optimizer […] The result is increased performance at the least amount of power. […] [This] allows Transmeta to evolve the VLIW hardware and Code Morphing software separately without affecting the huge base of software applications."
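The translate-and-cache loop described above can be sketched for a toy source instruction set. This is only a sketch of the DBT control flow; the ISA, the two-block program, and the machine state are all invented for illustration, and host "machine code" is modelled by Python closures:

```python
# Dynamic-binary-translation sketch: basic blocks of a toy source ISA are
# translated into host closures the first time execution reaches them, and
# the result is cached so re-entering a block skips translation (memoization).
PROGRAM = {                               # block address -> basic block
    0: [("addi", 1), ("jlt", 5, 0)],      # acc += 1; if acc < 5 jump to block 0
    1: [("halt",)],
}

cache = {}                                # block address -> translated closure

def translate(block):
    """Translate one basic block of toy instructions into a host closure."""
    def run(state):
        for ins in block:
            if ins[0] == "addi":
                state["acc"] += ins[1]
            elif ins[0] == "jlt":
                if state["acc"] < ins[1]:
                    return ins[2]          # taken branch: next block address
            elif ins[0] == "halt":
                return None                # stop execution
        return state["pc"] + 1             # fall through to the next block
    return run

def execute(program):
    state = {"acc": 0, "pc": 0}
    pc = 0
    while pc is not None:
        state["pc"] = pc
        if pc not in cache:                # translate only on first discovery
            cache[pc] = translate(program[pc])
        pc = cache[pc](state)              # run cached host code
    return state["acc"]

print(execute(PROGRAM))
```

Block 0 is translated once but executed five times from the cache, which is exactly the amortization the text describes.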
https://en.wikipedia.org/wiki/Binary_translation
Platform virtualization software, specifically emulators and hypervisors, are software packages that emulate the whole physical computer machine, often providing multiple virtual machines on one physical platform. The table below compares basic information about platform virtualization hypervisors. This table is meant to outline restrictions in the software dictated by licensing or capabilities. Note: "no limit" means no enforced limit. For example, a VM with 1 TB of memory cannot fit in a host with only 8 GB of memory and no memory swap disk, so it will be limited to 8 GB physically.
https://en.wikipedia.org/wiki/Comparison_of_platform_virtualization_software
In computing, just-in-time (JIT) compilation (also dynamic translation or run-time compilation)[1] is compilation (of computer code) during execution of a program (at run time) rather than before execution.[2] This may consist of source code translation but is more commonly bytecode translation to machine code, which is then executed directly. A system implementing a JIT compiler typically continuously analyses the code being executed and identifies parts of the code where the speedup gained from compilation or recompilation would outweigh the overhead of compiling that code. JIT compilation is a combination of the two traditional approaches to translation to machine code, ahead-of-time compilation (AOT) and interpretation, and combines some advantages and drawbacks of both.[2] Roughly, JIT compilation combines the speed of compiled code with the flexibility of interpretation, at the cost of the overhead of an interpreter plus the additional overhead of compiling and linking (not just interpreting). JIT compilation is a form of dynamic compilation, and allows adaptive optimization such as dynamic recompilation and microarchitecture-specific speedups.[nb 1][3] Interpretation and JIT compilation are particularly suited for dynamic programming languages, as the runtime system can handle late-bound data types and enforce security guarantees. The earliest published JIT compiler is generally attributed to work on LISP by John McCarthy in 1960.[4] In his seminal paper Recursive functions of symbolic expressions and their computation by machine, Part I, he mentions functions that are translated during runtime, thereby sparing the need to save the compiler output to punch cards[5] (although this would be more accurately known as a "compile and go" system).
Another early example was by Ken Thompson, who in 1968 gave one of the first applications of regular expressions, here for pattern matching in the text editor QED.[6] For speed, Thompson implemented regular expression matching by JITing to IBM 7094 code on the Compatible Time-Sharing System.[4] An influential technique for deriving compiled code from interpretation was pioneered by James G. Mitchell in 1970, which he implemented for the experimental language LC².[7][8] Smalltalk (c. 1983) pioneered new aspects of JIT compilation. For example, translation to machine code was done on demand, and the result was cached for later use. When memory became scarce, the system would delete some of this code and regenerate it when it was needed again.[2][9] Sun's Self language improved these techniques extensively and was at one point the fastest Smalltalk system in the world, achieving up to half the speed of optimized C[10] but with a fully object-oriented language. Self was abandoned by Sun, but the research went into the Java language. The term "just-in-time compilation" was borrowed from the manufacturing term "just in time" and popularized by Java, with James Gosling using the term from 1993.[11] Currently JITing is used by most implementations of the Java Virtual Machine, as HotSpot builds on, and extensively uses, this research base.
The HP project Dynamo was an experimental JIT compiler where the "bytecode" format and the machine code format were the same; the system optimized PA-8000 machine code.[12] Counterintuitively, this resulted in speed-ups, in some cases of 30%, since doing this permitted optimizations at the machine code level, for example inlining code for better cache usage, optimizing calls to dynamic libraries, and many other run-time optimizations which conventional compilers are not able to attempt.[13][14] In November 2020, PHP 8.0 introduced a JIT compiler.[15] In October 2024, CPython introduced an experimental JIT compiler.[16] In a bytecode-compiled system, source code is translated to an intermediate representation known as bytecode. Bytecode is not the machine code for any particular computer, and may be portable among computer architectures. The bytecode may then be interpreted by, or run on, a virtual machine. The JIT compiler reads the bytecode in many sections (or in full, rarely) and compiles them dynamically into machine code so the program can run faster. This can be done per-file, per-function or even on any arbitrary code fragment; the code can be compiled when it is about to be executed (hence the name "just-in-time"), and then cached and reused later without needing to be recompiled. By contrast, a traditional interpreted virtual machine will simply interpret the bytecode, generally with much lower performance. Some interpreters even interpret source code, without the step of first compiling to bytecode, with even worse performance. Statically-compiled code, or native code, is compiled prior to deployment. A dynamic compilation environment is one in which the compiler can be used during execution.
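The compile-on-first-use-then-cache behaviour described above can be sketched for a toy stack bytecode. The opcodes are invented, and "compilation to machine code" is modelled by generating and exec-ing Python source, so this is a sketch of the control flow rather than of any real JIT:

```python
# JIT sketch: a toy stack bytecode is compiled to a host function the first
# time it is about to run; the compiled form is cached and reused on later
# calls without recompilation.
compiled_cache = {}

def jit_compile(bytecode):
    """Translate toy stack bytecode into Python source and compile it once."""
    lines = ["def f(x):", "    stack = []"]
    for op, *args in bytecode:
        if op == "push":
            lines.append(f"    stack.append({args[0]})")
        elif op == "load":
            lines.append("    stack.append(x)")
        elif op == "mul":
            lines.append("    b = stack.pop(); a = stack.pop(); stack.append(a * b)")
        elif op == "add":
            lines.append("    b = stack.pop(); a = stack.pop(); stack.append(a + b)")
    lines.append("    return stack.pop()")
    env = {}
    exec("\n".join(lines), env)    # stands in for emitting machine code
    return env["f"]

def run(bytecode, x):
    key = tuple(bytecode)
    if key not in compiled_cache:  # compile only when about to execute
        compiled_cache[key] = jit_compile(bytecode)
    return compiled_cache[key](x)  # later calls reuse the cached function

# Bytecode for 3*x + 1, compiled on the first call and cached afterwards:
prog = [("push", 3), ("load",), ("mul",), ("push", 1), ("add",)]
print(run(prog, 10))               # prints 31
```

An interpreter would instead walk the opcode list on every call; here that per-call dispatch cost is paid once, at compile time.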
A common goal of using JIT techniques is to reach or surpass the performance of static compilation, while maintaining the advantages of bytecode interpretation: much of the "heavy lifting" of parsing the original source code and performing basic optimization is often handled at compile time, prior to deployment, so compilation from bytecode to machine code is much faster than compiling from source. The deployed bytecode is portable, unlike native code. Since the runtime has control over the compilation, like interpreted bytecode, it can run in a secure sandbox. Compilers from bytecode to machine code are easier to write, because the portable bytecode compiler has already done much of the work. JIT code generally offers far better performance than interpreters. In addition, it can in some cases offer better performance than static compilation, as many optimizations are only feasible at run-time:[17][18] Because a JIT must render and execute a native binary image at runtime, true machine-code JITs necessitate platforms that allow for data to be executed at runtime, making using such JITs on a Harvard architecture-based machine impossible; the same can be said for certain operating systems and virtual machines as well. However, a special type of "JIT" may potentially not target the physical machine's CPU architecture, but rather an optimized VM bytecode where limitations on raw machine code prevail, especially where that bytecode's VM eventually leverages a JIT to native code.[19] JIT causes a slight to noticeable delay in the initial execution of an application, due to the time taken to load and compile the input code. Sometimes this delay is called "startup time delay" or "warm-up time". In general, the more optimization JIT performs, the better the code it will generate, but the initial delay will also increase. A JIT compiler therefore has to make a trade-off between the compilation time and the quality of the code it hopes to generate.
Startup time can include increased IO-bound operations in addition to JIT compilation: for example, the rt.jar class data file for the Java Virtual Machine (JVM) is 40 MB, and the JVM must seek a lot of data within this contextually huge file.[20] One possible optimization, used by Sun's HotSpot Java Virtual Machine, is to combine interpretation and JIT compilation. The application code is initially interpreted, but the JVM monitors which sequences of bytecode are frequently executed and translates them to machine code for direct execution on the hardware. For bytecode which is executed only a few times, this saves the compilation time and reduces the initial latency; for frequently executed bytecode, JIT compilation is used to run at high speed, after an initial phase of slow interpretation. Additionally, since a program spends most of its time executing a minority of its code, the reduced compilation time is significant. Finally, during the initial code interpretation, execution statistics can be collected before compilation, which helps to perform better optimization.[21] The correct tradeoff can vary due to circumstances. For example, Sun's Java Virtual Machine has two major modes: client and server. In client mode, minimal compilation and optimization is performed, to reduce startup time. In server mode, extensive compilation and optimization is performed, to maximize performance once the application is running, by sacrificing startup time.
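The mixed-mode scheme above — interpret first, compile what turns out to be hot — can be sketched as a toy. This is not HotSpot's implementation: the tiny expression "VM", the AST shape, and the `HOT_THRESHOLD` value are all invented for illustration, and "compilation" here means generating Python source and calling `compile()`, standing in for real machine-code emission.

```python
# Toy mixed-mode runner: interpret an expression AST, count executions,
# and "JIT-compile" it to a cached code object once it becomes hot.
HOT_THRESHOLD = 3

def interpret(node, env):
    op = node[0]
    if op == "num":  return node[1]
    if op == "var":  return env[node[1]]
    if op == "add":  return interpret(node[1], env) + interpret(node[2], env)
    if op == "mul":  return interpret(node[1], env) * interpret(node[2], env)
    raise ValueError(op)

def to_source(node):
    op = node[0]
    if op == "num":  return str(node[1])
    if op == "var":  return node[1]
    if op == "add":  return f"({to_source(node[1])} + {to_source(node[2])})"
    if op == "mul":  return f"({to_source(node[1])} * {to_source(node[2])})"
    raise ValueError(op)

class HotExpr:
    def __init__(self, ast):
        self.ast, self.calls, self.code = ast, 0, None

    def run(self, env):
        self.calls += 1
        if self.code is None and self.calls >= HOT_THRESHOLD:
            # Stand-in for JIT compilation to machine code.
            self.code = compile(to_source(self.ast), "<jit>", "eval")
        if self.code is not None:
            return eval(self.code, {}, env)   # fast path: compiled
        return interpret(self.ast, env)       # slow path: interpreted

expr = HotExpr(("add", ("mul", ("var", "x"), ("var", "x")), ("num", 1)))  # x*x + 1
values = [expr.run({"x": x}) for x in range(5)]
assert values == [1, 2, 5, 10, 17]
assert expr.code is not None   # crossed the hot threshold
```

The interpreter also doubles as the profiling phase: the call counter is exactly the kind of execution statistic the text says a VM collects before deciding what to compile.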
Other Java just-in-time compilers have used a runtime measurement of the number of times a method has executed, combined with the bytecode size of a method, as a heuristic to decide when to compile.[22] Still another uses the number of times executed combined with the detection of loops.[23] In general, it is much harder to accurately predict which methods to optimize in short-running applications than in long-running ones.[24] Native Image Generator (Ngen) by Microsoft is another approach to reducing the initial delay.[25] Ngen pre-compiles (or "pre-JITs") bytecode in a Common Intermediate Language image into machine-native code. As a result, no runtime compilation is needed. .NET Framework 2.0, shipped with Visual Studio 2005, runs Ngen on all of the Microsoft library DLLs right after the installation. Pre-jitting provides a way to improve the startup time. However, the quality of code it generates might not be as good as code that is JITted, for the same reasons why code compiled statically, without profile-guided optimization, cannot be as good as JIT-compiled code in the extreme case: the lack of profiling data to drive, for instance, inline caching.[26] There also exist Java implementations that combine an AOT (ahead-of-time) compiler with either a JIT compiler (Excelsior JET) or an interpreter (GNU Compiler for Java). JIT compilation may not reliably achieve its goal, namely entering a steady state of improved performance after a short initial warmup period.[27][28] Across eight different virtual machines, Barrett et al.
(2017) measured six microbenchmarks widely used by virtual machine implementors as optimisation targets, running them repeatedly within a single process execution.[29] On Linux, they found that 8.7% to 9.6% of process executions failed to reach a steady state of performance, 16.7% to 17.9% entered a steady state of reduced performance after a warmup period, and 56.5% of pairings of a specific virtual machine running a specific benchmark failed to consistently see a steady-state non-degradation of performance across multiple executions (i.e., at least one execution failed to reach a steady state or saw reduced performance in the steady state). Even where an improved steady state was reached, it sometimes took many hundreds of iterations.[30] Traini et al. (2022) instead focused on the HotSpot virtual machine but with a much wider array of benchmarks,[31] finding that 10.9% of process executions failed to reach a steady state of performance, and 43.5% of benchmarks did not consistently attain a steady state across multiple executions.[32] JIT compilation fundamentally uses executable data, and thus poses security challenges and possible exploits. Implementation of JIT compilation consists of compiling source code or byte code to machine code and executing it. This is generally done directly in memory: the JIT compiler outputs the machine code directly into memory and immediately executes it, rather than outputting it to disk and then invoking the code as a separate program, as in usual ahead-of-time compilation. In modern architectures this runs into a problem due to executable space protection: arbitrary memory cannot be executed, as otherwise there is a potential security hole.
Thus memory must be marked as executable; for security reasons this should be done after the code has been written to memory and marked read-only, as writable/executable memory is a security hole (see W^X).[33] For instance, Firefox's JIT compiler for JavaScript introduced this protection in a release version with Firefox 46.[34] JIT spraying is a class of computer security exploits that use JIT compilation for heap spraying: the resulting memory is then executable, which allows an exploit if execution can be moved into the heap. JIT compilation can be applied to some programs, or can be used for certain capacities, particularly dynamic capacities such as regular expressions. For example, a text editor may compile a regular expression provided at runtime to machine code to allow faster matching: this cannot be done ahead of time, as the pattern is only provided at runtime. Several modern runtime environments rely on JIT compilation for high-speed code execution, including most implementations of Java, together with Microsoft's .NET. Similarly, many regular-expression libraries feature JIT compilation of regular expressions, either to bytecode or to machine code. JIT compilation is also used in some emulators, in order to translate machine code from one CPU architecture to another. A common implementation of JIT compilation is to first have AOT compilation to bytecode (virtual machine code), known as bytecode compilation, and then have JIT compilation to machine code (dynamic compilation), rather than interpretation of the bytecode. This improves the runtime performance compared to interpretation, at the cost of lag due to compilation. JIT compilers translate continuously, as with interpreters, but caching of compiled code minimizes lag on future execution of the same code during a given run. Since only part of the program is compiled, there is significantly less lag than if the entire program were compiled prior to execution.
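The runtime-regex case mentioned above can be shown with Python's `re` module. CPython compiles patterns to an internal bytecode rather than machine code, but the timing argument is identical: the pattern only exists at runtime, so it can only be compiled then, and the compiled form is cached for repeated matching. The pattern string here is an invented stand-in for user input.

```python
import re

user_pattern = r"\b\w+ing\b"          # assume this arrived at runtime
compiled = re.compile(user_pattern)    # compiled once, when first seen

text = "testing code while running benchmarks"
matches = compiled.findall(text)
assert matches == ["testing", "running"]
```

Reusing `compiled` across many texts avoids re-parsing the pattern each time, which is exactly why regex engines (PCRE's JIT being a machine-code example) invest in runtime compilation.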
https://en.wikipedia.org/wiki/Just-in-time_compilation
In computer programming, instrumentation is the act of modifying software so that analysis can be performed on it.[1] Generally, instrumentation either modifies source code or binary code. Execution environments like the JVM provide separate interfaces to add instrumentation to program executions, such as the JVMTI, which enables instrumentation during program start. Instrumentation enables profiling:[2] measuring dynamic behavior during a test run. This is useful for properties of a program that cannot be analyzed statically with sufficient precision, such as performance and alias analysis. Instrumentation can include: Instrumentation is limited by execution coverage. If the program never reaches a particular point of execution, then instrumentation at that point collects no data. For instance, if a word processor application is instrumented, but the user never activates the print feature, then the instrumentation can say nothing about the routines which are used exclusively by the printing feature. Instrumentation increases the execution time of a program. In some contexts, this increase might be dramatic and hence limit the application of instrumentation to debugging contexts. The instrumentation overhead differs depending on the instrumentation technology used.[4]
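A minimal sketch of source-level instrumentation, using an invented example rather than any specific tool: a Python decorator wraps a function with probes that count calls and accumulate execution time, enabling the kind of simple profiling the article describes. Note the coverage limitation applies here too — a function that is never called leaves no entry in the profile.

```python
import time
from functools import wraps

profile = {}   # function name -> {"calls": n, "seconds": total}

def instrument(fn):
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            # Probe: record one call and the elapsed wall-clock time.
            stats = profile.setdefault(fn.__name__,
                                       {"calls": 0, "seconds": 0.0})
            stats["calls"] += 1
            stats["seconds"] += time.perf_counter() - start
    return wrapper

@instrument
def busy_work(n):
    return sum(range(n))

for _ in range(3):
    busy_work(1000)

assert profile["busy_work"]["calls"] == 3
assert profile["busy_work"]["seconds"] >= 0.0
```

The wrapper is the instrumentation overhead the article mentions: every call now pays for two clock reads and a dictionary update, which is why heavier instrumentation can noticeably slow a program.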
https://en.wikipedia.org/wiki/Instrumentation_(computer_programming)
Data verification is a process in which different types of data are checked for accuracy and inconsistencies after data migration is done.[1] In some domains it is referred to as Source Data Verification (SDV), such as in clinical trials.[2] Data verification helps to determine whether data was accurately translated when data is transferred from one source to another, is complete, and supports processes in the new system. During verification, there may be a need for a parallel run of both systems to identify areas of disparity and forestall erroneous data loss. Methods for data verification include double data entry, proofreading and automated verification of data. Proofreading data involves someone checking the data entered against the original document; this is time-consuming and costly. Automated verification of data can be achieved using one-way hashes locally or through use of a SaaS-based service such as Q by SoLVBL to provide immutable seals that allow verification of the original data.
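A minimal sketch of the automated, hash-based verification mentioned above, assuming both sides of a migration can hash their copy of each record: comparing one-way SHA-256 fingerprints detects any change without shipping the full records around. The record contents and field names are invented for the example.

```python
import hashlib

def fingerprint(record: dict) -> str:
    # Canonicalize the record (sorted keys) so equal data always
    # produces an equal hash, regardless of field order.
    canonical = "|".join(f"{k}={record[k]}" for k in sorted(record))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

source_record   = {"id": 42, "name": "Ada", "dept": "Eng"}
migrated_record = {"dept": "Eng", "id": 42, "name": "Ada"}   # same data
corrupt_record  = {"id": 42, "name": "Ada", "dept": "Ops"}   # changed

assert fingerprint(source_record) == fingerprint(migrated_record)
assert fingerprint(source_record) != fingerprint(corrupt_record)
```

The canonicalization step matters: without it, two records holding identical data in a different field order would hash differently and raise false alarms.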
https://en.wikipedia.org/wiki/Data_verification
In the social sciences, triangulation refers to the application and combination of several research methods in the study of the same phenomenon.[1] By combining multiple observers, theories, methods, and empirical materials, researchers hope to overcome the weaknesses or intrinsic biases and the problems that come from single-method, single-observer, and single-theory studies. It is popularly used in sociology. "The concept of triangulation is borrowed from navigational and land surveying techniques that determine a single point in space with the convergence of measurements taken from two other distinct points."[2] Triangulation can be used in both quantitative and qualitative studies as an alternative to traditional criteria like reliability and validity. The purpose of triangulation in qualitative research is to increase the credibility and validity of the results. Several scholars have aimed to define triangulation throughout the years. Denzin (2006) identified four basic types of triangulation:[6] data triangulation, investigator triangulation, theory triangulation, and methodological triangulation. Cohen, L., Manion, L. and Morrison, K. (2000). Research Methods in Education. 5th ed. London: Routledge. p. 25
https://en.wikipedia.org/wiki/Triangulation_(social_science)
Database normalization is the process of structuring a relational database in accordance with a series of so-called normal forms in order to reduce data redundancy and improve data integrity. It was first proposed by British computer scientist Edgar F. Codd as part of his relational model. Normalization entails organizing the columns (attributes) and tables (relations) of a database to ensure that their dependencies are properly enforced by database integrity constraints. It is accomplished by applying some formal rules either by a process of synthesis (creating a new database design) or decomposition (improving an existing database design). A basic objective of the first normal form defined by Codd in 1970 was to permit data to be queried and manipulated using a "universal data sub-language" grounded in first-order logic.[1] An example of such a language is SQL, though it is one that Codd regarded as seriously flawed.[2] The objectives of normalization beyond 1NF (first normal form) were stated by Codd as: When an attempt is made to modify (update, insert into, or delete from) a relation, the following undesirable side effects may arise in relations that have not been sufficiently normalized: A fully normalized database allows its structure to be extended to accommodate new types of data without changing existing structure too much. As a result, applications interacting with the database are minimally affected. Normalized relations, and the relationship between one normalized relation and another, mirror real-world concepts and their interrelationships. Codd introduced the concept of normalization and what is now known as the first normal form (1NF) in 1970.[4] Codd went on to define the second normal form (2NF) and third normal form (3NF) in 1971,[5] and Codd and Raymond F. Boyce defined the Boyce–Codd normal form (BCNF) in 1974.[6] Ronald Fagin introduced the fourth normal form (4NF) in 1977 and the fifth normal form (5NF) in 1979. Christopher J. Date introduced the sixth normal form (6NF) in 2003.
Informally, a relational database relation is often described as "normalized" if it meets third normal form.[7] Most 3NF relations are free of insertion, update, and deletion anomalies. The normal forms (from least normalized to most normalized) are: Normalization is a database design technique, which is used to design a relational database table up to a higher normal form.[9] The process is progressive, and a higher level of database normalization cannot be achieved unless the previous levels have been satisfied.[10] That means that, having data in unnormalized form (the least normalized) and aiming to achieve the highest level of normalization, the first step would be to ensure compliance with first normal form, the second step would be to ensure second normal form is satisfied, and so forth in the order mentioned above, until the data conforms to sixth normal form. However, normal forms beyond 4NF are mainly of academic interest, as the problems they exist to solve rarely appear in practice.[11] The data in the following example was intentionally designed to contradict most of the normal forms. In practice it is often possible to skip some of the normalization steps because the data is already normalized to some extent. Fixing a violation of one normal form also often fixes a violation of a higher normal form. In the example, one table has been chosen for normalization at each step, meaning that at the end, some tables might not be sufficiently normalized. Let a database table exist with the following structure:[10] For this example it is assumed that each book has only one author. A table that conforms to the relational model has a primary key which uniquely identifies a row. In our example, the primary key is a composite key of {Title, Format} (indicated by the underlining): In the first normal form each field contains a single value. A field may not contain a set of values or a nested record. Subject contains a set of subject values, meaning it does not comply.
To solve the problem, the subjects are extracted into a separate Subject table:[10] Instead of one table in unnormalized form, there are now two tables conforming to the 1NF. Recall that the Book table below has a composite key of {Title, Format}, which will not satisfy 2NF if some subset of that key is a determinant. At this point in our design the key is not finalized as the primary key, so it is called a candidate key. Consider the following table: All of the attributes that are not part of the candidate key depend on Title, but only Price also depends on Format. To conform to 2NF and remove duplicates, every non-candidate-key attribute must depend on the whole candidate key, not just part of it. To normalize this table, make {Title} a (simple) candidate key (the primary key) so that every non-candidate-key attribute depends on the whole candidate key, and remove Price into a separate table so that its dependency on Format can be preserved: Now, both the Book and Price tables conform to 2NF. The Book table still has a transitive functional dependency ({Author Nationality} is dependent on {Author}, which is dependent on {Title}). Similar violations exist for publisher ({Publisher Country} is dependent on {Publisher}, which is dependent on {Title}) and for genre ({Genre Name} is dependent on {Genre ID}, which is dependent on {Title}). Hence, the Book table is not in 3NF. To resolve this, we can place {Author Nationality}, {Publisher Country}, and {Genre Name} in their own respective tables, thereby eliminating the transitive functional dependencies: The elementary key normal form (EKNF) falls strictly between 3NF and BCNF and is not much discussed in the literature. It is intended "to capture the salient qualities of both 3NF and BCNF" while avoiding the problems of both (namely, that 3NF is "too forgiving" and BCNF is "prone to computational complexity"). Since it is rarely mentioned in literature, it is not included in this example.
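The 3NF step above can be made runnable with SQLite. The table and column names follow the running Book example, but the rows are invented: author nationality depends on the author, not on the title, so it moves into its own Author table, and a join recovers every original fact while storing the nationality exactly once.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE Book (title TEXT PRIMARY KEY, author TEXT);
    CREATE TABLE Author (author TEXT PRIMARY KEY, nationality TEXT);
    INSERT INTO Book VALUES ('Database Design', 'C. J. Date');
    INSERT INTO Book VALUES ('More SQL', 'C. J. Date');
    INSERT INTO Author VALUES ('C. J. Date', 'British');
""")
rows = db.execute("""
    SELECT b.title, b.author, a.nationality
    FROM Book b JOIN Author a ON a.author = b.author
    ORDER BY b.title
""").fetchall()

# The nationality is stored once, yet every book still reports it.
assert rows == [('Database Design', 'C. J. Date', 'British'),
                ('More SQL', 'C. J. Date', 'British')]
```

Updating the nationality now touches one row in Author instead of one row per book, which is exactly the update anomaly the decomposition removes.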
Assume the database is owned by a book retailer franchise that has several franchisees that own shops in different locations. The retailer therefore decided to add a table that contains data about the availability of the books at different locations: As this table structure consists of a compound primary key, it doesn't contain any non-key attributes and it's already in BCNF (and therefore also satisfies all the previous normal forms). However, assuming that all available books are offered in each area, the Title is not unambiguously bound to a certain Location and therefore the table doesn't satisfy 4NF. That means that, to satisfy the fourth normal form, this table needs to be decomposed as well: Now, every record is unambiguously identified by a superkey, therefore 4NF is satisfied. Suppose the franchisees can also order books from different suppliers. Let the relation also be subject to the following constraint: This table is in 4NF, but it is equal to the join of its projections: {{Supplier ID, Title}, {Title, Franchisee ID}, {Franchisee ID, Supplier ID}}. No component of that join dependency is a superkey (the sole superkey being the entire heading), so the table does not satisfy the ETNF and can be further decomposed:[12] The decomposition produces ETNF compliance. To spot a table not satisfying the 5NF, it is usually necessary to examine the data thoroughly. Suppose the table from the 4NF example with a little modification in data, and let's examine if it satisfies 5NF: Decomposing this table lowers redundancies, resulting in the following two tables: The query joining these tables would return the following data: The JOIN returns three more rows than it should; adding another table to clarify the relation results in three separate tables: What will the JOIN return now? It actually is not possible to join these three tables. That means it wasn't possible to decompose the Franchisee - Book - Location table without data loss; therefore the table already satisfies 5NF.
Disclaimer: the data used demonstrates the principle, but fails to remain true. In this case the data would best be decomposed into the following, with a surrogate key which we will call 'Store ID': The JOIN will now return the expected result: C. J. Date has argued that only a database in 5NF is truly "normalized".[13] Let's have a look at the Book table from previous examples and see if it satisfies the domain-key normal form: Logically, Thickness is determined by the number of pages. That means it depends on Pages, which is not a key. Let's set an example convention saying a book up to 350 pages is considered "slim" and a book over 350 pages is considered "thick". This convention is technically a constraint but it is neither a domain constraint nor a key constraint; therefore we cannot rely on domain constraints and key constraints to keep the data integrity. In other words, nothing prevents us from putting, for example, "Thick" for a book with only 50 pages, and this makes the table violate DKNF. To solve this, a table holding an enumeration that defines the Thickness is created, and that column is removed from the original table: That way, the domain integrity violation has been eliminated, and the table is in DKNF. A simple and intuitive definition of the sixth normal form is that "a table is in 6NF when the row contains the Primary Key, and at most one other attribute".[14] That means, for example, the Publisher table designed while creating the 1NF: needs to be further decomposed into two tables: The obvious drawback of 6NF is the proliferation of tables required to represent the information on a single entity. If a table in 5NF has one primary key column and N attributes, representing the same information in 6NF will require N tables; multi-field updates to a single conceptual record will require updates to multiple tables; and inserts and deletes will similarly require operations across multiple tables.
For this reason, in databases intended to serve online transaction processing (OLTP) needs, 6NF should not be used. However, in data warehouses, which do not permit interactive updates and which are specialized for fast query on large data volumes, certain DBMSs use an internal 6NF representation, known as a columnar data store. In situations where the number of unique values of a column is far less than the number of rows in the table, column-oriented storage allows significant savings in space through data compression. Columnar storage also allows fast execution of range queries (e.g., show all records where a particular column is between X and Y, or less than X). In all these cases, however, the database designer does not have to perform 6NF normalization manually by creating separate tables. Some DBMSs that are specialized for warehousing, such as Sybase IQ, use columnar storage by default, but the designer still sees only a single multi-column table. Other DBMSs, such as Microsoft SQL Server 2012 and later, let you specify a "columnstore index" for a particular table.[15]
https://en.wikipedia.org/wiki/Database_normalization
Daikon is a computer program that detects likely invariants of programs.[1] An invariant is a condition that always holds true at certain points in the program. It is mainly used[2] for debugging programs in late development, or for checking modifications to existing code. Daikon can detect properties in C, C++, Java, Perl, and IOA programs, as well as in spreadsheet files and other data sources. Daikon is easy to extend and is free software.[3]
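A toy likely-invariant detector in the spirit of Daikon — this is not Daikon's actual algorithm or API, and the candidate properties and traces are invented: observe variable values at a program point across many runs and keep only the candidate properties that every observation satisfies.

```python
# Candidate properties to test against observed program states.
candidates = {
    "x > 0":      lambda s: s["x"] > 0,
    "x <= 100":   lambda s: s["x"] <= 100,
    "y == 2 * x": lambda s: s["y"] == 2 * s["x"],
    "y < x":      lambda s: s["y"] < s["x"],
}

def likely_invariants(traces):
    # A property survives only if it holds in every observed state.
    return sorted(name for name, check in candidates.items()
                  if all(check(state) for state in traces))

# Simulated observations at one program point across several runs.
traces = [{"x": x, "y": 2 * x} for x in (1, 5, 30, 99)]
assert likely_invariants(traces) == ["x <= 100", "x > 0", "y == 2 * x"]
```

As with any dynamic technique, these are only *likely* invariants: "x <= 100" survives here simply because no run exercised a larger value, which is why such tools report confidence rather than proof.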
https://en.wikipedia.org/wiki/Daikon_(system)
Dynamic load testing (or dynamic loading) is a method to assess a pile's bearing capacity by applying a dynamic load to the pile head (a falling mass) while recording acceleration and strain on the pile head. Dynamic load testing is a high-strain dynamic test which can be applied after pile installation for concrete piles. For steel or timber piles, dynamic load testing can be done during or after installation. The procedure is standardized by ASTM D4945-00, Standard Test Method for High Strain Dynamic Testing of Piles. It may be performed on all piles, regardless of their installation method. In addition to bearing capacity, dynamic load testing gives information on resistance distribution (shaft resistance and end bearing) and evaluates the shape and integrity of the foundation element. The foundation bearing capacity results obtained with dynamic load tests correlate well with the results of static load tests performed on the same foundation element.[citation needed]
https://en.wikipedia.org/wiki/Dynamic_load_testing
In computer science, program analysis[1] is the process of analyzing the behavior of computer programs regarding a property such as correctness, robustness, safety and liveness. Program analysis focuses on two major areas: program optimization and program correctness. The first focuses on improving the program's performance while reducing resource usage, while the latter focuses on ensuring that the program does what it is supposed to do. Program analysis can be performed without executing the program (static program analysis), during runtime (dynamic program analysis) or in a combination of both. In the context of program correctness, static analysis can discover vulnerabilities during the development phase of the program.[2] These vulnerabilities are easier to correct than ones found during the testing phase, since static analysis leads to the root of the vulnerability. Because many forms of static analysis are computationally undecidable, the mechanisms for performing them may not always terminate with the correct answer: they may return false negatives ("no problems found" when the code does in fact have issues) or false positives, or they may never return an incorrect answer but also never terminate. Despite these limitations, static analysis can still be valuable: the first type of mechanism might reduce the number of vulnerabilities, while the second can sometimes provide strong assurance of the absence of certain classes of vulnerabilities. Incorrect optimizations are highly undesirable. So, in the context of program optimization, there are two main strategies to handle computationally undecidable analysis: However, there is also a third strategy that is sometimes applicable for languages that are not completely specified, such as C. An optimizing compiler is at liberty to generate code that does anything at runtime, even crashes, if it encounters source code whose semantics are unspecified by the language standard in use.
The purpose of control-flow analysis is to obtain information about which functions can be called at various points during the execution of a program. The collected information is represented by a control-flow graph (CFG) where the nodes are instructions of the program and the edges represent the flow of control. By identifying code blocks and loops, a CFG becomes a starting point for compiler-made optimizations. Data-flow analysis is a technique designed to gather information about the values at each point of the program and how they change over time. This technique is often used by compilers to optimize the code. One of the most well-known examples of data-flow analysis is taint checking, which consists of considering all variables that contain user-supplied data, which is considered "tainted", i.e. insecure, and preventing those variables from being used until they have been sanitized. This technique is often used to prevent SQL injection attacks. Taint checking can be done statically or dynamically. Abstract interpretation allows the extraction of information about a possible execution of a program without actually executing the program. This information can be used by compilers to look for possible optimizations or for certifying a program against certain classes of bugs. Type systems associate types to programs that fulfill certain requirements. Their purpose is to select a subset of programs of a language that are considered correct according to a property. Type checking is used in programming to limit how programming objects are used and what they can do. This is done by the compiler or interpreter. Type checking can also help prevent vulnerabilities by ensuring that a signed value isn't assigned to an unsigned variable. Type checking can be done statically (at compile time), dynamically (at runtime) or in a combination of both.
Static type information (either inferred, or explicitly provided by type annotations in the source code) can also be used to do optimizations, such as replacing boxed arrays with unboxed arrays. Effect systems are formal systems designed to represent the effects that executing a function or method can have. An effect codifies what is being done and with what it is being done, usually referred to as effect kind and effect region, respectively.[clarification needed] Model checking refers to strict, formal, and automated ways to check if a model (which in this context means a formal model of a piece of code, though in other contexts it can be a model of a piece of hardware) complies with a given specification. Due to the inherent finite-state nature of code, and both the specification and the code being convertible into logical formulae, it is possible to check if the system violates the specification using efficient algorithmic methods. Dynamic analysis can use runtime knowledge of the program to increase the precision of the analysis, while also providing runtime protection, but it can only analyze a single execution of the program and might degrade the program's performance due to the runtime checks. Software should be tested to ensure its quality, that it performs as it is supposed to in a reliable manner, and that it won't create conflicts with other software that may function alongside it. The tests are performed by executing the program with an input and evaluating its behavior and the produced output. Even if no security requirements are specified, additional security testing should be performed to ensure that an attacker can't tamper with the software and steal information, disrupt the software's normal operations, or use it as a pivot to attack its users. Program monitoring records and logs different kinds of information about the program, such as resource usage, events, and interactions, so that it can be reviewed to find or pinpoint the causes of abnormal behavior.
Furthermore, it can be used to perform security audits. Automated monitoring of programs is sometimes referred to as runtime verification. For a given subset of a program's behavior, program slicing consists of reducing the program to the minimum form that still produces the selected behavior. The reduced program is called a "slice" and is a faithful representation of the original program within the domain of the specified behavior subset. Generally, finding a slice is an unsolvable problem, but by specifying the target behavior subset by the values of a set of variables, it is possible to obtain approximate slices using a data-flow algorithm. These slices are usually used by developers during debugging to locate the source of errors.
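The dynamic taint checking described earlier can be sketched in a few lines. This is illustrative only — real taint trackers hook the interpreter or runtime rather than relying on a wrapper type — and the sanitizer, sink, and query strings are invented: user-supplied strings are wrapped as "tainted", and a sensitive sink refuses them until they pass through a sanitizer.

```python
class Tainted(str):
    """A string carrying a taint mark."""

def from_user(value: str) -> Tainted:
    return Tainted(value)

def sanitize(value: str) -> str:
    # Stand-in sanitizer: strip characters meaningful to SQL.
    # str.replace on a subclass returns a plain (untainted) str.
    return str(value).replace("'", "").replace(";", "")

def run_query(fragment: str) -> str:
    # Sensitive sink: refuse data that is still tainted.
    if isinstance(fragment, Tainted):
        raise ValueError("tainted data reached SQL sink")
    return f"SELECT * FROM users WHERE name = '{fragment}'"

name = from_user("alice'; DROP TABLE users; --")
try:
    run_query(name)            # blocked: still tainted
    blocked = False
except ValueError:
    blocked = True
assert blocked
assert run_query(sanitize(name)) == (
    "SELECT * FROM users WHERE name = 'alice DROP TABLE users --'")
```

The key property is that taint is dropped only by passing through the sanitizer, so the sink enforces the "no use until sanitized" rule the article states, here at runtime rather than statically.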
https://en.wikipedia.org/wiki/Program_analysis_(computer_science)
TPT (time partition testing) is a systematic test methodology for the automated software test and verification of embedded control systems, cyber-physical systems, and dataflow programs. TPT specialises in testing and validating embedded systems whose inputs and outputs can be represented as signals, and is a dedicated method for testing the continuous behaviour of systems.[1] Most control systems belong to this system class. The outstanding characteristic of control systems is the fact that they interact in a closely interlinked way with a real-world environment. Controllers need to observe their environment and react correspondingly to its behaviour.[2] The system works in an interactional cycle with its environment and is subject to temporal constraints. Testing these systems means stimulating them and checking their timing behaviour. Traditional functional testing methods use scripts; TPT uses model-based testing. TPT combines a systematic and graphic modelling technique for test cases with a fully automated test execution in different environments and automatic test evaluation. TPT covers the following four test activities: In TPT, tests are modelled graphically with the aid of special state machines and time partitioning.[1][3] All test cases for one system under test can be modelled using one hybrid automaton. Tests often consist of a sequence of logical phases. The states of the finite-state machine represent the logical phases of a test, which are similar for all tests. Trigger conditions model the transitions between the test phases. Each state and transition of the automaton may have different variants. The combinations of the variants model the individual test cases. Natural-language texts become part of the graphics, supporting simple and demonstrative readability even for non-programmers.
Substantial techniques such as parallel and hierarchical branching state machines, conditional branching, reactivity, signal description, and measured signals, as well as lists of simple test steps, allow an intuitive and graphic modelling even of complex test cases. The test's complexity is hidden behind the graphics. The lowest-level signal description consists of either test-step lists or so-called direct definitions. With a test-step list, one can model simple sequences of test steps that do not need to execute in parallel, such as setting signals (Set channel), ramping signals (Ramp channel), setting parameters (Set parameter), and waiting (Wait). Requests for the expected test results can be made within the test sequence to evaluate the system under test as it runs. It is also possible to place subautomatons in the test-step list, which in turn contain automatons and sequences, resulting in hierarchical test-step lists. The test sequences can also be combined and parallelised with other modelling methods, allowing for a great deal of complexity (or simplicity) in one's test. Within the test-step list it is possible to implement so-called "direct definitions". Using this type of modelling, one can define signals as a function of time, past variables/test events, and other signals. It is also possible to define these signals by writing C-style code, as well as by importing measurement data or using a manual signal editor. It is possible to define functions that can act as clients or servers. Client functions are called from TPT in the system under test, whereas server functions implemented in TPT can be called as stub functions from the system under test. TPT itself may also call the server functions. TPT was developed specifically for testing the continuous and reactive behaviour of embedded systems.[4] TPT can be seen as an extension of the Classification Tree Method in terms of timing behaviour. 
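TPT models these step lists graphically rather than in code, but the idea of a sequence of timed steps (Set channel, Ramp channel, Wait) can be sketched in plain Python. All names and values below are invented for illustration and are not TPT's actual API:

```python
# Hypothetical sketch of a TPT-style test-step list. A test is a flat
# sequence of steps; set steps are instantaneous, ramps and waits take time.
from dataclasses import dataclass

@dataclass
class Step:
    kind: str              # "set", "ramp" or "wait"
    channel: str = ""      # signal affected (empty for "wait")
    value: float = 0.0     # target value for "set"/"ramp"
    duration: float = 0.0  # seconds consumed by "ramp"/"wait"

# A simple sequence: set a speed signal, wait, ramp it up, wait again.
test_steps = [
    Step("set", channel="engine_speed", value=800.0),
    Step("wait", duration=1.0),
    Step("ramp", channel="engine_speed", value=3000.0, duration=5.0),
    Step("wait", duration=2.0),
]

def total_duration(steps):
    """Total time the sequence occupies: only ramps and waits consume time."""
    return sum(s.duration for s in steps)

print(total_duration(test_steps))  # 8.0
```

A hierarchical test-step list would replace a `Step` with a nested list (or sub-automaton) in the same spirit.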
Because of its systematic approach to test case generation, TPT keeps track even of very complex systems whose thorough testing requires a large number of test cases, making it possible to find failures in the system under test with a near-minimal number of test cases. The underlying idea of TPT's systematics is the separation of similarities and differences among the test cases: most test cases are very similar in their structural process and differ only in a few, but crucial, details.[5] TPT makes use of this fact by modelling the test cases jointly and sharing common structures. On the one hand, redundancies are thus avoided. On the other hand, it becomes very clear what the test cases actually differ in, i.e. which specific aspect each of them tests. This approach improves the comparability of test cases and thus the overview, and focuses the tester's attention on the essential: the differentiating features of the test cases. The hierarchical structure of the test cases makes it possible to break complex test problems down into sub-problems, also improving the clarity and, as a result, the quality of the test. These modelling techniques support the tester in finding the actually relevant cases, avoiding redundancies and keeping track of even large numbers of test cases.[6] TPT comprises several possibilities to generate test cases automatically. With TPT, each test case can specifically react to the system's behaviour[8] during the testing process in real time, for instance to react on the system exactly when a certain system state occurs or a sensor signal exceeds a certain threshold. If, for example, a sensor failure for an engine controller is to be simulated when the engine idling speed is exceeded, it has to be possible to react to the event "engine idling speed exceeded" in the description of the test case. TPT test cases are modelled independently of their execution. 
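The reactive trigger in the engine-controller example can be sketched in plain Python. The threshold value, signal trace, and function name below are invented; in TPT the condition would be a trigger on a transition, evaluated every execution cycle:

```python
# Hypothetical sketch of a reactive test condition: the fault injection
# fires in the first cycle in which the trigger condition holds.
IDLE_RPM = 900.0  # assumed idle-speed threshold (invented value)

def first_trigger(trace, threshold):
    """Index of the first cycle where "engine idling speed exceeded" holds."""
    for i, rpm in enumerate(trace):
        if rpm > threshold:   # trigger condition, checked every cycle
            return i          # here the test would inject the sensor failure
    return None               # condition never held during the test

trace = [600.0, 850.0, 1200.0, 1300.0]  # sampled engine speed per cycle
print(first_trigger(trace, IDLE_RPM))   # 2
```

The point is that the reaction is tied to the system's observed behaviour, not to a fixed point in time.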
The test cases can be executed in almost any environment, thanks to the so-called virtual machine (VM) concept, including real-time environments. Examples are MATLAB/Simulink, TargetLink, ASCET, C code, CAN, AUTOSAR, SystemDesk, DaVinci CT, LABCAR, INCA, software-in-the-loop (SiL) and HiL. Thus TPT is an integrated tool to be used in all testing phases of development, like unit testing, integration testing, system testing and regression testing. For analysis and measurement of code coverage, TPT can interact with coverage tools like Testwell CTC++ for C code. A configurable graphical user interface (dashboard), based on GUI widgets, can be used to interact with tests. The modelled test cases in TPT are compiled and, during test execution, interpreted by the so-called virtual machine (VM). The VM is the same for all platforms and all tests. Only a platform adapter realises the signal mapping for the individual application. The TPT VM is implemented in ANSI C, requires only a few kilobytes of memory, and works entirely without dynamic memory allocation, so it can also be applied in minimalist environments with few resources. There are also APIs for C and .NET. TPT's virtual machine is able to process tests in real time with defined response behaviour. The response times of TPT test cases are normally within microseconds, depending on the complexity and the test hardware. The expected system behaviour for individual test cases should also be tested automatically to assure efficient test processes. TPT offers the possibility to compute the properties of the expected behaviour online (during test execution) and offline (after test execution). While online evaluation uses the same modelling techniques as test modelling, offline evaluation offers decidedly more far-reaching possibilities for complex evaluations, including operations such as comparisons with external reference data, limit-value monitoring, signal filters, analyses of state sequences and time conditions. 
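An offline check such as limit-value monitoring can be sketched in plain Python. TPT's actual assessment library differs; the function name, signal, and limits below are invented to illustrate only the kind of check performed on a recorded signal after execution:

```python
# Hedged sketch of an offline limit-value check on a sampled signal.
def check_limits(times, values, low, high):
    """Return (verdict, violations) for a recorded signal trace."""
    violations = [(t, v) for t, v in zip(times, values)
                  if not (low <= v <= high)]
    return ("success" if not violations else "failed", violations)

t = [0.0, 0.1, 0.2, 0.3]                 # sample times in seconds
speed = [800.0, 950.0, 1600.0, 900.0]    # recorded engine speed
verdict, hits = check_limits(t, speed, low=500.0, high=1500.0)
print(verdict, hits)  # failed [(0.2, 1600.0)]
```

Because the check runs on recorded data, the same trace can also be compared against external reference data or filtered before assessment.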
The offline evaluation is, technically speaking, based on the Python scripting language, which has been extended by specific syntactic language elements and a specialised evaluation library to give optimal support to the test evaluation. The use of a script language ensures a high degree of flexibility in the test evaluation: access to reference data, communication with other tools, and development of one's own domain-specific libraries for test evaluation are supported. Besides the script-based test result evaluation, user interfaces provide simple access to the test assessments and help non-programmers to avoid scripting. Measurement data from other sources, like TargetLink and Simulink signal logging or MCD-3 measurement data, can be assessed automatically. This data can be independent from the test execution. TPT test documentation according to IEEE 829 presents the result of the test evaluation to the tester in an HTML report, in which not only the bare verdicts "success", "failed" or "unknown" can be depicted for each test case, but also details such as characteristic parameters or signals that have been observed in the test execution or computed in the test evaluation. Since the test assessment returns proper information about the timing and the checked behaviour, this information can be made available in the report. The content of the test documentation, as well as the structure of the document, can be freely configured with the help of a template. TPT supports test management of TPT test projects. Industry norms such as IEC 61508, DO-178B, EN 50128 and ISO 26262 require traceability of requirements and tests. TPT offers an interface to requirements tools like Telelogic DOORS to support these activities. TPT is a model-based testing tool applied mainly in automotive controller development,[9] and was originally developed within Daimler AG for the company's own development. 
Daimler coordinated the development of the testing tool for years.[10] Since 2007, PikeTec has continued the development of the tool. TPT is used by many other car manufacturers, like BMW, Volkswagen, Audi, Porsche and General Motors, as well as suppliers like Robert Bosch GmbH, Continental and Hella.[11]
https://en.wikipedia.org/wiki/Time_Partition_Testing
GUI testing tools automate the process of testing software with graphical user interfaces.
https://en.wikipedia.org/wiki/List_of_GUI_testing_tools
A standards organization, standards body, standards developing organization (SDO), or standards setting organization (SSO) is an organization whose primary function is developing, coordinating, promulgating, revising, amending, reissuing, interpreting, or otherwise contributing to the usefulness of technical standards[1] to those who employ them. Such an organization works to create uniformity across producers, consumers, government agencies, and other relevant parties regarding terminology, product specifications (e.g. size, including units of measure), protocols, and more. Its goals could include ensuring that Company A's external hard drive works on Company B's computer, an individual's blood pressure measures the same with Company C's sphygmomanometer as it does with Company D's, or that all shirts that should not be ironed have the same icon (a clothes iron crossed out with an X) on the label.[2] Most standards are voluntary in the sense that they are offered for adoption by people or industry without being mandated in law. Some standards become mandatory when they are adopted by regulators as legal requirements in particular domains, often for the purpose of safety or for consumer protection from deceitful practices. The term formal standard refers specifically to a specification that has been approved by a standards setting organization. The term de jure standard refers to a standard mandated by legal requirements, or refers generally to any formal standard. In contrast, the term de facto standard refers to a specification (or protocol or technology) that has achieved widespread use and acceptance, often without being approved by any standards organization (or receiving such approval only after it already has achieved widespread use). 
Examples of de facto standards that were not approved by any standards organizations (or at least not approved until after they were in widespread de facto use) include the Hayes command set developed by Hayes, Apple's TrueType font design, and the PCL protocol used by Hewlett-Packard in the computer printers they produced. Normally, the term standards organization is not used to refer to the individual parties participating within the standards developing organization in the capacity of founders, benefactors, stakeholders, members or contributors, who themselves may function as or lead the standards organizations. The implementation of standards in industry and commerce became highly important with the onset of the Industrial Revolution and the need for high-precision machine tools and interchangeable parts. Henry Maudslay developed the first industrially practical screw-cutting lathe in 1800, which allowed for the standardization of screw thread sizes for the first time.[1] Maudslay's work, as well as the contributions of other engineers, accomplished a modest amount of industry standardization; some companies' in-house standards also began to spread more widely within their industries. Joseph Whitworth's screw thread measurements were adopted as the first (unofficial) national standard by companies around Britain in 1841. It came to be known as the British Standard Whitworth and was widely adopted in other countries.[3][4] By the end of the 19th century, differences in standards between companies were making trade increasingly difficult and strained. For instance, in 1895 an iron and steel dealer recorded his displeasure in The Times: "Architects and engineers generally specify such unnecessarily diverse types of sectional material or given work that anything like economical and continuous manufacture becomes impossible. 
In this country no two professional men are agreed upon the size and weight of a girder to employ for given work".[5] The Engineering Standards Committee was established in London in 1901 as the world's first national standards body.[6][7] It subsequently extended its standardization work and became the British Engineering Standards Association in 1918, adopting the name British Standards Institution in 1931 after receiving its Royal Charter in 1929. The national standards were adopted universally throughout the country, and enabled the markets to act more rationally and efficiently, with an increased level of cooperation. After the First World War, similar national bodies were established in other countries. The Deutsches Institut für Normung was set up in Germany in 1917, followed by its counterparts, the American National Standards Institute and the French Commission Permanente de Standardisation, both in 1918.[1] Several international organizations create international standards, such as Codex Alimentarius in food, the World Health Organization Guidelines in health, or ITU Recommendations in ICT;[8] being publicly funded, these are freely available for consideration and use worldwide. In 1904, Crompton represented Britain at the Louisiana Purchase Exposition in St. Louis, Missouri, as part of a delegation by the Institute of Electrical Engineers. He presented a paper on standardization, which was so well received that he was asked to look into the formation of a commission to oversee the process. By 1906, his work was complete and he drew up permanent terms for the International Electrotechnical Commission.[9] The body held its first meeting that year in London, with representatives from 14 countries. 
In honour of his contribution to electrical standardization, Lord Kelvin was elected as the body's first President.[10] The International Federation of the National Standardizing Associations (ISA) was founded in 1926 with a broader remit to enhance international cooperation for all technical standards and specifications. The body was suspended in 1942 during World War II. After the war, ISA was approached by the recently formed United Nations Standards Coordinating Committee (UNSCC) with a proposal to form a new global standards body. In October 1946, ISA and UNSCC delegates from 25 countries met in London and agreed to join forces to create the new International Organization for Standardization; the new organization officially began operations in February 1947.[11] Standards organizations can be classified by their role, position, and the extent of their influence on the local, national, regional, and global standardization arena. By geographic designation, there are international, regional, and national standards bodies (the latter often referred to as NSBs). By technology or industry designation, there are standards developing organizations (SDOs) and standards setting organizations (SSOs), also known as consortia. Standards organizations may be governmental, quasi-governmental or non-governmental entities. Quasi- and non-governmental standards organizations are often non-profit organizations. Broadly, an international standards organization develops international standards (this does not necessarily restrict the use of other published standards internationally). There are many international standards organizations. 
The three largest and most well-established such organizations are the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC), and the International Telecommunication Union (ITU), which have each existed for more than 50 years (founded in 1947, 1906, and 1865, respectively) and are all based in Geneva, Switzerland. They have established tens of thousands of standards covering almost every conceivable topic. Many of these are adopted worldwide, replacing various incompatible "homegrown" standards. Many of these standards evolved naturally from those designed in-house within an industry or by a particular country, while others have been built from scratch by groups of experts who sit on various technical committees (TCs). These three organizations together comprise the World Standards Cooperation (WSC) alliance. ISO is composed of the national standards bodies (NSBs), one per member economy. The IEC is similarly composed of national committees, one per member economy. In some cases, the national committee to the IEC of an economy may also be the ISO member from that country or economy. ISO and IEC are private international organizations that are not established by any international treaty. Their members may be non-governmental organizations or governmental agencies, as selected by ISO and IEC (which are privately established organizations). The ITU is a treaty-based organization established as a permanent agency of the United Nations, in which governments are the primary members,[citation needed] although other organizations (such as non-governmental organizations and individual companies) can also hold a form of direct membership status in the ITU. Another example of a treaty-based international standards organization with government membership is the Codex Alimentarius Commission. 
In addition to these, a large variety of independent international standards organizations such as the ASME, ASTM International, the International Commission on Illumination (CIE), the IEEE, the Internet Engineering Task Force (IETF), SAE International, TAPPI, the World Wide Web Consortium (W3C), and the Universal Postal Union (UPU) develop and publish standards for a variety of international uses. In many such cases, these international standards organizations are not based on the principle of one member per country. Rather, membership in such organizations is open to those interested in joining and willing to agree to the organization's by-laws, having either organizational/corporate or individual technical experts as members. The Airlines Electronic Engineering Committee (AEEC) was formed in 1949 to prepare avionics system engineering standards with other aviation organizations RTCA, EUROCAE, and ICAO. The standards are widely known as the ARINC Standards. Regional standards bodies also exist, such as the European Committee for Standardization (CEN), the European Committee for Electrotechnical Standardization (CENELEC), the European Telecommunications Standards Institute (ETSI), and the Institute for Reference Materials and Measurements (IRMM) in Europe, the Pacific Area Standards Congress (PASC), the Pan American Standards Commission (COPANT), the African Organisation for Standardisation (ARSO), the Arab Industrial Development and Mining Organization (AIDMO), and others. 
In the European Union, only standards created by CEN, CENELEC, and ETSI are recognized as European standards (according to Regulation (EU) No 1025/2012[12]), and member states are required to notify the European Commission and each other about all the draft technical regulations concerning ICT products and services before they are adopted in national law.[13] These rules were laid down in Directive 98/34/EC with the goal of providing transparency and control with regard to technical regulations.[13] Sub-regional standards organizations also exist, such as the MERCOSUR Standardization Association (AMN), the CARICOM Regional Organisation for Standards and Quality (CROSQ), the ASEAN Consultative Committee for Standards and Quality (ACCSQ), the EAC East Africa Standards Committee (www.eac-quality.net), and the GCC Standardization Organization (GSO) for Arab States of the Persian Gulf. In general, each country or economy has a single recognized national standards body (NSB). A national standards body is likely the sole member from that economy in ISO; ISO currently has 161 members. National standards bodies usually do not prepare the technical content of standards, which instead is developed by national technical societies. NSBs may be either public or private sector organizations, or combinations of the two. For example, the Standards Council of Canada is a Canadian Crown corporation, the Dirección General de Normas is a governmental agency within the Mexican Ministry of Economy, and ANSI is a 501(c)(3) non-profit U.S. organization with members from both the private and public sectors. The National Institute of Standards and Technology (NIST), the U.S. government's standards agency, cooperates with ANSI under a memorandum of understanding to collaborate on the United States Standards Strategy. 
The determinants of whether an NSB for a particular economy is a public or private sector body may include the historical and traditional roles that the private sector fills in public affairs in that economy, or the development stage of that economy. A national standards body (NSB) generally refers to the one standardization organization that is that country's member of ISO. A standards developing organization (SDO) is one of the thousands of industry- or sector-based standards organizations that develop and publish industry-specific standards. Some economies feature only an NSB with no other SDOs. Large economies like the United States and Japan have several hundred SDOs, many of which are coordinated by the central NSBs of each country (ANSI and JISC in this case). In some cases, international industry-based SDOs such as the CIE, the IEEE and the Audio Engineering Society (AES) may have direct liaisons with international standards organizations, having input to international standards without going through a national standards body. SDOs are differentiated from standards setting organizations (SSOs) in that SDOs may be accredited to develop standards using open and transparent processes. Developers of technical standards are generally concerned with interface standards, which detail how products interconnect with each other, and safety standards, which establish characteristics ensuring that a product or process is safe for humans, animals, and the environment. The subject of their work can be narrow or broad. Another area of interest is defining how the behavior and performance of products are measured and described in data sheets. 
Overlapping or competing standards bodies tend to cooperate purposefully, by seeking to define boundaries between the scope of their work and by operating in a hierarchical fashion in terms of national, regional and international scope; international organizations tend to have national organizations as members, and standards emerging at national level (such as BS 5750) can be adopted at regional levels (BS 5750 was adopted as EN 29000) and at international levels (BS 5750 was adopted as ISO 9000). Unless adopted by a government, standards carry no force in law. However, most jurisdictions have truth in advertising laws, and ambiguities can be reduced if a company offers a product that is "compliant" with a standard. When an organization develops standards that may be used openly, it is common to have formal rules published regarding the process. Though it can be a tedious and lengthy process, formal standard setting is essential to developing new technologies. For example, since 1865, the telecommunications industry has depended on the ITU to establish the telecommunications standards that have been adopted worldwide. The ITU has created numerous telecommunications standards including telegraph specifications, allocation of telephone numbers, interference protection, and protocols for a variety of communications technologies. The standards that are created through standards organizations lead to improved product quality and ensured interoperability of competitors' products, and they provide a technological baseline for future research and product development. Formal standard setting through standards organizations has numerous benefits for consumers, including increased innovation, multiple market participants, reduced production costs, and the efficiency effects of product interchangeability. 
To support the standard development process, ISO published Good Standardization Practices (GSP)[25] and the WTO Technical Barriers to Trade (TBT) Committee published the "Six Principles" guiding members in the development of international standards.[26] Some standards, such as the SIF Specification in K-12 education, are managed by a non-profit organization composed of public entities and private entities working in cooperation, which then publishes the standards under an open license at no charge, requiring no registration. A technical library at a university may have copies of technical standards on hand. Major libraries in large cities may also have access to many technical standards. Some users of standards mistakenly assume that all standards are in the public domain. This assumption is correct only for standards produced by central governments whose publications are not amenable to copyright, or for organizations that issue their standards under an open license. Standards produced by non-governmental entities remain the intellectual property of their developers (unless specifically designated otherwise) and are protected, just like any other publications, by copyright laws and international treaties. However, the intellectual property extends only to the standard itself and not to its use. For instance, if a company sells a device that is compliant with a given standard, it is not liable for further payment to the standards organization, except in the special case when the organization holds patent rights or some other ownership of the intellectual property described in the standard. It is, however, liable for any patent infringement by its implementation, just as with any other implementation of technology. The standards organizations give no guarantee that patents relevant to a given standard have been identified. 
ISO standards draw attention to this in the foreword with a statement like the following: "Attention is drawn to the possibility that some of the elements of this document may be the subject of patent rights. ISO and IEC shall not be held responsible for identifying any or all such patent rights".[27] If the standards organization is aware that parts of a given standard fall under patent protection, it will often require the patent holder to agree to reasonable and non-discriminatory licensing before including it in the standard. Such an agreement is regarded as a legally binding contract,[28] as in the 2012 case Microsoft v. Motorola. The ever-quickening pace of technology evolution is now more than ever affecting the way new standards are proposed, developed and implemented. Since traditional, widely respected standards organizations tend to operate at a slower pace than technology evolves, many standards they develop are becoming less relevant because of the inability of their developers to keep abreast of technological innovation. As a result, a new class of standards setters has appeared on the standardization arena: the industry consortia or standards setting organizations (SSOs), whose outputs are also referred to as private standards.[29] Despite having limited financial resources, some of them enjoy truly international acceptance. One example is the World Wide Web Consortium (W3C), whose standards for HTML, CSS, and XML are used universally. There are also community-driven associations such as the Internet Engineering Task Force (IETF), a worldwide network of volunteers who collaborate to set standards for internet protocols. Some industry-driven standards development efforts do not even have a formal organizational structure. They are projects funded by large corporations. 
Among them are OpenOffice.org, an Apache Software Foundation-sponsored international community of volunteers working on open-standard software that aims to compete with Microsoft Office, and two commercial groups competing fiercely with each other to develop an industry-wide standard for high-density optical storage. Another example is the Global Food Safety Initiative, where members of the Consumer Goods Forum define benchmarking requirements for harmonization and recognize scheme owners using private standards for food safety. Also, editors of Wikipedia follow their own self-imposed rules for editing. In 2024, the 118th U.S. Congress considered a bill to clarify copyright protection of standards incorporated by reference in legislation.[30] The proposed law would require free public online access to standards, where they could be viewed, but not printed or downloaded.
https://en.wikipedia.org/wiki/Standards_organization
Product testing, also called consumer testing or comparative testing, is a process of measuring the properties or performance of products. The theory is that since the advent of mass production, manufacturers produce branded products which they assert and advertise to be identical within some technical standard.[citation needed] Product testing seeks to ensure that consumers can understand what products will do for them and which products are the best value. Product testing is a strategy to increase consumer protection by checking the claims made during marketing strategies such as advertising, which by their nature are in the interest of the entity distributing the service and not necessarily in the interest of the consumer. The advent of product testing was the beginning of the modern consumer movement. Product testing might be accomplished by a manufacturer, an independent laboratory, a government agency, etc. Often an existing formal test method is used as a basis for testing. Other times engineers develop test methods suited to the specific purpose. Comparative testing subjects several replicate samples of similar products to identical test conditions. Product testing is any process by means of which a researcher measures a product's performance, safety, quality, and compliance with established standards.[citation needed] The primary element of an objective comparative test program is the extent to which the researchers can perform tests independently of the manufacturers, suppliers, and marketers of the products. As industrialization proliferated, various manufacturers began exploring concepts of what is now called lean manufacturing to maximize industrial efficiency. 
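The core of comparative testing, measuring several replicate samples per product under identical conditions and summarizing per brand, can be sketched in a few lines of Python. All brand names and measurements below are invented for illustration:

```python
# Hypothetical comparative test: replicate measurements per product,
# taken under identical conditions, summarized per brand (data invented).
from statistics import mean

measurements = {              # e.g. battery life in hours, 3 replicates each
    "Brand A": [10.1, 9.8, 10.0],
    "Brand B": [8.9, 9.1, 9.0],
}

summary = {brand: round(mean(runs), 2) for brand, runs in measurements.items()}
best = max(summary, key=summary.get)
print(summary, best)  # {'Brand A': 9.97, 'Brand B': 9.0} Brand A
```

Replicates matter because a single sample cannot distinguish genuine product differences from unit-to-unit variation.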
This included a trend to produce goods to certain specifications and according to standards for production.[2] Government agencies in the United States in particular started demanding that manufacturers who bid on government contracts fulfill the work according to predefined standards.[2] Early thinkers, such as Frederick J. Schlink, began to imagine a system for applying similar expectations for standards to consumer needs, in order to allow people to make purchases according to product merit rather than rival advertising claims or marketing propaganda.[3] Schlink met Stuart Chase and together they published Your Money's Worth, a national guide to fraud and manipulation of the American marketplace due to the lack of consumer representation in the regulation process.[4] At the end of this book, there was a description of a theoretical "consumers' club" which would test products and serve only the interests of consumers.[5] The success of the book led to the founding of Consumers' Research as the world's first consumer organization.[6] This began the consumer movement. The most common government role in product testing is creating laws governing the creation of products, with the intent of ensuring that manufacturers accurately describe the products they are selling and that products are safe for consumers to use. Lawmakers typically introduce government regulation when the industry's voluntary system will not or cannot solve a serious problem.[6] Government standards are almost always more strict than voluntary standards and almost always have the goal of reducing the hazard.[6] Most governments put the responsibility to test products on the manufacturer.[6] The most common industry role is to provide products and services according to industry standards. 
In any industry, some standards will be voluntary (meaning that the industry practices self-regulation) or mandatory (meaning that a government issues a regulation).[7] Every major consumer product industry has an associated trade organization whose duties include developing voluntary standards and promoting the industry.[7] A trade association may also facilitate compliance testing or certification that a particular manufacturer's products meet certain standards.[7] "Voluntary" standards may seem either optional or mandatory from the perspective of a manufacturer, and in many cases when an industry adopts a standard it puts pressure on all manufacturers to comply with that standard.[6] Industry voluntary standards are typically minimal performance criteria with no reference to quality.[7] Underwriters Laboratories' founding in the United States in 1894 and its creation of standards with reference to the National Electrical Code published in 1897 are early examples of standards being made with reference to government regulation.[7] Underwriters Laboratories publishes and enforces hundreds of safety standards but no quality standards.[7] It is difficult or impossible to find an industry which has been able to review its members' products and supply unbiased comparative product information on them.[7] Trade associations exist to serve their members' interests, and if information which consumers want is contrary to the needs of members, then the distribution of that information may harm the industry.[7] The information which an industry provides is integral to the market, but the nature of industry information is not to be balanced, objective, complete, and unbiased.[7] The role of the consumer organization is to represent the interest of individual consumers to industry and government. 
Neither government nor industry regulates product and service quality, yet from the consumer perspective product quality is a chief concern.[8] The history of consumer organizations internationally is closely tied to the history of the consumer movement in the United States, which set the precedent and model for product testing elsewhere.[8] Whereas initially consumer organizations only sought to have products conform to minimal safety standards, consumers quickly began to demand comparative information about similar products within a category.[9] Comparative information seeks to say that similar products are comparable, whereas, from an industry marketing perspective, the leading manufacturers' interest is in product differentiation: claiming that their brand of product is desirable for reasons unrelated to the objective value it has for consumers.[9] Having access to comprehensive, objective product-testing results is the primary tool which consumers can use to make an informed decision among product choices.[9]
https://en.wikipedia.org/wiki/Product_testing
A test method is a method for a test in science or engineering, such as a physical test, chemical test, or statistical test. It is a specified procedure that produces a test result.[1] To ensure accurate and relevant results, a test method should be "explicit, unambiguous, and experimentally feasible",[2] as well as effective[3] and reproducible.[4] A test is an observation or experiment that determines one or more characteristics of a given sample, product, process, or service, with the purpose of comparing the test result to expected or desired results.[5] The results can be qualitative (yes/no), quantitative (a measured value), or categorical, and can be derived from personal observation or the output of a precision measuring instrument. Usually the test result is the dependent variable, the measured response based on the particular conditions of the test defined by the value of the independent variable. Some tests may involve changing the independent variable to determine the level at which a certain response occurs: in this case, the test result is the independent variable. In software development, engineering, science, manufacturing, and business, developers, researchers, manufacturers, and related personnel must understand and agree upon methods of obtaining data and making measurements. It is common for a physical property to be strongly affected by the precise method of testing or measuring that property. As such, fully documenting experiments and measurements while providing needed documentation and descriptions of specifications, contracts, and test methods is vital.[6][2] Using a standardized test method, perhaps published by a respected standards organization, is a good place to start. 
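The threshold-style test described above, where the independent variable is swept until a response occurs and the test result is the independent variable itself, can be sketched as follows. The load-test scenario and names are illustrative, not taken from any standard.

```python
def find_threshold(apply_load, max_load, step=1.0):
    """Sweep the independent variable (load) upward and return the
    level at which the response (failure) first occurs; here the
    test result is the independent variable itself."""
    load = 0.0
    while load <= max_load:
        if apply_load(load):          # True once the sample "fails"
            return load
        load += step
    return None                       # no response within the tested range

# Hypothetical sample that fails above 42 units of load.
result = find_threshold(lambda load: load > 42, max_load=100)
print(result)  # 43.0
```

A real test method would also document apparatus, conditioning, and reporting requirements around such a procedure.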
Sometimes it is more useful to modify an existing test method or to develop a new one, though such home-grown test methods should be validated[4] and, in certain cases, demonstrate technical equivalency to primary, standardized methods.[6] Again, documentation and full disclosure are necessary.[2] A well-written test method is important. However, even more important is choosing a method of measuring the correct property or characteristic. Not all tests and measurements are equally useful: usually a test result is used to predict or imply suitability for a certain purpose.[2][3] For example, if a manufactured item has several components, test methods may have several levels of connections. These connections or correlations may be based on published literature, engineering studies, or formal programs such as quality function deployment. Validation of the suitability of the test method is often required.[4] Quality management systems usually require full documentation of the procedures used in a test, and the document for a test method might include a number of standard elements.[7][8] Test methods are often scrutinized for their validity, applicability, and accuracy. It is very important that the scope of the test method be clearly defined, and any aspect included in the scope must be shown to be accurate and repeatable through validation.[4][7][9][10] Test method validations often encompass a number of standard considerations.[2][4][7][9][10]
https://en.wikipedia.org/wiki/Test_method
In software engineering, graphical user interface testing is the process of testing a product's graphical user interface (GUI) to ensure it meets its specifications. This is normally done through the use of a variety of test cases. To generate a set of test cases, test designers attempt to cover all the functionality of the system and fully exercise the GUI itself. The difficulty in accomplishing this task is twofold: the tester must deal with domain size and with sequences. In addition, the tester faces more difficulty when they have to do regression testing. Unlike a CLI (command-line interface) system, a GUI may have additional operations that need to be tested. A relatively small program such as Microsoft WordPad has 325 possible GUI operations.[1] In a large program, the number of operations can easily be an order of magnitude larger. The second problem is the sequencing problem. Some functionality of the system may only be accomplished with a sequence of GUI events. For example, to open a file a user may first have to click on the File menu, then select the Open operation, use a dialog box to specify the file name, and focus the application on the newly opened window. Increasing the number of possible operations increases the sequencing problem exponentially. This can become a serious issue when the tester is creating test cases manually. Regression testing is often a challenge with GUIs as well. A GUI may change significantly even though the underlying application does not. A test designed to follow a certain path through the GUI may then fail since a button, menu item, or dialog may have changed location or appearance. These issues have driven the GUI testing problem domain towards automation. Many different techniques have been proposed to automatically generate test suites that are complete and that simulate user behavior. Most of the testing techniques attempt to build on those previously used to test CLI programs, but these can have scaling problems when applied to GUIs. 
For example, finite-state-machine-based modeling[2][3] – where a system is modeled as a finite-state machine and a program is used to generate test cases that exercise all states – can work well on a system that has a limited number of states but may become overly complex and unwieldy for a GUI (see also model-based testing). A novel approach to test suite generation, adapted from a CLI technique,[4] involves using a planning system.[5] Planning is a well-studied technique from the artificial intelligence (AI) domain that attempts to solve problems that involve four parameters. Planning systems determine a path from the initial state to the goal state by using the operators. As a simple example of a planning problem, given two words and a single operation which replaces a single letter in a word with another, the goal might be to change one word into another. In [1] the authors used the planner IPP[6] to demonstrate this technique. The system's UI is first analyzed to determine the possible operations. These become the operators used in the planning problem. Next an initial system state is determined, and a goal state is specified that the tester feels would allow exercising of the system. The planning system determines a path from the initial state to the goal state, which becomes the test plan. Using a planner to generate the test cases has some specific advantages over manual generation. A planning system, by its very nature, generates solutions to planning problems in a way that is very beneficial to the tester. When manually creating a test suite, the tester is more focused on how to test a function (i.e., the specific path through the GUI). By using a planning system, the path is taken care of and the tester can focus on what function to test. An additional benefit of this is that a planning system is not restricted in any way when generating the path and may often find a path that was never anticipated by the tester. 
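The word-change planning example above can be sketched as a toy planner. Breadth-first search stands in here for a real planner such as IPP, the "replace one letter" moves are the operators, and the word list is illustrative:

```python
from collections import deque

def plan_word_change(start, goal, dictionary):
    """Toy planner: initial state = start word, goal state = goal word,
    operators = single-letter replacements that stay inside the
    dictionary. Returns the plan (sequence of intermediate states),
    analogous to a generated test plan."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        word = path[-1]
        if word == goal:
            return path
        for i in range(len(word)):
            for c in "abcdefghijklmnopqrstuvwxyz":
                nxt = word[:i] + c + word[i + 1:]
                if nxt in dictionary and nxt not in seen:
                    seen.add(nxt)
                    frontier.append(path + [nxt])
    return None  # no plan reaches the goal state

words = {"cold", "cord", "card", "ward", "warm", "word", "worm"}
print(plan_word_change("cold", "warm", words))
# e.g. ['cold', 'cord', 'word', 'ward', 'warm']
```

Just as with GUI test plans, the search may return a path the tester never anticipated, as long as each step is a legal operator application.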
This problem is a very important one to combat.[7] Another method of generating GUI test cases simulates a novice user. An expert user of a system tends to follow a direct and predictable path through a GUI, whereas a novice user would follow a more random path. A novice user is then likely to explore more possible states of the GUI than an expert. The difficulty lies in generating test suites that simulate 'novice' system usage. Using genetic algorithms has been proposed to solve this problem.[7] Novice paths through the system are not random paths: first, a novice user will learn over time and generally would not make the same mistakes repeatedly, and, secondly, a novice user is following a plan and probably has some domain or system knowledge. Genetic algorithms work as follows: a set of 'genes' is created randomly and then subjected to some task. The genes that complete the task best are kept and the ones that do not are discarded. The process is repeated, with the surviving genes being replicated and the rest of the set filled in with more random genes. Eventually one gene (or a small set of genes if there is some threshold set) will be the only gene in the set and is naturally the best fit for the given problem. In the case of GUI testing, the method works as follows. Each gene is essentially a list of random integer values of some fixed length. Each of these genes represents a path through the GUI. For example, for a given tree of widgets, the first value in the gene (each value is called an allele) would select the widget to operate on; the following alleles would then fill in input to the widget depending on the number of possible inputs to the widget (for example, a pull-down list box would have one input: the selected list value). The success of the genes is scored by a criterion that rewards the best 'novice' behavior. 
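A minimal sketch of this genetic-algorithm scheme follows, with genes as fixed-length lists of integer alleles. The fitness function here is a stand-in (rewarding matches against a target sequence); a real scorer would replay each gene as a path through the GUI and reward 'novice-like' behavior:

```python
import random

random.seed(0)
GENE_LEN, POP, GENERATIONS = 8, 30, 40

def fitness(gene):
    """Stand-in scorer: count alleles matching a target interaction
    sequence (illustrative; a real criterion replays the GUI path)."""
    target = [3, 1, 4, 1, 5, 9, 2, 6]
    return sum(1 for a, b in zip(gene, target) if a == b)

def evolve():
    # Initial population of random genes (each allele picks a widget/input).
    pop = [[random.randint(0, 9) for _ in range(GENE_LEN)] for _ in range(POP)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: POP // 2]      # keep the best-scoring half
        children = []
        for parent in survivors:         # refill with mutated copies
            child = parent[:]
            child[random.randrange(GENE_LEN)] = random.randint(0, 9)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

Because the best survivors are carried over unchanged, the top fitness never decreases from one generation to the next.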
A system to perform GUI testing for the X Window System, extensible to any windowing system, was introduced by Kasik and George.[7] The X Window System provides functionality (via the X server and its protocol) to dynamically send GUI input to and get GUI output from the program without directly using the GUI. For example, one can call XSendEvent() to simulate a click on a pull-down menu, and so forth. This system allows researchers to automate test case generation and testing for any given application under test, in such a way that a set of novice user test cases can be created. At first the strategies were migrated and adapted from the CLI testing strategies. A popular method used in the CLI environment is capture/playback. Capture/playback is a system in which the system screen is "captured" as a bitmapped graphic at various times during system testing. This capturing allows the tester to "play back" the testing process and compare the screens at the output phase of the test with expected screens. This validation can be automated, since the screens will be identical if the case passes and different if the case fails. Using capture/playback worked quite well in the CLI world, but there are significant problems when one tries to implement it on a GUI-based system.[8] The most obvious problem is that the screen in a GUI system may look different while the state of the underlying system is the same, making automated validation extremely difficult. This is because a GUI allows graphical objects to vary in appearance and placement on the screen. Fonts may be different, and window colors or sizes may vary, but the system output is basically the same. This would be obvious to a user, but not obvious to an automated validation system. 
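The capture/playback validation step, and its fragility under purely cosmetic changes, can be sketched by hashing captured 'screens'. The rows of text standing in for bitmap scanlines are illustrative:

```python
import hashlib

def capture_hash(bitmap_rows):
    """Hash a captured 'screen' (rows standing in for bitmap scanlines)
    so a playback run can be compared against the expected capture."""
    h = hashlib.sha256()
    for row in bitmap_rows:
        h.update(row.encode())
    return h.hexdigest()

expected = capture_hash(["File  Edit  View", "Hello, world"])
replayed = capture_hash(["File  Edit  View", "Hello, world"])
print(replayed == expected)   # True: identical screens pass

# The same logical state rendered with different spacing (a stand-in
# for a font or theme change) produces a different bitmap, so naive
# comparison reports a spurious failure:
restyled = capture_hash(["File Edit View", "Hello, world"])
print(restyled == expected)   # False
```

This is exactly the weakness described above: the comparison keys on pixels, not on the underlying system state.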
To combat this and other problems, testers have gone 'under the hood' and collected GUI interaction data from the underlying windowing system.[9] By capturing the window 'events' into logs, the interactions with the system are recorded in a format that is decoupled from the appearance of the GUI: only the event streams are captured. Some filtering of the event streams is necessary, since the streams of events are usually very detailed and most events are not directly relevant to the problem. This approach can be made easier by using an MVC architecture, for example, and making the view (i.e., the GUI here) as simple as possible while the model and the controller hold all the logic. Another approach is to use the software's built-in assistive technology, an HTML interface, or a three-tier architecture that makes it possible to better separate the user interface from the rest of the application. Another way to run tests on a GUI is to build a driver into the GUI so that commands or events can be sent to the software from another program.[7] This method of directly sending events to and receiving events from a system is highly desirable when testing, since the input and output testing can be fully automated and user error is eliminated.
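Filtering a captured event stream down to the relevant events might look like the following sketch; the event-type names loosely echo X event types but are illustrative:

```python
# A captured event stream is usually very detailed; a filter keeps
# only event types relevant to the behavior under test.
RELEVANT = {"ButtonPress", "KeyPress", "MenuSelect"}

events = [
    {"type": "MotionNotify", "target": "canvas"},
    {"type": "ButtonPress",  "target": "File"},
    {"type": "Expose",       "target": "window1"},
    {"type": "MenuSelect",   "target": "Open"},
    {"type": "MotionNotify", "target": "dialog"},
    {"type": "KeyPress",     "target": "filename"},
]

filtered = [e for e in events if e["type"] in RELEVANT]
print([e["target"] for e in filtered])  # ['File', 'Open', 'filename']
```

The filtered log captures the user's intent (open a file and type its name) in a form that survives cosmetic changes to the GUI's appearance.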
https://en.wikipedia.org/wiki/GUI_testing
In software testing, test automation is the use of software separate from the software being tested to control the execution of tests and the comparison of actual outcomes with predicted outcomes.[1] Test automation can automate some repetitive but necessary tasks in a formalized testing process already in place, or perform additional testing that would be difficult to do manually. Test automation is critical for continuous delivery and continuous testing.[2] There are many approaches to test automation; however, a few general approaches are widely used. One way to generate test cases automatically is model-based testing, in which a model of the system is used for test case generation, but research continues into a variety of alternative methodologies for doing so. In some cases, the model-based approach enables non-technical users to create automated business test cases in plain English so that no programming of any kind is needed in order to configure them for multiple operating systems, browsers, and smart devices.[3] Some software testing tasks (such as extensive low-level interface regression testing) can be laborious and time-consuming to do manually. In addition, a manual approach might not always be effective in finding certain classes of defects. Test automation offers a possibility to perform these types of testing effectively. Once automated tests have been developed, they can be run quickly and repeatedly many times. This can be a cost-effective method for regression testing of software products that have a long maintenance life. Even minor patches over the lifetime of the application can cause existing features to break which were working at an earlier point in time. 
API testing is also widely used by software testers, as it enables them to verify requirements independent of their GUI implementation, commonly to test them earlier in development, and to make sure the test itself adheres to clean-code principles, especially the single-responsibility principle. It involves directly testing APIs as part of integration testing, to determine if they meet expectations for functionality, reliability, performance, and security.[4] Since APIs lack a GUI, API testing is performed at the message layer.[5] API testing is considered critical when an API serves as the primary interface to application logic.[6] Many test automation tools provide record-and-playback features that allow users to interactively record user actions and replay them back any number of times, comparing actual results to those expected. The advantage of this approach is that it requires little or no software development. This approach can be applied to any application that has a graphical user interface. However, reliance on these features poses major reliability and maintainability problems. Relabelling a button or moving it to another part of the window may require the test to be re-recorded. Record and playback also often adds irrelevant activities or incorrectly records some activities. A variation on this type of tool is for testing of web sites. Here, the "interface" is the web page. However, such a framework utilizes entirely different techniques because it is rendering HTML and listening to DOM events instead of operating system events. Headless browsers or solutions based on Selenium WebDriver are normally used for this purpose.[7][8][9] Another variation of this type of test automation tool is for testing mobile applications. This is very useful given the number of different sizes, resolutions, and operating systems used on mobile phones. 
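Message-layer API testing can be sketched in a few lines. The `create_user` function below is a hypothetical in-process API (request dict in, response dict out); no GUI is involved, and the tests exercise both the success path and error handling:

```python
def create_user(payload):
    """Hypothetical application API, tested directly at the message
    layer: a request dict goes in, a response dict comes out."""
    if not payload.get("name"):
        return {"status": 400, "error": "name is required"}
    return {"status": 201, "id": 1, "name": payload["name"]}

# API tests verify functionality and error handling with no GUI:
assert create_user({"name": "Ada"}) == {"status": 201, "id": 1, "name": "Ada"}
assert create_user({})["status"] == 400
print("API tests passed")
```

Because no button labels or window layouts are involved, such tests are immune to the GUI churn that breaks record-and-playback suites.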
For this variation, a framework is used to instantiate actions on the mobile device and to gather the results of the actions. Another variation is script-less test automation, which does not use record and playback but instead builds a model of the application and then enables the tester to create test cases by simply inserting test parameters and conditions, which requires no scripting skills. Test automation, mostly using unit testing, is a key feature of extreme programming and agile software development, where it is known as test-driven development (TDD) or test-first development. Unit tests can be written to define the functionality before the code is written. However, these unit tests evolve and are extended as coding progresses, issues are discovered, and the code is subjected to refactoring.[10] Only when all the tests for all the demanded features pass is the code considered complete. Proponents argue that it produces software that is both more reliable and less costly than code that is tested by manual exploration. It is considered more reliable because the code coverage is better, and because it is run constantly during development rather than once at the end of a waterfall development cycle. The developer discovers defects immediately upon making a change, when it is least expensive to fix. Finally, code refactoring is safer when unit testing is used; transforming the code into a simpler form with less code duplication, but equivalent behavior, is much less likely to introduce new defects when the refactored code is covered by unit tests. 
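The test-first workflow described above can be sketched minimally: the test below is written first and defines the desired behavior of a hypothetical `slugify` helper, and the implementation is then written just to make the test pass:

```python
# Step 1 (test first): the test defines the behavior before the
# helper exists.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Already-Clean  ") == "already-clean"

# Step 2: the minimal implementation written to make the test pass.
def slugify(title):
    return "-".join(title.lower().split())

test_slugify()
print("all tests pass")
```

As coding progresses, new cases (punctuation, Unicode) would be added to the test first, and the implementation extended or refactored until the suite passes again.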
Continuous testing is the process of executing automated tests as part of the software delivery pipeline to obtain immediate feedback on the business risks associated with a software release candidate.[11][12] For continuous testing, the scope of testing extends from validating bottom-up requirements or user stories to assessing the system requirements associated with overarching business goals.[13] What to automate, when to automate, or even whether one really needs automation are crucial decisions which the testing (or development) team must make.[14] A multivocal literature review of 52 practitioner and 26 academic sources found five main factors to consider in the test automation decision: 1) the system under test (SUT), 2) the types and numbers of tests, 3) the test tool, 4) human and organizational topics, and 5) cross-cutting factors. The most frequent individual factors identified in the study were the need for regression testing, economic factors, and the maturity of the SUT.[15] While the reusability of automated tests is valued by software development companies, this property can also be viewed as a disadvantage. It leads to the so-called "pesticide paradox", where repeatedly executed scripts stop detecting errors that go beyond their frameworks. In such cases, manual testing may be a better investment. This ambiguity once again leads to the conclusion that the decision on test automation should be made individually, keeping in mind project requirements and peculiarities. Testing tools can help automate tasks such as product installation, test data creation, GUI interaction, problem detection (consider parsing or polling agents equipped with test oracles), defect logging, etc., without necessarily automating tests in an end-to-end fashion. One must keep satisfying popular requirements when thinking of test automation. Test automation tools can be expensive and are usually employed in combination with manual testing. 
Test automation can be made cost-effective in the long term, especially when used repeatedly in regression testing. A good candidate for test automation is a test case for a common flow of an application, as it is required to be executed (regression testing) every time an enhancement is made to the application. Test automation reduces the effort associated with manual testing, though manual effort is still needed to develop and maintain automated checks and to review test results. In automated testing, the test engineer or software quality assurance person must have software coding ability, since the test cases are written in the form of source code which, when run, produces output according to the assertions that are a part of it. Some test automation tools allow test authoring to be done with keywords instead of coding, which does not require programming. A strategy for deciding how many tests to automate is the test automation pyramid. This strategy suggests writing three types of tests with different granularity: the higher the level, the fewer tests to write.[16] One conception of the testing pyramid contains unit, integration, and end-to-end tests. According to Google's testing blog, unit tests should make up the majority of a testing strategy, with fewer integration tests and only a small number of end-to-end tests.[19] A test automation framework is an integrated system that sets the rules of automation for a specific product. This system integrates the function libraries, test data sources, object details, and various reusable modules. These components act as small building blocks which need to be assembled to represent a business process. The framework provides the basis of test automation and simplifies the automation effort. The main advantage of a framework of assumptions, concepts, and tools that provide support for automated software testing is its low maintenance cost. 
If there is a change to any test case, then only the test case file needs to be updated; the driver script and startup script remain the same. Ideally, there is no need to update the scripts in case of changes to the application. Choosing the right framework/scripting technique helps in maintaining lower costs. The costs associated with test scripting are due to development and maintenance efforts, so the approach to scripting used during test automation affects costs. Various framework/scripting techniques are generally used, and the testing framework is responsible for a number of duties.[20] A growing trend in software development is the use of unit testing frameworks such as the xUnit frameworks (for example, JUnit and NUnit) that allow the execution of unit tests to determine whether various sections of the code are acting as expected under various circumstances. Test cases describe tests that need to be run on the program to verify that the program runs as expected. Test automation interfaces are platforms that provide a single workspace for incorporating multiple testing tools and frameworks for system/integration testing of the application under test. The goal of a test automation interface is to simplify the process of mapping tests to business criteria without coding getting in the way of the process. Test automation interfaces are expected to improve the efficiency and flexibility of maintaining test scripts.[21] A test automation interface consists of several core modules. Interface engines are built on top of the interface environment. An interface engine consists of a parser and a test runner. The parser is present to parse the object files coming from the object repository into the test-specific scripting language. 
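The separation of test-case data from an unchanging driver script can be sketched with a small data-driven example. The `add` function under test and the in-line case table are illustrative; in a real framework the cases would live in a separate file (CSV, YAML, a spreadsheet) so they can change without touching the driver:

```python
def add(a, b):
    """Hypothetical function under test."""
    return a + b

# Test-case data, kept separate from the driver logic.
test_cases = [
    {"inputs": (2, 3),  "expected": 5},
    {"inputs": (-1, 1), "expected": 0},
    {"inputs": (0, 0),  "expected": 0},
]

def run_driver(cases):
    """Driver script: iterates over the test data and reports a
    pass/fail result per case. Adding cases needs no code change."""
    results = []
    for case in cases:
        actual = add(*case["inputs"])
        results.append(actual == case["expected"])
    return results

print(run_driver(test_cases))  # [True, True, True]
```

Here the driver plays the role of the test runner, and the case table plays the role of the externally maintained test-case file.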
The test runner executes the test scripts using a test harness.[21] Object repositories are a collection of UI/application object data recorded by the testing tool while exploring the application under test.[21] Tools are specifically designed to target some particular test environment, such as Windows and web automation tools. Tools serve as a driving agent for an automation process. However, an automation framework is not a tool to perform a specific task, but rather infrastructure that provides the solution where different tools can do their job in a unified manner. This provides a common platform for the automation engineer. There are various types of frameworks, categorized on the basis of the automation component they leverage.
https://en.wikipedia.org/wiki/Codeless_test_automation
Pair programming is a software development technique in which two programmers work together at one workstation. One, the driver, writes code while the other, the observer or navigator,[1] reviews each line of code as it is typed in. The two programmers switch roles frequently. While reviewing, the observer also considers the "strategic" direction of the work, coming up with ideas for improvements and likely future problems to address. This is intended to free the driver to focus all of their attention on the "tactical" aspects of completing the current task, using the observer as a safety net and guide. Pair programming increases the man-hours required to deliver code compared to programmers working individually.[2] However, the resulting code has fewer defects.[3] Along with code development time, other factors like field support costs and quality assurance also figure into the return on investment. Pair programming might theoretically offset these expenses by reducing defects in the programs.[3] In addition to preventing mistakes as they are made, other intangible benefits may exist: for example, the courtesy of rejecting phone calls or other distractions while working together, taking fewer breaks at agreed-upon intervals, or sharing breaks to return phone calls (but returning to work quickly since someone is waiting). One member of the team might have more focus and help drive or awaken the other if they lose focus, and that role might periodically change. One member might know about a topic or technique that the other does not, which might eliminate delays in finding or testing a solution or allow for a better solution, thus effectively expanding the skill set, knowledge, and experience of a programmer as compared to working alone. 
Each of these intangible benefits, and many more, may be challenging to measure accurately but can contribute to more efficient working hours. A system with two programmers possesses greater potential for generating more diverse solutions to problems. In an attempt to share goals and plans, the programmers must overtly negotiate a shared course of action when a conflict arises between them. In doing so, they consider a larger number of ways of solving the problem than a single programmer alone might do. This significantly improves the design quality of the program, as it reduces the chances of selecting a poor method.[4] In an online survey of pair programmers from 2000, 96% of programmers stated that they enjoyed working more while pair programming than programming alone. Furthermore, 95% said that they were more confident in their work when they pair programmed. However, as the survey was among self-selected pair programmers, it did not account for programmers who were forced to pair program.[5] Knowledge is constantly shared between pair programmers, whether in the industry or in a classroom. Many sources suggest that students show higher confidence when programming in pairs,[5] and many learn, whether from tips on programming-language rules or from overall design skills.[6] In "promiscuous pairing", each programmer communicates and works with all the other programmers on the team rather than pairing only with one partner, which causes knowledge of the system to spread throughout the whole team.[3] Pair programming allows programmers to examine their partner's code and provide feedback, which is necessary to increase their own ability to develop monitoring mechanisms for their own learning activities.[6] Pair programming allows team members to share quickly, making them less likely to have agendas hidden from each other. This helps pair programmers learn to communicate more easily. 
"This raises the communication bandwidth and frequency within the project, increasing overall information flow within the team."[3] There are both empirical studies and meta-analyses of pair programming. The empirical studies tend to examine the level of productivity and the quality of the code, while meta-analyses may focus on biases introduced by the process of testing and publishing. Ameta-analysisfound pairs typically consider more design alternatives than programmers working alone, arrive at simpler, more maintainable designs, and catch design defects earlier. However, it raised concerns that its findings may have been influenced by "signs ofpublication biasamong published studies on pair programming." It concluded that "pair programming is not uniformly beneficial or effective."[7] Although pair programmers may complete a task faster than a solo programmer, the total number ofman-hoursincreases.[2]A manager would have to balance faster completion of the work and reduced testing and debugging time against the higher cost of coding. The relative weight of these factors can vary by project and task. 
The benefit of pairing is greatest on tasks that the programmers do not fully understand before they begin: that is, challenging tasks that call for creativity and sophistication, and for novices as compared to experts.[2] Pair programming could be helpful for attaining high quality and correctness on complex programming tasks, but it would also increase the development effort (cost) significantly.[7] On simple tasks, which the pair already fully understands, pairing results in a net drop in productivity.[2][8] It may reduce the code development time but also risks reducing the quality of the program.[7] Productivity can also drop when novice–novice pairing is used without sufficient availability of a mentor to coach them.[9] A study of programmers using AI assistance tools such as GitHub Copilot found that while some programmers conceived of AI assistance as similar to pair programming, in practice the use of such tools is very different in terms of the programmer experience, with the human programmer having to transition repeatedly between driver and navigator roles.[10] There are several indicators that a pair is not performing well. Remote pair programming, also known as virtual pair programming or distributed pair programming, is pair programming in which the two programmers are in different locations,[12] working via a collaborative real-time editor, shared desktop, or a remote pair programming IDE plugin. Remote pairing introduces difficulties not present in face-to-face pairing, such as extra delays for coordination, depending more on "heavyweight" task-tracking tools instead of "lightweight" ones like index cards, and loss of verbal communication, resulting in confusion and conflicts over such things as who "has the keyboard".[13] Tool support could be provided by a variety of mechanisms.
https://en.wikipedia.org/wiki/Pair_programming
Exploratory testing is an approach to software testing that is concisely described as simultaneous learning, test design and test execution. Cem Kaner, who coined the term in 1984,[1] defines exploratory testing as "a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the quality of his/her work by treating test-related learning, test design, test execution, and test result interpretation as mutually supportive activities that run in parallel throughout the project."[2] While the software is being tested, the tester learns things that, together with experience and creativity, generate new good tests to run. Exploratory testing is often thought of as a black box testing technique. Instead, those who have studied it consider it a test approach that can be applied to any test technique, at any stage in the development process. The key is not the test technique nor the item being tested or reviewed; the key is the cognitive engagement of the tester, and the tester's responsibility for managing his or her time.[3] Exploratory testing has always been performed by skilled testers. In the early 1990s, ad hoc was too often synonymous with sloppy and careless work. As a result, a group of test methodologists (now calling themselves the Context-Driven School) began using the term "exploratory", seeking to emphasize the dominant thought process involved in unscripted testing, and to begin to develop the practice into a teachable discipline. This new terminology was first published by Cem Kaner in his book Testing Computer Software[4] and expanded upon in Lessons Learned in Software Testing.[5] Exploratory testing can be as disciplined as any other intellectual activity. Exploratory testing seeks to find out how the software actually works, and to ask questions about how it will handle difficult and easy cases. The quality of the testing is dependent on the tester's skill at inventing test cases and finding defects.
The more the tester knows about the product and different test methods, the better the testing will be. To further explain, a comparison can be made of freestyle exploratory testing to its antithesis, scripted testing. In the latter activity, test cases are designed in advance. This includes both the individual steps and the expected results. These tests are later performed by a tester who compares the actual result with the expected. When performing exploratory testing, expectations are open. Some results may be predicted and expected; others may not. The tester configures, operates, observes, and evaluates the product and its behaviour, critically investigating the result, and reporting information that seems likely to be a bug (which threatens the value of the product to some person) or an issue (which threatens the quality of the testing effort). In reality, testing almost always is a combination of exploratory and scripted testing, but with a tendency towards either one, depending on context. According to Kaner and James Marcus Bach, exploratory testing is more a mindset or "...a way of thinking about testing" than a methodology.[6] They also say that it crosses a continuum from slightly exploratory (slightly ambiguous or vaguely scripted testing) to highly exploratory (freestyle exploratory testing).[7] The documentation of exploratory testing ranges from documenting all tests performed to just documenting the bugs. During pair testing, two persons create test cases together; one performs them, and the other documents. Session-based testing is a method specifically designed to make exploratory testing auditable and measurable on a wider scale. Exploratory testers often use tools, including screen capture or video tools as a record of the exploratory session, or tools to quickly help generate situations of interest, e.g. James Bach's Perlclip.
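The contrast can be made concrete with a minimal sketch: a scripted test fixes both the steps and the expected result before execution, while an exploratory session is typically captured only as a charter plus notes taken along the way. The `login` function and the session fields below are hypothetical examples, not drawn from the source.

```python
# Scripted testing: steps and expected results are designed in advance.
# login() is a hypothetical stand-in for the system under test.
def login(username, password):
    return username == "alice" and password == "secret"

def test_login_accepts_valid_credentials():
    # Both the input and the expected outcome were fixed before the run.
    assert login("alice", "secret") is True

def test_login_rejects_wrong_password():
    assert login("alice", "wrong") is False

# Exploratory testing, by contrast, might be documented only as a
# session charter and the observations made while testing:
exploratory_session = {
    "charter": "Explore login error handling with unusual inputs",
    "notes": ["empty password gives an unclear error message (possible bug)"],
}

test_login_accepts_valid_credentials()
test_login_rejects_wrong_password()
```

In practice the scripted tests would be collected by a test runner such as pytest, while the session record feeds a debrief rather than an automated suite.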
The main advantage of exploratory testing is that less preparation is needed, important bugs are found quickly, and at execution time, the approach tends to be more intellectually stimulating than execution of scripted tests. Another major benefit is that testers can use deductive reasoning based on the results of previous tests to guide their future testing on the fly. They do not have to complete a current series of scripted tests before focusing in on or moving on to exploring a more target-rich environment. This also accelerates bug detection when used intelligently. Another benefit is that, after initial testing, most bugs are discovered by some sort of exploratory testing. This can be demonstrated logically by stating, "Programs that pass certain tests tend to continue to pass the same tests and are more likely to fail other tests or scenarios that are yet to be explored." Disadvantages are that tests invented and performed on the fly can't be reviewed in advance (and thereby prevent errors in code and test cases), and that it can be difficult to show exactly which tests have been run. Freestyle exploratory test ideas, when revisited, are unlikely to be performed in exactly the same manner, which can be an advantage if it is important to find new errors, or a disadvantage if it is more important to repeat specific details of the earlier tests. This can be controlled with specific instruction to the tester, or by preparing automated tests where feasible, appropriate, and necessary, and ideally as close to the unit level as possible.
A replicated experiment has shown that while scripted and exploratory testing result in similar defect detection effectiveness (the total number of defects found), exploratory testing results in higher efficiency (the number of defects per time unit), as no effort is spent on pre-designing the test cases.[8] An observational study of exploratory testers proposed that the use of knowledge about the domain, the system under test, and customers is an important factor explaining the effectiveness of exploratory testing.[9] A case study of three companies found that the ability to provide rapid feedback was a benefit of exploratory testing, while managing test coverage was identified as a shortcoming.[10] A survey found that exploratory testing is also used in critical domains and that the exploratory testing approach places high demands on the person performing the testing.[11]
https://en.wikipedia.org/wiki/Exploratory_testing
Agile software development is an umbrella term for approaches to developing software that reflect the values and principles agreed upon by The Agile Alliance, a group of 17 software practitioners, in 2001.[1] As documented in their Manifesto for Agile Software Development, the practitioners value:[2] The practitioners cite inspiration from new practices at the time including extreme programming, scrum, the dynamic systems development method, and adaptive software development, and being sympathetic to the need for an alternative to documentation-driven, heavyweight software development processes.[3] Many software development practices emerged from the agile mindset. These agile-based practices, sometimes called Agile (with a capital A),[4] include requirements, discovery and solutions improvement through the collaborative effort of self-organizing and cross-functional teams with their customer(s)/end user(s).[5][6] While there is much anecdotal evidence that the agile mindset and agile-based practices improve the software development process, the empirical evidence is limited and less than conclusive.[7][8][9] Iterative and incremental software development methods can be traced back as early as 1957,[10] with evolutionary project management[11][12] and adaptive software development[13] emerging in the early 1970s.[14] During the 1990s, a number of lightweight software development methods evolved in reaction to the prevailing heavyweight methods (often referred to collectively as waterfall) that critics described as overly regulated, planned, and micromanaged.[15] These lightweight methods included: rapid application development (RAD), from 1991;[16][17] the unified process (UP) and dynamic systems development method (DSDM), both from 1994; Scrum, from 1995; Crystal Clear and extreme programming (XP), both from 1996; and feature-driven development (FDD), from 1997.
Although these all originated before the publication of the Agile Manifesto, they are now collectively referred to as agile software development methods.[3] Since 1991, similar changes had already been underway in manufacturing[18][19] and management thinking[20] derived from Lean management. In 2001, seventeen software developers met at a resort in Snowbird, Utah to discuss lightweight development methods. They were: Kent Beck (Extreme Programming), Ward Cunningham (Extreme Programming), Dave Thomas (Pragmatic Programming, Ruby), Jeff Sutherland (Scrum), Ken Schwaber (Scrum), Jim Highsmith (Adaptive Software Development), Alistair Cockburn (Crystal), Robert C. Martin (SOLID), Mike Beedle (Scrum), Arie van Bennekum, Martin Fowler (OOAD and UML), James Grenning, Andrew Hunt (Pragmatic Programming, Ruby), Ron Jeffries (Extreme Programming), Jon Kern, Brian Marick (Ruby, test-driven development), and Steve Mellor (OOA). The group, The Agile Alliance, published the Manifesto for Agile Software Development.[2] In 2005, a group headed by Cockburn and Highsmith wrote an addendum of project management principles, the PM Declaration of Interdependence,[21] to guide software project management according to agile software development methods. In 2009, a group working with Martin wrote an extension of software development principles, the Software Craftsmanship Manifesto, to guide agile software development according to professional conduct and mastery. In 2011, the Agile Alliance created the Guide to Agile Practices (renamed the Agile Glossary in 2016),[22] an evolving open-source compendium of the working definitions of agile practices, terms, and elements, along with interpretations and experience guidelines from the worldwide community of agile practitioners. The agile manifesto reads:[2] We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value: That is, while there is value in the items on the right, we value the items on the left more.
Scott Ambler explained:[23] Introducing the manifesto on behalf of the Agile Alliance, Jim Highsmith said, The Agile movement is not anti-methodology, in fact many of us want to restore credibility to the word methodology. We want to restore a balance. We embrace modeling, but not in order to file some diagram in a dusty corporate repository. We embrace documentation, but not hundreds of pages of never-maintained and rarely-used tomes. We plan, but recognize the limits of planning in a turbulent environment. Those who would brand proponents of XP or SCRUM or any of the other Agile Methodologies as "hackers" are ignorant of both the methodologies and the original definition of the term hacker. The values are based on these principles:[25] Most agile development methods break product development work into small increments that minimize the amount of up-front planning and design. Iterations, or sprints, are short time frames (timeboxes)[26] that typically last from one to four weeks.[27]: 20 Each iteration involves a cross-functional team working in all functions: planning, analysis, design, coding, unit testing, and acceptance testing. At the end of the iteration, a working product is demonstrated to stakeholders. This minimizes overall risk and allows the product to adapt to changes quickly.[28][29] An iteration might not add enough functionality to warrant a market release, but the goal is to have an available release (with minimal bugs) at the end of each iteration.[30] Through incremental development, products have room to "fail often and early" throughout each iterative phase instead of drastically on a final release date.[31] Multiple iterations might be required to release a product or new features. Working software is the primary measure of progress.[25] A key advantage of agile approaches is speed to market and risk mitigation. Smaller increments are typically released to market, reducing the time and cost risks of engineering a product that doesn't meet user requirements.
The 6th principle of the agile manifesto for software development states "The most efficient and effective method of conveying information to and within a development team is face-to-face conversation". The manifesto, written in 2001 when video conferencing was not widely used, states this in relation to the communication of information, not necessarily that a team should be co-located. The principle of co-location is that co-workers on the same team should be situated together to better establish the identity as a team and to improve communication.[32] This enables face-to-face interaction, ideally in front of a whiteboard, that reduces the cycle time typically taken when questions and answers are mediated through phone, persistent chat, wiki, or email.[33] With the widespread adoption of remote working during the COVID-19 pandemic and changes to tooling, more studies have been conducted[34] around co-location and distributed working which show that co-location is increasingly less relevant. No matter which development method is followed, every team should include a customer representative (known as the product owner in Scrum). This representative is agreed upon by stakeholders to act on their behalf and makes a personal commitment to being available for developers to answer questions throughout the iteration. At the end of each iteration, the project stakeholders together with the customer representative review progress and re-evaluate priorities with a view to optimizing the return on investment (ROI) and ensuring alignment with customer needs and company goals.
The importance of stakeholder satisfaction, detailed by frequent interaction and review at the end of each phase, is why the approach is often denoted as a customer-centered methodology.[35] In agile software development, an information radiator is a (normally large) physical display, such as a board with sticky notes, located prominently near the development team, where passers-by can see it.[36] It presents an up-to-date summary of the product development status.[37] A build light indicator may also be used to inform a team about the current status of their product development. A common characteristic in agile software development is the daily stand-up (known as the daily scrum in the Scrum framework). In a brief session (e.g., 15 minutes), team members review collectively how they are progressing toward their goal and agree whether they need to adapt their approach. To keep to the agreed time limit, teams often use simple coded questions (such as what they completed the previous day, what they aim to complete that day, and whether there are any impediments or risks to progress), and delay detailed discussions and problem resolution until after the stand-up.[38] Specific tools and techniques, such as continuous integration, automated unit testing, pair programming, test-driven development, design patterns, behavior-driven development, domain-driven design, code refactoring and other techniques, are often used to improve quality and enhance product development agility.[39] This is predicated on designing and building quality in from the beginning and being able to demonstrate software for customers at any point, or at least at the end of every iteration.[40] Compared to traditional software engineering, agile software development mainly targets complex systems and product development with dynamic, indeterministic and non-linear properties. Accurate estimates, stable plans, and predictions are often hard to get in early stages, and confidence in them is likely to be low.
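Of the techniques listed above, test-driven development can be sketched in a few lines: a failing test is written first, then just enough code is added to make it pass, and the cycle repeats. The `slugify` function below is a hypothetical example chosen for illustration, not something from the source.

```python
import re

# Test-driven development sketch: the tests below were (notionally)
# written first, then slugify() implemented just far enough to pass them.
# slugify() is a hypothetical example function.
def slugify(title):
    # Lower-case the title and collapse runs of non-alphanumeric
    # characters into single hyphens, trimming hyphens at the ends.
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

def test_slugify():
    assert slugify("Agile Software Development") == "agile-software-development"
    assert slugify("  Hello, World!  ") == "hello-world"

test_slugify()  # in practice collected and run by a test runner such as pytest
```

Each new requirement (e.g. handling punctuation) would first be expressed as a failing assertion, keeping the implementation no larger than the tests demand.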
Agile practitioners use their free will to reduce the "leap of faith" that is needed before any evidence of value can be obtained.[41] Requirements and design are held to be emergent. Big up-front specifications would probably cause a lot of waste in such cases, i.e., they are not economically sound. These basic arguments and previous industry experiences, learned from years of successes and failures, have helped shape agile development's favor of adaptive, iterative and evolutionary development.[42] Development methods exist on a continuum from adaptive to predictive.[43] Agile software development methods lie on the adaptive side of this continuum. One key of adaptive development methods is a rolling wave approach to schedule planning, which identifies milestones but leaves flexibility in the path to reach them, and also allows for the milestones themselves to change.[44] Adaptive methods focus on adapting quickly to changing realities. When the needs of a project change, an adaptive team changes as well. An adaptive team has difficulty describing exactly what will happen in the future. The further away a date is, the more vague an adaptive method is about what will happen on that date. An adaptive team cannot report exactly what tasks they will do next week, but only which features they plan for next month. When asked about a release six months from now, an adaptive team might be able to report only the mission statement for the release, or a statement of expected value vs. cost. Predictive methods, in contrast, focus on analyzing and planning the future in detail and cater for known risks. In the extremes, a predictive team can report exactly what features and tasks are planned for the entire length of the development process. Predictive methods rely on effective early phase analysis, and if this goes very wrong, the project may have difficulty changing direction. Predictive teams often institute a change control board to ensure they consider only the most valuable changes.
Risk analysis can be used to choose between adaptive (agile or value-driven) and predictive (plan-driven) methods.[45] Barry Boehm and Richard Turner suggest that each side of the continuum has its own home ground, as follows:[46] One of the differences between agile software development methods and waterfall is the approach to quality and testing. In the waterfall model, work moves through software development life cycle (SDLC) phases—with one phase being completed before another can start—hence the testing phase is separate and follows a build phase. In agile software development, however, testing is completed in the same iteration as programming. Because testing is done in every iteration—which develops a small piece of the software—users can frequently use those new pieces of software and validate the value. After the users know the real value of the updated piece of software, they can make better decisions about the software's future. Having a value retrospective and software re-planning session in each iteration—Scrum typically has iterations of just two weeks—helps the team continuously adapt its plans so as to maximize the value it delivers. This follows a pattern similar to the plan-do-check-act (PDCA) cycle, as the work is planned, done, checked (in the review and retrospective), and any changes agreed are acted upon. This iterative approach supports a product rather than a project mindset. This provides greater flexibility throughout the development process, whereas on projects the requirements are defined and locked down from the very beginning, making it difficult to change them later. Iterative product development allows the software to evolve in response to changes in business environment or market requirements.
In a letter to IEEE Computer, Steven Rakitin expressed cynicism about agile software development, calling it "yet another attempt to undermine the discipline of software engineering" and translating "working software over comprehensive documentation" as "we want to spend all our time coding. Remember, real programmers don't write documentation."[47] This is disputed by proponents of agile software development, who state that developers should write documentation if that is the best way to achieve the relevant goals, but that there are often better ways to achieve those goals than writing static documentation.[48] Scott Ambler states that documentation should be "just barely good enough" (JBGE),[49] that too much or comprehensive documentation would usually cause waste, and developers rarely trust detailed documentation because it's usually out of sync with code,[48] while too little documentation may also cause problems for maintenance, communication, learning and knowledge sharing. Alistair Cockburn wrote of the Crystal Clear method: Crystal considers development a series of co-operative games, and intends that the documentation is enough to help the next win at the next game. The work products for Crystal include use cases, risk list, iteration plan, core domain models, and design notes to inform on choices... however there are no templates for these documents and descriptions are necessarily vague, but the objective is clear, just enough documentation for the next game. I always tend to characterize this to my team as: what would you want to know if you joined the team tomorrow. Agile software development methods support a broad range of the software development life cycle.[51] Some methods focus on the practices (e.g., XP, pragmatic programming, agile modeling), while some focus on managing the flow of work (e.g., Scrum, Kanban).
Some support activities for requirements specification and development (e.g., FDD), while some seek to cover the full development life cycle (e.g., DSDM, RUP). Notable agile software development frameworks include: Agile software development is supported by a number of concrete practices, covering areas like requirements, design, modeling, coding, testing, planning, risk management, process, quality, etc. Some notable agile software development practices include:[52] In the literature, different terms refer to the notion of method adaptation, including 'method tailoring', 'method fragment adaptation' and 'situational method engineering'. Method tailoring is defined as: A process or capability in which human agents determine a system development approach for a specific project situation through responsive changes in, and dynamic interplays between, contexts, intentions, and method fragments. Situation-appropriateness should be considered as a distinguishing characteristic between agile methods and more plan-driven software development methods, with agile methods allowing product development teams to adapt working practices according to the needs of individual products.[70][69] Potentially, most agile methods could be suitable for method tailoring,[51] such as DSDM tailored in a CMM context[71] and XP tailored with the Rule Description Practices (RDP) technique.[72] Not all agile proponents agree, however, with Schwaber noting "that is how we got into trouble in the first place, thinking that the problem was not having a perfect methodology.
Efforts [should] center on the changes [needed] in the enterprise".[73] Bas Vodde reinforced this viewpoint, suggesting that unlike traditional, large methodologies that require you to pick and choose elements, Scrum provides the basics on top of which you add additional elements to localize and contextualize its use.[74] Practitioners seldom use system development methods, or agile methods specifically, by the book, often choosing to omit or tailor some of the practices of a method in order to create an in-house method.[75] In practice, methods can be tailored using various tools. Generic process modeling languages such as the Unified Modeling Language can be used to tailor software development methods. However, dedicated tools for method engineering, such as the Essence Theory of Software Engineering of SEMAT, also exist.[76] Agile software development has been widely seen as highly suited to certain types of environments, including small teams of experts working on greenfield projects,[46][77] and the challenges and limitations encountered in the adoption of agile software development methods in a large organization with legacy infrastructure are well-documented and understood.[78] In response, a range of strategies and patterns has evolved for overcoming challenges with large-scale development efforts (>20 developers)[79][80] or distributed (non-colocated) development teams,[81][82] amongst other challenges; and there are now several recognized frameworks that seek to mitigate or avoid these challenges. There are many conflicting viewpoints on whether all of these are effective or indeed fit the definition of agile development, and this remains an active and ongoing area of research.[79][83] When agile software development is applied in a distributed setting (with teams dispersed across multiple business locations), it is commonly referred to as distributed agile software development. The goal is to leverage the unique benefits offered by each approach.
Distributed development allows organizations to build software by strategically setting up teams in different parts of the globe, virtually building software round-the-clock (more commonly referred to as the follow-the-sun model). On the other hand, agile development provides increased transparency, continuous feedback, and more flexibility when responding to changes. Agile software development methods were initially seen as best suited for non-critical product developments, and were thereby excluded from use in regulated domains such as the medical device, pharmaceutical, financial, nuclear systems, automotive, and avionics sectors. However, in the last several years, there have been several initiatives for the adaptation of agile methods for these domains.[84][85][86][87][88] There are numerous standards that may apply in regulated domains, including ISO 26262, ISO 9000, ISO 9001, and ISO/IEC 15504. A number of key concerns are of particular importance in regulated domains:[89] Although agile software development methods can be used with any programming paradigm or language in practice, they were originally closely associated with object-oriented environments such as Smalltalk and Lisp, and later Java and C#. The initial adopters of agile methods were usually small to medium-sized teams working on unprecedented systems with requirements that were difficult to finalize and likely to change as the system was being developed. This section describes common problems that organizations encounter when they try to adopt agile software development methods, as well as various techniques to measure the quality and performance of agile teams.[90] The Agility measurement index, amongst others, rates developments against five dimensions of product development (duration, risk, novelty, effort, and interaction).[91] Other techniques are based on measurable goals,[92] and one study suggests that velocity can be used as a metric of agility.
There are also agile self-assessments to determine whether a team is using agile software development practices (Nokia test,[93] Karlskrona test,[94] 42 points test).[95] One of the early studies reporting gains in quality, productivity, and business satisfaction from using agile software development methods was a survey conducted by Shine Technologies from November 2002 to January 2003.[96] A similar survey, the State of Agile, has been conducted every year since 2006 with thousands of participants from around the software development community. This tracks trends on the perceived benefits of agility, lessons learned, and good practices. Each survey has reported increasing numbers saying that agile software development helps them deliver software faster; improves their ability to manage changing customer priorities; and increases their productivity.[97] Surveys have also consistently shown better results with agile product development methods compared to classical project management.[98][99] In balance, there are reports that some feel that agile development methods are still too young to enable extensive academic research of their success.[100] Organizations and teams implementing agile software development often face difficulties transitioning from more traditional methods such as waterfall development, such as teams having an agile process forced on them.[101] These are often termed agile anti-patterns or, more commonly, agile smells. Below are some common examples: A goal of agile software development is to focus more on producing working software and less on documentation. This is in contrast to waterfall models, where the process is often highly controlled and minor changes to the system require significant revision of supporting documentation. However, this does not justify completely doing without any analysis or design at all.
Failure to pay attention to design can cause a team to proceed rapidly at first, but then to require significant rework as they attempt to scale up the system. One of the key features of agile software development is that it is iterative. When done correctly, agile software development allows the design to emerge as the system is developed and helps the team discover commonalities and opportunities for re-use.[102] In agile software development, stories (similar to use case descriptions) are typically used to define requirements, and an iteration is a short period of time during which the team commits to specific goals.[103] Adding stories to an iteration in progress is detrimental to a good flow of work. These should be added to the product backlog and prioritized for a subsequent iteration, or in rare cases the iteration could be cancelled.[104] This does not mean that a story cannot expand. Teams must deal with new information, which may produce additional tasks for a story. If the new information prevents the story from being completed during the iteration, then it should be carried over to a subsequent iteration. However, it should be prioritized against all remaining stories, as the new information may have changed the story's original priority. Agile software development is often implemented as a grassroots effort in organizations by software development teams trying to optimize their development processes and ensure consistency in the software development life cycle. By not having sponsor support, teams may face difficulties and resistance from business partners, other development teams and management.
Additionally, they may suffer without appropriate funding and resources.[105] This increases the likelihood of failure.[106] A survey performed by VersionOne found respondents cited insufficient training as the most significant cause of failed agile implementations.[107] Teams have fallen into the trap of assuming that the reduced processes of agile software development, compared to other approaches such as waterfall, mean that there are no actual rules for agile software development. The product owner is responsible for representing the business in the development activity and is often the most demanding role.[108] A common mistake is to fill the product owner role with someone from the development team. This requires the team to make its own decisions on prioritization without real feedback from the business. They try to solve business issues internally or delay work as they reach outside the team for direction. This often leads to distraction and a breakdown in collaboration.[109] Agile software development requires teams to meet product commitments, which means they should focus on work for only that product. However, team members who appear to have spare capacity are often expected to take on other work, which makes it difficult for them to help complete the work to which their team had committed.[110] Teams may fall into the trap of spending too much time preparing or planning. This is a common trap for teams less familiar with agile software development, where the teams feel obliged to have a complete understanding and specification of all stories. Teams should be prepared to move forward with only those stories in which they have confidence, then during the iteration continue to discover and prepare work for subsequent iterations (often referred to as backlog refinement or grooming). A daily standup should be a focused, timely meeting where all team members disseminate information.
If problem-solving occurs, it often involves only certain team members and is potentially not the best use of the entire team's time. If during the daily standup the team starts diving into problem-solving, it should be set aside until a sub-team can discuss it, usually immediately after the standup completes.[111] One of the intended benefits of agile software development is to empower the team to make choices, as they are closest to the problem. Additionally, they should make choices as close to implementation as possible, to use more timely information in the decision. If team members are assigned tasks by others or too early in the process, the benefits of localized and timely decision making can be lost.[112] Being assigned work also constrains team members into certain roles (for example, team member A must always do the database work), which limits opportunities for cross-training.[112] Team members themselves can choose to take on tasks that stretch their abilities and provide cross-training opportunities. In the Scrum framework, which claims to be consistent with agile values and principles, the scrum master role is accountable for ensuring the scrum process is followed and for coaching the scrum team through that process. A common pitfall is for a scrum master to act as a contributor. While not prohibited by the Scrum framework, the scrum master needs to ensure they have the capacity to act in the role of scrum master first and not work on development tasks. A scrum master's role is to facilitate the process rather than create the product.[113] Having the scrum master multitask may result in too many context switches to be productive.
Additionally, as a scrum master is responsible for ensuring roadblocks are removed so that the team can make forward progress, the benefit gained by individual tasks moving forward may not outweigh roadblocks that are deferred due to lack of capacity.[114] Due to the iterative nature of agile development, multiple rounds of testing are often needed. Automated testing helps reduce the impact of repeated unit, integration, and regression tests and frees developers and testers to focus on higher-value work.[115] Test automation also supports continued refactoring required by iterative software development. Allowing a developer to quickly run tests to confirm refactoring has not modified the functionality of the application may reduce the workload and increase confidence that cleanup efforts have not introduced new defects. Focusing on delivering new functionality may result in increased technical debt. The team must allow themselves time for defect remediation and refactoring. Technical debt hinders planning abilities by increasing the amount of unscheduled work as production defects distract the team from further progress.[116] As the system evolves it is important to refactor.[117] Over time the lack of constant maintenance causes increasing defects and development costs.[116] A common misconception is that agile software development allows continuous change; however, an iteration backlog is an agreement of what work can be completed during an iteration.[118] Having too much work-in-progress (WIP) results in inefficiencies such as context-switching and queueing.[119] The team must avoid feeling pressured into taking on additional work.[120] Agile software development fixes time (iteration duration), quality, and ideally resources in advance (though maintaining fixed resources may be difficult if developers are often pulled away from tasks to handle production incidents), while the scope remains variable.
The customer or product owner often pushes for a fixed scope for an iteration. However, teams should be reluctant to commit to locked time, resources and scope (commonly known as the project management triangle). Efforts to add scope to the fixed time and resources of agile software development may result in decreased quality.[121] Due to the focused pace and continuous nature of agile practices, there is a heightened risk of burnout among members of the delivery team.[122] Agile project management is an iterative development process, where feedback is continuously gathered from users and stakeholders to create the right user experience. Different methods can be used to perform an agile process; these include scrum, extreme programming, lean and kanban.[123] The term agile management is applied to an iterative, incremental method of managing the design and build activities of engineering, information technology and other business areas that aims to provide new product or service development in a highly flexible and interactive manner, based on the principles expressed in the Manifesto for Agile Software Development.[124] Agile project management metrics help reduce confusion, identify weak points, and measure a team's performance throughout the development cycle. Supply chain agility is the ability of a supply chain to cope with uncertainty and variability in supply and demand. An agile supply chain can increase and reduce its capacity rapidly, so it can adapt to fast-changing customer demand. Finally, strategic agility is the ability of an organisation to change its course of action as its environment evolves. The key to strategic agility is to recognize external changes early enough and to allocate resources to adapt to these changing environments.[123] Agile X techniques may also be called extreme project management. It is a variant of the iterative life cycle[125] where deliverables are submitted in stages.
The main difference between agile and iterative development is that agile methods complete small portions of the deliverables in each delivery cycle (iteration),[126] while iterative methods evolve the entire set of deliverables over time, completing them near the end of the project. Both iterative and agile methods were developed as a reaction to various obstacles that developed in more sequential forms of project organization. For example, as technology projects grow in complexity, end users tend to have difficulty defining long-term requirements without being able to view progressive prototypes. Projects that develop in iterations can constantly gather feedback to help refine those requirements. Agile management also offers a simple framework promoting communication and reflection on past work amongst team members.[127] Teams who were using traditional waterfall planning and adopted the agile way of development typically go through a transformation phase, and often enlist agile coaches who guide the teams through a smoother transformation. There are typically two styles of agile coaching: push-based and pull-based agile coaching. Here a "push system" can refer to an upfront estimation of what tasks can be fitted into a sprint (pushing work), e.g. typical with scrum; whereas a "pull system" can refer to an environment where tasks are only performed when capacity is available.[128] Agile management approaches have also been employed and adapted to the business and government sectors.
For example, within the federal government of the United States, the United States Agency for International Development (USAID) is employing a collaborative project management approach that focuses on incorporating collaborating, learning and adapting (CLA) strategies to iterate and adapt programming.[129] Agile methods are mentioned in the Guide to the Project Management Body of Knowledge (PMBOK Guide, 6th Edition) under the Product Development Lifecycle definition: Within a project life cycle, there are generally one or more phases that are associated with the development of the product, service, or result. These are called a development life cycle (...) Adaptive life cycles are agile, iterative, or incremental. The detailed scope is defined and approved before the start of an iteration. Adaptive life cycles are also referred to as agile or change-driven life cycles.[130] According to Jean-Loup Richet (research fellow at ESSEC Institute for Strategic Innovation & Services), "this approach can be leveraged effectively for non-software products and for project management in general, especially in areas of innovation and uncertainty." The result is a product or project that best meets current customer needs and is delivered with minimal costs, waste, and time, enabling companies to achieve bottom line gains earlier than via traditional approaches.[131] Agile software development methods have been extensively used for development of software products, and some of them use certain characteristics of software, such as object technologies.[132] However, these techniques can be applied to the development of non-software products, such as computers, medical devices, food, clothing, and music.[133] Agile software development methods have been used in non-development IT infrastructure deployments and migrations.
Some of the wider principles of agile software development have also found application in general management[134] (e.g., strategy, governance, risk, finance) under the terms business agility or agile business management. Agile software methodologies have also been adopted for use with the learning engineering process, an iterative data-informed process that applies human-centered design and data-informed decision-making to support learners and their development.[135] Agile software development paradigms can be used in other areas of life, such as raising children. Its success in child development might be founded on some basic management principles: communication, adaptation, and awareness. In a TED Talk, Bruce Feiler shared how he applied basic agile paradigms to household management and raising children.[136] Agile practices have been cited as potentially inefficient in large organizations and certain types of development.[137] Many organizations believe that agile software development methodologies are too extreme and adopt a hybrid approach[138] that mixes elements of agile software development and plan-driven approaches.[139] Some methods, such as dynamic systems development method (DSDM), attempt this in a disciplined way, without sacrificing fundamental principles. The increasing adoption of agile practices has also been criticized as being a management fad that simply describes existing good practices under new jargon, promotes a one-size-fits-all mindset towards development strategies, and wrongly emphasizes method over results.[140] Alistair Cockburn organized a celebration of the 10th anniversary of the Manifesto for Agile Software Development in Snowbird, Utah on 12 February 2011, gathering more than 30 people who had been involved at the original meeting and since.
A list of about 20 elephants in the room ('undiscussable' agile topics/issues) was collected, covering the alliances, failures and limitations of agile software development practices and their context (possible causes: commercial interests, decontextualization, no obvious way to make progress based on failure, limited objective evidence, cognitive biases and reasoning fallacies), politics and culture.[141] As Philippe Kruchten wrote: The agile movement is in some ways a bit like a teenager: very self-conscious, checking constantly its appearance in a mirror, accepting few criticisms, only interested in being with its peers, rejecting en bloc all wisdom from the past, just because it is from the past, adopting fads and new jargon, at times cocky and arrogant. But I have no doubts that it will mature further, become more open to the outside world, more reflective, and therefore, more effective. The "Manifesto" may have had a negative impact on higher education management and leadership, where it suggested to administrators that slower traditional and deliberative processes should be replaced with more "nimble" ones. The concept rarely found acceptance among university faculty.[142] Another criticism is that, in many ways, agile management and traditional management practices end up being in opposition to one another. A common criticism of this practice is that the time spent attempting to learn and implement the practice is too costly, despite potential benefits. A transition from traditional management to agile management requires total submission to agile and a firm commitment from all members of the organization to seeing the process through. Issues like unequal results across the organization, too much change for employees' ability to handle, or a lack of guarantees at the end of the transformation are just a few examples.[143]
https://en.wikipedia.org/wiki/Agile_software_development
In computer science, all-pairs testing or pairwise testing is a combinatorial method of software testing that, for each pair of input parameters to a system (typically, a software algorithm), tests all possible discrete combinations of those parameters. Using carefully chosen test vectors, this can be done much faster than an exhaustive search of all combinations of all parameters by "parallelizing" the tests of parameter pairs.[1] In most cases, a single input parameter or an interaction between two parameters is what causes a program's bugs.[2] Bugs involving interactions between three or more parameters are both progressively less common[3] and progressively more expensive to find; such testing has as its limit the testing of all possible inputs.[4] Thus, a combinatorial technique for picking test cases like all-pairs testing is a useful cost-benefit compromise that enables a significant reduction in the number of test cases without drastically compromising functional coverage.[5] More rigorously, assume that a test case has N parameters given in a set {P_i} = {P_1, P_2, ..., P_N}, that the range of each parameter is given by R(P_i) = R_i, and that |R_i| = n_i. The number of all possible test cases is then the product ∏ n_i. If the code deals with conditions taking only two parameters at a time, the number of needed test cases can be reduced. To demonstrate, suppose there are three parameters X, Y and Z. We can use a predicate of the form P(X, Y, Z) of order 3, which takes all three as input, or instead three different order-2 predicates of the form p(u, v): P(X, Y, Z) can be written in an equivalent form using p_xy(X, Y), p_yz(Y, Z) and p_zx(Z, X).
If the code is written as conditions taking "pairs" of parameters, then the set of range sizes X = {n_i} can be a multiset, because multiple parameters can have the same number of choices. Let max(S) denote a maximum element of the multiset S. The number of pairwise test cases for this test function is then T = max(X) × max(X \ max(X)). Therefore, if n = max(X) and m = max(X \ max(X)), the number of tests is typically O(nm), where n and m are the numbers of possibilities for each of the two parameters with the most choices, and this can be far less than the exhaustive ∏ n_i. N-wise testing can be considered the generalized form of pairwise testing.[citation needed] The idea is to apply sorting to the set X = {n_i} so that P = {P_i} gets ordered too. Let the sorted set be an N-tuple P_s = <P_i> such that i < j implies |R(P_i)| < |R(P_j)|. Now we can take the set X(2) = {P_{N-1}, P_{N-2}} and call it pairwise testing. Generalizing further, we can take the set X(3) = {P_{N-1}, P_{N-2}, P_{N-3}} and call it 3-wise testing; eventually, X(T) = {P_{N-1}, P_{N-2}, ..., P_{N-T}} gives T-wise testing. N-wise testing is then all possible combinations as above. Consider the parameters shown in the table below. 'Enabled', 'Choice Type' and 'Category' have a choice range of 2, 3 and 4, respectively. An exhaustive test would involve 24 tests (2 × 3 × 4). Multiplying the two largest values (3 and 4) indicates that a pairwise test would involve 12 tests. The pairwise test cases, generated by Microsoft's PICT tool, are shown below.
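The reduction can be demonstrated with a small greedy generator. This is a minimal sketch, not the algorithm PICT actually uses, and the parameter values below are hypothetical placeholders standing in for the 2 × 3 × 4 example above:

```python
from itertools import combinations, product

def pairwise_suite(parameters):
    """Greedy sketch of all-pairs generation: repeatedly pick the full
    combination that covers the most still-uncovered value pairs."""
    names = list(parameters)
    all_cases = [dict(zip(names, values))
                 for values in product(*(parameters[n] for n in names))]

    def pairs(case):
        # Every ((parameter, value), (parameter, value)) pair this case covers.
        return {((a, case[a]), (b, case[b])) for a, b in combinations(names, 2)}

    uncovered = set().union(*(pairs(c) for c in all_cases))
    suite = []
    while uncovered:
        best = max(all_cases, key=lambda c: len(pairs(c) & uncovered))
        suite.append(best)
        uncovered -= pairs(best)
    return suite

# Hypothetical values with the same 2, 3 and 4 choice ranges as the table.
params = {
    "Enabled": [True, False],
    "Choice Type": [1, 2, 3],
    "Category": ["a", "b", "c", "d"],
}
suite = pairwise_suite(params)
print(f"{len(suite)} pairwise cases instead of {2 * 3 * 4} exhaustive ones")
```

Because the greedy choice is not optimal, this sketch may emit a few more than the 12 cases a dedicated tool produces, but every value pair is still covered while staying well below the 24 exhaustive combinations.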
https://en.wikipedia.org/wiki/All-pairs_testing
The International Software Testing Qualifications Board (ISTQB) is a software testing certification board that operates internationally.[1] Founded in Edinburgh in November 2002, the ISTQB is a non-profit association legally registered in Belgium. ISTQB Certified Tester is a standardized qualification for software testers, and the certification is offered by the ISTQB. The qualifications are based on a syllabus, and there is a hierarchy of qualifications and guidelines for accreditation and examination. More than 1 million ISTQB exams have been delivered and over 721,000 certifications issued; the ISTQB consists of 67 member boards worldwide representing more than 100 countries as of April 2021.[2] The current ISTQB product portfolio follows a matrix approach[3] characterized by ISTQB streams focusing on: Pre-conditions relate to certification exams[4] and provide a natural progression through the ISTQB scheme, which helps people pick the right certificate and informs them about what they need to know. The ISTQB Core Foundation is a pre-condition for any other certification. Additional rules for ISTQB pre-conditions are summarized in the following: Such rules are depicted graphically in the ISTQB Product Portfolio map. ISTQB provides a list of referenced books from some previous syllabi online.[5] The Foundation and Advanced exams consist of multiple-choice tests.[6] Certification is valid for life (Foundation Level and Advanced Level), and there is no requirement for recertification. ISTQB member boards are responsible for the quality and the auditing of the examination. Worldwide there are testing boards in 67 countries (as of April 2021). Authorized exam providers are also able to offer exams, including e-exams. The current list of exam providers can be found on the dedicated page.[7] The current ISTQB Foundation Level certification is based on the 2023 syllabus.
The Foundation Level qualification is suitable for anyone who needs to demonstrate practical knowledge of the fundamental concepts of software testing, including people in roles such as testers, test analysts, test engineers, test consultants, test managers, user acceptance testers and software developers.[8] It is also appropriate for individuals who need a basic understanding of software testing, including project managers, quality managers, software development managers, business analysts, IT directors and management consultants.[8] The different Advanced Level exams are more practical and require deeper knowledge in special areas. Test Manager deals with planning and control of the test process. Test Analyst concerns, among other things, reviews and black-box testing methods. Technical Test Analyst includes component tests (also called unit tests), requiring knowledge of white-box testing and non-functional testing methods; this section also includes test tools.
https://en.wikipedia.org/wiki/International_Software_Testing_Qualifications_Board
P-Modeling Framework is a package of guidelines, methods, tools and templates for development process improvement. The P-Modeling framework can be integrated into any other SDLC in use, e.g., MSF Agile, MSF CMMI, RUP, etc. The origins of the P-Modeling Framework lie in "The Babel Experiment", designed by Vladimir L. Pavlov in 2001 as a training program for software engineering students, aimed at making students go through a "condensed" version of the communication problems typical for software development and gain experience applying UML to overcome these problems. The experiment was done in the following manner: a team of students was assigned the task of designing a software system with the restriction that UML was the only language allowed for communication while working on the project. As a result of this experiment, students developed quite clear and concise models. A little later, during a design session, two independent teams worked on the same task. The communication means of the first team was restricted to UML as described above, while the other team was allowed to communicate verbally in a natural language. It turned out that the first, more restricted team performed the task more efficiently than the other one. The UML diagrams created by the first team were more sound, detailed, readable and elaborated. Subsequently, Vladimir L. Pavlov conducted a number of additional experiments intended to reveal whether "silent" modeling sessions are more productive than traditional ones. In these experiments, silent teams appeared to be at least as efficient as the others, and in some cases the silent teams outperformed the traditional ones.
Some of the interpretations of these results are the following: Afterwards, ideas were developed for additional experiments intended to find a method for comparing UML to natural languages. The premise of these experiments was to set up forward (from a natural language to UML) and backward (from UML to the natural language) "translation" tasks for two teams of professional software designers, with one team performing the forward translation and the other performing the backward translation. The intention was to observe how closely the outcome of the backward translation resembled the original text, thus verifying the correctness of the UML model. The experiments showed that, for information describing software systems, UML has sufficient power of expression to maintain the model's content: texts obtained after backward translation from UML were semantically equivalent to the original. The experiments suggested a model of the entire software development cycle as a series of translations. In subsequent experiments, backward-translation verification was demonstrated as a method to help guarantee that the deliverables of each development step do not lose, or misinterpret, anything produced at the previous step. This method was named "Reverse Semantic Traceability" and became the second core component of the P-Modeling Framework. Reverse Semantic Traceability is a quality control method that allows testing the outputs of every translation step. Before proceeding to the next phase, the current artifacts are "reverse engineered", and the restored text is compared to the original. If there is a difference between these two texts, the tested artifacts are corrected to eliminate the problem (or the initial text is corrected). Consequently, every step is confirmed by stepping back and making sure that development stays on the correct track.
In this way, issues may be discovered and fixed without delays, so they do not accumulate and do not cascade into subsequent phases of the development cycle. The key word in the name of this method is "Semantic": the original and restored versions of a text are compared semantically, with a focus on the "meaning" of the text, not on the particular "words" used in it. The usage scenarios most commonly reported by early adopters of the Reverse Semantic Traceability method are: Originally invented as an advanced training method for teaching Object-Oriented Analysis and Design with UML to students, Speechless Modeling is, in essence, a restriction on using communication means that directly or indirectly involve a natural language. In this way, a team of designers is forced to use the modeling language as the only language available for communication during a design session. Regardless of the type of development process used in an organization (waterfall, spiral, various iterative-incremental processes or others), there are certain processes, such as software design, quality control, human resources management, risk management, communication management, etc., to which P-Modeling Framework principles can be applied, especially in the earlier stages of a project when quality control activities are either minor or (virtually) absent. P-Modeling Framework obviously has some room for further improvement. For example:
https://en.wikipedia.org/wiki/P-Modeling_Framework
Test management most commonly refers to the activity of managing a testing process. A test management tool is software used to manage tests (automated or manual) that have been previously specified by a test procedure. It is often associated with automation software. Test management tools often include requirement and/or specification management modules that allow automatic generation of the requirement test matrix (RTM), which is one of the main metrics used to indicate the functional coverage of a system under test (SUT). Test definition includes: test plan, association with product requirements and specifications. Relationships can also be defined between tests so that precedence can be established; e.g., if test A is a parent of test B and test A fails, then it may be useless to perform test B. Tests should also be associated with priorities. Every change to a test must be versioned so that the QA team has a comprehensive view of the history of the test. Test execution includes building bundles of test cases and executing them (or scheduling their execution). Execution can be either manual or automatic. In manual execution, the user has to perform all the test steps manually and inform the system of the result. Some test management tools include a framework to interface the user with the test plan to facilitate this task. There are several ways to run tests. The simplest way to run a test is to run a test case. The test case can be associated with other test artifacts such as test plans, test scripts, test environments, test case execution records, and test suites. There are numerous ways of implementing automated tests. Automatic execution requires the test management tool to be compatible with the tests themselves. To do so, test management tools may offer proprietary automation frameworks or APIs to interface with third-party or proprietary automated tests.
The ultimate goal of test management tools is to deliver meaningful metrics that help the QA manager evaluate the quality of the system under test before releasing. Metrics are generally presented as graphics and tables indicating success rates, progression/regression and other significant data. Test management tools can also integrate bug tracking features, or at least interface with well-known dedicated bug tracking solutions (such as Bugzilla or Mantis), to efficiently link a test failure with a bug. Test management tools may also integrate (or interface with third-party) project management functionalities to help the QA manager plan activities ahead of time. There are several commercial and open-source test management tools available in the market today. Most test management tools are web-served applications that need to be installed in-house, while others can be accessed as software as a service.[citation needed]
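At its core, the requirement test matrix described above is a mapping from requirements to the tests that exercise them. The following is only a sketch of how a tool might derive a functional coverage metric from such a matrix; the requirement and test identifiers are hypothetical:

```python
def functional_coverage(rtm, requirements):
    """Given an RTM (requirement id -> list of test ids), return the
    percentage of requirements that have at least one test, plus the
    list of requirements with no tests at all."""
    untested = [r for r in requirements if not rtm.get(r)]
    covered = len(requirements) - len(untested)
    return covered / len(requirements) * 100, untested

# Hypothetical requirements and associated test cases.
requirements = ["REQ-1", "REQ-2", "REQ-3", "REQ-4"]
rtm = {"REQ-1": ["TC-01", "TC-02"], "REQ-2": ["TC-03"], "REQ-4": []}

coverage, missing = functional_coverage(rtm, requirements)
print(f"{coverage:.0f}% functional coverage; untested: {missing}")
```

Here REQ-3 is absent from the matrix and REQ-4 is present but has no tests; both count as uncovered, which is exactly the gap an RTM is meant to expose.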
https://en.wikipedia.org/wiki/Test_management
Test automation management tools are specific tools that provide a collaborative environment intended to make test automation efficient, traceable and clear for stakeholders. Test automation is becoming a cross-discipline (i.e., a mix of testing and development practices). Test automation systems usually need more reporting, analysis and meaningful information about project status. Test management systems target manual effort and do not give all the required information.[1] Test automation management systems leverage automation efforts towards efficient and continuous processes of delivering test execution and new working tests by: Test automation management tools fit Agile systems development life cycle methodologies. In most cases, test automation covers continuous changes to minimize manual regression testing. Changes are usually noticed by monitoring test log diffs; for example, differences in the number of failures signal probable changes either in the AUT, in the test code (a broken test code base, instabilities), or in both. Quick notice of changes and a unified workflow of results analysis reduce testing costs and increase project quality. Test-driven development utilizes test automation as the primary driver of rapid and high-quality software production. Concepts of the green line and thoughtful design are supported with tests before actual coding, assuming there are special tools to track and analyze within the TDD process. Another test automation practice[2] is continuous integration, which explicitly supposes automated test suites as a final stage upon building, deploying and distributing new versions of software. Based on acceptance of test results, a build is declared either as qualified for further testing or rejected.[3] Dashboards provide relevant information on all stages of software development, including test results. However, dashboards do not support comprehensive operations and views for an automation engineer.
This is another reason for dedicated management tools that can supply high-level data to other project management tools such as test management, issue management and change management.
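The test-log-diff monitoring mentioned above can be reduced to comparing two runs result by result. This is a minimal sketch, with hypothetical test names and a simplified pass/fail status model, of the kind of comparison such a tool performs:

```python
def diff_runs(previous, current):
    """Compare two test runs (test name -> 'pass' or 'fail') and report
    every status change, including tests that appeared or disappeared."""
    changes = {}
    for name in previous.keys() | current.keys():
        before = previous.get(name, "absent")
        after = current.get(name, "absent")
        if before != after:
            changes[name] = (before, after)
    return changes

# Hypothetical logs from two consecutive automated runs.
nightly = {"login": "pass", "search": "pass", "checkout": "fail"}
today = {"login": "pass", "search": "fail", "export": "pass"}
print(diff_runs(nightly, today))
```

A change such as "search" flipping from pass to fail, or "checkout" vanishing from the run entirely, is precisely the signal that something moved in either the application under test or the test code base.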
https://en.wikipedia.org/wiki/Test_automation_management_tools
In mathematics and computer science, an algorithm (/ˈælɡərɪðəm/) is a finite sequence of mathematically rigorous instructions, typically used to solve a class of specific problems or to perform a computation.[1] Algorithms are used as specifications for performing calculations and data processing. More advanced algorithms can use conditionals to divert the code execution through various routes (referred to as automated decision-making) and deduce valid inferences (referred to as automated reasoning). In contrast, a heuristic is an approach to solving problems without well-defined correct or optimal results.[2] For example, although social media recommender systems are commonly called "algorithms", they actually rely on heuristics, as there is no truly "correct" recommendation. As an effective method, an algorithm can be expressed within a finite amount of space and time[3] and in a well-defined formal language[4] for calculating a function.[5] Starting from an initial state and initial input (perhaps empty),[6] the instructions describe a computation that, when executed, proceeds through a finite[7] number of well-defined successive states, eventually producing "output"[8] and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input.[9] Around 825 AD, the Persian scientist and polymath Muḥammad ibn Mūsā al-Khwārizmī wrote kitāb al-ḥisāb al-hindī ("Book of Indian computation") and kitab al-jam' wa'l-tafriq al-ḥisāb al-hindī ("Addition and subtraction in Indian arithmetic").
In the early 12th century, Latin translations of these texts involving the Hindu–Arabic numeral system and arithmetic appeared, for example Liber Alghoarismi de practica arismetrice, attributed to John of Seville, and Liber Algorismi de numero Indorum, attributed to Adelard of Bath.[10] Here, alghoarismi or algorismi is the Latinization of Al-Khwarizmi's name;[1] the text starts with the phrase Dixit Algorismi, or "Thus spoke Al-Khwarizmi".[2] The word algorism in English came to mean the use of place-value notation in calculations; it occurs in the Ancrene Wisse from circa 1225.[11] By the time Geoffrey Chaucer wrote The Canterbury Tales in the late 14th century, he used a variant of the same word in describing augrym stones, stones used for place-value calculation.[12][13] In the 15th century, under the influence of the Greek word ἀριθμός (arithmos, "number"; cf. "arithmetic"), the Latin word was altered to algorithmus.[14] By 1596, this form of the word was used in English, as algorithm, by Thomas Hood.[15] One informal definition is "a set of rules that precisely defines a sequence of operations",[16] which would include all computer programs (including programs that do not perform numeric calculations), and any prescribed bureaucratic procedure[17] or cook-book recipe.[18] In general, a program is an algorithm only if it stops eventually,[19] even though infinite loops may sometimes prove desirable. Boolos and Jeffrey (1974, 1999) define an algorithm to be an explicit set of instructions for determining an output that can be followed by a computing machine or a human who could only carry out specific elementary operations on symbols.[20] Most algorithms are intended to be implemented as computer programs. However, algorithms are also implemented by other means, such as in a biological neural network (for example, the human brain performing arithmetic or an insect looking for food), in an electrical circuit, or in a mechanical device.
Step-by-step procedures for solving mathematical problems have been recorded since antiquity. This includes in Babylonian mathematics (around 2500 BC),[21] Egyptian mathematics (around 1550 BC),[21] Indian mathematics (around 800 BC and later),[22][23] the Ifa Oracle (around 500 BC),[24] Greek mathematics (around 240 BC),[25] Chinese mathematics (around 200 BC and later),[26] and Arabic mathematics (around 800 AD).[27] The earliest evidence of algorithms is found in ancient Mesopotamian mathematics. A Sumerian clay tablet found in Shuruppak near Baghdad and dated to c. 2500 BC describes the earliest division algorithm.[21] During the Hammurabi dynasty, c. 1800 – c. 1600 BC, Babylonian clay tablets described algorithms for computing formulas.[28] Algorithms were also used in Babylonian astronomy. Babylonian clay tablets describe and employ algorithmic procedures to compute the time and place of significant astronomical events.[29] Algorithms for arithmetic are also found in ancient Egyptian mathematics, dating back to the Rhind Mathematical Papyrus, c. 1550 BC.[21] Algorithms were later used in ancient Hellenistic mathematics. Two examples are the Sieve of Eratosthenes, which was described in the Introduction to Arithmetic by Nicomachus,[30][25]: Ch 9.2, and the Euclidean algorithm, which was first described in Euclid's Elements (c. 300 BC).[25]: Ch 9.1 Examples of ancient Indian mathematics included the Shulba Sutras, the Kerala School, and the Brāhmasphuṭasiddhānta.[22] The first cryptographic algorithm for deciphering encrypted code was developed by Al-Kindi, a 9th-century Arab mathematician, in A Manuscript On Deciphering Cryptographic Messages. He gave the first description of cryptanalysis by frequency analysis, the earliest codebreaking algorithm.[27] Bolter credits the invention of the weight-driven clock as "the key invention [of Europe in the Middle Ages]", specifically the verge escapement mechanism[31] producing the tick and tock of a mechanical clock.
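The Euclidean algorithm mentioned above is short enough to state in full. The following is a modern Python sketch, not a transcription of Euclid's original geometric formulation: it repeatedly replaces the larger of two numbers by the remainder of dividing it by the smaller, until nothing remains.

```python
def gcd(a, b):
    # Euclid's method: the greatest common divisor of a and b is
    # unchanged when (a, b) is replaced by (b, a mod b); iterate
    # until the remainder is zero, at which point a is the answer.
    while b != 0:
        a, b = b, a % b
    return a

# For example, gcd(252, 105): 252 mod 105 = 42, 105 mod 42 = 21,
# 42 mod 21 = 0, so the result is 21.
```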
"The accurate automatic machine"[32] led immediately to "mechanical automata" in the 13th century and "computational machines"—the difference and analytical engines of Charles Babbage and Ada Lovelace in the mid-19th century.[33] Lovelace designed the first algorithm intended for processing on a computer, Babbage's analytical engine, which is the first device considered a real Turing-complete computer instead of just a calculator. Although the full implementation of Babbage's second device was not realized for decades after her lifetime, Lovelace has been called "history's first programmer". Bell and Newell (1971) write that the Jacquard loom, a precursor to Hollerith cards (punch cards), and "telephone switching technologies" led to the development of the first computers.[34] By the mid-19th century, the telegraph, the precursor of the telephone, was in use throughout the world. By the late 19th century, the ticker tape (c. 1870s) was in use, as were Hollerith cards (c. 1890). Then came the teleprinter (c. 1910) with its punched-paper use of Baudot code on tape. Telephone-switching networks of electromechanical relays were invented in 1835. These led to the invention of the digital adding device by George Stibitz in 1937. While working at Bell Laboratories, he observed the "burdensome" use of mechanical calculators with gears. "He went home one evening in 1937 intending to test his idea... When the tinkering was over, Stibitz had constructed a binary adding device".[35][36] In 1928, a partial formalization of the modern concept of algorithms began with attempts to solve the Entscheidungsproblem (decision problem) posed by David Hilbert. Later formalizations were framed as attempts to define "effective calculability"[37] or "effective method".[38] Those formalizations included the Gödel–Herbrand–Kleene recursive functions of 1930, 1934 and 1935, Alonzo Church's lambda calculus of 1936, Emil Post's Formulation 1 of 1936, and Alan Turing's Turing machines of 1936–37 and 1939.
Algorithms can be expressed in many kinds of notation, including natural languages, pseudocode, flowcharts, drakon-charts, programming languages or control tables (processed by interpreters). Natural language expressions of algorithms tend to be verbose and ambiguous and are rarely used for complex or technical algorithms. Pseudocode, flowcharts, drakon-charts, and control tables are structured expressions of algorithms that avoid common ambiguities of natural language. Programming languages are primarily for expressing algorithms in a computer-executable form but are also used to define or document algorithms. There are many possible representations, and Turing machine programs can be expressed as a sequence of machine tables (see finite-state machine, state-transition table, and control table for more), as flowcharts and drakon-charts (see state diagram for more), as a form of rudimentary machine code or assembly code called "sets of quadruples", and more. Algorithm representations can also be classified into three accepted levels of Turing machine description: high-level description, implementation description, and formal description.[39] A high-level description describes the qualities of the algorithm itself, ignoring how it is implemented on the Turing machine.[39] An implementation description describes the general manner in which the machine moves its head and stores data to carry out the algorithm, but does not give exact states.[39] In the most detail, a formal description gives the exact state table and list of transitions of the Turing machine.[39] The graphical aid called a flowchart offers a way to describe and document an algorithm (and a computer program corresponding to it). It has four primary symbols: arrows showing program flow, rectangles (SEQUENCE, GOTO), diamonds (IF-THEN-ELSE), and dots (OR-tie). Sub-structures can "nest" in rectangles, but only if a single exit occurs from the superstructure.
It is often important to know how much time, storage, or other cost an algorithm may require. Methods have been developed for the analysis of algorithms to obtain such quantitative answers (estimates); for example, an algorithm that adds up the elements of a list of n numbers would have a time requirement of O(n), using big O notation. The algorithm only needs to remember two values: the sum of all the elements so far, and its current position in the input list. If the space required to store the input numbers is not counted, it has a space requirement of O(1); otherwise O(n) is required. Different algorithms may complete the same task with a different set of instructions in less or more time, space, or 'effort' than others. For example, a binary search algorithm (with cost O(log n)) outperforms a sequential search (cost O(n)) when used for table lookups on sorted lists or arrays. The analysis and study of algorithms is a discipline of computer science. Algorithms are often studied abstractly, without referencing any specific programming language or implementation. Algorithm analysis resembles other mathematical disciplines as it focuses on the algorithm's properties, not implementation. Pseudocode is typical for analysis as it is a simple and general representation. Most algorithms are implemented on particular hardware/software platforms and their algorithmic efficiency is tested using real code. The efficiency of a particular algorithm may be insignificant for many "one-off" problems but it may be critical for algorithms designed for fast interactive, commercial, or long-life scientific usage. Scaling from small n to large n frequently exposes inefficient algorithms that are otherwise benign.
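The two cost claims above can be made concrete with short sketches (the function names are illustrative). The summing loop visits each element once, for O(n) time, while keeping only a running total, for O(1) extra space; binary search halves the remaining interval on every comparison, for O(log n) time on a sorted list.

```python
def sum_list(numbers):
    # O(n) time: one pass over the list.
    # O(1) extra space: only the running total is stored
    # (the current position is implicit in the loop).
    total = 0
    for x in numbers:
        total += x
    return total

def binary_search(sorted_list, target):
    # O(log n) time: each comparison halves the search interval.
    lo, hi = 0, len(sorted_list) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_list[mid] == target:
            return mid          # index of the target
        if sorted_list[mid] < target:
            lo = mid + 1        # discard the lower half
        else:
            hi = mid - 1        # discard the upper half
    return -1                   # not found
```

A sequential search of the same sorted list would examine up to n elements; for a list of a million items, binary search needs at most about twenty comparisons.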
Empirical testing is useful for uncovering unexpected interactions that affect performance. Benchmarks may be used to compare the before/after performance of potential improvements to an algorithm after program optimization. Empirical tests cannot replace formal analysis, though, and are non-trivial to perform fairly.[40] To illustrate the potential improvements possible even in well-established algorithms, a recent significant innovation relating to FFT algorithms (used heavily in the field of image processing) can decrease processing time up to 1,000 times for applications like medical imaging.[41] In general, speed improvements depend on special properties of the problem, which are very common in practical applications.[42] Speedups of this magnitude enable computing devices that make extensive use of image processing (like digital cameras and medical equipment) to consume less power. Algorithm design is a method or mathematical process for problem-solving and engineering algorithms. The design of algorithms is part of many solution theories, such as divide-and-conquer or dynamic programming within operations research. Techniques for designing and implementing algorithm designs are also called algorithm design patterns,[43] with examples including the template method pattern and the decorator pattern. One of the most important aspects of algorithm design is resource (run-time, memory usage) efficiency; big O notation is used to describe, for example, an algorithm's run-time growth as the size of its input increases.[44] Per the Church–Turing thesis, any algorithm can be computed by any Turing-complete model. Turing completeness only requires four instruction types—conditional GOTO, unconditional GOTO, assignment, HALT.
However, Kemeny and Kurtz observe that, while "undisciplined" use of unconditional GOTOs and conditional IF-THEN GOTOs can result in "spaghetti code", a programmer can write structured programs using only these instructions; on the other hand, "it is also possible, and not too hard, to write badly structured programs in a structured language".[45] Tausworthe augments the three Böhm-Jacopini canonical structures:[46] SEQUENCE, IF-THEN-ELSE, and WHILE-DO, with two more: DO-WHILE and CASE.[47] An additional benefit of a structured program is that it lends itself to proofs of correctness using mathematical induction.[48] By themselves, algorithms are not usually patentable. In the United States, a claim consisting solely of simple manipulations of abstract concepts, numbers, or signals does not constitute "processes" (USPTO 2006), so algorithms are not patentable (as in Gottschalk v. Benson). However, practical applications of algorithms are sometimes patentable. For example, in Diamond v. Diehr, the application of a simple feedback algorithm to aid in the curing of synthetic rubber was deemed patentable. The patenting of software is controversial,[49] and there are criticized patents involving algorithms, especially data compression algorithms, such as Unisys's LZW patent. Additionally, some cryptographic algorithms have export restrictions (see export of cryptography). Another way of classifying algorithms is by their design methodology or paradigm; common paradigms include brute-force search, divide-and-conquer, dynamic programming, and the greedy method. For optimization problems there is a more specific classification of algorithms; an algorithm for such problems may fall into one or more of the general categories described above as well as into a more specialized category, such as linear programming or heuristic methods. One of the simplest algorithms finds the largest number in a list of numbers of random order. Finding the solution requires looking at every number in the list.
From this follows a simple algorithm, which can be described in plain English. High-level description: assume the first number is the largest; then examine each remaining number in turn and, whenever one exceeds the largest found so far, note it as the new largest; when every number has been examined, the noted value is the answer. (Quasi-)formal description: written in prose but much closer to the high-level language of a computer program, the following is the more formal coding of the algorithm in pseudocode or pidgin code:
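The coding referred to above was lost in extraction; the following Python sketch stands in for the pseudocode (the function name is illustrative):

```python
def find_largest(numbers):
    # Assume the first number is the largest seen so far; then look at
    # each remaining number and, if it is larger, note it as the new
    # largest. When the list is exhausted, the noted value is the answer.
    if not numbers:
        return None              # an empty list has no largest number
    largest = numbers[0]
    for item in numbers[1:]:
        if item > largest:
            largest = item
    return largest
```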
https://en.wikipedia.org/wiki/Algorithms
A programming language is a system of notation for writing computer programs.[1] Programming languages are described in terms of their syntax (form) and semantics (meaning), usually defined by a formal language. Languages usually provide features such as a type system, variables, and mechanisms for error handling. An implementation of a programming language is required in order to execute programs, namely an interpreter or a compiler. An interpreter directly executes the source code, while a compiler produces an executable program. Computer architecture has strongly influenced the design of programming languages, with the most common type (imperative languages—which implement operations in a specified order) developed to perform well on the popular von Neumann architecture. While early programming languages were closely tied to the hardware, over time they have developed more abstraction to hide implementation details for greater simplicity. Thousands of programming languages—often classified as imperative, functional, logic, or object-oriented—have been developed for a wide variety of uses. Many aspects of programming language design involve tradeoffs—for example, exception handling simplifies error handling, but at a performance cost. Programming language theory is the subfield of computer science that studies the design, implementation, analysis, characterization, and classification of programming languages. Programming languages differ from natural languages in that natural languages are used for interaction between people, while programming languages are designed to allow humans to communicate instructions to machines.[citation needed] The term computer language is sometimes used interchangeably with "programming language".[2] However, usage of these terms varies among authors.
In one usage, programming languages are described as a subset of computer languages.[3] Similarly, the term "computer language" may be used in contrast to the term "programming language" to describe languages used in computing but not considered programming languages.[citation needed] Most practical programming languages are Turing complete,[4] and as such are equivalent in what programs they can compute. Another usage regards programming languages as theoretical constructs for programming abstract machines and computer languages as the subset thereof that runs on physical computers, which have finite hardware resources.[5] John C. Reynolds emphasizes that formal specification languages are just as much programming languages as are the languages intended for execution. He also argues that textual and even graphical input formats that affect the behavior of a computer are programming languages, despite the fact that they are commonly not Turing-complete, and remarks that ignorance of programming language concepts is the reason for many flaws in input formats.[6] The first programmable computers were invented at the end of the 1940s, and with them, the first programming languages.[7] The earliest computers were programmed in first-generation programming languages (1GLs), machine language (simple instructions that could be directly executed by the processor). This code was very difficult to debug and was not portable between different computer systems.[8] In order to improve the ease of programming, assembly languages (or second-generation programming languages—2GLs) were invented, diverging from the machine language to make programs easier to understand for humans, although they did not increase portability.[9] Initially, hardware resources were scarce and expensive, while human resources were cheaper.
Therefore, cumbersome languages that were time-consuming to use but were closer to the hardware for higher efficiency were favored.[10] The introduction of high-level programming languages (third-generation programming languages—3GLs) revolutionized programming. These languages abstracted away the details of the hardware, instead being designed to express algorithms that could be understood more easily by humans. For example, arithmetic expressions could now be written in symbolic notation and later translated into machine code that the hardware could execute.[9] In 1957, Fortran (FORmula TRANslation) was invented. Often considered the first compiled high-level programming language,[9][11] Fortran has remained in use into the twenty-first century.[12] Around 1960, the first mainframes—general-purpose computers—were developed, although they could only be operated by professionals and the cost was extreme. The data and instructions were input by punch cards, meaning that no input could be added while the program was running. The languages developed at this time therefore were designed for minimal interaction.[14] After the invention of the microprocessor, computers in the 1970s became dramatically cheaper.[15] New computers also allowed more user interaction, which was supported by newer programming languages.[16] Lisp, implemented in 1958, was the first functional programming language.[17] Unlike Fortran, it supported recursion and conditional expressions,[18] and it also introduced dynamic memory management on a heap and automatic garbage collection.[19] For the next decades, Lisp dominated artificial intelligence applications.[20] In 1978, another functional language, ML, introduced inferred types and polymorphic parameters.[16][21] After ALGOL (ALGOrithmic Language) was released in 1958 and 1960,[22] it became the standard in computing literature for describing algorithms.
Although its commercial success was limited, most popular imperative languages—including C, Pascal, Ada, C++, Java, and C#—are directly or indirectly descended from ALGOL 60.[23][12] Among its innovations adopted by later programming languages were greater portability and the first use of context-free, BNF grammar.[24] Simula, the first language to support object-oriented programming (including subtypes, dynamic dispatch, and inheritance), also descends from ALGOL and achieved commercial success.[25] C, another ALGOL descendant, has sustained popularity into the twenty-first century. C allows access to lower-level machine operations more than other contemporary languages. Its power and efficiency, generated in part with flexible pointer operations, come at the cost of making it more difficult to write correct code.[16] Prolog, designed in 1972, was the first logic programming language, communicating with a computer using formal logic notation.[26][27] With logic programming, the programmer specifies a desired result and allows the interpreter to decide how to achieve it.[28][27] During the 1980s, the invention of the personal computer transformed the roles for which programming languages were used.[29] New languages introduced in the 1980s included C++, a superset of C that can compile C programs but also supports classes and inheritance.[30] Ada and other new languages introduced support for concurrency.[31] The Japanese government invested heavily in the so-called fifth-generation languages, which added support for concurrency to logic programming constructs, but these languages were outperformed by other concurrency-supporting languages.[32][33] Due to the rapid growth of the Internet and the World Wide Web in the 1990s, new programming languages were introduced to support Web pages and networking.[34] Java, based on C++ and designed for increased portability across systems and security, enjoyed large-scale success because these features are essential for many Internet applications.[35][36] Another
development was that of dynamically typed scripting languages—Python, JavaScript, PHP, and Ruby—designed to quickly produce small programs that coordinate existing applications. Due to their integration with HTML, they have also been used for building web pages hosted on servers.[37][38] During the 2000s, there was a slowdown in the development of new programming languages that achieved widespread popularity.[39] One innovation was service-oriented programming, designed to exploit distributed systems whose components are connected by a network. Services are similar to objects in object-oriented programming, but run in a separate process.[40] C# and F# cross-pollinated ideas between imperative and functional programming.[41] After 2010, several new languages—Rust, Go, Swift, Zig and Carbon—competed for the performance-critical software for which C had historically been used.[42] Most of the new programming languages use static typing, while a few new languages, such as Ring and Julia, use dynamic typing.[43][44] Some of the new programming languages are classified as visual programming languages, like Scratch, LabVIEW and PWCT, and some mix textual and visual programming, like Ballerina.[45][46][47][48] This trend led to projects that help in developing new VPLs, like Blockly by Google.[49] Many game engines, like Unreal and Unity, added support for visual scripting too.[50][51] Every programming language includes fundamental elements for describing data and the operations or transformations applied to them, such as adding two numbers or selecting an item from a collection. These elements are governed by syntactic and semantic rules that define their structure and meaning, respectively. A programming language's surface form is known as its syntax. Most programming languages are purely textual; they use sequences of text including words, numbers, and punctuation, much like written natural languages.
On the other hand, some programming languages are graphical, using visual relationships between symbols to specify a program. The syntax of a language describes the possible combinations of symbols that form a syntactically correct program. The meaning given to a combination of symbols is handled by semantics (either formal or hard-coded in a reference implementation). Since most languages are textual, this article discusses textual syntax. Programming language syntax is usually defined using a combination of regular expressions (for lexical structure) and Backus–Naur form (for grammatical structure). For example, in a simple grammar based on Lisp, the following are well-formed token sequences: 12345, () and (a b c232 (1)). Not all syntactically correct programs are semantically correct. Many syntactically correct programs are nonetheless ill-formed, per the language's rules, and may (depending on the language specification and the soundness of the implementation) result in an error on translation or execution. In some cases, such programs may exhibit undefined behavior. Even when a program is well-defined within a language, it may still have a meaning that is not intended by the person who wrote it. Using natural language as an example, it may not be possible to assign a meaning to a grammatically correct sentence, or the sentence may be false. A C language fragment that declares a pointer p to a complex type can be syntactically correct yet perform operations that are not semantically defined: the operation *p >> 4 has no meaning for a value having a complex type, and p->im is not defined when the value of p is the null pointer. If the type declaration on the first line were omitted, the program would trigger an error on the undefined variable p during compilation. However, the program would still be syntactically correct, since type declarations provide only semantic information.
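A recognizer for a small Lisp-like grammar of this kind can be sketched in Python. The BNF rules in the comments are an assumption in the spirit of the example token sequences, not the exact grammar from the text; the lexical structure is handled with regular expressions and the grammatical structure with a recursive-descent parser.

```python
import re

# Assumed grammar, in the spirit of the Lisp-based example:
#   expression ::= atom | list
#   atom       ::= number | symbol
#   number     ::= [+-]?[0-9]+
#   symbol     ::= [A-Za-z][A-Za-z0-9]*
#   list       ::= '(' expression* ')'

TOKEN_PATTERN = r"\(|\)|[+-]?\d+|[A-Za-z][A-Za-z0-9]*"

def tokenize(text):
    # Lexical structure: recognized with a regular expression.
    if re.sub(TOKEN_PATTERN, "", text).strip():
        raise SyntaxError("input contains characters outside the lexicon")
    return re.findall(TOKEN_PATTERN, text)

def parse_expression(tokens, i):
    # Grammatical structure: recursive descent mirroring the BNF rules;
    # returns the index just past the parsed expression.
    if i >= len(tokens):
        raise SyntaxError("unexpected end of input")
    if tokens[i] == "(":                 # list ::= '(' expression* ')'
        i += 1
        while i < len(tokens) and tokens[i] != ")":
            i = parse_expression(tokens, i)
        if i >= len(tokens):
            raise SyntaxError("missing ')'")
        return i + 1
    if tokens[i] == ")":
        raise SyntaxError("unexpected ')'")
    return i + 1                         # atom ::= number | symbol

def is_well_formed(text):
    # True when the input is exactly one well-formed expression.
    try:
        tokens = tokenize(text)
        return parse_expression(tokens, 0) == len(tokens)
    except SyntaxError:
        return False
```

All three example sequences from the text are accepted, while unbalanced inputs such as "(a b" are rejected.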
The grammar needed to specify a programming language can be classified by its position in the Chomsky hierarchy. The syntax of most programming languages can be specified using a Type-2 grammar, i.e., they are context-free grammars.[52] Some languages, including Perl and Lisp, contain constructs that allow execution during the parsing phase. Languages that have constructs that allow the programmer to alter the behavior of the parser make syntax analysis an undecidable problem, and generally blur the distinction between parsing and execution.[53] In contrast to Lisp's macro system and Perl's BEGIN blocks, which may contain general computations, C macros are merely string replacements and do not require code execution.[54] The term semantics refers to the meaning of languages, as opposed to their form (syntax). Static semantics defines restrictions on the structure of valid texts that are hard or impossible to express in standard syntactic formalisms.[1][failed verification] For compiled languages, static semantics essentially include those semantic rules that can be checked at compile time. Examples include checking that every identifier is declared before it is used (in languages that require such declarations) or that the labels on the arms of a case statement are distinct.[55] Many important restrictions of this type, like checking that identifiers are used in the appropriate context (e.g. not adding an integer to a function name), or that subroutine calls have the appropriate number and type of arguments, can be enforced by defining them as rules in a logic called a type system. Other forms of static analysis like data flow analysis may also be part of static semantics. Programming languages such as Java and C# have definite assignment analysis, a form of data flow analysis, as part of their respective static semantics.[56] Once data has been specified, the machine must be instructed to perform operations on the data.
For example, the semantics may define the strategy by which expressions are evaluated to values, or the manner in which control structures conditionally execute statements. The dynamic semantics (also known as execution semantics) of a language defines how and when the various constructs of a language should produce a program behavior. There are many ways of defining execution semantics. Natural language is often used to specify the execution semantics of languages commonly used in practice. A significant amount of academic research goes into formal semantics of programming languages, which allows execution semantics to be specified in a formal manner. Results from this field of research have seen limited application to programming language design and implementation outside academia.[56] A data type is a set of allowable values and operations that can be performed on these values.[57] Each programming language's type system defines which data types exist, the type of an expression, and how type equivalence and type compatibility function in the language.[58] According to type theory, a language is fully typed if the specification of every operation defines types of data to which the operation is applicable.[59] In contrast, an untyped language, such as most assembly languages, allows any operation to be performed on any data, generally sequences of bits of various lengths.[59] In practice, while few languages are fully typed, most offer a degree of typing.[59] Because different types (such as integers and floats) represent values differently, unexpected results will occur if one type is used when another is expected. Type checking will flag this error, usually at compile time (runtime type checking is more costly).[60] With strong typing, type errors can always be detected unless variables are explicitly cast to a different type. Weak typing occurs when languages allow implicit casting—for example, to enable operations between variables of different types without the programmer making an explicit type
conversion. The more cases in which this type coercion is allowed, the fewer type errors can be detected.[61] Early programming languages often supported only built-in, numeric types such as the integer (signed and unsigned) and floating point (to support operations on real numbers that are not integers). Most programming languages support multiple sizes of floats (often called float and double) and integers, depending on the size and precision required by the programmer. Storing an integer in a type that is too small to represent it leads to integer overflow. The most common way of representing negative numbers with signed types is two's complement, although one's complement is also used.[62] Other common types include Boolean—which is either true or false—and character—traditionally one byte, sufficient to represent all ASCII characters.[63] Arrays are a data type whose elements, in many languages, must consist of a single type of fixed length. Other languages define arrays as references to data stored elsewhere and support elements of varying types.[64] Depending on the programming language, sequences of multiple characters, called strings, may be supported as arrays of characters or as their own primitive type.[65] Strings may be of fixed or variable length, the latter enabling greater flexibility at the cost of increased storage space and more complexity.[66] Other data types that may be supported include lists,[67] associative (unordered) arrays accessed via keys,[68] records in which data is mapped to names in an ordered structure,[69] and tuples—similar to records but without names for data fields.[70] Pointers store memory addresses, typically referencing locations on the heap where other data is stored.[71] The simplest user-defined type is an ordinal type, often called an enumeration, whose values can be mapped onto the set of positive integers.[72] Since the mid-1980s, most programming languages have also supported abstract data types, in which the representation of the data and operations are hidden from the user,
who can only access an interface.[73] The benefits of data abstraction can include increased reliability, reduced complexity, less potential for name collision, and allowing the underlying data structure to be changed without the client needing to alter its code.[74] In static typing, all expressions have their types determined before a program executes, typically at compile time.[59] Most widely used, statically typed programming languages require the types of variables to be specified explicitly. In some languages, types are implicit; one form of this is when the compiler can infer types based on context. The downside of implicit typing is the potential for errors to go undetected.[75] Complete type inference has traditionally been associated with functional languages such as Haskell and ML.[76] With dynamic typing, the type is not attached to the variable but only to the value encoded in it. A single variable can be reused for a value of a different type. Although this provides more flexibility to the programmer, it comes at the cost of lower reliability and less ability for the programming language to check for errors.[77] Some languages allow variables of a union type to which any type of value can be assigned, in an exception to their usual static typing rules.[78] In computing, multiple instructions can be executed simultaneously.
Many programming languages support instruction-level and subprogram-level concurrency.[79] By the twenty-first century, additional processing power on computers was increasingly coming from the use of additional processors, which requires programmers to design software that makes use of multiple processors simultaneously to achieve improved performance.[80] Interpreted languages such as Python and Ruby do not support the concurrent use of multiple processors.[81] Other programming languages do support managing data shared between different threads by controlling the order of execution of key instructions via the use of semaphores, controlling access to shared data via a monitor, or enabling message passing between threads.[82] Many programming languages include exception handlers, a section of code triggered by runtime errors that can deal with them in two main ways:[83] the handler may resolve the problem and allow the program to continue, or it may terminate the program gracefully. Some programming languages support dedicating a block of code to run regardless of whether an exception occurs before the code is reached; this is called finalization.[84] There is a tradeoff between increased ability to handle exceptions and reduced performance.[85] For example, even though array index errors are common,[86] C does not check them for performance reasons.[85] Although programmers can write code to catch user-defined exceptions, this can clutter a program.
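A minimal Python sketch of an exception handler together with a finalization block (the function name and the log parameter are illustrative):

```python
def safe_divide(a, b, log):
    try:
        return a / b            # may raise a runtime error
    except ZeroDivisionError:
        return None             # the handler resolves the error here,
                                # letting the caller continue
    finally:
        log.append("cleanup")   # finalization: runs whether or not an
                                # exception occurred before this point
```

Note that the finally block executes even when the try block returns, which is why finalization is the usual place for releasing resources such as open files or locks.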
Standard libraries in some languages, such as C, use their return values to indicate an exception.[87] Some languages and their compilers have the option of turning error-handling capability on and off, either temporarily or permanently.[88] One of the most important influences on programming language design has been computer architecture. Imperative languages, the most commonly used type, were designed to perform well on von Neumann architecture, the most common computer architecture.[89] In von Neumann architecture, the memory stores both data and instructions, while the CPU that performs instructions on data is separate, and data must be piped back and forth to the CPU. The central elements in these languages are variables, assignment, and iteration, which is more efficient than recursion on these machines.[90] Many programming languages have been designed from scratch, altered to meet new needs, and combined with other languages. Many have eventually fallen into disuse.[citation needed] The birth of programming languages in the 1950s was stimulated by the desire to make a universal programming language suitable for all machines and uses, avoiding the need to write code for different computers.[91] By the early 1960s, the idea of a universal language was rejected due to the differing requirements of the variety of purposes for which code was written.[92] Desirable qualities of programming languages include readability, writability, and reliability.[93] These features can reduce the cost of training programmers in a language, the amount of time needed to write and maintain programs in the language, and the cost of compiling the code, and can increase runtime performance.[94] Programming language design often involves tradeoffs.[104] For example, features to improve reliability typically come at the cost of performance.[105] Increased expressivity due to a large number of operators makes writing code easier but comes at the cost of readability.[105] Natural-language programming has been
proposed as a way to eliminate the need for a specialized language for programming. However, this goal remains distant and its benefits are open to debate. Edsger W. Dijkstra took the position that the use of a formal language is essential to prevent the introduction of meaningless constructs.[106] Alan Perlis was similarly dismissive of the idea.[107] The specification of a programming language is an artifact that the language users and the implementors can use to agree upon whether a piece of source code is a valid program in that language, and if so what its behavior shall be. A programming language specification can take several forms, including the following: An implementation of a programming language is the conversion of a program into machine code that can be executed by the hardware. The machine code then can be executed with the help of the operating system.[111] The most common form of implementation in production code is a compiler, which translates the source code, via an intermediate-level language, into machine code, known as an executable. Once the program is compiled, it will run more quickly than with other implementation methods.[112] Some compilers are able to provide further optimization to reduce memory or computation usage when the executable runs, at the cost of increased compilation time.[113] Another implementation method is to run the program with an interpreter, which translates each line of software into machine code just before it executes. Although it can make debugging easier, the downside of interpretation is that it runs 10 to 100 times slower than a compiled executable.[114] Hybrid interpretation methods provide some of the benefits of compilation and some of the benefits of interpretation via partial compilation.
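Python itself is an example of such a hybrid: source text is first compiled to an intermediate bytecode, which the interpreter then executes. A minimal sketch using the standard `compile`, `exec`, and `dis` facilities:

```python
import dis

source = "result = sum(i * i for i in range(10))"

code = compile(source, "<example>", "exec")  # translate the source to bytecode once
namespace = {}
exec(code, namespace)                        # the interpreter executes the bytecode
print(namespace["result"])                   # prints 285

dis.dis(code)  # show the intermediate instructions the interpreter actually runs
```

The `compile` step is the "partial compilation" half of the hybrid; `exec` is the interpretive half, executing the intermediate instructions rather than native machine code.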
One form this takes is just-in-time compilation, in which the software is compiled ahead of time into an intermediate language, and then into machine code immediately before execution.[115] Although most of the most commonly used programming languages have fully open specifications and implementations, many programming languages exist only as proprietary programming languages with the implementation available only from a single vendor, which may claim that such a proprietary language is their intellectual property. Proprietary programming languages are commonly domain-specific languages or internal scripting languages for a single product; some proprietary languages are used only internally within a vendor, while others are available to external users.[citation needed] Some programming languages exist on the border between proprietary and open; for example, Oracle Corporation asserts proprietary rights to some aspects of the Java programming language,[116] and Microsoft's C# programming language, which has open implementations of most parts of the system, also has the Common Language Runtime (CLR) as a closed environment.[117] Many proprietary languages are widely used, in spite of their proprietary nature; examples include MATLAB, VBScript, and Wolfram Language. Some languages may make the transition from closed to open; for example, Erlang was originally Ericsson's internal programming language.[118] Open source programming languages are particularly helpful for open science applications, enhancing the capacity for replication and code sharing.[119] Thousands of different programming languages have been created, mainly in the computing field.[120] Individual software projects commonly use five programming languages or more.[121] Programming languages differ from most other forms of human expression in that they require a greater degree of precision and completeness.
When using a natural language to communicate with other people, human authors and speakers can be ambiguous and make small errors, and still expect their intent to be understood. However, figuratively speaking, computers "do exactly what they are told to do", and cannot "understand" what code the programmer intended to write. The combination of the language definition, a program, and the program's inputs must fully specify the external behavior that occurs when the program is executed, within the domain of control of that program. On the other hand, ideas about an algorithm can be communicated to humans without the precision required for execution by using pseudocode, which interleaves natural language with code written in a programming language. A programming language provides a structured mechanism for defining pieces of data, and the operations or transformations that may be carried out automatically on that data. A programmer uses the abstractions present in the language to represent the concepts involved in a computation. These concepts are represented as a collection of the simplest elements available (called primitives).[122] Programming is the process by which programmers combine these primitives to compose new programs, or adapt existing ones to new uses or a changing environment. Programs for a computer might be executed in a batch process without any human interaction, or a user might type commands in an interactive session of an interpreter. In this case the "commands" are simply programs, whose execution is chained together. When a language can run its commands through an interpreter (such as a Unix shell or other command-line interface), without compiling, it is called a scripting language.[123] Determining which is the most widely used programming language is difficult since the definition of usage varies by context. One language may occupy the greatest number of programmer hours, another may have more lines of code, and a third may consume the most CPU time.
Some languages are very popular for particular kinds of applications. For example, COBOL is still strong in the corporate data center, often on large mainframes;[124][125] Fortran in scientific and engineering applications; Ada in aerospace, transportation, military, real-time, and embedded applications; and C in embedded applications and operating systems. Other languages are regularly used to write many different kinds of applications. Various methods of measuring language popularity, each subject to a different bias over what is measured, have been proposed: Combining and averaging information from various internet sites, stackify.com reported the ten most popular programming languages (in descending order by overall popularity): Java, C, C++, Python, C#, JavaScript, VB .NET, R, PHP, and MATLAB.[129] As of June 2024, the top five programming languages as measured by the TIOBE index are Python, C++, C, Java and C#. TIOBE provides a list of the top 100 programming languages according to popularity and updates this list every month.[130] A dialect of a programming language or a data exchange language is a (relatively small) variation or extension of the language that does not change its intrinsic nature. With languages such as Scheme and Forth, standards may be considered insufficient, inadequate, or illegitimate by implementors, so often they will deviate from the standard, making a new dialect. In other cases, a dialect is created for use in a domain-specific language, often a subset. In the Lisp world, most languages that use basic S-expression syntax and Lisp-like semantics are considered Lisp dialects, although they vary wildly, as do, say, Racket and Clojure. As it is common for one language to have several dialects, it can become quite difficult for an inexperienced programmer to find the right documentation. The BASIC language has many dialects.
Programming languages are often placed into four main categories: imperative, functional, logic, and object-oriented.[131] Although markup languages are not programming languages, some have extensions that support limited programming. Additionally, there are special-purpose languages that are not easily compared to other programming languages.[135]
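Python supports more than one of these paradigms, which makes the contrast easy to sketch: the same computation written first in an imperative style and then in a functional style.

```python
# imperative style: explicit state mutation and iteration
def squares_imperative(numbers):
    result = []
    for n in numbers:
        result.append(n * n)
    return result

# functional style: expression-oriented, no mutable state
def squares_functional(numbers):
    return list(map(lambda n: n * n, numbers))

assert squares_imperative([1, 2, 3]) == squares_functional([1, 2, 3]) == [1, 4, 9]
```

Both functions compute the same result; the difference lies in whether the program is expressed as a sequence of state changes or as the evaluation of expressions.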
https://en.wikipedia.org/wiki/Programming_languages
In engineering, debugging is the process of finding the root cause, workarounds, and possible fixes for bugs. For software, debugging tactics can involve interactive debugging, control flow analysis, log file analysis, monitoring at the application or system level, memory dumps, and profiling. Many programming languages and software development tools also offer programs to aid in debugging, known as debuggers. The term bug, in the sense of defect, dates back at least to 1878, when Thomas Edison described "little faults and difficulties" in his inventions as "Bugs". A popular story from the 1940s is from Admiral Grace Hopper.[1] While she was working on a Mark II computer at Harvard University, her associates discovered a moth stuck in a relay that impeded operation and wrote in a log book "First actual case of a bug being found". Although probably a joke, conflating the two meanings of bug (biological and defect), the story indicates that the term was used in the computer field at that time. Similarly, the term debugging was used in aeronautics before entering the world of computers. J. Robert Oppenheimer, director of the WWII atomic bomb Manhattan Project at Los Alamos, used the term in a letter to Dr. Ernest Lawrence at UC Berkeley, dated October 27, 1944,[2] regarding the recruitment of additional technical staff. The Oxford English Dictionary entry for debug uses the term debugging in reference to airplane engine testing in a 1945 article in the Journal of the Royal Aeronautical Society. An article in "Airforce" (June 1945, p. 50) refers to debugging aircraft cameras. The seminal article by Gill[3] in 1951 is the earliest in-depth discussion of programming errors, but it does not use the term bug or debugging. In the ACM's digital library, the term debugging is first used in three papers from the 1952 ACM National Meetings.[4][5][6] Two of the three use the term in quotation marks.
By 1963, debugging was a common enough term to be mentioned in passing without explanation on page 1 of the CTSS manual.[7] As software and electronic systems have become generally more complex, the various common debugging techniques have expanded with more methods to detect anomalies, assess impact, and schedule software patches or full updates to a system. The words "anomaly" and "discrepancy" can be used, as being more neutral terms, to avoid the words "error" and "defect" or "bug" where there might be an implication that all so-called errors, defects or bugs must be fixed (at all costs). Instead, an impact assessment can be made to determine if changes to remove an anomaly (or discrepancy) would be cost-effective for the system, or perhaps a scheduled new release might render the change(s) unnecessary. Not all issues are safety-critical or mission-critical in a system. Also, it is important to avoid the situation where a change might be more upsetting to users, long-term, than living with the known problem(s) (where the "cure would be worse than the disease"). Basing decisions on the acceptability of some anomalies can avoid a culture of a "zero-defects" mandate, where people might be tempted to deny the existence of problems so that the result would appear as zero defects. Considering the collateral issues, such as the cost-versus-benefit impact assessment, broader debugging techniques will expand to determine the frequency of anomalies (how often the same "bugs" occur) to help assess their impact on the overall system. Debugging ranges in complexity from fixing simple errors to performing lengthy and tiresome tasks of data collection, analysis, and scheduling updates. The debugging skill of the programmer can be a major factor in the ability to debug a problem, but the difficulty of software debugging varies greatly with the complexity of the system, and also depends, to some extent, on the programming language(s) used and the available tools, such as debuggers.
Debuggers are software tools which enable the programmer to monitor the execution of a program, stop it, restart it, set breakpoints, and change values in memory. The term debugger can also refer to the person who is doing the debugging. Generally, high-level programming languages, such as Java, make debugging easier, because they have features such as exception handling and type checking that make real sources of erratic behaviour easier to spot. In programming languages such as C or assembly, bugs may cause silent problems such as memory corruption, and it is often difficult to see where the initial problem happened. In those cases, memory debugger tools may be needed. In certain situations, general-purpose but language-specific software tools can be very useful. These take the form of static code analysis tools. These tools look for a very specific set of known problems, some common and some rare, within the source code, concentrating more on the semantics (e.g. data flow) than on the syntax, as compilers and interpreters do. Both commercial and free tools exist for various languages; some claim to be able to detect hundreds of different problems. These tools can be extremely useful when checking very large source trees, where it is impractical to do code walk-throughs. A typical example of a problem detected would be a variable dereference that occurs before the variable is assigned a value. As another example, some such tools perform strong type checking when the language does not require it. Thus, they are better at locating likely errors in code that is syntactically correct. But these tools have a reputation for false positives, where correct code is flagged as dubious. The old Unix lint program is an early example. For debugging electronic hardware (e.g., computer hardware) as well as low-level software (e.g., BIOSes, device drivers) and firmware, instruments such as oscilloscopes, logic analyzers, or in-circuit emulators (ICEs) are often used, alone or in combination.
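The use-before-assignment defect mentioned above can be made concrete with a short Python sketch; the code is syntactically valid, which is exactly why a static analyzer rather than the parser has to catch it:

```python
def classify(x):
    if x > 0:
        label = "positive"
    # a static analyzer warns here: "label" is unbound on the path where x <= 0,
    # even though the function parses and runs fine for positive inputs
    return label

print(classify(5))    # prints "positive"
# classify(-1) would raise UnboundLocalError at run time
```

Tools such as lint-style analyzers flag the missing `else` branch by following the data flow, without ever executing the code.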
An ICE may perform many of the typical software debugger's tasks on low-level software and firmware. The debugging process normally begins with identifying the steps to reproduce the problem. This can be a non-trivial task, particularly with parallel processes and Heisenbugs, for example. The specific user environment and usage history can also make it difficult to reproduce the problem. After the bug is reproduced, the input of the program may need to be simplified to make it easier to debug. For example, a bug in a compiler can make it crash when parsing a large source file. However, after simplification of the test case, only a few lines from the original source file can be sufficient to reproduce the same crash. Simplification may be done manually using a divide-and-conquer approach, in which the programmer attempts to remove some parts of the original test case and then checks if the problem still occurs. When debugging in a GUI, the programmer can try skipping some user interaction from the original problem description to check if the remaining actions are sufficient for causing the bug to occur. After the test case is sufficiently simplified, a programmer can use a debugger tool to examine program states (values of variables, plus the call stack) and track down the origin of the problem(s). Alternatively, tracing can be used. In simple cases, tracing is just a few print statements which output the values of variables at particular points during the execution of the program.[citation needed] In contrast to the general-purpose computer software design environment, a primary characteristic of embedded environments is the sheer number of different platforms available to the developers (CPU architectures, vendors, operating systems, and their variants). Embedded systems are, by definition, not general-purpose designs: they are typically developed for a single task (or small range of tasks), and the platform is chosen specifically to optimize that application.
Not only does this fact make life tough for embedded system developers, it also makes debugging and testing of these systems harder, since different debugging tools are needed for different platforms. Despite the challenge of heterogeneity mentioned above, some debuggers have been developed commercially as well as research prototypes. Examples of commercial solutions come from Green Hills Software,[19] Lauterbach GmbH[20] and Microchip's MPLAB-ICD (for in-circuit debugger). Two examples of research prototype tools are Aveksha[21] and Flocklab.[22] They all leverage a functionality available on low-cost embedded processors, an On-Chip Debug Module (OCDM), whose signals are exposed through a standard JTAG interface. They are benchmarked based on how much change to the application is needed and the rate of events that they can keep up with. In addition to the typical task of identifying bugs in the system, embedded system debugging also seeks to collect information about the operating states of the system that may then be used to analyze the system: to find ways to boost its performance or to optimize other important characteristics (e.g. energy consumption, reliability, real-time response, etc.). Anti-debugging is "the implementation of one or more techniques within computer code that hinders attempts at reverse engineering or debugging a target process".[23] It is actively used by recognized publishers in copy-protection schemas, but is also used by malware to complicate its detection and elimination.[24] Techniques used in anti-debugging include: An early example of anti-debugging existed in early versions of Microsoft Word which, if a debugger was detected, produced a message that said, "The tree of evil bears bitter fruit. Now trashing program disk.", after which it caused the floppy disk drive to emit alarming noises with the intent of scaring the user away from attempting it again.[25][26]
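A toy illustration of the debugger-detection idea, sketched in Python: Python-level debuggers such as pdb work by installing a trace function, and `sys.gettrace` exposes whether one is active. This is only a sketch of the concept, not a robust anti-debugging measure.

```python
import sys

def debugger_attached():
    # Python-level debuggers (e.g. pdb) install a trace function;
    # sys.gettrace() returns None when no tracer is active
    return sys.gettrace() is not None

if debugger_attached():
    print("tracer detected; refusing to run")  # an anti-debugging style response
else:
    print("no debugger attached")
```

Real anti-debugging code typically combines several such checks (timing measurements, OS-level queries, self-checksums), since any single check is easy to bypass.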
https://en.wikipedia.org/wiki/Debugging
This list of web testing tools gives a general overview of features of software used for web testing, and sometimes for web scraping. Web testing tools may be classified based on the different prerequisites that a user may require to test web applications: mainly scripting requirements, GUI functionality, and browser compatibility.
https://en.wikipedia.org/wiki/List_of_web_testing_tools
Web server benchmarking is the process of estimating a web server's performance in order to determine whether the server can serve a sufficiently high workload. The performance is usually measured in terms of: The measurements must be performed under a varying load of clients and requests per client. Load testing (stress/performance testing) a web server can be performed using automation/analysis tools such as: Web application benchmarks measure the performance of application servers and database servers used to host web applications. TPC-W was a common benchmark emulating an online bookstore with synthetic workload generation.
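The core arithmetic of a throughput benchmark is completed requests divided by elapsed time. The sketch below times a stand-in handler function rather than a live server (so it stays self-contained), but the requests-per-second figure is computed the same way load-testing tools report it:

```python
import time

def handle_request():
    # stand-in for issuing one HTTP request to the server under test
    sum(range(1000))

def benchmark(n_requests):
    """Drive the handler n_requests times and report throughput."""
    start = time.perf_counter()
    for _ in range(n_requests):
        handle_request()
    elapsed = time.perf_counter() - start
    return n_requests / elapsed  # requests served per second

print(f"{benchmark(10_000):.0f} requests/second")
```

A realistic harness would additionally vary the number of concurrent clients and record latency percentiles, since throughput alone hides queueing effects under load.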
https://en.wikipedia.org/wiki/Web_server_benchmarking
In computer science, an interpreter is a computer program that directly executes instructions written in a programming or scripting language, without requiring them previously to have been compiled into a machine language program. An interpreter generally uses one of the following strategies for program execution: Early versions of the Lisp programming language and minicomputer and microcomputer BASIC dialects are examples of the first type. Perl, Raku, Python, MATLAB, and Ruby are examples of the second, while UCSD Pascal is an example of the third type. Source programs are compiled ahead of time and stored as machine-independent code, which is then linked at run-time and executed by an interpreter and/or compiler (for JIT systems). Some systems, such as Smalltalk and contemporary versions of BASIC and Java, may also combine the second and third types.[2] Interpreters of various types have also been constructed for many languages traditionally associated with compilation, such as Algol, Fortran, Cobol, C and C++. While interpretation and compilation are the two main means by which programming languages are implemented, they are not mutually exclusive, as most interpreting systems also perform some translation work, just like compilers. The terms "interpreted language" or "compiled language" signify that the canonical implementation of that language is an interpreter or a compiler, respectively. A high-level language is ideally an abstraction independent of particular implementations. Interpreters were used as early as 1952 to ease programming within the limitations of computers at the time (e.g. a shortage of program storage space, or no native support for floating point numbers). Interpreters were also used to translate between low-level machine languages, allowing code to be written for machines that were still under construction and tested on computers that already existed.[3] The first interpreted high-level language was Lisp. Lisp was first implemented by Steve Russell on an IBM 704 computer.
Russell had read John McCarthy's paper, "Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I", and realized (to McCarthy's surprise) that the Lisp eval function could be implemented in machine code.[4] The result was a working Lisp interpreter which could be used to run Lisp programs, or more properly, "evaluate Lisp expressions". The development of editing interpreters was influenced by the need for interactive computing. In the 1960s, the introduction of time-sharing systems allowed multiple users to access a computer simultaneously, and editing interpreters became essential for managing and modifying code in real time. The first editing interpreters were likely developed for mainframe computers, where they were used to create and modify programs on the fly. One of the earliest examples of an editing interpreter is the EDT (Editor and Debugger for the TECO) system, which was developed in the late 1960s for the PDP-1 computer. EDT allowed users to edit and debug programs using a combination of commands and macros, paving the way for modern text editors and interactive development environments.[citation needed] An interpreter usually consists of a set of known commands it can execute, and a list of these commands in the order a programmer wishes to execute them. Each command (also known as an instruction) contains the data the programmer wants to mutate, and information on how to mutate the data. For example, an interpreter might read ADD Books, 5 and interpret it as a request to add five to the Books variable. Interpreters have a wide variety of instructions which are specialized to perform different tasks, but instructions for basic mathematical operations, branching, and memory management are common, making most interpreters Turing complete. Many interpreters are also closely integrated with a garbage collector and debugger.
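The command loop described above can be sketched as a toy interpreter in Python, with a small dispatch table of known instructions (the ADD Books, 5 syntax is the hypothetical one from the example, extended with an equally hypothetical PRINT command):

```python
def run(program):
    """Interpret a list of 'OPCODE arg, arg' command strings."""
    variables = {}

    def add(name, amount):
        variables[name] = variables.get(name, 0) + int(amount)

    def show(name):
        print(name, "=", variables[name])

    commands = {"ADD": add, "PRINT": show}   # the interpreter's known instructions

    for line in program:
        opcode, args = line.split(" ", 1)
        commands[opcode](*[a.strip() for a in args.split(",")])
    return variables

state = run(["ADD Books, 5", "ADD Books, 2", "PRINT Books"])  # prints: Books = 7
```

The dispatch table is the "set of known commands", and the loop over `program` is the ordered command list; everything else an interpreter adds (expressions, branching, memory management) elaborates this same fetch-decode-execute skeleton.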
Programs written in a high-level language are either directly executed by some kind of interpreter or converted into machine code by a compiler (and assembler and linker) for the CPU to execute. While compilers (and assemblers) generally produce machine code directly executable by computer hardware, they can often (optionally) produce an intermediate form called object code. This is basically the same machine-specific code but augmented with a symbol table with names and tags to make executable blocks (or modules) identifiable and relocatable. Compiled programs will typically use building blocks (functions) kept in a library of such object code modules. A linker is used to combine (pre-made) library files with the object file(s) of the application to form a single executable file. The object files that are used to generate an executable file are thus often produced at different times, and sometimes even by different languages (capable of generating the same object format). A simple interpreter written in a low-level language (e.g. assembly) may have similar machine code blocks implementing functions of the high-level language stored, and executed when a function's entry in a look-up table points to that code. However, an interpreter written in a high-level language typically uses another approach, such as generating and then walking a parse tree, or by generating and executing intermediate software-defined instructions, or both. Thus, both compilers and interpreters generally turn source code (text files) into tokens, both may (or may not) generate a parse tree, and both may generate immediate instructions (for a stack machine, quadruple code, or by other means). The basic difference is that a compiler system, including a (built-in or separate) linker, generates a stand-alone machine code program, while an interpreter system instead performs the actions described by the high-level program.
A compiler can thus make almost all the conversions from source code semantics to the machine level once and for all (i.e. until the program has to be changed) while an interpreter has to do some of this conversion work every time a statement or function is executed. However, in an efficient interpreter, much of the translation work (including analysis of types, and similar) is factored out and done only the first time a program, module, function, or even statement, is run, thus quite akin to how a compiler works. However, a compiled program still runs much faster, under most circumstances, in part because compilers are designed to optimize code, and may be given ample time for this. This is especially true for simpler high-level languages without (many) dynamic data structures, checks, or type checking. In traditional compilation, the executable output of the linkers (.exe files or .dll files or a library, see picture) is typically relocatable when run under a general operating system, much like the object code modules are, but with the difference that this relocation is done dynamically at run time, i.e. when the program is loaded for execution. On the other hand, compiled and linked programs for small embedded systems are typically statically allocated, often hard coded in a NOR flash memory, as there is often no secondary storage and no operating system in this sense. Historically, most interpreter systems have had a self-contained editor built in. This is becoming more common also for compilers (then often called an IDE), although some programmers prefer to use an editor of their choice and run the compiler, linker and other tools manually. Historically, compilers predate interpreters because hardware at that time could not support both the interpreter and interpreted code, and the typical batch environment of the time limited the advantages of interpretation.[5] During the software development cycle, programmers make frequent changes to source code.
When using a compiler, each time a change is made to the source code, they must wait for the compiler to translate the altered source files and link all of the binary code files together before the program can be executed. The larger the program, the longer the wait. By contrast, a programmer using an interpreter does a lot less waiting, as the interpreter usually just needs to translate the code being worked on to an intermediate representation (or not translate it at all), thus requiring much less time before the changes can be tested. Effects are evident upon saving the source code and reloading the program. Compiled code is generally less readily debugged, as editing, compiling, and linking are sequential processes that have to be conducted in the proper sequence with a proper set of commands. For this reason, many compilers also have an executive aid, known as a Makefile and Make program. The Makefile lists compiler and linker command lines and program source code files, but might take a simple command-line menu input (e.g. "Make 3") which selects the third group (set) of instructions, then issues the commands to the compiler and linker, feeding the specified source code files. A compiler converts source code into binary instructions for a specific processor's architecture, thus making it less portable. This conversion is made just once, on the developer's environment, and after that the same binary can be distributed to the user's machines, where it can be executed without further translation. A cross compiler can generate binary code for the user machine even if it has a different processor than the machine where the code is compiled. An interpreted program can be distributed as source code. It needs to be translated in each final machine, which takes more time but makes the program distribution independent of the machine's architecture. However, the portability of interpreted source code is dependent on the target machine actually having a suitable interpreter.
If the interpreter needs to be supplied along with the source, the overall installation process is more complex than delivery of a monolithic executable, since the interpreter itself is part of what needs to be installed. The fact that interpreted code can easily be read and copied by humans can be of concern from the point of view of copyright. However, various systems of encryption and obfuscation exist. Delivery of intermediate code, such as bytecode, has a similar effect to obfuscation, but bytecode could be decoded with a decompiler or disassembler.[citation needed] The main disadvantage of interpreters is that an interpreted program typically runs more slowly than if it had been compiled. The difference in speeds could be tiny or great; often an order of magnitude and sometimes more. It generally takes longer to run a program under an interpreter than to run the compiled code, but it can take less time to interpret it than the total time required to compile and run it. This is especially important when prototyping and testing code, when an edit-interpret-debug cycle can often be much shorter than an edit-compile-run-debug cycle.[6][7] Interpreting code is slower than running the compiled code because the interpreter must analyze each statement in the program each time it is executed and then perform the desired action, whereas the compiled code just performs the action within a fixed context determined by the compilation. This run-time analysis is known as "interpretive overhead". Access to variables is also slower in an interpreter because the mapping of identifiers to storage locations must be done repeatedly at run-time rather than at compile time.[6] There are various compromises between the development speed when using an interpreter and the execution speed when using a compiler. Some systems (such as some Lisps) allow interpreted and compiled code to call each other and to share variables.
This means that once a routine has been tested and debugged under the interpreter it can be compiled and thus benefit from faster execution while other routines are being developed.[citation needed] Many interpreters do not execute the source code as it stands but convert it into some more compact internal form. Many BASIC interpreters replace keywords with single-byte tokens which can be used to find the instruction in a jump table.[6] A few interpreters, such as the PBASIC interpreter, achieve even higher levels of program compaction by using a bit-oriented rather than a byte-oriented program memory structure, where command tokens occupy perhaps 5 bits, nominally "16-bit" constants are stored in a variable-length code requiring 3, 6, 10, or 18 bits, and address operands include a "bit offset". Many BASIC interpreters can store and read back their own tokenized internal representation. An interpreter might well use the same lexical analyzer and parser as the compiler and then interpret the resulting abstract syntax tree. Example data type definitions for the latter, and a toy interpreter for syntax trees obtained from C expressions, are shown in the box. Interpretation cannot be used as the sole method of execution: even though an interpreter can itself be interpreted and so on, a directly executed program is needed somewhere at the bottom of the stack because the code being interpreted is not, by definition, the same as the machine code that the CPU can execute.[8][9] There is a spectrum of possibilities between interpreting and compiling, depending on the amount of analysis performed before the program is executed. For example, Emacs Lisp is compiled to bytecode, which is a highly compressed and optimized representation of the Lisp source, but is not machine code (and therefore not tied to any particular hardware). This "compiled" code is then interpreted by a bytecode interpreter (itself written in C).
The compiled code in this case is machine code for a virtual machine, which is implemented not in hardware but in the bytecode interpreter. Such compiling interpreters are sometimes also called compreters.[10][11] In a bytecode interpreter each instruction starts with a byte, and therefore bytecode interpreters have up to 256 instructions, although not all may be used. Some bytecodes may take multiple bytes, and may be arbitrarily complicated.

Control tables, which do not necessarily ever need to pass through a compiling phase, dictate appropriate algorithmic control flow via customized interpreters in similar fashion to bytecode interpreters.

Threaded code interpreters are similar to bytecode interpreters but instead of bytes they use pointers. Each "instruction" is a word that points to a function or an instruction sequence, possibly followed by a parameter. The threaded code interpreter either loops fetching instructions and calling the functions they point to, or fetches the first instruction and jumps to it, and every instruction sequence ends with a fetch and jump to the next instruction. Unlike bytecode there is no effective limit on the number of different instructions other than available memory and address space. The classic example of threaded code is the Forth code used in Open Firmware systems: the source language is compiled into "F code" (a bytecode), which is then interpreted by a virtual machine.

In the spectrum between interpreting and compiling, another approach is to transform the source code into an optimized abstract syntax tree (AST), then execute the program following this tree structure, or use it to generate native code just-in-time.[12] In this approach, each sentence needs to be parsed just once.
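A minimal dispatch loop of the kind described above can be sketched in Python. The opcode numbering and stack discipline here are invented for illustration and do not correspond to any real virtual machine:

```python
# Minimal stack-based bytecode interpreter: one dispatch per opcode byte.
# Opcodes (illustrative): 0 = PUSH next byte, 1 = ADD, 2 = MUL, 3 = HALT.

def run(bytecode):
    stack, pc = [], 0
    while True:
        op = bytecode[pc]; pc += 1
        if op == 0:                # PUSH: operand is the following byte
            stack.append(bytecode[pc]); pc += 1
        elif op == 1:              # ADD: pop two values, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == 2:              # MUL: pop two values, push their product
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == 3:              # HALT: result is top of stack
            return stack.pop()

# (2 + 3) * 4  ->  PUSH 2, PUSH 3, ADD, PUSH 4, MUL, HALT
print(run([0, 2, 0, 3, 1, 0, 4, 2, 3]))  # 20
```

A threaded-code interpreter replaces the `if`/`elif` dispatch on opcode bytes with direct pointers to the handler routines, removing the decode step.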
As an advantage over bytecode, the AST keeps the global program structure and the relations between statements (which are lost in a bytecode representation), and when compressed provides a more compact representation.[13] Thus, using an AST has been proposed as a better intermediate format for just-in-time compilers than bytecode. It also allows the system to perform better analysis during runtime. However, for interpreters, an AST causes more overhead than bytecode, because of nodes related to syntax performing no useful work, a less sequential representation (requiring traversal of more pointers) and the overhead of visiting the tree.[14]

Further blurring the distinction between interpreters, bytecode interpreters and compilation is just-in-time (JIT) compilation, a technique in which the intermediate representation is compiled to native machine code at runtime. This confers the efficiency of running native code, at the cost of startup time and increased memory use when the bytecode or AST is first compiled. The earliest published JIT compiler is generally attributed to work on LISP by John McCarthy in 1960.[15] Adaptive optimization is a complementary technique in which the interpreter profiles the running program and compiles its most frequently executed parts into native code. The latter technique is a few decades old, appearing in languages such as Smalltalk in the 1980s.[16]

Just-in-time compilation has gained mainstream attention amongst language implementers in recent years, with Java, the .NET Framework, most modern JavaScript implementations, and MATLAB now including JIT compilers.

Blurring the distinction between compilers and interpreters still further is a special interpreter design known as a template interpreter.
Rather than implementing the execution of code by means of a large switch statement containing every possible bytecode, while operating on a software stack or a tree walk, a template interpreter maintains a large array mapping bytecode (or any efficient intermediate representation) directly to corresponding native machine instructions that can be executed on the host hardware, as key-value pairs (or, in more efficient designs, direct addresses to the native instructions),[17][18] known as a "template". When the particular code segment is executed, the interpreter simply loads or jumps to the opcode mapping in the template and directly runs it on the hardware.[19][20] Due to its design, the template interpreter strongly resembles a just-in-time compiler rather than a traditional interpreter; however, it is technically not a JIT because it merely translates code from the language into native calls one opcode at a time, rather than creating optimized sequences of CPU-executable instructions from the entire code segment. Because the interpreter simply passes calls directly to the hardware rather than implementing them itself, it is much faster than every other type, even bytecode interpreters, and to an extent less prone to bugs; as a tradeoff, it is more difficult to maintain, since the interpreter must support translation to multiple different architectures instead of a platform-independent virtual machine/stack. To date, the only template interpreter implementations of widely known languages are the interpreter within Java's official reference implementation, the Sun HotSpot Java Virtual Machine,[17] and the Ignition interpreter in the Google V8 JavaScript execution engine.

A self-interpreter is a programming language interpreter written in a programming language which can interpret itself; an example is a BASIC interpreter written in BASIC. Self-interpreters are related to self-hosting compilers.
If no compiler exists for the language to be interpreted, creating a self-interpreter requires the implementation of the language in a host language (which may be another programming language or assembler). By having a first interpreter such as this, the system is bootstrapped, and new versions of the interpreter can be developed in the language itself. It was in this way that Donald Knuth developed the TANGLE interpreter for the language WEB of the de facto standard TeX typesetting system.

Defining a computer language is usually done in relation to an abstract machine (so-called operational semantics) or as a mathematical function (denotational semantics). A language may also be defined by an interpreter in which the semantics of the host language is given. The definition of a language by a self-interpreter is not well-founded (it cannot define a language), but a self-interpreter tells a reader about the expressiveness and elegance of a language. It also enables the interpreter to interpret its own source code, the first step towards reflective interpreting.

An important design dimension in the implementation of a self-interpreter is whether a feature of the interpreted language is implemented with the same feature in the interpreter's host language. An example is whether a closure in a Lisp-like language is implemented using closures in the interpreter language or implemented "manually" with a data structure explicitly storing the environment. The more features implemented by the same feature in the host language, the less control the programmer of the interpreter has; for example, a different behavior for dealing with number overflows cannot be realized if the arithmetic operations are delegated to corresponding operations in the host language.

Some languages, such as Lisp and Prolog, have elegant self-interpreters.[21] Much research on self-interpreters (particularly reflective interpreters) has been conducted in the Scheme programming language, a dialect of Lisp.
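The design dimension just described can be made concrete with a small sketch. The interpreter below, written in Python for a hypothetical Lisp-like mini-language, takes the "manual" route: closures are explicit data structures carrying their environment, rather than being delegated to the host language's own closures:

```python
# Interpreter for a tiny Lisp-like mini-language (hypothetical, for
# illustration). Closures are represented "manually" as tuples carrying
# an explicit environment, instead of reusing Python's own closures.

def evaluate(expr, env):
    if isinstance(expr, str):                 # variable reference
        return env[expr]
    if isinstance(expr, int):                 # integer literal
        return expr
    op = expr[0]
    if op == "lambda":                        # ("lambda", param, body)
        _, param, body = expr
        return ("closure", param, body, env)  # explicit environment capture
    if op == "+":
        return evaluate(expr[1], env) + evaluate(expr[2], env)
    # otherwise an application: (func_expr, arg_expr)
    func, arg = evaluate(expr[0], env), evaluate(expr[1], env)
    _, param, body, saved_env = func
    return evaluate(body, {**saved_env, param: arg})

# ((lambda x. (lambda y. x + y)) 3) applied to 4 evaluates to 7:
add3 = (("lambda", "x", ("lambda", "y", ("+", "x", "y"))), 3)
print(evaluate((add3, 4), {}))  # 7
```

Because the environment is stored explicitly, the interpreter writer keeps full control over scoping; delegating to host closures would be shorter but would inherit the host's behavior wholesale. (A true self-interpreter would be this program written in the mini-language itself.)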
In general, however, any Turing-complete language allows the writing of its own interpreter. Lisp is such a language, because Lisp programs are lists of symbols and other lists. XSLT is such a language, because XSLT programs are written in XML. A sub-domain of metaprogramming is the writing of domain-specific languages (DSLs).

Clive Gifford introduced[22] a quality measure for a self-interpreter (the eigenratio): the limit of the ratio between the computer time spent running a stack of N self-interpreters and the time spent running a stack of N − 1 self-interpreters, as N goes to infinity. This value does not depend on the program being run.

The book Structure and Interpretation of Computer Programs presents examples of meta-circular interpretation for Scheme and its dialects. Other examples of languages with a self-interpreter are Forth and Pascal.

Microcode is a very commonly used technique "that imposes an interpreter between the hardware and the architectural level of a computer".[23] As such, the microcode is a layer of hardware-level instructions that implement higher-level machine code instructions or internal state machine sequencing in many digital processing elements. Microcode is used in general-purpose central processing units, as well as in more specialized processors such as microcontrollers, digital signal processors, channel controllers, disk controllers, network interface controllers, network processors, graphics processing units, and in other hardware.

Microcode typically resides in special high-speed memory and translates machine instructions, state machine data or other input into sequences of detailed circuit-level operations. It separates the machine instructions from the underlying electronics so that instructions can be designed and altered more freely. It also facilitates the building of complex multi-step instructions, while reducing the complexity of computer circuits.
Writing microcode is often called microprogramming, and the microcode in a particular processor implementation is sometimes called a microprogram. More extensive microcoding allows small and simple microarchitectures to emulate more powerful architectures with wider word length, more execution units and so on, which is a relatively simple way to achieve software compatibility between different products in a processor family.

Even a non-microcoding computer processor can itself be considered to be a parsing immediate-execution interpreter that is written in a general-purpose hardware description language such as VHDL to create a system that parses the machine code instructions and immediately executes them.
https://en.wikipedia.org/wiki/Interpreter_(computing)
Automated theorem proving (also known as ATP or automated deduction) is a subfield of automated reasoning and mathematical logic dealing with proving mathematical theorems by computer programs. Automated reasoning over mathematical proof was a major motivating factor for the development of computer science.

While the roots of formalized logic go back to Aristotle, the end of the 19th and early 20th centuries saw the development of modern logic and formalized mathematics. Frege's Begriffsschrift (1879) introduced both a complete propositional calculus and what is essentially modern predicate logic.[1] His Foundations of Arithmetic, published in 1884,[2] expressed (parts of) mathematics in formal logic. This approach was continued by Russell and Whitehead in their influential Principia Mathematica, first published 1910–1913,[3] and with a revised second edition in 1927.[4] Russell and Whitehead thought they could derive all mathematical truth using axioms and inference rules of formal logic, in principle opening up the process to automation.

In 1920, Thoralf Skolem simplified a previous result by Leopold Löwenheim, leading to the Löwenheim–Skolem theorem and, in 1930, to the notion of a Herbrand universe and a Herbrand interpretation that allowed the (un)satisfiability of first-order formulas (and hence the validity of a theorem) to be reduced to (potentially infinitely many) propositional satisfiability problems.[5]

In 1929, Mojżesz Presburger showed that the first-order theory of the natural numbers with addition and equality (now called Presburger arithmetic in his honor) is decidable, and gave an algorithm that could determine whether a given sentence in the language was true or false.[6][7]

However, shortly after this positive result, Kurt Gödel published On Formally Undecidable Propositions of Principia Mathematica and Related Systems (1931), showing that in any sufficiently strong axiomatic system there are true statements that cannot be proved in the system.
This topic was further developed in the 1930s by Alonzo Church and Alan Turing, who on the one hand gave two independent but equivalent definitions of computability, and on the other gave concrete examples of undecidable questions.

In 1954, Martin Davis programmed Presburger's algorithm for a JOHNNIAC vacuum-tube computer at the Institute for Advanced Study in Princeton, New Jersey. According to Davis, "Its great triumph was to prove that the sum of two even numbers is even".[7][8] More ambitious was the Logic Theorist in 1956, a deduction system for the propositional logic of the Principia Mathematica, developed by Allen Newell, Herbert A. Simon and J. C. Shaw. Also running on a JOHNNIAC, the Logic Theorist constructed proofs from a small set of propositional axioms and three deduction rules: modus ponens, (propositional) variable substitution, and the replacement of formulas by their definition. The system used heuristic guidance, and managed to prove 38 of the first 52 theorems of the Principia.[7]

The "heuristic" approach of the Logic Theorist tried to emulate human mathematicians, and could not guarantee that a proof could be found for every valid theorem even in principle. In contrast, other, more systematic algorithms achieved, at least theoretically, completeness for first-order logic. Initial approaches relied on the results of Herbrand and Skolem to convert a first-order formula into successively larger sets of propositional formulae by instantiating variables with terms from the Herbrand universe. The propositional formulas could then be checked for unsatisfiability using a number of methods. Gilmore's program used conversion to disjunctive normal form, a form in which the satisfiability of a formula is obvious.[7][9]

Depending on the underlying logic, the problem of deciding the validity of a formula varies from trivial to impossible.
For the common case of propositional logic, the problem is decidable but co-NP-complete, and hence only exponential-time algorithms are believed to exist for general proof tasks. For a first-order predicate calculus, Gödel's completeness theorem states that the theorems (provable statements) are exactly the semantically valid well-formed formulas, so the valid formulas are computably enumerable: given unbounded resources, any valid formula can eventually be proven. However, invalid formulas (those that are not entailed by a given theory) cannot always be recognized.

The above applies to first-order theories, such as Peano arithmetic. However, for a specific model that may be described by a first-order theory, some statements may be true but undecidable in the theory used to describe the model. For example, by Gödel's incompleteness theorem, we know that any consistent theory whose axioms are true for the natural numbers cannot prove all first-order statements true for the natural numbers, even if the list of axioms is allowed to be infinite but enumerable. It follows that an automated theorem prover will fail to terminate while searching for a proof precisely when the statement being investigated is undecidable in the theory being used, even if it is true in the model of interest. Despite this theoretical limit, in practice, theorem provers can solve many hard problems, even in models that are not fully described by any first-order theory (such as the integers).

A simpler, but related, problem is proof verification, where an existing proof for a theorem is certified valid. For this, it is generally required that each individual proof step can be verified by a primitive recursive function or program, and hence the problem is always decidable.
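The decidable-but-exponential character of propositional logic mentioned above can be illustrated with a brute-force truth-table prover. This Python sketch (the nested-tuple representation of formulas is an assumption made for illustration) decides validity by enumerating all 2^n truth assignments:

```python
from itertools import product

# Formulas as nested tuples: ("var", name), ("not", f), ("and", f, g),
# ("or", f, g), ("imp", f, g).

def variables(f):
    """Collect the variable names occurring in a formula."""
    if f[0] == "var":
        return {f[1]}
    return set().union(*(variables(g) for g in f[1:]))

def holds(f, env):
    """Evaluate a formula under a truth assignment env."""
    op = f[0]
    if op == "var": return env[f[1]]
    if op == "not": return not holds(f[1], env)
    if op == "and": return holds(f[1], env) and holds(f[2], env)
    if op == "or":  return holds(f[1], env) or holds(f[2], env)
    if op == "imp": return (not holds(f[1], env)) or holds(f[2], env)

def valid(f):
    """True iff f holds under every assignment: exponential in the
    number of variables, but always terminates (decidability)."""
    vs = sorted(variables(f))
    return all(holds(f, dict(zip(vs, bits)))
               for bits in product([False, True], repeat=len(vs)))

# Peirce's law ((p -> q) -> p) -> p is valid; p -> q alone is not.
p, q = ("var", "p"), ("var", "q")
print(valid(("imp", ("imp", ("imp", p, q), p), p)))  # True
print(valid(("imp", p, q)))                          # False
```

Real provers avoid this exhaustive search with refutation calculi such as resolution, but no known method escapes worst-case exponential behavior.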
Since the proofs generated by automated theorem provers are typically very large, the problem of proof compression is crucial, and various techniques aiming at making the prover's output smaller, and consequently more easily understandable and checkable, have been developed.

Proof assistants require a human user to give hints to the system. Depending on the degree of automation, the prover can essentially be reduced to a proof checker, with the user providing the proof in a formal way, or significant proof tasks can be performed automatically. Interactive provers are used for a variety of tasks, but even fully automatic systems have proved a number of interesting and hard theorems, including at least one that had eluded human mathematicians for a long time, namely the Robbins conjecture.[10][11] However, these successes are sporadic, and work on hard problems usually requires a proficient user.

Another distinction is sometimes drawn between theorem proving and other techniques, where a process is considered to be theorem proving if it consists of a traditional proof, starting with axioms and producing new inference steps using rules of inference. Other techniques would include model checking, which, in the simplest case, involves brute-force enumeration of many possible states (although the actual implementation of model checkers requires much cleverness, and does not simply reduce to brute force). There are hybrid theorem proving systems that use model checking as an inference rule. There are also programs that were written to prove a particular theorem, with a (usually informal) proof that if the program finishes with a certain result, then the theorem is true. A good example of this was the machine-aided proof of the four color theorem, which was very controversial as the first claimed mathematical proof that was essentially impossible to verify by humans due to the enormous size of the program's calculation (such proofs are called non-surveyable proofs).
Another example of a program-assisted proof is the one that shows that the game of Connect Four can always be won by the first player.

Commercial use of automated theorem proving is mostly concentrated in integrated circuit design and verification. Since the Pentium FDIV bug, the complicated floating point units of modern microprocessors have been designed with extra scrutiny. AMD, Intel and others use automated theorem proving to verify that division and other operations are correctly implemented in their processors.[12] Other uses of theorem provers include program synthesis, constructing programs that satisfy a formal specification.[13] Automated theorem provers have been integrated with proof assistants, including Isabelle/HOL.[14] Applications of theorem provers are also found in natural language processing and formal semantics, where they are used to analyze discourse representations.[15][16]

In the late 1960s, agencies funding research in automated deduction began to emphasize the need for practical applications. One of the first fruitful areas was that of program verification, whereby first-order theorem provers were applied to the problem of verifying the correctness of computer programs in languages such as Pascal, Ada, etc. Notable among early program verification systems was the Stanford Pascal Verifier developed by David Luckham at Stanford University.[17][18][19] This was based on the Stanford Resolution Prover, also developed at Stanford using John Alan Robinson's resolution principle. This was the first automated deduction system to demonstrate an ability to solve mathematical problems that were announced in the Notices of the American Mathematical Society before solutions were formally published.

First-order theorem proving is one of the most mature subfields of automated theorem proving. The logic is expressive enough to allow the specification of arbitrary problems, often in a reasonably natural and intuitive way.
On the other hand, it is still semi-decidable, and a number of sound and complete calculi have been developed, enabling fully automated systems.[20] More expressive logics, such as higher-order logics, allow the convenient expression of a wider range of problems than first-order logic, but theorem proving for these logics is less well developed.[21][22]

There is substantial overlap between first-order automated theorem provers and SMT solvers. Generally, automated theorem provers focus on supporting full first-order logic with quantifiers, whereas SMT solvers focus more on supporting various theories (interpreted predicate symbols). ATPs excel at problems with lots of quantifiers, whereas SMT solvers do well on large problems without quantifiers.[23] The line is blurry enough that some ATPs participate in SMT-COMP, while some SMT solvers participate in CASC.[24]

The quality of implemented systems has benefited from the existence of a large library of standard benchmark examples, the Thousands of Problems for Theorem Provers (TPTP) Problem Library,[25] as well as from the CADE ATP System Competition (CASC), a yearly competition of first-order systems for many important classes of first-order problems. Some important systems (all have won at least one CASC competition division) are listed below.

The Theorem Prover Museum[27] is an initiative to conserve the sources of theorem prover systems for future analysis, since they are important cultural/scientific artefacts. It has the sources of many of the systems mentioned above.
https://en.wikipedia.org/wiki/Automated_theorem_prover
A computer-assisted proof is a mathematical proof that has been at least partially generated by computer.

Most computer-aided proofs to date have been implementations of large proofs-by-exhaustion of a mathematical theorem. The idea is to use a computer program to perform lengthy computations, and to provide a proof that the result of these computations implies the given theorem. In 1976, the four color theorem was the first major theorem to be verified using a computer program.

Attempts have also been made in the area of artificial intelligence research to create smaller, explicit, new proofs of mathematical theorems from the bottom up using automated reasoning techniques such as heuristic search. Such automated theorem provers have proved a number of new results and found new proofs for known theorems. Additionally, interactive proof assistants allow mathematicians to develop human-readable proofs which are nonetheless formally verified for correctness. Since these proofs are generally human-surveyable (albeit with difficulty, as with the proof of the Robbins conjecture) they do not share the controversial implications of computer-aided proofs-by-exhaustion.

One method for using computers in mathematical proofs is by means of so-called validated numerics or rigorous numerics. This means computing numerically yet with mathematical rigour. One uses set-valued arithmetic and the inclusion principle in order to ensure that the set-valued output of a numerical program encloses the solution of the original mathematical problem. This is done by controlling, enclosing and propagating round-off and truncation errors using, for example, interval arithmetic. More precisely, one reduces the computation to a sequence of elementary operations, say (+, −, ×, /). In a computer, the result of each elementary operation is rounded off by the computer precision.
However, one can construct an interval given by upper and lower bounds on the result of an elementary operation. One then proceeds by replacing numbers with intervals and performing elementary operations between such intervals of representable numbers.

Computer-assisted proofs are the subject of some controversy in the mathematical world, with Thomas Tymoczko first to articulate objections. Those who adhere to Tymoczko's arguments believe that lengthy computer-assisted proofs are not, in some sense, 'real' mathematical proofs because they involve so many logical steps that they are not practically verifiable by human beings, and that mathematicians are effectively being asked to replace logical deduction from assumed axioms with trust in an empirical computational process, which is potentially affected by errors in the computer program, as well as defects in the runtime environment and hardware.[1]

Other mathematicians believe that lengthy computer-assisted proofs should be regarded as calculations, rather than proofs: the proof algorithm itself should be proved valid, so that its use can then be regarded as a mere "verification". Arguments that computer-assisted proofs are subject to errors in their source programs, compilers, and hardware can be resolved by providing a formal proof of correctness for the computer program (an approach which was successfully applied to the four color theorem in 2005), as well as by replicating the result using different programming languages, different compilers, and different computer hardware.

Another possible way of verifying computer-aided proofs is to generate their reasoning steps in a machine-readable form, and then use a proof checker program to demonstrate their correctness. Since validating a given proof is much easier than finding a proof, the checker program is simpler than the original assistant program, and it is correspondingly easier to gain confidence in its correctness.
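The interval-arithmetic idea described earlier, replacing numbers with enclosing intervals and propagating them through elementary operations, can be sketched as follows. This is an illustrative Python sketch only; real validated numerics must also use directed (outward) rounding at every step, which is omitted here:

```python
# Sketch of interval arithmetic for rigorous enclosures (illustrative;
# a real implementation would round lower bounds down and upper bounds up).
import math

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # Subtraction swaps the other interval's bounds.
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # The product range is bounded by the four endpoint products.
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))

    def contains(self, x):
        return self.lo <= x <= self.hi

# Enclose pi by a crude interval and propagate it through x*x - x;
# the true value of pi^2 - pi is guaranteed to lie in the result.
x = Interval(3.14159, 3.14160)
y = x * x - x
print(y.contains(math.pi * math.pi - math.pi))  # True
```

The inclusion principle is visible in the last lines: whatever rounding the sample computation suffers, the exact result stays inside the computed interval.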
However, this approach of using a computer program to prove the output of another program correct does not appeal to computer proof skeptics, who see it as adding another layer of complexity without addressing the perceived need for human understanding.

Another argument against computer-aided proofs is that they lack mathematical elegance: they provide no insights or new and useful concepts. In fact, this is an argument that could be advanced against any lengthy proof by exhaustion.

An additional philosophical issue raised by computer-aided proofs is whether they make mathematics into a quasi-empirical science, where the scientific method becomes more important than the application of pure reason in the area of abstract mathematical concepts. This directly relates to the argument within mathematics as to whether mathematics is based on ideas, or "merely" an exercise in formal symbol manipulation. It also raises the question of whether, if according to the Platonist view all possible mathematical objects in some sense "already exist", computer-aided mathematics is an observational science like astronomy, rather than an experimental one like physics or chemistry. This controversy within mathematics is occurring at the same time as questions are being asked in the physics community about whether twenty-first century theoretical physics is becoming too mathematical, and leaving behind its experimental roots. The emerging field of experimental mathematics is confronting this debate head-on by focusing on numerical experiments as its main tool for mathematical exploration.

Inclusion in this list does not imply that a formal computer-checked proof exists, but rather that a computer program has been involved in some way. See the main articles for details.
https://en.wikipedia.org/wiki/Computer-assisted_proof
Algebraic geometry is a branch of mathematics which uses abstract algebraic techniques, mainly from commutative algebra, to solve geometrical problems. Classically, it studies zeros of multivariate polynomials; the modern approach generalizes this in a few different aspects.

The fundamental objects of study in algebraic geometry are algebraic varieties, which are geometric manifestations of solutions of systems of polynomial equations. Examples of the most studied classes of algebraic varieties are lines, circles, parabolas, ellipses, hyperbolas, cubic curves like elliptic curves, and quartic curves like lemniscates and Cassini ovals. These are plane algebraic curves. A point of the plane lies on an algebraic curve if its coordinates satisfy a given polynomial equation. Basic questions involve the study of points of special interest like singular points, inflection points and points at infinity. More advanced questions involve the topology of the curve and the relationship between curves defined by different equations.

Algebraic geometry occupies a central place in modern mathematics and has multiple conceptual connections with such diverse fields as complex analysis, topology and number theory. As a study of systems of polynomial equations in several variables, the subject of algebraic geometry begins with finding specific solutions via equation solving, and then proceeds to understand the intrinsic properties of the totality of solutions of a system of equations. This understanding requires both conceptual theory and computational technique.

In the 20th century, algebraic geometry split into several subareas. Much of the development of the mainstream of algebraic geometry in the 20th century occurred within an abstract algebraic framework, with increasing emphasis being placed on "intrinsic" properties of algebraic varieties not dependent on any particular way of embedding the variety in an ambient coordinate space; this parallels developments in topology, differential and complex geometry.
One key achievement of this abstract algebraic geometry is Grothendieck's scheme theory, which allows one to use sheaf theory to study algebraic varieties in a way which is very similar to its use in the study of differential and analytic manifolds. This is obtained by extending the notion of point: in classical algebraic geometry, a point of an affine variety may be identified, through Hilbert's Nullstellensatz, with a maximal ideal of the coordinate ring, while the points of the corresponding affine scheme are all the prime ideals of this ring. This means that a point of such a scheme may be either a usual point or a subvariety. This approach also enables a unification of the language and the tools of classical algebraic geometry, mainly concerned with complex points, and of algebraic number theory. Wiles' proof of the longstanding conjecture called Fermat's Last Theorem is an example of the power of this approach.

In classical algebraic geometry, the main objects of interest are the vanishing sets of collections of polynomials, meaning the set of all points that simultaneously satisfy one or more polynomial equations. For instance, the two-dimensional sphere of radius 1 in three-dimensional Euclidean space R3 could be defined as the set of all points (x, y, z) with

x^2 + y^2 + z^2 − 1 = 0.

A "slanted" circle in R3 can be defined as the set of all points (x, y, z) which satisfy the two polynomial equations

x^2 + y^2 + z^2 − 1 = 0,
x + y + z = 0.

First we start with a field k. In classical algebraic geometry, this field was always the complex numbers C, but many of the same results are true if we assume only that k is algebraically closed. We consider the affine space of dimension n over k, denoted An(k) (or more simply An, when k is clear from the context). When one fixes a coordinate system, one may identify An(k) with kn. The purpose of not working with kn is to emphasize that one "forgets" the vector space structure that kn carries.
A function f : An → A1 is said to be polynomial (or regular) if it can be written as a polynomial, that is, if there is a polynomial p in k[x1, ..., xn] such that f(M) = p(t1, ..., tn) for every point M with coordinates (t1, ..., tn) in An. The property of a function to be polynomial (or regular) does not depend on the choice of a coordinate system in An.

When a coordinate system is chosen, the regular functions on the affine n-space may be identified with the ring of polynomial functions in n variables over k. Therefore, the set of the regular functions on An is a ring, which is denoted k[An].

We say that a polynomial vanishes at a point if evaluating it at that point gives zero. Let S be a set of polynomials in k[An]. The vanishing set of S (or vanishing locus or zero set) is the set V(S) of all points in An where every polynomial in S vanishes. Symbolically,

V(S) = {(t1, ..., tn) | p(t1, ..., tn) = 0 for all p in S}.

A subset of An which is V(S), for some S, is called an algebraic set. The V stands for variety (a specific type of algebraic set to be defined below).

Given a subset U of An, can one recover the set of polynomials which generate it? If U is any subset of An, define I(U) to be the set of all polynomials whose vanishing set contains U. The I stands for ideal: if two polynomials f and g both vanish on U, then f + g vanishes on U, and if h is any polynomial, then hf vanishes on U, so I(U) is always an ideal of the polynomial ring k[An].

Two natural questions to ask are: for which sets U does U = V(I(U)) hold, and for which sets S does S = I(V(S)) hold?

The answer to the first question is provided by introducing the Zariski topology, a topology on An whose closed sets are the algebraic sets, and which directly reflects the algebraic structure of k[An]. Then U = V(I(U)) if and only if U is an algebraic set or equivalently a Zariski-closed set. The answer to the second question is given by Hilbert's Nullstellensatz. In one of its forms, it says that I(V(S)) is the radical of the ideal generated by S.
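The definitions of V(S) and of the ideal property of I(U) can be made concrete over a finite grid of sample points. In this Python sketch, polynomials are represented as plain Python functions of a point, an illustrative simplification rather than a faithful model of k[An]:

```python
# Vanishing sets over a small integer grid, with polynomials represented
# as plain Python functions of a point (an illustrative simplification).

def vanishing_set(polys, points):
    """V(S): the points at which every polynomial in S vanishes."""
    return [pt for pt in points if all(p(pt) == 0 for p in polys)]

# S = {x^2 + y^2 - 25}: a circle of radius 5 in the affine plane.
circle = lambda pt: pt[0] ** 2 + pt[1] ** 2 - 25

grid = [(x, y) for x in range(-5, 6) for y in range(-5, 6)]
print(vanishing_set([circle], grid))
# integer points on the circle, such as (3, 4), (5, 0), (-4, -3)

# Ideal property of I(U): if f and g vanish on U, so do f + g and h*f.
f = lambda pt: pt[0] - pt[1]          # vanishes on the diagonal U
g = lambda pt: 2 * (pt[0] - pt[1])    # also vanishes on U
h = lambda pt: pt[0] ** 3 + 7         # an arbitrary polynomial
U = [(t, t) for t in range(-3, 4)]
print(all(f(pt) + g(pt) == 0 and h(pt) * f(pt) == 0 for pt in U))  # True
```

Of course, real affine space is not a finite grid; exact computation with V and I is the province of computer algebra systems using Gröbner bases, but the closure properties checked here are exactly the ones that make I(U) an ideal.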
In more abstract language, there is a Galois connection, giving rise to two closure operators; they can be identified, and naturally play a basic role in the theory; the example is elaborated at Galois connection. For various reasons we may not always want to work with the entire ideal corresponding to an algebraic set U. Hilbert's basis theorem implies that ideals in k[A^n] are always finitely generated. An algebraic set is called irreducible if it cannot be written as the union of two smaller algebraic sets. Any algebraic set is a finite union of irreducible algebraic sets, and this decomposition is unique. The algebraic sets appearing in this decomposition are thus called the irreducible components of the algebraic set. An irreducible algebraic set is also called a variety. It turns out that an algebraic set is a variety if and only if it may be defined as the vanishing set of a prime ideal of the polynomial ring. Some authors do not make a clear distinction between algebraic sets and varieties, and use irreducible variety to make the distinction when needed. Just as continuous functions are the natural maps on topological spaces and smooth functions are the natural maps on differentiable manifolds, there is a natural class of functions on an algebraic set, called regular functions or polynomial functions. A regular function on an algebraic set V contained in A^n is the restriction to V of a regular function on A^n. For an algebraic set defined on the field of the complex numbers, the regular functions are smooth and even analytic. It may seem unnaturally restrictive to require that a regular function always extend to the ambient space, but it is very similar to the situation in a normal topological space, where the Tietze extension theorem guarantees that a continuous function on a closed subset always extends to the ambient topological space. Just as with the regular functions on affine space, the regular functions on V form a ring, which we denote by k[V]. This ring is called the coordinate ring of V.
Since regular functions on V come from regular functions on A^n, there is a relationship between the coordinate rings. Specifically, if a regular function on V is the restriction of two functions f and g in k[A^n], then f − g is a polynomial function which is null on V and thus belongs to I(V). Thus k[V] may be identified with k[A^n]/I(V). Using regular functions from an affine variety to A^1, we can define regular maps from one affine variety to another. First we will define a regular map from a variety into affine space: let V be a variety contained in A^n. Choose m regular functions on V, and call them f1, ..., fm. We define a regular map f from V to A^m by letting f = (f1, ..., fm). In other words, each fi determines one coordinate of the range of f. If V′ is a variety contained in A^m, we say that f is a regular map from V to V′ if the range of f is contained in V′. The definition of the regular maps applies also to algebraic sets. The regular maps are also called morphisms, as they make the collection of all affine algebraic sets into a category, where the objects are the affine algebraic sets and the morphisms are the regular maps. The affine varieties form a subcategory of the category of the algebraic sets. Given a regular map g from V to V′ and a regular function f of k[V′], then f ∘ g ∈ k[V]. The map f ↦ f ∘ g is a ring homomorphism from k[V′] to k[V]. Conversely, every ring homomorphism from k[V′] to k[V] defines a regular map from V to V′. This defines an equivalence of categories between the category of algebraic sets and the opposite category of the finitely generated reduced k-algebras. This equivalence is one of the starting points of scheme theory. In contrast to the preceding sections, this section concerns only varieties and not algebraic sets. On the other hand, the definitions extend naturally to projective varieties (next section), as an affine variety and its projective completion have the same field of functions.
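The identification of k[V] with k[A^n]/I(V) says that two polynomials define the same regular function on V exactly when their difference lies in I(V). A toy check, again over GF(7) as an enumerable stand-in for an algebraically closed field (an illustration constructed for this sketch, not from the text):

```python
# Sketch: two polynomials that differ by an element of I(V) define the same
# regular function on V.  Here V is the "circle" x^2 + y^2 - 1 = 0 over GF(7),
# f = x^2 and g = 1 - y^2; their difference f - g = x^2 + y^2 - 1 lies in I(V),
# so f and g are the same element of the coordinate ring k[V].

p = 7
V = [(x, y) for x in range(p) for y in range(p)
     if (x * x + y * y - 1) % p == 0]


def f(x, y):
    """One representative in k[A^2]: x^2."""
    return (x * x) % p


def g(x, y):
    """Another representative: 1 - y^2; note f - g vanishes on V."""
    return (1 - y * y) % p


agree = all(f(x, y) == g(x, y) for (x, y) in V)
print(agree)  # True: f and g restrict to the same regular function on V
```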
If V is an affine variety, its coordinate ring is an integral domain and thus has a field of fractions, which is denoted k(V) and called the field of the rational functions on V or, for short, the function field of V. Its elements are the restrictions to V of the rational functions over the affine space containing V. The domain of a rational function f is not V but the complement of the subvariety (a hypersurface) where the denominator of f vanishes. As with regular maps, one may define a rational map from a variety V to a variety V′. As with the regular maps, the rational maps from V to V′ may be identified with the field homomorphisms from k(V′) to k(V). Two affine varieties are birationally equivalent if there are two rational functions between them which are inverse to each other in the regions where both are defined. Equivalently, they are birationally equivalent if their function fields are isomorphic. An affine variety is a rational variety if it is birationally equivalent to an affine space. This means that the variety admits a rational parameterization, that is, a parametrization with rational functions. For example, the circle of equation x^2 + y^2 − 1 = 0 is a rational curve, as it has a parametric equation by rational functions, which may also be viewed as a rational map from the line to the circle. The problem of resolution of singularities is to know if every algebraic variety is birationally equivalent to a variety whose projective completion is nonsingular (see also smooth completion). It was solved in the affirmative in characteristic 0 by Heisuke Hironaka in 1964 and remains unsolved in positive characteristic. Just as the formulas for the roots of second, third, and fourth degree polynomials suggest extending real numbers to the more algebraically complete setting of the complex numbers, many properties of algebraic varieties suggest extending affine space to a more geometrically complete projective space.
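The rational parametrization of the circle can be checked with exact arithmetic. The parametrization used below is the standard textbook one, x = (1 − t^2)/(1 + t^2), y = 2t/(1 + t^2) (supplied here as a well-known example, not quoted from the text); it is a rational map from the line to the circle.

```python
from fractions import Fraction

# The unit circle x^2 + y^2 = 1 is a rational curve: the standard
# parametrization x = (1 - t^2)/(1 + t^2), y = 2t/(1 + t^2) sends each
# rational parameter t to a point of the circle.  Exact rational arithmetic
# confirms the identity x^2 + y^2 = 1 for sample parameter values.


def point_on_circle(t):
    """Map the parameter t (a rational number) to a point of the circle."""
    t = Fraction(t)
    x = (1 - t**2) / (1 + t**2)
    y = 2 * t / (1 + t**2)
    return x, y


for t in (0, 1, -2, Fraction(3, 5)):
    x, y = point_on_circle(t)
    assert x**2 + y**2 == 1
print("parametrization lands on the circle for every sampled t")
```

Because `Fraction` arithmetic is exact, the assertion verifies the algebraic identity itself at each sample, not a floating-point approximation of it.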
Whereas the complex numbers are obtained by adding the number i, a root of the polynomial x^2 + 1, projective space is obtained by adding in appropriate points "at infinity", points where parallel lines may meet. To see how this might come about, consider the variety V(y − x^2). If we draw it, we get a parabola. As x goes to positive infinity, the slope of the line from the origin to the point (x, x^2) also goes to positive infinity. As x goes to negative infinity, the slope of the same line goes to negative infinity. Compare this to the variety V(y − x^3). This is a cubic curve. As x goes to positive infinity, the slope of the line from the origin to the point (x, x^3) goes to positive infinity just as before. But unlike before, as x goes to negative infinity, the slope of the same line goes to positive infinity as well; the exact opposite of the parabola. So the behavior "at infinity" of V(y − x^3) is different from the behavior "at infinity" of V(y − x^2). The consideration of the projective completion of the two curves, which is their prolongation "at infinity" in the projective plane, allows us to quantify this difference: the point at infinity of the parabola is a regular point, whose tangent is the line at infinity, while the point at infinity of the cubic curve is a cusp. Also, both curves are rational, as they are parameterized by x, and the Riemann–Roch theorem implies that the cubic curve must have a singularity, which must be at infinity, as all its points in the affine space are regular. Thus many of the properties of algebraic varieties, including birational equivalence and all the topological properties, depend on the behavior "at infinity", and so it is natural to study the varieties in projective space. Furthermore, the introduction of projective techniques made many theorems in algebraic geometry simpler and sharper: for example, Bézout's theorem on the number of intersection points between two varieties can be stated in its sharpest form only in projective space.
For these reasons, projective space plays a fundamental role in algebraic geometry. Nowadays, the projective space P^n of dimension n is usually defined as the set of the lines passing through a point, considered as the origin, in the affine space of dimension n + 1, or equivalently as the set of the vector lines in a vector space of dimension n + 1. When a coordinate system has been chosen in the space of dimension n + 1, all the points of a line have the same set of coordinates, up to multiplication by an element of k. This defines the homogeneous coordinates of a point of P^n as a sequence of n + 1 elements of the base field k, defined up to multiplication by a nonzero element of k (the same for the whole sequence). A polynomial in n + 1 variables vanishes at all points of a line passing through the origin if and only if it is homogeneous. In this case, one says that the polynomial vanishes at the corresponding point of P^n. This allows us to define a projective algebraic set in P^n as the set V(f1, ..., fk), where a finite set of homogeneous polynomials {f1, ..., fk} vanishes. As for affine algebraic sets, there is a bijection between the projective algebraic sets and the reduced homogeneous ideals which define them. The projective varieties are the projective algebraic sets whose defining ideal is prime. In other words, a projective variety is a projective algebraic set whose homogeneous coordinate ring is an integral domain, the projective coordinate ring being defined as the quotient of the graded ring of the polynomials in n + 1 variables by the homogeneous (reduced) ideal defining the variety. Every projective algebraic set may be uniquely decomposed into a finite union of projective varieties. The only regular functions which may be defined properly on a projective variety are the constant functions. Thus this notion is not used in projective situations.
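The homogeneity criterion above amounts to the scaling identity p(λx0, ..., λxn) = λ^d · p(x0, ..., xn) for a polynomial of degree d: if a homogeneous polynomial vanishes at one point of a line through the origin, it vanishes on the whole line, so "vanishing at a point of P^n" is well defined. A quick numeric sanity check, using sample polynomials chosen for this sketch:

```python
# Homogeneous polynomials satisfy p(l*x, l*y, l*z) = l**d * p(x, y, z).
# We test that scaling identity for a homogeneous example (degree 2) and a
# non-homogeneous one; only the former passes for every point and scalar.


def p_hom(x, y, z):
    """Homogeneous of degree 2: x*y + z^2."""
    return x * y + z * z


def p_inhom(x, y, z):
    """Not homogeneous: x*y (degree 2) plus z (degree 1)."""
    return x * y + z


def scales_homogeneously(p, d, pts, lams):
    """Check p(l*v) == l**d * p(v) for all sample points and scalars."""
    return all(p(l * x, l * y, l * z) == l**d * p(x, y, z)
               for (x, y, z) in pts for l in lams)


pts, lams = [(1, 2, 3), (4, -1, 2)], [2, 3, -5]
print(scales_homogeneously(p_hom, 2, pts, lams))    # True
print(scales_homogeneously(p_inhom, 2, pts, lams))  # False
```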
On the other hand, the field of the rational functions or function field is a useful notion, which, similarly to the affine case, is defined as the set of the quotients of two homogeneous elements of the same degree in the homogeneous coordinate ring. Real algebraic geometry is the study of real algebraic varieties. The fact that the field of the real numbers is an ordered field cannot be ignored in such a study. For example, the curve of equation x^2 + y^2 − a = 0 is a circle if a > 0, but has no real points if a < 0. Real algebraic geometry also investigates, more broadly, semi-algebraic sets, which are the solutions of systems of polynomial inequalities. For example, neither branch of the hyperbola of equation xy − 1 = 0 is a real algebraic variety. However, the branch in the first quadrant is a semi-algebraic set defined by xy − 1 = 0 and x > 0. One open problem in real algebraic geometry is the following part of Hilbert's sixteenth problem: decide which respective positions are possible for the ovals of a nonsingular plane curve of degree 8. One may date the origin of computational algebraic geometry to the meeting EUROSAM'79 (International Symposium on Symbolic and Algebraic Manipulation), held at Marseille, France, in June 1979, at which several foundational algorithms were presented. Since then, most results in this area have been related to one or several of these algorithms, either by using or improving one of them, or by finding algorithms whose complexity is simply exponential in the number of the variables. A body of mathematical theory complementary to symbolic methods, called numerical algebraic geometry, has been developed over the last several decades. The main computational method is homotopy continuation. This supports, for example, a model of floating point computation for solving problems of algebraic geometry.
A Gröbner basis is a system of generators of a polynomial ideal whose computation allows the deduction of many properties of the affine algebraic variety defined by the ideal, given an ideal I defining an algebraic set V. Gröbner basis computations do not allow one to compute directly the primary decomposition of I nor the prime ideals defining the irreducible components of V, but most algorithms for this involve Gröbner basis computation. The algorithms which are not based on Gröbner bases use regular chains, but may need Gröbner bases in some exceptional situations. Gröbner bases are deemed to be difficult to compute. In fact they may contain, in the worst case, polynomials whose degree is doubly exponential in the number of variables, and a number of polynomials which is also doubly exponential. However, this is only a worst-case complexity, and the complexity bound of Lazard's algorithm of 1979 may frequently apply. Faugère's F5 algorithm realizes this complexity, as it may be viewed as an improvement of Lazard's 1979 algorithm. It follows that the best implementations allow one to compute almost routinely with algebraic sets of degree more than 100. This means that, presently, the difficulty of computing a Gröbner basis is strongly related to the intrinsic difficulty of the problem. Cylindrical algebraic decomposition (CAD) is an algorithm which was introduced in 1973 by G. Collins to implement, with an acceptable complexity, the Tarski–Seidenberg theorem on quantifier elimination over the real numbers. This theorem concerns the formulas of first-order logic whose atomic formulas are polynomial equalities or inequalities between polynomials with real coefficients. These formulas are thus the formulas which may be constructed from the atomic formulas by the logical operators and (∧), or (∨), not (¬), for all (∀) and exists (∃). Tarski's theorem asserts that, from such a formula, one may compute an equivalent formula without quantifiers (∀, ∃). The complexity of CAD is doubly exponential in the number of variables.
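One of the properties a Gröbner basis makes effective is ideal membership: a polynomial lies in the ideal exactly when it reduces to zero modulo the basis. A minimal sketch, assuming the open-source SymPy library is available (the function names and the example ideal are SymPy's and this author's choices, not taken from the text):

```python
# Compute a Groebner basis of the ideal generated by the circle x^2 + y^2 - 1
# and the line x - y, then use it to decide ideal membership.

from sympy import symbols, groebner

x, y = symbols('x y')
G = groebner([x**2 + y**2 - 1, x - y], x, y, order='lex')
print(G.exprs)  # the basis polynomials, in lexicographic order

# Membership test: a polynomial is in the ideal iff it reduces to 0 mod G.
print(G.contains(x**2 + y**2 - 1))  # True: a generator belongs to the ideal
print(G.contains(x + y))            # False: x + y is not in the ideal
```

For this small input the basis is computed instantly; the doubly exponential worst case discussed above shows up only for adversarial inputs, which is why implementations such as F4/F5 are usable in practice.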
This means that CAD allows one, in theory, to solve every problem of real algebraic geometry which may be expressed by such a formula, that is, almost every problem concerning explicitly given varieties and semi-algebraic sets. While Gröbner basis computation has doubly exponential complexity only in rare cases, CAD has almost always this high complexity. This implies that, unless most polynomials appearing in the input are linear, it may not solve problems with more than four variables. Since 1973, most of the research on this subject has been devoted either to improving CAD or to finding alternative algorithms in special cases of general interest. As an example of the state of the art, there are efficient algorithms to find at least a point in every connected component of a semi-algebraic set, and thus to test if a semi-algebraic set is empty. On the other hand, CAD remains, in practice, the best algorithm to count the number of connected components. The basic general algorithms of computational algebraic geometry have a double exponential worst-case complexity. More precisely, if d is the maximal degree of the input polynomials and n the number of variables, their complexity is at most d^(2^(cn)) for some constant c, and, for some inputs, the complexity is at least d^(2^(c′n)) for another constant c′. During the last 20 years of the 20th century, various algorithms were introduced to solve specific subproblems with a better complexity.
Most of these algorithms have a complexity d^(O(n^2)).[1] Among these algorithms, which solve a subproblem of the problems solved by Gröbner bases, one may cite testing if an affine variety is empty and solving nonhomogeneous polynomial systems which have a finite number of solutions. Such algorithms are rarely implemented because, on most inputs, Faugère's F4 and F5 algorithms have a better practical efficiency and probably a similar or better complexity ("probably", because the evaluation of the complexity of Gröbner basis algorithms on a particular class of inputs is a difficult task which has been done only in a few special cases). The main algorithms of real algebraic geometry which solve a problem solved by CAD are related to the topology of semi-algebraic sets. One may cite counting the number of connected components, testing if two points are in the same component, and computing a Whitney stratification of a real algebraic set. They have a complexity of d^(O(n^2)), but the constant involved in the O notation is so high that using them to solve any nontrivial problem effectively solved by CAD is impossible, even if one could use all the existing computing power in the world. Therefore, these algorithms have never been implemented, and searching for algorithms which have both a good asymptotic complexity and a good practical efficiency is an active research area. The modern approaches to algebraic geometry redefine and effectively extend the range of basic objects in various levels of generality to schemes, formal schemes, ind-schemes, algebraic spaces, algebraic stacks and so on. The need for this arises already from the useful ideas within the theory of varieties, e.g.
the formal functions of Zariski can be accommodated by introducing nilpotent elements in structure rings; considering spaces of loops and arcs, constructing quotients by group actions, and developing formal grounds for natural intersection theory and deformation theory led to some of the further extensions. Most remarkably, in the early 1960s, algebraic varieties were subsumed into Alexander Grothendieck's concept of a scheme. Their local objects are affine schemes or prime spectra, which are locally ringed spaces forming a category which is antiequivalent to the category of commutative unital rings, extending the duality between the category of affine algebraic varieties over a field k and the category of finitely generated reduced k-algebras. The gluing is along the Zariski topology; one can glue within the category of locally ringed spaces, but also, using the Yoneda embedding, within the more abstract category of presheaves of sets over the category of affine schemes. The Zariski topology in the set-theoretic sense is then replaced by a Grothendieck topology. Grothendieck introduced Grothendieck topologies having in mind more exotic but geometrically finer and more sensitive examples than the crude Zariski topology, namely the étale topology and the two flat Grothendieck topologies, fppf and fpqc; nowadays some other examples have become prominent, including the Nisnevich topology. Sheaves can be furthermore generalized to stacks in the sense of Grothendieck, usually with some additional representability conditions, leading to Artin stacks and, even finer, Deligne–Mumford stacks, both often called algebraic stacks. Sometimes other algebraic sites replace the category of affine schemes. For example, Nikolai Durov has introduced commutative algebraic monads as a generalization of local objects in a generalized algebraic geometry. Versions of a tropical geometry, of an absolute geometry over a field of one element, and an algebraic analogue of Arakelov's geometry were realized in this setup.
Another formal generalization is possible to universal algebraic geometry, in which every variety of algebras has its own algebraic geometry. The term variety of algebras should not be confused with algebraic variety. The language of schemes, stacks and generalizations has proved to be a valuable way of dealing with geometric concepts and became a cornerstone of modern algebraic geometry. Algebraic stacks can be further generalized, and for many practical questions like deformation theory and intersection theory, this is often the most natural approach. One can extend the Grothendieck site of affine schemes to a higher categorical site of derived affine schemes by replacing the commutative rings with an infinity category of differential graded commutative algebras, or of simplicial commutative rings, or a similar category with an appropriate variant of a Grothendieck topology. One can also replace presheaves of sets by presheaves of simplicial sets (or of infinity groupoids). Then, in the presence of an appropriate homotopic machinery, one can develop a notion of derived stack as such a presheaf on the infinity category of derived affine schemes which satisfies a certain infinity-categorical version of a sheaf axiom (and, to be algebraic, inductively a sequence of representability conditions). Quillen model categories, Segal categories and quasicategories are some of the most often used tools to formalize this, yielding derived algebraic geometry, introduced by the school of Carlos Simpson, including André Hirschowitz, Bertrand Toën, Gabriele Vezzosi, Michel Vaquié and others, and developed further by Jacob Lurie, Bertrand Toën, and Gabriele Vezzosi. Another (noncommutative) version of derived algebraic geometry, using A-infinity categories, has been developed from the early 1990s by Maxim Kontsevich and followers. Some of the roots of algebraic geometry date back to the work of the Hellenistic Greeks from the 5th century BC.
The Delian problem, for instance, was to construct a length x so that the cube of side x contained the same volume as the rectangular box a^2·b for given sides a and b. Menaechmus (c. 350 BC) considered the problem geometrically by intersecting the pair of plane conics ay = x^2 and xy = ab.[2] In the 3rd century BC, Archimedes and Apollonius systematically studied additional problems on conic sections using coordinates.[2][3] Apollonius, in the Conics, further developed a method so similar to analytic geometry that his work is sometimes thought to have anticipated the work of Descartes by some 1800 years.[4] His application of reference lines, a diameter and a tangent is essentially no different from our modern use of a coordinate frame, where the distances measured along the diameter from the point of tangency are the abscissas, and the segments parallel to the tangent and intercepted between the axis and the curve are the ordinates. He further developed relations between the abscissas and the corresponding coordinates using geometric methods, such as using parabolas and curves.[5][6][7] Medieval mathematicians, including Omar Khayyam, Leonardo of Pisa, Gersonides and Nicole Oresme,[8] solved certain cubic and quadratic equations by purely algebraic means and then interpreted the results geometrically. The Persian mathematician Omar Khayyám (born 1048 AD) believed that there was a relationship between arithmetic, algebra and geometry.[9][10][11] This was criticized by Jeffrey Oaks, who claims that the study of curves by means of equations originated with Descartes in the seventeenth century.[12] Such techniques of applying geometrical constructions to algebraic problems were also adopted by a number of Renaissance mathematicians such as Gerolamo Cardano and Niccolò Fontana "Tartaglia" in their studies of the cubic equation.
The geometrical approach to construction problems, rather than the algebraic one, was favored by most 16th and 17th century mathematicians, notably Blaise Pascal, who argued against the use of algebraic and analytical methods in geometry.[13] The French mathematicians Franciscus Vieta and later René Descartes and Pierre de Fermat revolutionized the conventional way of thinking about construction problems through the introduction of coordinate geometry. They were interested primarily in the properties of algebraic curves, such as those defined by Diophantine equations (in the case of Fermat), and the algebraic reformulation of the classical Greek works on conics and cubics (in the case of Descartes). During the same period, Blaise Pascal and Gérard Desargues approached geometry from a different perspective, developing the synthetic notions of projective geometry. Pascal and Desargues also studied curves, but from the purely geometrical point of view: the analog of the Greek ruler and compass construction. Ultimately, the analytic geometry of Descartes and Fermat won out, for it supplied the 18th century mathematicians with the concrete quantitative tools needed to study physical problems using the new calculus of Newton and Leibniz. However, by the end of the 18th century, most of the algebraic character of coordinate geometry was subsumed by the calculus of infinitesimals of Lagrange and Euler. It took the simultaneous 19th century developments of non-Euclidean geometry and Abelian integrals to bring the old algebraic ideas back into the geometrical fold. The first of these new developments was seized upon by Edmond Laguerre and Arthur Cayley, who attempted to ascertain the generalized metric properties of projective space. Cayley introduced the idea of homogeneous polynomial forms, and more specifically quadratic forms, on projective space.
Subsequently, Felix Klein studied projective geometry (along with other types of geometry) from the viewpoint that the geometry on a space is encoded in a certain class of transformations on the space. By the end of the 19th century, projective geometers were studying more general kinds of transformations on figures in projective space. Rather than the projective linear transformations which were normally regarded as giving the fundamental Kleinian geometry on projective space, they concerned themselves also with the higher-degree birational transformations. This weaker notion of congruence would later lead members of the 20th century Italian school of algebraic geometry to classify algebraic surfaces up to birational isomorphism. The second early 19th century development, that of Abelian integrals, would lead Bernhard Riemann to the development of Riemann surfaces. In the same period began the algebraization of algebraic geometry through commutative algebra. The prominent results in this direction are Hilbert's basis theorem and Hilbert's Nullstellensatz, which are the basis of the connection between algebraic geometry and commutative algebra, and Macaulay's multivariate resultant, which is the basis of elimination theory. Probably because of the size of the computation implied by multivariate resultants, elimination theory was forgotten during the middle of the 20th century until it was renewed by singularity theory and computational algebraic geometry.[a] B. L. van der Waerden, Oscar Zariski and André Weil developed a foundation for algebraic geometry based on contemporary commutative algebra, including valuation theory and the theory of ideals. One of the goals was to give a rigorous framework for proving the results of the Italian school of algebraic geometry. In particular, this school used systematically the notion of generic point without any precise definition, which was first given by these authors during the 1930s.
In the 1950s and 1960s, Jean-Pierre Serre and Alexander Grothendieck recast the foundations making use of sheaf theory. Later, from about 1960, and largely led by Grothendieck, the idea of schemes was worked out, in conjunction with a very refined apparatus of homological techniques. After a decade of rapid development the field stabilized in the 1970s, and new applications were made, both to number theory and to more classical geometric questions on algebraic varieties, singularities, moduli, and formal moduli. An important class of varieties, not easily understood directly from their defining equations, are the abelian varieties, which are the projective varieties whose points form an abelian group. The prototypical examples are the elliptic curves, which have a rich theory. They were instrumental in the proof of Fermat's Last Theorem and are also used in elliptic-curve cryptography. In parallel with the abstract trend of algebraic geometry, which is concerned with general statements about varieties, methods for effective computation with concretely given varieties have also been developed, which led to the new area of computational algebraic geometry. One of the founding methods of this area is the theory of Gröbner bases, introduced by Bruno Buchberger in 1965. Another founding method, more specially devoted to real algebraic geometry, is the cylindrical algebraic decomposition, introduced by George E. Collins in 1973. See also: derived algebraic geometry. An analytic variety over the field of real or complex numbers is defined locally as the set of common solutions of several equations involving analytic functions. It is analogous to the concept of algebraic variety in that it carries a structure sheaf of analytic functions instead of regular functions. Any complex manifold is a complex analytic variety. Since analytic varieties may have singular points, not all complex analytic varieties are manifolds. Over a non-archimedean field, analytic geometry is studied via rigid analytic spaces.
Modern analytic geometry over the field of complex numbers is closely related to complex algebraic geometry, as has been shown by Jean-Pierre Serre in his paper GAGA,[14] the name of which is French for Algebraic geometry and analytic geometry. The GAGA results over the field of complex numbers may be extended to rigid analytic spaces over non-archimedean fields.[15] Algebraic geometry now finds applications in statistics,[16] control theory,[17][18] robotics,[19] error-correcting codes,[20] phylogenetics[21] and geometric modelling.[22] There are also connections to string theory,[23] game theory,[24] graph matchings,[25] solitons[26] and integer programming.[27]
https://en.wikipedia.org/wiki/Computational_algebraic_geometry
A computer algebra system (CAS) or symbolic algebra system (SAS) is any mathematical software with the ability to manipulate mathematical expressions in a way similar to the traditional manual computations of mathematicians and scientists. The development of the computer algebra systems in the second half of the 20th century is part of the discipline of "computer algebra" or "symbolic computation", which has spurred work in algorithms over mathematical objects such as polynomials. Computer algebra systems may be divided into two classes: specialized and general-purpose. The specialized ones are devoted to a specific part of mathematics, such as number theory, group theory, or teaching of elementary mathematics. General-purpose computer algebra systems aim to be useful to a user working in any scientific field that requires manipulation of mathematical expressions. To be useful, a general-purpose computer algebra system must include various features, among them an expression simplifier and a large library of mathematical algorithms. The library must not only provide for the needs of the users, but also the needs of the simplifier. For example, the computation of polynomial greatest common divisors is systematically used for the simplification of expressions involving fractions. This large amount of required computer capabilities explains the small number of general-purpose computer algebra systems. Significant systems include Axiom, GAP, Maxima, Magma, Maple, Mathematica, and SageMath. In the 1950s, while computers were mainly used for numerical computations, there were some research projects into using them for symbolic manipulation. Computer algebra systems began to appear in the 1960s and evolved out of two quite different sources: the requirements of theoretical physicists and research into artificial intelligence.
A prime example of the first development was the pioneering work conducted by the later Nobel Prize laureate in physics Martinus Veltman, who designed a program for symbolic mathematics, especially high-energy physics, called Schoonschip (Dutch for "clean ship") in 1963. Other early systems include FORMAC. Using Lisp as the programming basis, Carl Engelman created MATHLAB in 1964 at MITRE within an artificial-intelligence research environment. Later MATHLAB was made available to users on PDP-6 and PDP-10 systems running TOPS-10 or TENEX in universities. Today it can still be used on SIMH emulations of the PDP-10. MATHLAB ("mathematical laboratory") should not be confused with MATLAB ("matrix laboratory"), which is a system for numerical computation built 15 years later at the University of New Mexico. In 1987, Hewlett-Packard introduced the first hand-held calculator CAS with the HP-28 series.[1] Other early handheld calculators with symbolic algebra capabilities included the Texas Instruments TI-89 series and TI-92 calculator, and the Casio CFX-9970G.[2] The first popular computer algebra systems were muMATH, Reduce, Derive (based on muMATH), and Macsyma; a copyleft version of Macsyma is called Maxima. Reduce became free software in 2008.[3] Commercial systems include Mathematica[4] and Maple, which are commonly used by research mathematicians, scientists, and engineers. Freely available alternatives include SageMath (which can act as a front-end to several other free and nonfree CAS). Other significant systems include Axiom, GAP, Maxima and Magma. The movement to web-based applications in the early 2000s saw the release of WolframAlpha, an online search engine and CAS which includes the capabilities of Mathematica.[5] More recently, computer algebra systems have been implemented using artificial neural networks, though as of 2020 they are not commercially available.[6] The symbolic manipulations supported typically include simplification, substitution, differentiation, some integration, and the solution of some equations; here the word some indicates that the operation cannot always be performed.
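The manipulations that a general-purpose CAS typically supports can be sketched with the open-source SymPy library as a stand-in (the specific function calls are SymPy's, chosen for this illustration, not quoted from the text):

```python
# A few representative CAS operations: differentiation, integration,
# simplification, and polynomial factorization, all on exact symbolic
# expressions rather than floating-point numbers.

from sympy import symbols, diff, integrate, simplify, factor, sin, cos

x = symbols('x')

print(diff(x**3 + sin(x), x))           # differentiation: 3*x**2 + cos(x)
print(integrate(3*x**2 + cos(x), x))    # integration:     x**3 + sin(x)
print(simplify(sin(x)**2 + cos(x)**2))  # simplification:  1
print(factor(x**2 - 1))                 # factorization:   (x - 1)*(x + 1)
```

Note that integration returns an antiderivative without the constant of integration, and that simplification relies on built-in rewriting rules such as the Pythagorean identity; both behaviors are typical of general-purpose systems.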
Many also include: Some include: Some computer algebra systems focus on specialized disciplines; these are typically developed in academia and are free. They can be inefficient for numeric operations as compared to numeric systems. The expressions manipulated by a CAS typically include polynomials in multiple variables; standard functions of expressions (sine, exponential, etc.); various special functions (Γ, ζ, erf, Bessel functions, etc.); arbitrary functions of expressions; optimization; derivatives, integrals, simplifications, sums, and products of expressions; truncated series with expressions as coefficients; matrices of expressions; and so on. Numeric domains supported typically include floating-point representations of real numbers, integers (of unbounded size), complex numbers (floating-point representation), interval representations of reals, rational numbers (exact representation) and algebraic numbers. There have been many advocates for increasing the use of computer algebra systems in primary and secondary-school classrooms. The primary reason for such advocacy is that computer algebra systems represent real-world mathematics more than do paper-and-pencil or hand-calculator-based mathematics.[12] This push for increasing computer usage in mathematics classrooms has been supported by some boards of education. It has even been mandated in the curriculum of some regions.[13] Computer algebra systems have been extensively used in higher education.[14][15] Many universities offer either specific courses on developing their use, or they implicitly expect students to use them for their course work. 
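The core idea behind this kind of symbolic manipulation can be sketched in a few lines: expressions are trees, and operations such as differentiation and simplification are recursive rules over those trees. The following is a toy illustration only, not the architecture of any of the systems named above; the class and function names are invented for this sketch.

```python
# Toy symbolic engine: expressions as trees, differentiation and
# simplification as recursive rules over those trees.
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Const:
    value: int

@dataclass(frozen=True)
class Add:
    left: object
    right: object

@dataclass(frozen=True)
class Mul:
    left: object
    right: object

def diff(expr, x):
    """Symbolic derivative d(expr)/dx via the sum and product rules."""
    if isinstance(expr, Const):
        return Const(0)
    if isinstance(expr, Var):
        return Const(1) if expr == x else Const(0)
    if isinstance(expr, Add):
        return Add(diff(expr.left, x), diff(expr.right, x))
    if isinstance(expr, Mul):  # product rule: (uv)' = u'v + uv'
        return Add(Mul(diff(expr.left, x), expr.right),
                   Mul(expr.left, diff(expr.right, x)))
    raise TypeError(expr)

def simplify(expr):
    """A toy simplifier: constant folding plus the 0 and 1 identities."""
    if isinstance(expr, Add):
        l, r = simplify(expr.left), simplify(expr.right)
        if l == Const(0): return r
        if r == Const(0): return l
        if isinstance(l, Const) and isinstance(r, Const):
            return Const(l.value + r.value)
        return Add(l, r)
    if isinstance(expr, Mul):
        l, r = simplify(expr.left), simplify(expr.right)
        if Const(0) in (l, r): return Const(0)
        if l == Const(1): return r
        if r == Const(1): return l
        if isinstance(l, Const) and isinstance(r, Const):
            return Const(l.value * r.value)
        return Mul(l, r)
    return expr

x = Var('x')
# d/dx (x * x) = 1*x + x*1, which the toy simplifier reduces to x + x.
print(simplify(diff(Mul(x, x), x)))
```

A real CAS differs mainly in scale: its simplifier applies far deeper rewriting (including the polynomial GCD computations mentioned earlier) and its expression trees cover special functions, matrices, and series.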
The companies that develop computer algebra systems have pushed to increase their prevalence among university and college programs.[16][17] CAS-equipped calculators are not permitted on the ACT, the PLAN, and in some classrooms,[18] though they may be permitted on all of the College Board's calculator-permitted tests, including the SAT, some SAT Subject Tests and the AP Calculus, Chemistry, Physics, and Statistics exams.[19]
https://en.wikipedia.org/wiki/Computer_algebra_system
The differential analyser is a mechanical analogue computer designed to solve differential equations by integration, using wheel-and-disc mechanisms to perform the integration.[1] It was one of the first advanced computing devices to be used operationally.[2] In addition to the integrator devices, the machine used an epicyclic differential mechanism to perform addition or subtraction, similar to that used on a front-wheel-drive car, where the speeds of the two output shafts (driving the wheels) may differ but add up to the speed of the input shaft. Multiplication or division by integer values was achieved by simple gear ratios; multiplication by fractional values was achieved by means of a multiplier table, where a human operator had to keep a stylus tracking the slope of a bar. A variant of this human-operated table was used to implement other functions such as polynomials. Research on solutions for differential equations using mechanical devices, discounting planimeters, started at least as early as 1836, when the French physicist Gaspard-Gustave Coriolis designed a mechanical device to integrate differential equations of the first order.[3] The first description of a device which could integrate differential equations of any order was published in 1876 by James Thomson, who was born in Belfast in 1822 but lived in Scotland from the age of 10.[4] Though Thomson called his device an "integrating machine", it is his description of the device, together with the additional publication in 1876 of two further descriptions by his younger brother, Lord Kelvin, which represents the invention of the differential analyser.[5] One of the earliest practical uses of Thomson's concepts was a tide-predicting machine built by Kelvin starting in 1872–73. 
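The interconnection principle behind Thomson's and Kelvin's designs, feeding the output of one integrator back into the input of another, can be mimicked numerically. The sketch below is an illustration of that principle only, not a model of any particular machine: it solves y'' = −y by chaining two accumulators, just as an analyser would by connecting two wheel-and-disc integrators.

```python
# Two chained "integrators" solving y'' = -y numerically.
# With y(0) = 0 and y'(0) = 1, the exact solution is y(t) = sin(t).
import math

dt = 1e-4
t, y, dy = 0.0, 0.0, 1.0       # initial conditions: y(0)=0, y'(0)=1
while t < math.pi / 2:
    ddy = -y                   # the "interconnection": feed y back, negated
    dy += ddy * dt             # integrator 1: accumulates y'' into y'
    y += dy * dt               # integrator 2: accumulates y' into y
    t += dt

print(round(y, 3))             # close to sin(pi/2) = 1
```

In the mechanical machine the same feedback was realized by shafts and gears, and accuracy was limited by slippage of the wheel on the disc rather than by the step size dt used here.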
On Lord Kelvin's advice, Thomson's integrating machine was later incorporated into a fire-control system for naval gunnery being developed by Arthur Pollen, resulting in an electrically driven, mechanical analogue computer, which was completed by about 1912.[6] The Italian mathematician Ernesto Pascal also developed integraphs for the mechanical integration of differential equations and published details in 1914.[7] However, the first widely practical general-purpose differential analyser was constructed by Harold Locke Hazen and Vannevar Bush at MIT between 1928 and 1931, comprising six mechanical integrators.[8][9][10] In the same year, Bush described this machine in a journal article as a "continuous integraph".[11] When he published a further article on the device in 1931, he called it a "differential analyzer".[12] In this article, Bush stated that "[the] present device incorporates the same basic idea of interconnection of integrating units as did [Lord Kelvin's]. In detail, however, there is little resemblance to the earlier model." According to his 1970 autobiography, Bush was "unaware of Kelvin's work until after the first differential analyzer was operational."[13] Claude Shannon was hired as a research assistant in 1936 to run the differential analyser in Bush's lab.[14] Douglas Hartree of Manchester University brought Bush's design to England, where he constructed his first "proof of concept" model with his student, Arthur Porter, during 1934. 
As a result of this, the university acquired a full-scale machine incorporating four mechanical integrators in March 1935, which was built by Metropolitan-Vickers and was, according to Hartree, "[the] first machine of its kind in operation outside the United States".[15] During the next five years three more were added, at Cambridge University, Queen's University Belfast, and the Royal Aircraft Establishment in Farnborough.[16] One of the integrators from the proof-of-concept model is on display in the History of Computing section of the Science Museum in London, alongside a complete Manchester machine. In Norway, the locally built Oslo Analyser was finished during 1938, based on the same principles as the MIT machine. This machine had 12 integrators and was the largest analyser built for a period of four years.[17] In the United States, further differential analysers were built at the Ballistic Research Laboratory in Maryland and in the basement of the Moore School of Electrical Engineering at the University of Pennsylvania during the early 1940s.[18] The latter was used extensively in the computation of artillery firing tables prior to the invention of the ENIAC, which, in many ways, was modelled on the differential analyser.[19] Also in the early 1940s, with Samuel H. Caldwell, one of the initial contributors during the early 1930s, Bush attempted an electrical, rather than mechanical, variation, but the digital computers built elsewhere had much greater promise and the project ceased.[20] In 1947, UCLA installed a differential analyser built for them by General Electric at a cost of $125,000.[21] By 1950, this machine had been joined by three more.[22] The UCLA differential analyzer appeared in 1950's Destination Moon, and the same footage in 1951's When Worlds Collide, where it was called "DA". A different shot appears in 1956's Earth vs. the Flying Saucers. 
At Osaka Imperial University (present-day Osaka University) around 1944, a complete differential analyser machine was developed to calculate the movement of an object and other problems with mechanical components, drawing graphs on paper with a pen. It was later transferred to the Tokyo University of Science and has been displayed at the school's Museum of Science in Shinjuku Ward. Restored in 2014, it is one of only two still-operational differential analysers produced before the end of World War II.[23] In Canada, a differential analyser was constructed at the University of Toronto in 1948 by Beatrice Helen Worsley, but it appears to have had little or no use.[24] A differential analyser may have been used in the development of the bouncing bomb, used to attack German hydroelectric dams during World War II.[25] Differential analysers have also been used in the calculation of soil erosion by river control authorities.[26] The differential analyser was eventually rendered obsolete by electronic analogue computers and, later, digital computers. The model differential analyser built at Manchester University in 1934 by Douglas Hartree and Arthur Porter made extensive use of Meccano parts: this meant that the machine was less costly to build, and it proved "accurate enough for the solution of many scientific problems".[27] A similar machine built by J. B. 
Bratt at Cambridge University in 1935 is now in the Museum of Transport and Technology (MOTAT) collection in Auckland, New Zealand.[27] A memorandum written for the British military's Armament Research Department in 1944 describes how this machine had been modified during World War II for improved reliability and enhanced capability, and identifies its wartime applications as including research on the flow of heat, explosive detonations, and simulations of transmission lines.[28] It has been estimated by Garry Tee that "about 15 Meccano model Differential Analysers were built for serious work by scientists and researchers around the world".[29]
https://en.wikipedia.org/wiki/Differential_analyser
In computer science, model checking or property checking is a method for checking whether a finite-state model of a system meets a given specification (also known as correctness). It is typically associated with hardware or software systems, where the specification contains liveness requirements (such as avoidance of livelock) as well as safety requirements (such as avoidance of states representing a system crash). In order to solve such a problem algorithmically, both the model of the system and its specification are formulated in some precise mathematical language. To this end, the problem is formulated as a task in logic, namely to check whether a structure satisfies a given logical formula. This general concept applies to many kinds of logic and many kinds of structures. A simple model-checking problem consists of verifying whether a formula in propositional logic is satisfied by a given structure. Property checking is used for verification when two descriptions are not equivalent. During refinement, the specification is complemented with details that are unnecessary in the higher-level specification. There is no need to verify the newly introduced properties against the original specification, since this is not possible. Therefore, the strict bidirectional equivalence check is relaxed to a one-way property check. The implementation or design is regarded as a model of the system, whereas the specifications are properties that the model must satisfy.[2] An important class of model-checking methods has been developed for checking models of hardware and software designs where the specification is given by a temporal logic formula. Pioneering work in temporal logic specification was done by Amir Pnueli, who received the 1996 Turing Award for "seminal work introducing temporal logic into computing science".[3] Model checking began with the pioneering work of E. M. Clarke and E. A. Emerson[4][5][6] and of J. P. Queille and J. 
Sifakis.[7] Clarke, Emerson, and Sifakis shared the 2007 Turing Award for their seminal work founding and developing the field of model checking.[8][9] Model checking is most often applied to hardware designs. For software, because of undecidability (see computability theory) the approach cannot be fully algorithmic, apply to all systems, and always give an answer; in the general case, it may fail to prove or disprove a given property. In embedded-systems hardware, it is possible to validate a specification delivered, e.g., by means of UML activity diagrams[10] or control-interpreted Petri nets.[11] The structure is usually given as a source-code description in an industrial hardware description language or a special-purpose language. Such a program corresponds to a finite-state machine (FSM), i.e., a directed graph consisting of nodes (or vertices) and edges. A set of atomic propositions is associated with each node, typically stating which memory elements are one. The nodes represent states of a system, the edges represent possible transitions that may alter the state, while the atomic propositions represent the basic properties that hold at a point of execution.[12] Formally, the problem can be stated as follows: given a desired property, expressed as a temporal logic formula p, and a structure M with initial state s, decide if M, s ⊨ p. If M is finite, as it is in hardware, model checking reduces to a graph search. Instead of enumerating reachable states one at a time, the state space can sometimes be traversed more efficiently by considering large numbers of states at a single step. When such state-space traversal is based on representations of a set of states and transition relations as logical formulas, binary decision diagrams (BDDs), or other related data structures, the model-checking method is symbolic. Historically, the first symbolic methods used BDDs. 
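The reduction of finite-state model checking to graph search can be made concrete. The sketch below is an illustration only: it checks the safety property "no reachable state is bad" by breadth-first search over an explicitly enumerated model, whereas the symbolic tools described here represent the state sets as formulas or BDDs instead.

```python
# Minimal explicit-state safety checker: explore every state reachable from
# the initial state and report the first one violating the property, together
# with a counterexample trace.
from collections import deque

def check_safety(initial, transitions, is_bad):
    """Return (True, None) if no reachable state satisfies is_bad,
    otherwise (False, counterexample_path)."""
    parent = {initial: None}          # also serves as the visited set
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        if is_bad(state):
            # Reconstruct the counterexample trace back to the initial state.
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return False, list(reversed(path))
        for nxt in transitions.get(state, ()):
            if nxt not in parent:
                parent[nxt] = state
                queue.append(nxt)
    return True, None

# A toy model: states are labels, 'crash' is the bad state.
model = {'idle': ['busy'], 'busy': ['idle', 'crash']}
ok, trace = check_safety('idle', model, lambda s: s == 'crash')
print(ok, trace)   # False ['idle', 'busy', 'crash']
```

Returning a concrete trace on failure mirrors what practical model checkers do: a counterexample is usually as valuable as the verdict itself.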
After the success of propositional satisfiability in solving the planning problem in artificial intelligence (see satplan) in 1996, the same approach was generalized to model checking for linear temporal logic (LTL): the planning problem corresponds to model checking for safety properties. This method is known as bounded model checking.[13] The success of Boolean satisfiability solvers in bounded model checking led to the widespread use of satisfiability solvers in symbolic model checking.[14] One example of such a system requirement: between the time an elevator is called at a floor and the time it opens its doors at that floor, the elevator can arrive at that floor at most twice. The authors of "Patterns in Property Specification for Finite-State Verification" translate this requirement into the following LTL formula:[15] Here, □ should be read as "always", ◊ as "eventually", U as "until", and the other symbols are standard logical symbols: ∨ for "or", ∧ for "and" and ¬ for "not". Model-checking tools face a combinatorial blow-up of the state space, commonly known as the state explosion problem, that must be addressed to solve most real-world problems. There are several approaches to combat this problem. Model-checking tools were initially developed to reason about the logical correctness of discrete-state systems, but have since been extended to deal with real-time and limited forms of hybrid systems. Model checking is also studied in the field of computational complexity theory. Specifically, a first-order logical formula is fixed without free variables and the following decision problem is considered: given a finite interpretation, for instance one described as a relational database, decide whether the interpretation is a model of the formula. This problem is in the circuit class AC0. 
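The temporal operators just listed can be given a simple bounded semantics over a finite trace, which is the intuition behind bounded model checking. The evaluator below is a toy illustration of that semantics only; production bounded model checkers instead encode the unrolled semantics into a SAT instance.

```python
# Toy bounded semantics for the LTL operators "always" (box), "eventually"
# (diamond) and "until" (U) over a finite trace of states, each state being
# the set of atomic propositions that hold there.
def always(trace, p):
    return all(p(s) for s in trace)

def eventually(trace, p):
    return any(p(s) for s in trace)

def until(trace, p, q):
    # p U q: q holds at some position i, and p holds at every earlier position.
    for s in trace:
        if q(s):
            return True
        if not p(s):
            return False
    return False

trace = [{'called'}, {'called'}, {'called', 'open'}]
called = lambda s: 'called' in s
opened = lambda s: 'open' in s

print(eventually(trace, opened))      # True
print(until(trace, called, opened))   # True: "called" holds until "open" does
print(always(trace, opened))          # False
```

Finite-trace semantics differs subtly from the standard infinite-trace LTL semantics (for instance, "always p" on a truncated trace says nothing about the unobserved future); bounded model checking accounts for this when interpreting its verdicts.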
It is tractable when imposing some restrictions on the input structure: for instance, requiring that it has treewidth bounded by a constant (which more generally implies the tractability of model checking for monadic second-order logic), bounding the degree of every domain element, and more general conditions such as bounded expansion, locally bounded expansion, and nowhere-dense structures.[21] These results have been extended to the task of enumerating all solutions to a first-order formula with free variables.[citation needed] Here is a list of significant model-checking tools:
https://en.wikipedia.org/wiki/Model_checker
In mathematics and computer science, symbolic-numeric computation is the use of software that combines symbolic and numeric methods to solve problems.
https://en.wikipedia.org/wiki/Symbolic-numeric_computation
In artificial intelligence, symbolic artificial intelligence (also known as classical artificial intelligence or logic-based artificial intelligence)[1][2] is the term for the collection of all methods in artificial intelligence research that are based on high-level symbolic (human-readable) representations of problems, logic and search.[3] Symbolic AI used tools such as logic programming, production rules, semantic nets and frames, and it developed applications such as knowledge-based systems (in particular, expert systems), symbolic mathematics, automated theorem provers, ontologies, the semantic web, and automated planning and scheduling systems. The symbolic AI paradigm led to seminal ideas in search, symbolic programming languages, agents, multi-agent systems, the semantic web, and the strengths and limitations of formal knowledge and reasoning systems. Symbolic AI was the dominant paradigm of AI research from the mid-1950s until the mid-1990s.[4] Researchers in the 1960s and the 1970s were convinced that symbolic approaches would eventually succeed in creating a machine with artificial general intelligence and considered this the ultimate goal of their field.[citation needed] An early boom, with early successes such as the Logic Theorist and Samuel's checkers-playing program, led to unrealistic expectations and promises and was followed by the first AI winter as funding dried up.[5][6] A second boom (1969–1986) occurred with the rise of expert systems, their promise of capturing corporate expertise, and an enthusiastic corporate embrace.[7][8] That boom, and some early successes, e.g., with XCON at DEC, was followed again by later disappointment.[8] Problems with difficulties in knowledge acquisition, maintaining large knowledge bases, and brittleness in handling out-of-domain problems arose. 
A second AI winter (1988–2011) followed.[9] Subsequently, AI researchers focused on addressing underlying problems in handling uncertainty and in knowledge acquisition.[10] Uncertainty was addressed with formal methods such as hidden Markov models, Bayesian reasoning, and statistical relational learning.[11][12] Symbolic machine learning addressed the knowledge-acquisition problem with contributions including version spaces, Valiant's PAC learning, Quinlan's ID3 decision-tree learning, case-based learning, and inductive logic programming to learn relations.[13] Neural networks, a subsymbolic approach, had been pursued from the early days and reemerged strongly in 2012. Early examples are Rosenblatt's perceptron learning work, the backpropagation work of Rumelhart, Hinton and Williams,[14] and work in convolutional neural networks by LeCun et al. in 1989.[15] However, neural networks were not viewed as successful until about 2012: "Until Big Data became commonplace, the general consensus in the AI community was that the so-called neural-network approach was hopeless. Systems just didn't work that well, compared to other methods. ... A revolution came in 2012, when a number of people, including a team of researchers working with Hinton, worked out a way to use the power of GPUs to enormously increase the power of neural networks."[16] Over the next several years, deep learning had spectacular success in handling vision, speech recognition, speech synthesis, image generation, and machine translation. However, since 2020, as inherent difficulties with bias, explanation, comprehensibility, and robustness became more apparent with deep-learning approaches, an increasing number of AI researchers have called for combining the best of both the symbolic and neural-network approaches[17][18] and addressing areas that both approaches have difficulty with, such as common-sense reasoning.[16] A short history of symbolic AI to the present day follows below. 
Time periods and titles are drawn from Henry Kautz's 2020 AAAI Robert S. Engelmore Memorial Lecture[19] and the longer Wikipedia article on the history of AI, with dates and titles differing slightly for increased clarity. Success at early attempts in AI occurred in three main areas: artificial neural networks, knowledge representation, and heuristic search, contributing to high expectations. This section summarizes Kautz's reprise of early AI history. Cybernetic approaches attempted to replicate the feedback loops between animals and their environments. A robotic turtle, with sensors, motors for driving and steering, and seven vacuum tubes for control, based on a preprogrammed neural net, was built as early as 1948. This work can be seen as an early precursor to later work in neural networks, reinforcement learning, and situated robotics.[20] An important early symbolic AI program was the Logic Theorist, written by Allen Newell, Herbert Simon and Cliff Shaw in 1955–56; it was able to prove 38 elementary theorems from Whitehead and Russell's Principia Mathematica. Newell, Simon, and Shaw later generalized this work to create a domain-independent problem solver, GPS (General Problem Solver). GPS solved problems represented with formal operators via state-space search using means-ends analysis.[21] During the 1960s, symbolic approaches achieved great success at simulating intelligent behavior in structured environments such as game-playing, symbolic mathematics, and theorem-proving. AI research was concentrated in four institutions in the 1960s: Carnegie Mellon University, Stanford, MIT and (later) the University of Edinburgh. Each one developed its own style of research. Earlier approaches based on cybernetics or artificial neural networks were abandoned or pushed into the background. 
Herbert Simon and Allen Newell studied human problem-solving skills and attempted to formalize them, and their work laid the foundations of the field of artificial intelligence, as well as cognitive science, operations research and management science. Their research team used the results of psychological experiments to develop programs that simulated the techniques that people used to solve problems.[22][23] This tradition, centered at Carnegie Mellon University, would eventually culminate in the development of the Soar architecture in the middle 1980s.[24][25] In addition to the highly specialized domain-specific kinds of knowledge that we will see later used in expert systems, early symbolic AI researchers discovered another, more general application of knowledge. These were called heuristics, rules of thumb that guide a search in promising directions: "How can non-enumerative search be practical when the underlying problem is exponentially hard? The approach advocated by Simon and Newell is to employ heuristics: fast algorithms that may fail on some inputs or output suboptimal solutions."[26] Another important advance was to find a way to apply these heuristics that guarantees a solution will be found, if one exists, notwithstanding the occasional fallibility of heuristics: "The A* algorithm provided a general frame for complete and optimal heuristically guided search. A* is used as a subroutine within practically every AI algorithm today but is still no magic bullet; its guarantee of completeness is bought at the cost of worst-case exponential time."[26] Early work covered both applications of formal reasoning emphasizing first-order logic, along with attempts to handle common-sense reasoning in a less formal manner. 
Unlike Simon and Newell, John McCarthy felt that machines did not need to simulate the exact mechanisms of human thought, but could instead try to find the essence of abstract reasoning and problem-solving with logic,[27] regardless of whether people used the same algorithms.[a] His laboratory at Stanford (SAIL) focused on using formal logic to solve a wide variety of problems, including knowledge representation, planning and learning.[31] Logic was also the focus of the work at the University of Edinburgh and elsewhere in Europe, which led to the development of the programming language Prolog and the science of logic programming.[32][33] Researchers at MIT (such as Marvin Minsky and Seymour Papert)[34][35][6] found that solving difficult problems in vision and natural language processing required ad hoc solutions; they argued that no simple and general principle (like logic) would capture all the aspects of intelligent behavior. Roger Schank described their "anti-logic" approaches as "scruffy" (as opposed to the "neat" paradigms at CMU and Stanford).[36][37] Commonsense knowledge bases (such as Doug Lenat's Cyc) are an example of "scruffy" AI, since they must be built by hand, one complicated concept at a time.[38][39][40] The first AI winter was a shock: During the first AI summer, many people thought that machine intelligence could be achieved in just a few years. The Defense Advanced Research Projects Agency (DARPA) launched programs to support AI research to use AI to solve problems of national security; in particular, to automate the translation of Russian to English for intelligence operations and to create autonomous tanks for the battlefield. Researchers had begun to realize that achieving AI was going to be much harder than was supposed a decade earlier, but a combination of hubris and disingenuousness led many university and think-tank researchers to accept funding with promises of deliverables that they should have known they could not fulfill. 
By the mid-1960s neither useful natural-language translation systems nor autonomous tanks had been created, and a dramatic backlash set in. New DARPA leadership canceled existing AI funding programs. ... Outside of the United States, the most fertile ground for AI research was the United Kingdom. The AI winter in the United Kingdom was spurred on not so much by disappointed military leaders as by rival academics who viewed AI researchers as charlatans and a drain on research funding. A professor of applied mathematics, Sir James Lighthill, was commissioned by Parliament to evaluate the state of AI research in the nation. The report stated that all of the problems being worked on in AI would be better handled by researchers from other disciplines, such as applied mathematics. The report also claimed that AI successes on toy problems could never scale to real-world applications due to combinatorial explosion.[41] As limitations with weak, domain-independent methods became more and more apparent,[42] researchers from all three traditions began to build knowledge into AI applications.[43][7] The knowledge revolution was driven by the realization that knowledge underlies high-performance, domain-specific AI applications. Edward Feigenbaum said that achieving high performance in a specific domain requires both general and highly domain-specific knowledge. 
Ed Feigenbaum and Doug Lenat called this the Knowledge Principle: (1) The Knowledge Principle: if a program is to perform a complex task well, it must know a great deal about the world in which it operates. (2) A plausible extension of that principle, called the Breadth Hypothesis: there are two additional abilities necessary for intelligent behavior in unexpected situations: falling back on increasingly general knowledge, and analogizing to specific but far-flung knowledge.[45] This "knowledge revolution" led to the development and deployment of expert systems (introduced by Edward Feigenbaum), the first commercially successful form of AI software.[46][47][48] Key expert systems were: DENDRAL is considered the first expert system that relied on knowledge-intensive problem-solving. It is described below by Ed Feigenbaum, from a Communications of the ACM interview, "Interview with Ed Feigenbaum": One of the people at Stanford interested in computer-based models of mind was Joshua Lederberg, the 1958 Nobel Prize winner in genetics. When I told him I wanted an induction "sandbox", he said, "I have just the one for you." His lab was doing mass spectrometry of amino acids. The question was: how do you go from looking at the spectrum of an amino acid to the chemical structure of the amino acid? That's how we started the DENDRAL Project: I was good at heuristic search methods, and he had an algorithm that was good at generating the chemical problem space. We did not have a grandiose vision. We worked bottom up. Our chemist was Carl Djerassi, inventor of the chemical behind the birth control pill, and also one of the world's most respected mass spectrometrists. Carl and his postdocs were world-class experts in mass spectrometry. We began to add to their knowledge, inventing knowledge of engineering as we went along. These experiments amounted to titrating DENDRAL more and more knowledge. The more you did that, the smarter the program became. We had very good results. 
The generalization was: in the knowledge lies the power. That was the big idea. In my career that is the huge, "Ah ha!", and it wasn't the way AI was being done previously. Sounds simple, but it's probably AI's most powerful generalization.[51] The other expert systems mentioned above came after DENDRAL. MYCIN exemplifies the classic expert-system architecture of a knowledge base of rules coupled to a symbolic reasoning mechanism, including the use of certainty factors to handle uncertainty. GUIDON shows how an explicit knowledge base can be repurposed for a second application, tutoring, and is an example of an intelligent tutoring system, a particular kind of knowledge-based application. Clancey showed that it was not sufficient simply to use MYCIN's rules for instruction; he also needed to add rules for dialogue management and student modeling.[50] XCON is significant because of the millions of dollars it saved DEC, which triggered the expert-system boom, during which most major corporations in the US had expert-systems groups to capture corporate expertise, preserve it, and automate it: By 1988, DEC's AI group had 40 expert systems deployed, with more on the way. DuPont had 100 in use and 500 in development. Nearly every major U.S. corporation had its own AI group and was either using or investigating expert systems.[49] Chess expert knowledge was encoded in Deep Blue. In 1996, this allowed IBM's Deep Blue, with the help of symbolic AI, to win a game of chess against the world champion at that time, Garry Kasparov.[52] A key component of the system architecture for all expert systems is the knowledge base, which stores facts and rules for problem-solving.[53] The simplest approach for an expert-system knowledge base is simply a collection or network of production rules. Production rules connect symbols in a relationship similar to an if-then statement. 
The expert system processes the rules to make deductions and to determine what additional information it needs, i.e., what questions to ask, using human-readable symbols. For example, OPS5, CLIPS and their successors Jess and Drools operate in this fashion. Expert systems can operate in either a forward-chaining manner (from evidence to conclusions) or a backward-chaining manner (from goals to needed data and prerequisites). More advanced knowledge-based systems, such as Soar, can also perform meta-level reasoning, that is, reasoning about their own reasoning in terms of deciding how to solve problems and monitoring the success of problem-solving strategies. Blackboard systems are a second kind of knowledge-based or expert-system architecture. They model a community of experts incrementally contributing, where they can, to solve a problem. The problem is represented at multiple levels of abstraction or in alternate views. The experts (knowledge sources) volunteer their services whenever they recognize they can contribute. Potential problem-solving actions are represented on an agenda that is updated as the problem situation changes. A controller decides how useful each contribution is, and who should make the next problem-solving action. One example, the BB1 blackboard architecture,[54] was originally inspired by studies of how humans plan to perform multiple tasks in a trip.[55] An innovation of BB1 was to apply the same blackboard model to solving its control problem: its controller performed meta-level reasoning with knowledge sources that monitored how well a plan or the problem-solving was proceeding and could switch from one strategy to another as conditions, such as goals or times, changed. BB1 has been applied in multiple domains: construction-site planning, intelligent tutoring systems, and real-time patient monitoring. 
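The forward-chaining mode described above, firing if-then rules from known facts until nothing new can be derived, can be sketched in a few lines. This is a toy illustration of the control strategy only, not the rule language of OPS5, CLIPS, Jess or Drools, and the rules in the example are invented for the sketch.

```python
# Minimal forward-chaining engine: repeatedly fire production rules of the
# form "if all antecedents are known facts, assert the consequent" until a
# fixed point is reached (no rule can add anything new).
def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if consequent not in facts and set(antecedents) <= facts:
                facts.add(consequent)   # rule fires: a new fact is deduced
                changed = True
    return facts

rules = [
    (['has_fever', 'has_rash'], 'suspect_measles'),
    (['suspect_measles'], 'order_blood_test'),
]
derived = forward_chain(['has_fever', 'has_rash'], rules)
print(sorted(derived))
```

Backward chaining inverts this control strategy: it starts from a goal such as 'order_blood_test' and recursively asks which antecedents would have to be established, which is why it maps naturally onto question-asking dialogue in systems like MYCIN.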
At the height of the AI boom, companies such as Symbolics, LMI, and Texas Instruments were selling LISP machines specifically targeted to accelerate the development of AI applications and research. In addition, several artificial intelligence companies, such as Teknowledge and Inference Corporation, were selling expert system shells, training, and consulting to corporations. Unfortunately, the AI boom did not last, and Kautz best describes the second AI winter that followed: Many reasons can be offered for the arrival of the second AI winter. The hardware companies failed when much more cost-effective general Unix workstations from Sun together with good compilers for LISP and Prolog came onto the market. Many commercial deployments of expert systems were discontinued when they proved too costly to maintain. Medical expert systems never caught on for several reasons: the difficulty in keeping them up to date; the challenge for medical professionals to learn how to use a bewildering variety of different expert systems for different medical conditions; and perhaps most crucially, the reluctance of doctors to trust a computer-made diagnosis over their gut instinct, even for specific domains where the expert systems could outperform an average doctor. Venture capital money deserted AI practically overnight. The world AI conference IJCAI hosted an enormous and lavish trade show and thousands of nonacademic attendees in 1987 in Vancouver; the main AI conference the following year, AAAI 1988 in St. Paul, was a small and strictly academic affair.[9] Both statistical approaches and extensions to logic were tried.
One statistical approach, hidden Markov models, had already been popularized in the 1980s for speech recognition work.[11] Subsequently, in 1988, Judea Pearl popularized the use of Bayesian networks as a sound but efficient way of handling uncertain reasoning with his publication of the book Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference,[56] and Bayesian approaches were applied successfully in expert systems.[57] Even later, in the 1990s, statistical relational learning, an approach that combines probability with logical formulas, allowed probability to be combined with first-order logic, e.g., with either Markov Logic Networks or Probabilistic Soft Logic. Other, non-probabilistic extensions to first-order logic were also tried. For example, non-monotonic reasoning could be used with truth maintenance systems. A truth maintenance system tracked assumptions and justifications for all inferences. It allowed inferences to be withdrawn when assumptions were found to be incorrect or a contradiction was derived. Explanations could be provided for an inference by explaining which rules were applied to create it and then continuing through underlying inferences and rules all the way back to root assumptions.[58] Lotfi Zadeh had introduced a different kind of extension to handle the representation of vagueness. For example, in deciding how "heavy" or "tall" a man is, there is frequently no clear "yes" or "no" answer, and a predicate for heavy or tall would instead return values between 0 and 1. Those values represented to what degree the predicates were true. His fuzzy logic further provided a means for propagating combinations of these values through logical formulas.[59] Symbolic machine learning approaches were investigated to address the knowledge acquisition bottleneck. One of the earliest is Meta-DENDRAL. Meta-DENDRAL used a generate-and-test technique to generate plausible rule hypotheses to test against spectra.
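The fuzzy predicates just described can be sketched as follows. The membership ramps for "tall" and "heavy" are invented for illustration; the min/max/complement operators are the standard ones Zadeh proposed for combining degrees of truth:

```python
# Fuzzy predicates return a degree of truth in [0, 1] rather than a
# crisp yes/no, and logical connectives propagate those degrees.

def tall(height_cm):
    """Degree of 'tall': a made-up ramp from 160 cm (0.0) to 200 cm (1.0)."""
    return max(0.0, min(1.0, (height_cm - 160) / 40))

def heavy(weight_kg):
    """Degree of 'heavy': a made-up ramp from 60 kg (0.0) to 120 kg (1.0)."""
    return max(0.0, min(1.0, (weight_kg - 60) / 60))

def fuzzy_and(a, b): return min(a, b)
def fuzzy_or(a, b):  return max(a, b)
def fuzzy_not(a):    return 1.0 - a

# "tall AND heavy" for a 180 cm, 90 kg person: both predicates give 0.5,
# so the conjunction is 0.5 as well.
degree = fuzzy_and(tall(180), heavy(90))
print(degree)
```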
Domain and task knowledge reduced the number of candidates tested to a manageable size. Feigenbaum described Meta-DENDRAL as ...the culmination of my dream of the early to mid-1960s having to do with theory formation. The conception was that you had a problem solver like DENDRAL that took some inputs and produced an output. In doing so, it used layers of knowledge to steer and prune the search. That knowledge got in there because we interviewed people. But how did the people get the knowledge? By looking at thousands of spectra. So we wanted a program that would look at thousands of spectra and infer the knowledge of mass spectrometry that DENDRAL could use to solve individual hypothesis formation problems. We did it. We were even able to publish new knowledge of mass spectrometry in the Journal of the American Chemical Society, giving credit only in a footnote that a program, Meta-DENDRAL, actually did it. We were able to do something that had been a dream: to have a computer program come up with a new and publishable piece of science.[51] In contrast to the knowledge-intensive approach of Meta-DENDRAL, Ross Quinlan invented a domain-independent approach to statistical classification, decision tree learning, starting first with ID3[60] and then later extending its capabilities to C4.5.[61] The decision trees created are glass box, interpretable classifiers, with human-interpretable classification rules. Advances were made in understanding machine learning theory, too. Tom Mitchell introduced version space learning, which describes learning as a search through a space of hypotheses, with upper, more general, and lower, more specific, boundaries encompassing all viable hypotheses consistent with the examples seen so far.[62] More formally, Valiant introduced Probably Approximately Correct Learning (PAC Learning), a framework for the mathematical analysis of machine learning.[63] Symbolic machine learning encompassed more than learning by example.
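The core step of ID3-style decision tree learning is choosing the attribute whose split yields the highest information gain. A minimal sketch, with a toy weather dataset invented for illustration (not Quinlan's original data):

```python
import math
from collections import Counter

# Entropy measures label impurity; information gain is the entropy
# reduction achieved by splitting on an attribute. ID3 greedily picks
# the attribute with the highest gain at each tree node.

def entropy(labels):
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def information_gain(examples, attr):
    base = entropy([e["label"] for e in examples])
    remainder = 0.0
    for value in {e[attr] for e in examples}:
        subset = [e["label"] for e in examples if e[attr] == value]
        remainder += len(subset) / len(examples) * entropy(subset)
    return base - remainder

examples = [
    {"outlook": "sunny", "windy": False, "label": "play"},
    {"outlook": "sunny", "windy": True,  "label": "play"},
    {"outlook": "rain",  "windy": False, "label": "stay"},
    {"outlook": "rain",  "windy": True,  "label": "stay"},
]
# "outlook" perfectly separates the labels here, so it wins the split.
best = max(["outlook", "windy"], key=lambda a: information_gain(examples, a))
print(best)
```

Because each internal node is just an attribute test, the learned tree reads directly as the kind of human-interpretable classification rules the text describes.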
E.g., John Anderson provided a cognitive model of human learning where skill practice results in a compilation of rules from a declarative format to a procedural format with his ACT-R cognitive architecture. For example, a student might learn to apply "Supplementary angles are two angles whose measures sum to 180 degrees" as several different procedural rules. E.g., one rule might say that if X and Y are supplementary and you know X, then Y will be 180 - X. He called his approach "knowledge compilation". ACT-R has been used successfully to model aspects of human cognition, such as learning and retention. ACT-R is also used in intelligent tutoring systems, called cognitive tutors, to successfully teach geometry, computer programming, and algebra to school children.[64] Inductive logic programming was another approach to learning that allowed logic programs to be synthesized from input-output examples. E.g., Ehud Shapiro's MIS (Model Inference System) could synthesize Prolog programs from examples.[65] John R. Koza applied genetic algorithms to program synthesis to create genetic programming, which he used to synthesize LISP programs. Finally, Zohar Manna and Richard Waldinger provided a more general approach to program synthesis that synthesizes a functional program in the course of proving its specifications to be correct.[66] As an alternative to logic, Roger Schank introduced case-based reasoning (CBR). The CBR approach outlined in his book, Dynamic Memory,[67] focuses first on remembering key problem-solving cases for future use and generalizing them where appropriate.
When faced with a new problem, CBR retrieves the most similar previous case and adapts it to the specifics of the current problem.[68] Another alternative to logic, genetic algorithms and genetic programming are based on an evolutionary model of learning, where sets of rules are encoded into populations, the rules govern the behavior of individuals, and selection of the fittest prunes out sets of unsuitable rules over many generations.[69] Symbolic machine learning was applied to learning concepts, rules, heuristics, and problem-solving. Approaches, other than those above, include: With the rise of deep learning, the symbolic AI approach has been compared to deep learning as complementary "...with parallels having been drawn many times by AI researchers between Kahneman's research on human reasoning and decision making – reflected in his book Thinking, Fast and Slow – and the so-called "AI systems 1 and 2", which would in principle be modelled by deep learning and symbolic reasoning, respectively." In this view, symbolic reasoning is more apt for deliberative reasoning, planning, and explanation, while deep learning is more apt for fast pattern recognition in perceptual applications with noisy data.[17][18] Neuro-symbolic AI attempts to integrate neural and symbolic architectures in a manner that addresses the strengths and weaknesses of each, in a complementary fashion, in order to support robust AI capable of reasoning, learning, and cognitive modeling.
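The CBR retrieve step described above can be sketched as a nearest-neighbor lookup over a case library. The similarity measure, case library, and restaurant domain here are all invented for illustration:

```python
# Case-based reasoning, retrieve step: find the stored case whose
# problem description most resembles the new problem, then reuse
# (and, in a full CBR cycle, adapt) its solution.

def similarity(a, b):
    """Fraction of features on which two problem descriptions agree."""
    keys = set(a) | set(b)
    return sum(a.get(k) == b.get(k) for k in keys) / len(keys)

def retrieve(case_library, problem):
    return max(case_library, key=lambda c: similarity(c["problem"], problem))

case_library = [
    {"problem": {"cuisine": "italian", "diners": 2}, "solution": "trattoria"},
    {"problem": {"cuisine": "japanese", "diners": 6}, "solution": "izakaya"},
]
new_problem = {"cuisine": "italian", "diners": 4}
best_case = retrieve(case_library, new_problem)
print(best_case["solution"])  # the italian case is the closest match
```

A full CBR system would follow retrieval with adaptation (patching the retrieved solution for the differing features) and retention of the solved case back into the library.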
As argued by Valiant[77] and many others,[78] the effective construction of rich computational cognitive models demands the combination of sound symbolic reasoning and efficient (machine) learning models. Gary Marcus, similarly, argues that: "We cannot construct rich cognitive models in an adequate, automated way without the triumvirate of hybrid architecture, rich prior knowledge, and sophisticated techniques for reasoning,"[79] and in particular: "To build a robust, knowledge-driven approach to AI we must have the machinery of symbol-manipulation in our toolkit. Too much of useful knowledge is abstract to make do without tools that represent and manipulate abstraction, and to date, the only machinery that we know of that can manipulate such abstract knowledge reliably is the apparatus of symbol manipulation."[80] Henry Kautz,[19] Francesca Rossi,[81] and Bart Selman[82] have also argued for a synthesis. Their arguments are based on a need to address the two kinds of thinking discussed in Daniel Kahneman's book, Thinking, Fast and Slow. Kahneman describes human thinking as having two components, System 1 and System 2. System 1 is fast, automatic, intuitive and unconscious. System 2 is slower, step-by-step, and explicit. System 1 is the kind used for pattern recognition while System 2 is far better suited for planning, deduction, and deliberative thinking. In this view, deep learning best models the first kind of thinking while symbolic reasoning best models the second kind and both are needed. Garcez and Lamb describe research in this area as being ongoing for at least the past twenty years,[83] dating from their 2002 book on neurosymbolic learning systems.[84] A series of workshops on neuro-symbolic reasoning has been held every year since 2005; see http://www.neural-symbolic.org/ for details. In their 2015 paper, Neural-Symbolic Learning and Reasoning: Contributions and Challenges, Garcez et al.
argue that: The integration of the symbolic and connectionist paradigms of AI has been pursued by a relatively small research community over the last two decades and has yielded several significant results. Over the last decade, neural symbolic systems have been shown capable of overcoming the so-called propositional fixation of neural networks, as McCarthy (1988) put it in response to Smolensky (1988); see also (Hinton, 1990). Neural networks were shown capable of representing modal and temporal logics (d'Avila Garcez and Lamb, 2006) and fragments of first-order logic (Bader, Hitzler, Hölldobler, 2008; d'Avila Garcez, Lamb, Gabbay, 2009). Further, neural-symbolic systems have been applied to a number of problems in the areas of bioinformatics, control engineering, software verification and adaptation, visual intelligence, ontology learning, and computer games.[78] Approaches for integration are varied. Henry Kautz's taxonomy of neuro-symbolic architectures, along with some examples, follows: Many key research questions remain, such as: This section provides an overview of techniques and contributions in an overall context leading to many other, more detailed articles in Wikipedia. Sections on Machine Learning and Uncertain Reasoning are covered earlier in the history section. The key AI programming language in the US during the last symbolic AI boom period was LISP. LISP is the second oldest programming language after FORTRAN and was created in 1958 by John McCarthy. LISP provided the first read-eval-print loop to support rapid program development. Compiled functions could be freely mixed with interpreted functions. Program tracing, stepping, and breakpoints were also provided, along with the ability to change values or functions and continue from breakpoints or errors. It had the first self-hosting compiler, meaning that the compiler itself was originally written in LISP and then ran interpretively to compile the compiler code.
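The read-eval-print idea can be sketched with a toy evaluator for arithmetic s-expressions: the reader turns text into a nested data structure, and the evaluator interprets that structure directly. This is an illustrative miniature, not LISP itself:

```python
# read: parse an s-expression string into nested Python lists.
# evaluate: interpret that structure; a REPL would loop read -> evaluate -> print.

def read(text):
    tokens = text.replace("(", " ( ").replace(")", " ) ").split()

    def atom(tok):
        try:
            return int(tok)
        except ValueError:
            return tok  # operator symbols stay as strings

    def parse(pos):
        if tokens[pos] == "(":
            lst, pos = [], pos + 1
            while tokens[pos] != ")":
                item, pos = parse(pos)
                lst.append(item)
            return lst, pos + 1  # skip the closing paren
        return atom(tokens[pos]), pos + 1

    expr, _ = parse(0)
    return expr

OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
       "*": lambda a, b: a * b, "/": lambda a, b: a / b}

def evaluate(expr):
    if isinstance(expr, list):
        op, *args = expr
        return OPS[op](*map(evaluate, args))
    return expr  # a number evaluates to itself

print(evaluate(read("(+ 1 (* 2 3))")))  # 7
```

Note that `read` returns an ordinary nested list, which the next paragraph's point makes concrete: the program text becomes a data structure that other programs can inspect and transform.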
Other key innovations pioneered by LISP that have spread to other programming languages include: Programs were themselves data structures that other programs could operate on, allowing the easy definition of higher-level languages. In contrast to the US, in Europe the key AI programming language during that same period was Prolog. Prolog provided a built-in store of facts and clauses that could be queried by a read-eval-print loop. The store could act as a knowledge base and the clauses could act as rules or a restricted form of logic. As a subset of first-order logic, Prolog was based on Horn clauses with a closed-world assumption – any facts not known were considered false – and a unique name assumption for primitive terms – e.g., the identifier barack_obama was considered to refer to exactly one object. Backtracking and unification are built into Prolog. Alain Colmerauer and Philippe Roussel are credited as the inventors of Prolog. Prolog is a form of logic programming, which was invented by Robert Kowalski. Its history was also influenced by Carl Hewitt's PLANNER, an assertional database with pattern-directed invocation of methods. For more detail see the section on the origins of Prolog in the PLANNER article. Prolog is also a kind of declarative programming. The logic clauses that describe programs are directly interpreted to run the programs specified. No explicit series of actions is required, as is the case with imperative programming languages. Japan championed Prolog for its Fifth Generation Project, intending to build special hardware for high performance. Similarly, LISP machines were built to run LISP, but as the second AI boom turned to bust these companies could not compete with new workstations that could now run LISP or Prolog natively at comparable speeds. See the history section for more detail. Smalltalk was another influential AI programming language.
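The unification built into Prolog can be sketched as follows. A minimal version (omitting the occurs check that a full implementation needs), with variables represented as uppercase-initial strings and compound terms as tuples:

```python
# Unification: find a substitution that makes two terms equal,
# or report failure (None) if they clash.

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    """Follow variable bindings to their current value."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst=None):
    subst = dict(subst or {})
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        subst[a] = b
        return subst
    if is_var(b):
        subst[b] = a
        return subst
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None  # clash: distinct constants or functors cannot be unified

# parent(X, bob) unifies with parent(alice, Y):
print(unify(("parent", "X", "bob"), ("parent", "alice", "Y")))
# {'X': 'alice', 'Y': 'bob'}
```

In Prolog, this substitution is exactly what a query like `?- parent(X, bob), parent(alice, Y).` would compute against matching clauses, with backtracking trying alternative clauses when unification fails.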
For example, it introduced metaclasses and, along with Flavors and CommonLoops, influenced the Common Lisp Object System (CLOS), which is now part of Common Lisp, the current standard Lisp dialect. CLOS is a Lisp-based object-oriented system that allows multiple inheritance, in addition to incremental extensions to both classes and metaclasses, thus providing a run-time meta-object protocol.[88] For other AI programming languages see this list of programming languages for artificial intelligence. Currently, Python, a multi-paradigm programming language, is the most popular programming language, partly due to its extensive package library that supports data science, natural language processing, and deep learning. Python includes a read-eval-print loop, functional elements such as higher-order functions, and object-oriented programming that includes metaclasses. Search arises in many kinds of problem solving, including planning, constraint satisfaction, and playing games such as checkers, chess, and go. The best known AI tree search algorithms are breadth-first search, depth-first search, A*, and Monte Carlo tree search. Key search algorithms for Boolean satisfiability are WalkSAT, conflict-driven clause learning, and the DPLL algorithm. For adversarial search when playing games, alpha-beta pruning, branch and bound, and minimax were early contributions. Multiple different approaches to represent knowledge and then reason with those representations have been investigated. Below is a quick overview of approaches to knowledge representation and automated reasoning. Semantic networks, conceptual graphs, frames, and logic are all approaches to modeling knowledge such as domain knowledge, problem-solving knowledge, and the semantic meaning of language. Ontologies model key concepts and their relationships in a domain. Example ontologies are YAGO, WordNet, and DOLCE. DOLCE is an example of an upper ontology that can be used for any domain while WordNet is a lexical resource that can also be viewed as an ontology.
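Of the tree search algorithms just listed, A* can be sketched on a small grid, using Manhattan distance as the admissible heuristic. The grid, walls, and coordinates are invented for illustration:

```python
import heapq

# A* search: expand nodes in order of f = g + h, where g is the cost
# so far and h is an admissible heuristic estimate of the cost to go.

def astar(start, goal, walls, width, height):
    def h(p):  # Manhattan distance never overestimates on a 4-connected grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]  # (f, g, position, path)
    seen = set()
    while frontier:
        f, g, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path
        if pos in seen:
            continue
        seen.add(pos)
        x, y = pos
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < width and 0 <= ny < height and (nx, ny) not in walls:
                heapq.heappush(frontier, (g + 1 + h((nx, ny)), g + 1,
                                          (nx, ny), path + [(nx, ny)]))
    return None  # goal unreachable

path = astar((0, 0), (2, 2), walls={(1, 0), (1, 1)}, width=3, height=3)
print(path)
```

With h ≡ 0 this degenerates into uniform-cost (breadth-first-like) search; the heuristic is what lets A* focus expansion toward the goal.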
YAGO incorporates WordNet as part of its ontology, to align facts extracted from Wikipedia with WordNet synsets. The Disease Ontology is an example of a medical ontology currently being used. Description logic is a logic for automated classification of ontologies and for detecting inconsistent classification data. OWL is a language used to represent ontologies with description logic. Protégé is an ontology editor that can read in OWL ontologies and then check consistency with deductive classifiers such as HermiT.[89] First-order logic is more general than description logic. The automated theorem provers discussed below can prove theorems in first-order logic. Horn clause logic is more restricted than first-order logic and is used in logic programming languages such as Prolog. Extensions to first-order logic include temporal logic, to handle time; epistemic logic, to reason about agent knowledge; modal logic, to handle possibility and necessity; and probabilistic logics, to handle logic and probability together. Examples of automated theorem provers for first-order logic are: Prover9 can be used in conjunction with the Mace4 model checker. ACL2 is a theorem prover that can handle proofs by induction and is a descendant of the Boyer-Moore Theorem Prover, also known as Nqthm. Knowledge-based systems have an explicit knowledge base, typically of rules, to enhance reusability across domains by separating procedural code and domain knowledge. A separate inference engine processes rules and adds, deletes, or modifies a knowledge store. Forward chaining inference engines are the most common, and are seen in CLIPS and OPS5. Backward chaining occurs in Prolog, where a more limited logical representation, Horn clauses, is used. Pattern-matching, specifically unification, is used in Prolog.
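The backward-chaining direction of inference can be sketched over propositional Horn clauses: start from a goal and recurse on the premises of any rule that concludes it. The rules and facts are invented for illustration; real Prolog additionally unifies variables rather than matching whole propositions:

```python
# Backward chaining over propositional Horn clauses:
# conclusion -> list of alternative premise lists.

rules = {
    "diagnose_flu": [["suspect_flu", "test_positive"]],
    "suspect_flu":  [["has_fever", "has_cough"]],
}
facts = {"has_fever", "has_cough", "test_positive"}

def prove(goal):
    """Goal-directed proof: a goal holds if it is a fact, or if some
    rule concluding it has all of its premises provable."""
    if goal in facts:
        return True
    for premises in rules.get(goal, []):
        if all(prove(p) for p in premises):
            return True
    return False

print(prove("diagnose_flu"))  # True
```

In contrast to the forward-chaining engine earlier, no facts are derived that the query does not need: the goal drives which questions get asked.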
This kind of meta-level reasoning is used in Soar and in the BB1 blackboard architecture. Cognitive architectures such as ACT-R may have additional capabilities, such as the ability to compile frequently used knowledge into higher-level chunks. Marvin Minsky first proposed frames as a way of interpreting common visual situations, such as an office, and Roger Schank extended this idea to scripts for common routines, such as dining out. Cyc has attempted to capture useful common-sense knowledge and has "micro-theories" to handle particular kinds of domain-specific reasoning. Qualitative simulation, such as Benjamin Kuipers's QSIM,[90] approximates human reasoning about naive physics, such as what happens when we heat a liquid in a pot on the stove. We expect it to heat and possibly boil over, even though we may not know its temperature, its boiling point, or other details, such as atmospheric pressure. Similarly, Allen's temporal interval algebra is a simplification of reasoning about time and Region Connection Calculus is a simplification of reasoning about spatial relationships. Both can be solved with constraint solvers. Constraint solvers perform a more limited kind of inference than first-order logic. They can simplify sets of spatiotemporal constraints, such as those for RCC or Temporal Algebra, along with solving other kinds of puzzle problems, such as Wordle, Sudoku, cryptarithmetic problems, and so on. Constraint logic programming can be used to solve scheduling problems, for example with constraint handling rules (CHR). The General Problem Solver (GPS) cast planning as problem-solving and used means-ends analysis to create plans. STRIPS took a different approach, viewing planning as theorem proving. Graphplan takes a least-commitment approach to planning, rather than sequentially choosing actions from an initial state (working forwards) or from a goal state (working backwards). Satplan is an approach to planning where a planning problem is reduced to a Boolean satisfiability problem.
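The cryptarithmetic puzzles mentioned above can be attacked with a small search sketch: SEND + MORE = MONEY, with one hand-applied constraint (M must be 1, since the sum of two four-digit numbers is below 20000) standing in for the propagation a real constraint solver would do automatically:

```python
from itertools import permutations

# SEND + MORE = MONEY: assign a distinct digit to each letter so the
# sum holds; S and M may not be zero (no leading zeros).

def solve():
    m = 1  # propagation by hand: the carry into the fifth column forces M = 1
    for s, e, n, d, o, r, y in permutations([0, 2, 3, 4, 5, 6, 7, 8, 9], 7):
        if s == 0:
            continue
        send = 1000 * s + 100 * e + 10 * n + d
        more = 1000 * m + 100 * o + 10 * r + e
        money = 10000 * m + 1000 * o + 100 * n + 10 * e + y
        if send + more == money:
            return {"S": s, "E": e, "N": n, "D": d,
                    "M": m, "O": o, "R": r, "Y": y}

assignment = solve()
print(assignment)  # the classic solution: 9567 + 1085 = 10652
```

A constraint solver would interleave this kind of pruning with search at every column rather than enumerating permutations, which is why it scales where brute force does not.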
Natural language processing focuses on treating language as data to perform tasks such as identifying topics without necessarily understanding the intended meaning. Natural language understanding, in contrast, constructs a meaning representation and uses that for further processing, such as answering questions. Parsing, tokenizing, spelling correction, part-of-speech tagging, and noun and verb phrase chunking are all aspects of natural language processing long handled by symbolic AI, but since improved by deep learning approaches. In symbolic AI, discourse representation theory and first-order logic have been used to represent sentence meanings. Latent semantic analysis (LSA) and explicit semantic analysis also provided vector representations of documents. In the latter case, vector components are interpretable as concepts named by Wikipedia articles. New deep learning approaches based on Transformer models have now eclipsed these earlier symbolic AI approaches and attained state-of-the-art performance in natural language processing. However, Transformer models are opaque and do not yet produce human-interpretable semantic representations for sentences and documents. Instead, they produce task-specific vectors where the meaning of the vector components is opaque. Agents are autonomous systems embedded in an environment they perceive and act upon in some sense.
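The vector-space idea behind LSA-style document representations can be illustrated in its simplest form: raw bag-of-words vectors compared by cosine similarity. This is a deliberate reduction (LSA additionally factors the term-document matrix to find latent dimensions), and the toy documents are invented:

```python
import math
from collections import Counter

# Each document becomes a term-frequency vector; similar documents
# share terms and therefore point in similar directions.

def vectorize(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

doc1 = vectorize("the cat sat on the mat")
doc2 = vectorize("the cat lay on the rug")
doc3 = vectorize("stock prices fell sharply")
print(cosine(doc1, doc2) > cosine(doc1, doc3))  # True: doc2 is closer
```

Unlike Transformer embeddings, every component here is directly interpretable: it is simply a word count, which is the sense in which these earlier representations were transparent.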
Russell and Norvig's standard textbook on artificial intelligence is organized to reflect agent architectures of increasing sophistication.[91] The sophistication of agents varies from simple reactive agents, to those with a model of the world and automated planning capabilities, possibly a BDI agent, i.e., one with beliefs, desires, and intentions – or alternatively a reinforcement learning model learned over time to choose actions – up to a combination of alternative architectures, such as a neuro-symbolic architecture[87] that includes deep learning for perception.[92] In contrast, a multi-agent system consists of multiple agents that communicate amongst themselves with some inter-agent communication language such as Knowledge Query and Manipulation Language (KQML). The agents need not all have the same internal architecture. Advantages of multi-agent systems include the ability to divide work among the agents and to increase fault tolerance when agents are lost. Research problems include how agents reach consensus, distributed problem solving, multi-agent learning, multi-agent planning, and distributed constraint optimization. Controversies arose from early on in symbolic AI, both within the field – e.g., between logicists (the pro-logic "neats") and non-logicists (the anti-logic "scruffies") – and between those who embraced AI but rejected symbolic approaches – primarily connectionists – and those outside the field. Critiques from outside of the field were primarily from philosophers, on intellectual grounds, but also from funding agencies, especially during the two AI winters. Limitations were discovered in using simple first-order logic to reason about dynamic domains. Problems were discovered both with regards to enumerating the preconditions for an action to succeed and in providing axioms for what did not change after an action was performed.
McCarthy and Hayes introduced the Frame Problem in 1969 in the paper, "Some Philosophical Problems from the Standpoint of Artificial Intelligence."[93] A simple example occurs in "proving that one person could get into conversation with another", as an axiom asserting "if a person has a telephone he still has it after looking up a number in the telephone book" would be required for the deduction to succeed. Similar axioms would be required for other domain actions to specify what did not change. A similar problem, called the Qualification Problem, occurs in trying to enumerate the preconditions for an action to succeed. An infinite number of pathological conditions can be imagined, e.g., a banana in a tailpipe could prevent a car from operating correctly. McCarthy's approach to fix the frame problem was circumscription, a kind of non-monotonic logic where deductions could be made from actions that need only specify what would change while not having to explicitly specify everything that would not change. Other non-monotonic logics provided truth maintenance systems that revised beliefs leading to contradictions. Other ways of handling more open-ended domains included probabilistic reasoning systems and machine learning to learn new concepts and rules. McCarthy's Advice Taker can be viewed as an inspiration here, as it could incorporate new knowledge provided by a human in the form of assertions or rules. For example, experimental symbolic machine learning systems explored the ability to take high-level natural language advice and to interpret it into domain-specific actionable rules. Similar to the problems in handling dynamic domains, common-sense reasoning is also difficult to capture in formal reasoning. Examples of common-sense reasoning include implicit reasoning about how people think or general knowledge of day-to-day events, objects, and living creatures. This kind of knowledge is taken for granted and not viewed as noteworthy.
Common-sense reasoning is an open area of research and challenging both for symbolic systems (e.g., Cyc has attempted to capture key parts of this knowledge over more than a decade) and neural systems (e.g., self-driving cars that do not know not to drive into cones or not to hit pedestrians walking a bicycle). McCarthy viewed his Advice Taker as having common sense, but his definition of common sense was different than the one above.[94] He defined a program as having common sense "if it automatically deduces for itself a sufficiently wide class of immediate consequences of anything it is told and what it already knows." Connectionist approaches include earlier work on neural networks,[95] such as perceptrons; work in the mid to late 80s, such as Danny Hillis's Connection Machine and Yann LeCun's advances in convolutional neural networks; to today's more advanced approaches, such as Transformers, GANs, and other work in deep learning. Three philosophical positions[96] have been outlined among connectionists: Olazaran, in his sociological history of the controversies within the neural network community, described the moderate connectionism view as essentially compatible with current research in neuro-symbolic hybrids: The third and last position I would like to examine here is what I call the moderate connectionist view, a more eclectic view of the current debate between connectionism and symbolic AI. One of the researchers who has elaborated this position most explicitly is Andy Clark, a philosopher from the School of Cognitive and Computing Sciences of the University of Sussex (Brighton, England). Clark defended hybrid (partly symbolic, partly connectionist) systems. He claimed that (at least) two kinds of theories are needed in order to study and model cognition. On the one hand, for some information-processing tasks (such as pattern recognition) connectionism has advantages over symbolic models.
But on the other hand, for other cognitive processes (such as serial, deductive reasoning, and generative symbol manipulation processes) the symbolic paradigm offers adequate models, and not only "approximations" (contrary to what radical connectionists would claim).[97] Gary Marcus has claimed that the animus in the deep learning community against symbolic approaches now may be more sociological than philosophical: To think that we can simply abandon symbol-manipulation is to suspend disbelief. And yet, for the most part, that's how most current AI proceeds. Hinton and many others have tried hard to banish symbols altogether. The deep learning hope—seemingly grounded not so much in science, but in a sort of historical grudge—is that intelligent behavior will emerge purely from the confluence of massive data and deep learning. Where classical computers and software solve tasks by defining sets of symbol-manipulating rules dedicated to particular jobs, such as editing a line in a word processor or performing a calculation in a spreadsheet, neural networks typically try to solve tasks by statistical approximation and learning from examples. According to Marcus, Geoffrey Hinton and his colleagues have been vehemently "anti-symbolic": When deep learning reemerged in 2012, it was with a kind of take-no-prisoners attitude that has characterized most of the last decade. By 2015, his hostility toward all things symbols had fully crystallized. He gave a talk at an AI workshop at Stanford comparing symbols to aether, one of science's greatest mistakes. ... Since then, his anti-symbolic campaign has only increased in intensity. In 2016, Yann LeCun, Bengio, and Hinton wrote a manifesto for deep learning in one of science's most important journals, Nature. It closed with a direct attack on symbol manipulation, calling not for reconciliation but for outright replacement.
Later, Hinton told a gathering of European Union leaders that investing any further money in symbol-manipulating approaches was "a huge mistake," likening it to investing in internal combustion engines in the era of electric cars.[98] Part of these disputes may be due to unclear terminology: Turing award winner Judea Pearl offers a critique of machine learning which, unfortunately, conflates the terms machine learning and deep learning. Similarly, when Geoffrey Hinton refers to symbolic AI, the connotation of the term tends to be that of expert systems dispossessed of any ability to learn. The use of the terminology is in need of clarification. Machine learning is not confined to association rule mining, c.f. the body of work on symbolic ML and relational learning (the differences to deep learning being the choice of representation, localist logical rather than distributed, and the non-use of gradient-based learning algorithms). Equally, symbolic AI is not just about production rules written by hand. A proper definition of AI concerns knowledge representation and reasoning, autonomous multi-agent systems, planning and argumentation, as well as learning.[99] It is worth noting that, from a theoretical perspective, the boundary of advantages between connectionist AI and symbolic AI may not be as clear-cut as it appears. For instance, Heng Zhang and his colleagues have proved that mainstream knowledge representation formalisms are recursively isomorphic, provided they are universal or have equivalent expressive power.[100] This finding implies that there is no fundamental distinction between using symbolic or connectionist knowledge representation formalisms for the realization of artificial general intelligence (AGI). Moreover, the existence of recursive isomorphisms suggests that different technical approaches can draw insights from one another.
From this perspective, it seems unnecessary to overemphasize the advantages of any single technical school; instead, mutual learning and integration may offer the most promising path toward the realization of AGI. Another critique of symbolic AI is the embodied cognition approach: The embodied cognition approach claims that it makes no sense to consider the brain separately: cognition takes place within a body, which is embedded in an environment. We need to study the system as a whole; the brain's functioning exploits regularities in its environment, including the rest of its body. Under the embodied cognition approach, robotics, vision, and other sensors become central, not peripheral.[101] Rodney Brooks invented behavior-based robotics, one approach to embodied cognition. Nouvelle AI, another name for this approach, is viewed as an alternative to both symbolic AI and connectionist AI. His approach rejected representations, either symbolic or distributed, as not only unnecessary, but as detrimental. Instead, he created the subsumption architecture, a layered architecture for embodied agents. Each layer achieves a different purpose and must function in the real world. For example, the first robot he describes in Intelligence Without Representation has three layers. The bottom layer interprets sonar sensors to avoid objects. The middle layer causes the robot to wander around when there are no obstacles. The top layer causes the robot to go to more distant places for further exploration. Each layer can temporarily inhibit or suppress a lower-level layer.
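The three-layer robot just described can be caricatured in a few lines of Python. This greatly simplifies Brooks's design (his layers are networks of finite state machines running concurrently, not a priority list), and the sensor names and actions are invented:

```python
# A toy subsumption-style controller: each layer maps sensor readings
# to an action or abstains (None); layers are consulted in priority
# order, so a higher-priority layer overrides those below it.

def avoid(sensors):
    """Bottom layer: turn away when the sonar reports an obstacle."""
    return "turn" if sensors["obstacle"] else None

def explore(sensors):
    """Top layer: head toward a distant goal when one is visible."""
    return "go_to_goal" if sensors["goal_visible"] else None

def wander(sensors):
    """Middle layer: move randomly when nothing better applies."""
    return "wander"

def act(sensors, layers=(avoid, explore, wander)):
    # Obstacle avoidance wins outright; exploration suppresses
    # aimless wandering whenever a goal is in view.
    for layer in layers:
        action = layer(sensors)
        if action is not None:
            return action

print(act({"obstacle": False, "goal_visible": True}))   # go_to_goal
print(act({"obstacle": True,  "goal_visible": True}))   # turn
```

Note what is absent: no world model and no plan, just direct sensor-to-action couplings, which is exactly the "intelligence without representation" point.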
He criticized AI researchers for defining AI problems for their systems, when: "There is no clean division between perception (abstraction) and reasoning in the real world."[102] He called his robots "Creatures" and each layer was "composed of a fixed-topology network of simple finite state machines."[103] In the Nouvelle AI approach, "First, it is vitally important to test the Creatures we build in the real world; i.e., in the same world that we humans inhabit. It is disastrous to fall into the temptation of testing them in a simplified world first, even with the best intentions of later transferring activity to an unsimplified world."[104] His emphasis on real-world testing was in contrast to "Early work in AI concentrated on games, geometrical problems, symbolic algebra, theorem proving, and other formal systems"[105] and the use of the blocks world in symbolic AI systems such as SHRDLU. Each approach—symbolic, connectionist, and behavior-based—has advantages, but has been criticized by the other approaches. Symbolic AI has been criticized as disembodied, liable to the qualification problem, and poor in handling the perceptual problems where deep learning excels. In turn, connectionist AI has been criticized as poorly suited for deliberative step-by-step problem solving, incorporating knowledge, and handling planning. Finally, Nouvelle AI excels in reactive and real-world robotics domains but has been criticized for difficulties in incorporating learning and knowledge. Hybrid AIs incorporating one or more of these approaches are currently viewed as the path forward.[19][81][82] Russell and Norvig conclude that: Overall, Dreyfus saw areas where AI did not have complete answers and said that AI is therefore impossible; we now see many of these same areas undergoing continued research and development leading to increased capability, not impossibility.[101]
https://en.wikipedia.org/wiki/Symbolic_artificial_intelligence
An abstract syntax tree (AST) is a data structure used in computer science to represent the structure of a program or code snippet. It is a tree representation of the abstract syntactic structure of text (often source code) written in a formal language. Each node of the tree denotes a construct occurring in the text. It is sometimes called just a syntax tree. The syntax is "abstract" in the sense that it does not represent every detail appearing in the real syntax, but rather just the structural or content-related details. For instance, grouping parentheses are implicit in the tree structure, so these do not have to be represented as separate nodes. Likewise, a syntactic construct like an if-condition-then statement may be denoted by means of a single node with three branches. This distinguishes abstract syntax trees from concrete syntax trees, traditionally designated parse trees. Parse trees are typically built by a parser during the source code translation and compiling process. Once built, additional information is added to the AST by means of subsequent processing, e.g., contextual analysis. Abstract syntax trees are also used in program analysis and program transformation systems. Abstract syntax trees are data structures widely used in compilers to represent the structure of program code. An AST is usually the result of the syntax analysis phase of a compiler. It often serves as an intermediate representation of the program through several stages that the compiler requires, and has a strong impact on the final output of the compiler. An AST has several properties that aid the further steps of the compilation process: Languages are often ambiguous by nature. In order to avoid this ambiguity, programming languages are often specified as a context-free grammar (CFG). However, there are often aspects of programming languages that a CFG can't express, but are part of the language and are documented in its specification.
These are details that require a context to determine their validity and behaviour. For example, if a language allows new types to be declared, a CFG cannot predict the names of such types nor the way in which they should be used. Even if a language has a predefined set of types, enforcing proper usage usually requires some context. Another example is duck typing, where the type of an element can change depending on context. Operator overloading is yet another case where correct usage and final function are context-dependent. The design of an AST is often closely linked with the design of a compiler and its expected features. Core requirements include the following: These requirements can be used to design the data structure for the AST. Some operations will always require two elements, such as the two terms for addition. However, some language constructs require an arbitrarily large number of children, such as argument lists passed to programs from the command shell. As a result, an AST used to represent code written in such a language has to also be flexible enough to allow for quick addition of an unknown quantity of children. To support compiler verification it should be possible to unparse an AST into source code form. The source code produced should be sufficiently similar to the original in appearance and identical in execution, upon recompilation. The AST is used intensively during semantic analysis, where the compiler checks for correct usage of the elements of the program and the language. The compiler also generates symbol tables based on the AST during semantic analysis. A complete traversal of the tree allows verification of the correctness of the program. After verifying correctness, the AST serves as the base for code generation. The AST is often used to generate an intermediate representation (IR), sometimes called an intermediate language, for the code generation.
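As a concrete illustration (using Python's built-in ast module; the parsed snippets are arbitrary examples), grouping parentheses disappear into the tree shape, and an if-construct becomes a single node with test, body, and orelse branches:

```python
import ast

# Parentheses in "(x + 1) * 2" are not stored as nodes; the grouping
# is implicit in the tree structure.
expr = ast.parse("(x + 1) * 2", mode="eval").body
print(type(expr).__name__)       # BinOp (the multiplication)
print(type(expr.left).__name__)  # BinOp (the parenthesized sum)

# An if-statement is one node with three parts: test, body, orelse.
stmt = ast.parse("if x > 0:\n    y = 1\nelse:\n    y = 2").body[0]
print(type(stmt).__name__)               # If
print(len(stmt.body), len(stmt.orelse))  # 1 1
```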
AST differencing, or tree differencing for short, consists of computing the list of differences between two ASTs.[1] This list of differences is typically called an edit script. The edit script directly refers to the AST of the code. For instance, an edit action may result in the addition of a new AST node representing a function. An AST is a powerful abstraction to perform code clone detection.[2]
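A minimal sketch of AST-based clone detection (the normalization strategy and the sample snippets are assumptions for illustration): renaming every identifier to a placeholder makes structurally identical fragments compare equal even when their variable names differ.

```python
import ast

class _Normalize(ast.NodeTransformer):
    """Replace every variable name with '_' so that only structure remains."""
    def visit_Name(self, node):
        return ast.copy_location(ast.Name(id="_", ctx=node.ctx), node)

def fingerprint(source):
    """A structural fingerprint: the dump of the normalized AST."""
    return ast.dump(_Normalize().visit(ast.parse(source)))

a = fingerprint("total = price + tax")
b = fingerprint("s = x + y")
c = fingerprint("s = x * y")
print(a == b)  # True: same structure, different names -> a clone pair
print(a == c)  # False: the operator differs
```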
https://en.wikipedia.org/wiki/Abstract_syntax_tree
A flowchart is a type of diagram that represents a workflow or process. A flowchart can also be defined as a diagrammatic representation of an algorithm, a step-by-step approach to solving a task. The flowchart shows the steps as boxes of various kinds, and their order by connecting the boxes with arrows. This diagrammatic representation illustrates a solution model to a given problem. Flowcharts are used in analyzing, designing, documenting or managing a process or program in various fields.[1] Flowcharts are used to design and document simple processes or programs. Like other types of diagrams, they help visualize the process. Two of the many benefits are that flaws and bottlenecks may become apparent. Flowcharts typically use the following main symbols: A flowchart is described as "cross-functional" when the chart is divided into different vertical or horizontal parts, to describe the control of different organizational units. A symbol appearing in a particular part is within the control of that organizational unit. A cross-functional flowchart allows the author to correctly locate the responsibility for performing an action or making a decision, and to show the responsibility of each organizational unit for different parts of a single process. Flowcharts represent certain aspects of processes and are usually complemented by other types of diagram. For instance, Kaoru Ishikawa defined the flowchart as one of the seven basic tools of quality control, next to the histogram, Pareto chart, check sheet, control chart, cause-and-effect diagram, and the scatter diagram. Similarly, in UML, a standard concept-modeling notation used in software development, the activity diagram, which is a type of flowchart, is just one of many different diagram types. Nassi-Shneiderman diagrams and Drakon-charts are an alternative notation for process flow.
Common alternative names include: flow chart, process flowchart, functional flowchart, process map, process chart, functional process chart, business process model, process model, process flow diagram, work flow diagram, business flow diagram. The terms "flowchart" and "flow chart" are used interchangeably. The underlying graph structure of a flowchart is a flow graph, which abstracts away node types, their contents and other ancillary information. The first structured method for documenting process flow, the "flow process chart", was introduced by Frank and Lillian Gilbreth in the presentation "Process Charts: First Steps in Finding the One Best Way to do Work", to members of the American Society of Mechanical Engineers (ASME) in 1921.[2] The Gilbreths' tools quickly found their way into industrial engineering curricula. In the early 1930s, an industrial engineer, Allan H. Mogensen, began to train business people in the use of some of the tools of industrial engineering at his Work Simplification Conferences in Lake Placid, New York. Art Spinanger, a 1944 graduate of Mogensen's class, took the tools back to Procter and Gamble, where he developed their Deliberate Methods Change Program. Ben S.
Graham, another 1944 graduate, Director of Formcraft Engineering at Standard Register Industrial, applied the flow process chart to information processing with his development of the multi-flow process chart, to present multiple documents and their relationships.[3] In 1947, ASME adopted a symbol set derived from Gilbreth's original work as the "ASME Standard: Operation and Flow Process Charts."[4] Douglas Hartree in 1949 explained that Herman Goldstine and John von Neumann had developed a flowchart (originally, diagram) to plan computer programs.[5] His contemporary account was endorsed by IBM engineers[6] and by Goldstine's personal recollections.[7] The original programming flowcharts of Goldstine and von Neumann can be found in their unpublished report, "Planning and coding of problems for an electronic computing instrument, Part II, Volume 1" (1947), which is reproduced in von Neumann's collected works.[8] The flowchart became a popular tool for describing computer algorithms, but its popularity decreased in the 1970s, when interactive computer terminals and third-generation programming languages became common tools for computer programming, since algorithms can be expressed more concisely as source code in such languages. Often pseudo-code is used, which uses the common idioms of such languages without strictly adhering to the details of a particular one. Also, flowcharts are not well-suited for new programming techniques such as recursive programming. Nevertheless, flowcharts were still used in the early 21st century for describing computer algorithms.[9] Some techniques such as UML activity diagrams and Drakon-charts can be considered to be extensions of the flowchart.
Sterneckert (2003) suggested that flowcharts can be modeled from the perspective of different user groups (such as managers, system analysts and clerks), and that there are four general types:[10] Notice that every type of flowchart focuses on some kind of control, rather than on the particular flow itself.[10] However, there are some different classifications. For example, Andrew Veronis (1978) named three basic types of flowcharts: the system flowchart, the general flowchart, and the detailed flowchart.[11] That same year Marilyn Bohl (1978) stated "in practice, two kinds of flowcharts are used in solution planning: system flowcharts and program flowcharts...".[12] More recently, Mark A. Fryman (2001) identified more differences: "Decision flowcharts, logic flowcharts, systems flowcharts, product flowcharts, and process flowcharts are just a few of the different types of flowcharts that are used in business and government".[13] In addition, many diagram techniques are similar to flowcharts but carry a different name, such as UML activity diagrams. Reversible flowcharts[14] represent a paradigm in computing that focuses on the reversibility of computational processes. Unlike traditional computing models, where operations are often irreversible, reversible flowcharts ensure that any atomic computational step can be reversed. Reversible flowcharts are shown to be as expressive as reversible Turing machines, and are a theoretical foundation for structured reversible programming and energy-efficient reversible computing systems.[15] The American National Standards Institute (ANSI) set standards for flowcharts and their symbols in the 1960s.[16] The International Organization for Standardization (ISO) adopted the ANSI symbols in 1970.[17] The current standard, ISO 5807, was published in 1985 and last reviewed in 2019.[18] Generally, flowcharts flow from top to bottom and left to right.[19] The ANSI/ISO standards include symbols beyond the basic shapes.
Some are:[19][20] For parallel and concurrent processing, the Parallel Mode horizontal lines[21] or a horizontal bar[22] indicate the start or end of a section of processes that can be done independently: Any drawing program can be used to create flowchart diagrams, but these will have no underlying data model to share data with databases or other programs such as project management systems or spreadsheets. Many software packages exist that can create flowcharts automatically, either directly from a programming language source code, or from a flowchart description language. There are several applications and visual programming languages[23] that use flowcharts to represent and execute programs. Generally these are used as teaching tools for beginner students.
https://en.wikipedia.org/wiki/Flowchart
A control-flow diagram (CFD) is a diagram to describe the control flow of a business process, process or review. Control-flow diagrams were developed in the 1950s, and are widely used in multiple engineering disciplines. They are one of the classic business process modeling methodologies, along with flow charts, drakon-charts, data flow diagrams, functional flow block diagrams, Gantt charts, PERT diagrams, and IDEF.[2] A control-flow diagram can consist of a subdivision to show sequential steps, with if-then-else conditions, repetition, and/or case conditions. Suitably annotated geometrical figures are used to represent operations, data, or equipment, and arrows are used to indicate the sequential flow from one to another.[3] There are several types of control-flow diagrams, for example: In software and systems development, control-flow diagrams can be used in control-flow analysis, data-flow analysis, algorithm analysis, and simulation. Control and data are most applicable for real time and data-driven systems. These flow analyses transform logic and data requirements text into graphic flows which are easier to analyze than the text. PERT, state transition, and transaction diagrams are examples of control-flow diagrams.[4] A flow diagram can be developed for the process [control system] for each critical activity. Process control is normally a closed cycle in which a sensor provides information to a control application. The application determines if the sensor information is within the predetermined (or calculated) data parameters and constraints. The result of this comparison controls the critical component. This [feedback] may control the component electronically or may indicate the need for a manual action. This closed-cycle process has many checks and balances to ensure that it stays safe. It may be fully computer controlled and automated, or it may be a hybrid in which only the sensor is automated and the action requires manual intervention.
Further, some process control systems may use prior generations of hardware and software, while others are state of the art. The figure presents an example of a performance-seeking control-flow diagram of the algorithm. The control law consists of estimation, modeling, and optimization processes. In the Kalman filter estimator, the inputs, outputs, and residuals were recorded. At the compact propulsion-system-modeling stage, all the estimated inlet and engine parameters were recorded.[1] In addition to temperatures, pressures, and control positions, such estimated parameters as stall margins, thrust, and drag components were recorded. In the optimization phase, the operating-condition constraints, optimal solution, and linear-programming health-status condition codes were recorded. Finally, the actual commands that were sent to the engine through the DEEC were recorded.[1] This article incorporates public domain material from the National Institute of Standards and Technology.
https://en.wikipedia.org/wiki/Control-flow_diagram
In computer science, control-flow analysis (CFA) is a static-code-analysis technique for determining the control flow of a program. The control flow is expressed as a control-flow graph (CFG). For both functional programming languages and object-oriented programming languages, the term CFA, and elaborations such as k-CFA, refer to specific algorithms that compute control flow. For many imperative programming languages, the control flow of a program is explicit in a program's source code. As a result, interprocedural control-flow analysis usually refers implicitly to a static analysis technique for determining the receivers of function or method calls in computer programs written in a higher-order programming language. For example, in a programming language with higher-order functions like Scheme, the target of a function call may not be explicit: in an isolated expression it is unclear to which procedure f may refer. A control-flow analysis must consider where this expression could be invoked and what argument it may receive to determine the possible targets. Techniques such as abstract interpretation, constraint solving, and type systems may be used for control-flow analysis.[1]
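A small illustration of the problem (Python stands in for Scheme here; the functions are invented for the example). Statically, the call f(x) inside apply has no syntactically evident target; the dynamic trace below shows the set of targets that a control-flow analysis would have to approximate without running the program:

```python
def inc(n):
    return n + 1

def double(n):
    return n * 2

observed_targets = set()

def apply(f, x):
    # Statically, the target of f(x) is unknown; a CFA must compute
    # the set of procedures that can flow into the parameter f.
    observed_targets.add(f.__name__)
    return f(x)

apply(inc, 1)
apply(double, 2)
print(sorted(observed_targets))  # ['double', 'inc']
```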
https://en.wikipedia.org/wiki/Control-flow_analysis
Data-flow analysis is a technique for gathering information about the possible set of values calculated at various points in a computer program. It forms the foundation for a wide variety of compiler optimizations and program verification techniques. A program's control-flow graph (CFG) is used to determine those parts of a program to which a particular value assigned to a variable might propagate. The information gathered is often used by compilers when optimizing a program. A canonical example of a data-flow analysis is reaching definitions. Other commonly used data-flow analyses include live variable analysis, available expressions, constant propagation, and very busy expressions, each serving a distinct purpose in compiler optimization passes. A simple way to perform data-flow analysis of programs is to set up data-flow equations for each node of the control-flow graph and solve them by repeatedly calculating the output from the input locally at each node until the whole system stabilizes, i.e., it reaches a fixpoint. The efficiency and precision of this process are significantly influenced by the design of the data-flow framework, including the direction of analysis (forward or backward), the domain of values, and the join operation used to merge information from multiple control paths. This general approach, also known as Kildall's method, was developed by Gary Kildall while teaching at the Naval Postgraduate School.[1][2][3][4][5][6][7][8] Data-flow analysis is the process of collecting information about the way the variables are defined and used in the program. It attempts to obtain particular information at each point in a procedure. Usually, it is enough to obtain this information at the boundaries of basic blocks, since from that it is easy to compute the information at points in the basic block. In forward flow analysis, the exit state of a block is a function of the block's entry state. This function is the composition of the effects of the statements in the block.
The entry state of a block is a function of the exit states of its predecessors. This yields a set of data-flow equations. For each block b:

out_b = trans_b(in_b)
in_b = join_{p ∈ pred_b}(out_p)

In this, trans_b is the transfer function of the block b. It works on the entry state in_b, yielding the exit state out_b. The join operation join combines the exit states of the predecessors p ∈ pred_b of b, yielding the entry state of b. After solving this set of equations, the entry and/or exit states of the blocks can be used to derive properties of the program at the block boundaries. The transfer function of each statement separately can be applied to get information at a point inside a basic block. Each particular type of data-flow analysis has its own specific transfer function and join operation. Some data-flow problems require backward flow analysis. This follows the same plan, except that the transfer function is applied to the exit state yielding the entry state, and the join operation works on the entry states of the successors to yield the exit state. The entry point (in forward flow) plays an important role: Since it has no predecessors, its entry state is well defined at the start of the analysis. For instance, the set of local variables with known values is empty. If the control-flow graph does not contain cycles (there were no explicit or implicit loops in the procedure) solving the equations is straightforward. The control-flow graph can then be topologically sorted; running in the order of this sort, the entry states can be computed at the start of each block, since all predecessors of that block have already been processed, so their exit states are available. If the control-flow graph does contain cycles, a more advanced algorithm is required. The most common way of solving the data-flow equations is by using an iterative algorithm.
It starts with an approximation of the in-state of each block. The out-states are then computed by applying the transfer functions on the in-states. From these, the in-states are updated by applying the join operations. The latter two steps are repeated until we reach the so-called fixpoint: the situation in which the in-states (and the out-states in consequence) do not change. A basic algorithm for solving data-flow equations is the round-robin iterative algorithm: To be usable, the iterative approach should actually reach a fixpoint. This can be guaranteed by imposing constraints on the combination of the value domain of the states, the transfer functions and the join operation. The value domain should be a partial order with finite height (i.e., there are no infinite ascending chains x_1 < x_2 < ...). The combination of the transfer function and the join operation should be monotonic with respect to this partial order. Monotonicity ensures that on each iteration the value will either stay the same or will grow larger, while finite height ensures that it cannot grow indefinitely. Thus we will ultimately reach a situation where T(x) = x for all x, which is the fixpoint. It is easy to improve on the algorithm above by noticing that the in-state of a block will not change if the out-states of its predecessors don't change. Therefore, we introduce a work list: a list of blocks that still need to be processed. Whenever the out-state of a block changes, we add its successors to the work list. In each iteration, a block is removed from the work list. Its out-state is computed. If the out-state changed, the block's successors are added to the work list. For efficiency, a block should not be in the work list more than once. The algorithm is started by putting information-generating blocks in the work list. It terminates when the work list is empty.
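The worklist scheme just described can be sketched generically for a forward analysis. The graph, gen/kill sets, and block names in the usage example are invented for illustration:

```python
from collections import deque

def solve_forward(blocks, preds, succs, transfer, join, init):
    """Worklist solver for forward data-flow equations:
    in[b] = join of out[p] over predecessors p; out[b] = transfer(b, in[b])."""
    in_state = {b: set(init) for b in blocks}
    out_state = {b: set(init) for b in blocks}
    work = deque(blocks)
    while work:
        b = work.popleft()
        in_state[b] = join([out_state[p] for p in preds[b]])
        new_out = transfer(b, in_state[b])
        if new_out != out_state[b]:
            out_state[b] = new_out
            # Re-examine successors whose input just changed.
            work.extend(s for s in succs[b] if s not in work)
    return in_state, out_state

# Example: reaching definitions on a tiny graph where b2 loops on itself.
blocks = ["b1", "b2"]
preds = {"b1": [], "b2": ["b1", "b2"]}
succs = {"b1": ["b2"], "b2": ["b2"]}
gen = {"b1": {"d1"}, "b2": {"d2"}}
kill = {"b1": {"d2"}, "b2": {"d1"}}

ins, outs = solve_forward(
    blocks, preds, succs,
    transfer=lambda b, s: gen[b] | (s - kill[b]),
    join=lambda states: set().union(*states),
    init=set(),
)
print(sorted(ins["b2"]), sorted(outs["b2"]))  # ['d1', 'd2'] ['d2']
```

Both d1 and d2 reach the top of b2 (d1 from b1, d2 around the loop), but only d2 survives the block, since b2 kills d1.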
The efficiency of iteratively solving data-flow equations is influenced by the order in which local nodes are visited.[9] Furthermore, it depends on whether the data-flow equations are used for forward or backward data-flow analysis over the CFG. Intuitively, in a forward flow problem, it would be fastest if all predecessors of a block have been processed before the block itself, since then the iteration will use the latest information. In the absence of loops it is possible to order the blocks in such a way that the correct out-states are computed by processing each block only once. In the following, a few iteration orders for solving data-flow equations are discussed (a related concept to iteration order of a CFG is tree traversal of a tree). The initial value of the in-states is important to obtain correct and accurate results. If the results are used for compiler optimizations, they should provide conservative information, i.e. when applying the information, the program should not change semantics. The iteration of the fixpoint algorithm will take the values in the direction of the maximum element. Initializing all blocks with the maximum element is therefore not useful. At least one block starts in a state with a value less than the maximum. The details depend on the data-flow problem. If the minimum element represents totally conservative information, the results can be used safely even during the data-flow iteration. If it represents the most accurate information, fixpoint should be reached before the results can be applied. The following are examples of properties of computer programs that can be calculated by data-flow analysis. Note that the properties calculated by data-flow analysis are typically only approximations of the real properties. This is because data-flow analysis operates on the syntactical structure of the CFG without simulating the exact control flow of the program.
However, to still be useful in practice, a data-flow analysis algorithm is typically designed to calculate an upper or lower approximation of the real program properties. The reaching definition analysis calculates for each program point the set of definitions that may potentially reach this program point. The reaching definition of variable a at line 7 is the set of assignments a = 5 at line 2 and a = 3 at line 4. The live variable analysis calculates for each program point the variables that may be potentially read afterwards before their next write update. The result is typically used by dead code elimination to remove statements that assign to a variable whose value is not used afterwards. The in-state of a block is the set of variables that are live at the start of it. It initially contains all variables live (contained) in the block, before the transfer function is applied and the actual contained values are computed. The transfer function of a statement is applied by killing the variables that are written within this block (remove them from the set of live variables). The out-state of a block is the set of variables that are live at the end of the block and is computed by the union of the block's successors' in-states. Initial code: Backward analysis: The in-state of b3 only contains b and d, since c has been written. The out-state of b1 is the union of the in-states of b2 and b3. The definition of c in b2 can be removed, since c is not live immediately after the statement. Solving the data-flow equations starts with initializing all in-states and out-states to the empty set. The work list is initialized by inserting the exit point (b3) in the work list (typical for backward flow). Its computed in-state differs from the previous one, so its predecessors b1 and b2 are inserted and the process continues. The progress is summarized in the table below. Note that b1 was entered in the list before b2, which forced processing b1 twice (b1 was re-entered as predecessor of b2).
Inserting b2 before b1 would have allowed earlier completion. Initializing with the empty set is an optimistic initialization: all variables start out as dead. Note that the out-states cannot shrink from one iteration to the next, although the out-state can be smaller than the in-state. This can be seen from the fact that after the first iteration the out-state can only change by a change of the in-state. Since the in-state starts as the empty set, it can only grow in further iterations. Several modern compilers use static single-assignment form as the method for analysis of variable dependencies.[10] In 2002, Markus Mohnen described a new method of data-flow analysis that does not require the explicit construction of a data-flow graph,[11] instead relying on abstract interpretation of the program and keeping a working set of program counters. At each conditional branch, both targets are added to the working set. Each path is followed for as many instructions as possible (until end of program or until it has looped with no changes), and then removed from the set and the next program counter retrieved. A combination of control-flow analysis and data-flow analysis has shown to be useful and complementary in identifying cohesive source code regions implementing functionalities of a system (e.g., features, requirements or use cases).[12] There are a variety of special classes of dataflow problems which have efficient or general solutions. The examples above are problems in which the data-flow value is a set, e.g. the set of reaching definitions (using a bit for a definition position in the program), or the set of live variables. These sets can be represented efficiently as bit vectors, in which each bit represents set membership of one particular element. Using this representation, the join and transfer functions can be implemented as bitwise logical operations. The join operation is typically union or intersection, implemented by bitwise logical or and logical and.
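The b1/b2/b3 walk-through above can be reproduced with a round-robin backward liveness pass. The concrete use/def sets below are assumptions chosen to match the described outcome (b1 branches to b2 and b3, and b2 falls through to b3):

```python
# Backward liveness: in[b] = use[b] ∪ (out[b] − def[b]),
# out[b] = union of in[s] over successors s.
use  = {"b1": set(),                "b2": {"a", "b"}, "b3": {"b", "d"}}
defs = {"b1": {"a", "b", "d", "x"}, "b2": {"c", "d"}, "b3": {"c"}}
succ = {"b1": ["b2", "b3"],         "b2": ["b3"],     "b3": []}

live_in = {b: set() for b in succ}
live_out = {b: set() for b in succ}
changed = True
while changed:  # round-robin iteration until a fixpoint is reached
    changed = False
    for b in ["b3", "b2", "b1"]:  # reverse order converges faster here
        out = set().union(*(live_in[s] for s in succ[b]))
        new_in = use[b] | (out - defs[b])
        if out != live_out[b] or new_in != live_in[b]:
            changed = True
        live_out[b], live_in[b] = out, new_in

print(sorted(live_in["b3"]))  # ['b', 'd']
print("c" in live_out["b2"])  # False: the definition of c in b2 is dead
```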
The transfer function for each block can be decomposed into so-called gen and kill sets. As an example, in live-variable analysis, the join operation is union. The kill set is the set of variables that are written in a block, whereas the gen set is the set of variables that are read without being written first. The data-flow equations become

out(b) = union of in(s) over the successors s of b
in(b) = gen(b) ∪ (out(b) − kill(b))

In logical operations, this reads as

out(b) = OR of in(s) over the successors s of b
in(b) = gen(b) OR (out(b) AND NOT kill(b))

Dataflow problems which have sets of data-flow values which can be represented as bit vectors are called bit vector problems, gen-kill problems, or locally separable problems.[13] Such problems have generic polynomial-time solutions.[14] In addition to the reaching definitions and live variables problems mentioned above, the following problems are instances of bitvector problems:[14] Interprocedural, finite, distributive, subset problems or IFDS problems are another class of problem with a generic polynomial-time solution.[13][15] Solutions to these problems provide context-sensitive and flow-sensitive dataflow analyses. There are several implementations of IFDS-based dataflow analyses for popular programming languages, e.g. in the Soot[16] and WALA[17] frameworks for Java analysis. Every bitvector problem is also an IFDS problem, but there are several significant IFDS problems that are not bitvector problems, including truly-live variables and possibly-uninitialized variables. Data-flow analysis is typically path-insensitive, though it is possible to define data-flow equations that yield a path-sensitive analysis.
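With bit vectors, the liveness transfer function becomes a handful of machine-word operations. Here Python integers stand in for bit vectors, and the bit assignments are illustrative:

```python
# Bit i represents variable i: bit 0 = a, bit 1 = b, bit 2 = c.
A, B, C = 1 << 0, 1 << 1, 1 << 2

# For a block that reads a and b, then writes c (e.g. "c = a + b"):
GEN = A | B   # variables read before being written
KILL = C      # variables written in the block

def live_in(live_out):
    # in = gen ∪ (out − kill), as bitwise ops: gen | (out & ~kill)
    return GEN | (live_out & ~KILL)

# If only c is live after the block, then a and b (but not c) are
# live before it:
print(bin(live_in(C)))  # 0b11
```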
https://en.wikipedia.org/wiki/Data-flow_analysis
This is a glossary of graph theory. Graph theory is the study of graphs, systems of nodes or vertices connected in pairs by lines or edges.
https://en.wikipedia.org/wiki/Interval_(graph_theory)
A program dependence graph (PDG) is a directed graph of a program's control and data dependencies. Nodes represent program statements and edges represent dependencies between these statements. PDGs are used in optimization, debugging, and understanding program behavior. One example of this is their utilization by compilers during dependence analysis, enabling the optimizing compiler to make transformations to allow for parallelism.[1][2]
https://en.wikipedia.org/wiki/Program_dependence_graph
In compiler design, static single assignment form (often abbreviated as SSA form or simply SSA) is a type of intermediate representation (IR) where each variable is assigned exactly once. SSA is used in most high-quality optimizing compilers for imperative languages, including LLVM, the GNU Compiler Collection, and many commercial compilers. There are efficient algorithms for converting programs into SSA form. To convert to SSA, existing variables in the original IR are split into versions, new variables typically indicated by the original name with a subscript, so that every definition gets its own version. Additional statements that assign to new versions of variables may also need to be introduced at the join point of two control flow paths. Converting from SSA form to machine code is also efficient. SSA makes numerous analyses needed for optimizations easier to perform, such as determining use-define chains, because when looking at a use of a variable there is only one place where that variable may have received a value. Most optimizations can be adapted to preserve SSA form, so that one optimization can be performed after another with no additional analysis. SSA-based optimizations are usually more efficient and more powerful than their non-SSA equivalents. In compilers for functional languages, such as those for Scheme and ML, continuation-passing style (CPS) is generally used. SSA is formally equivalent to a well-behaved subset of CPS excluding non-local control flow, so optimizations and transformations formulated in terms of one generally apply to the other. Using CPS as the intermediate representation is more natural for higher-order functions and interprocedural analysis. CPS also easily encodes call/cc, whereas SSA does not.[1] SSA was developed in the 1980s by several researchers at IBM.
Kenneth Zadeck, a key member of the team, moved to Brown University as development continued.[2][3] A 1986 paper introduced birthpoints, identity assignments, and variable renaming such that variables had a single static assignment.[4] A subsequent 1987 paper by Jeanne Ferrante and Ronald Cytron[5] proved that the renaming done in the previous paper removes all false dependencies for scalars.[3] In 1988, Barry Rosen, Mark N. Wegman, and Kenneth Zadeck replaced the identity assignments with Φ-functions, introduced the name "static single-assignment form", and demonstrated a now-common SSA optimization.[6] The name Φ-function was chosen by Rosen to be a more publishable version of "phony function".[3] Alpern, Wegman, and Zadeck presented another optimization, but using the name "static single assignment".[7] Finally, in 1989, Rosen, Wegman, Zadeck, Cytron, and Ferrante found an efficient means of converting programs to SSA form.[8]

The primary usefulness of SSA comes from how it simultaneously simplifies and improves the results of a variety of compiler optimizations, by simplifying the properties of variables. For example, consider this piece of code:

    y := 1
    y := 2
    x := y

Humans can see that the first assignment is not necessary, and that the value of y being used in the third line comes from the second assignment of y. A program would have to perform reaching definition analysis to determine this. But if the program is in SSA form, both of these are immediate:

    y1 := 1
    y2 := 2
    x1 := y2

Compiler optimization algorithms that are either enabled or strongly enhanced by the use of SSA include: Converting ordinary code into SSA form is primarily a matter of replacing the target of each assignment with a new variable, and replacing each use of a variable with the "version" of the variable reaching that point. For example, consider the following control-flow graph: Changing the name on the left hand side of "x ← x - 3" and changing the following uses of x to that new name would leave the program unaltered.
This can be exploited in SSA by creating two new variables: x1 and x2, each of which is assigned only once. Likewise, giving distinguishing subscripts to all the other variables yields: It is clear which definition each use is referring to, except for one case: both uses of y in the bottom block could be referring to either y1 or y2, depending on which path the control flow took. To resolve this, a special statement is inserted in the last block, called a Φ (Phi) function. This statement will generate a new definition of y called y3 by "choosing" either y1 or y2, depending on the control flow in the past. Now, the last block can simply use y3, and the correct value will be obtained either way. A Φ function for x is not needed: only one version of x, namely x2, is reaching this place, so there is no problem (in other words, Φ(x2, x2) = x2). Given an arbitrary control-flow graph, it can be difficult to tell where to insert Φ functions, and for which variables. This general question has an efficient solution that can be computed using a concept called dominance frontiers (see below). Φ functions are not implemented as machine operations on most machines. A compiler can implement a Φ function by inserting "move" operations at the end of every predecessor block. In the example above, the compiler might insert a move from y1 to y3 at the end of the middle-left block and a move from y2 to y3 at the end of the middle-right block. These move operations might not end up in the final code based on the compiler's register allocation procedure. However, this approach may not work when simultaneous operations are speculatively producing inputs to a Φ function, as can happen on wide-issue machines. Typically, a wide-issue machine has a selection instruction used in such situations by the compiler to implement the Φ function. In a control-flow graph, a node A is said to strictly dominate a different node B if it is impossible to reach B without passing through A first.
In other words, if node B is reached, then it can be assumed that A has run. A is said to dominate B (or B to be dominated by A) if either A strictly dominates B or A = B. A node which transfers control to a node A is called an immediate predecessor of A. The dominance frontier of node A is the set of nodes B where A does not strictly dominate B, but does dominate some immediate predecessor of B. These are the points at which multiple control paths merge back together into a single path. For example, in the following code (a minimal reconstruction consistent with the description below):

    [1] if condition then
    [2]     result := a
        else
    [3]     result := b
    [4] print(result)

Node 1 strictly dominates 2, 3, and 4 and the immediate predecessors of node 4 are nodes 2 and 3. Dominance frontiers define the points at which Φ functions are needed. In the above example, when control is passed to node 4, the definition of result used depends on whether control was passed from node 2 or 3. Φ functions are not needed for variables defined in a dominator, as there is only one possible definition that can apply. There is an efficient algorithm for finding dominance frontiers of each node. This algorithm was originally described in "Efficiently Computing Static Single Assignment Form and the Control Dependence Graph" by Ron Cytron, Jeanne Ferrante, et al. in 1991.[10] Keith D. Cooper, Timothy J. Harvey, and Ken Kennedy of Rice University describe an algorithm in their paper titled A Simple, Fast Dominance Algorithm.[11] In that algorithm, idom(b) is the immediate dominator of b, the unique node that strictly dominates b but does not strictly dominate any other node that strictly dominates b. "Minimal" SSA inserts the minimal number of Φ functions required to ensure that each name is assigned a value exactly once and that each reference (use) of a name in the original program can still refer to a unique name. (The latter requirement is needed to ensure that the compiler can write down a name for each operand in each operation.) However, some of these Φ functions could be dead.
For this reason, minimal SSA does not necessarily produce the fewest Φ functions that are needed by a specific procedure. For some types of analysis, these Φ functions are superfluous and can cause the analysis to run less efficiently. Pruned SSA form is based on a simple observation: Φ functions are only needed for variables that are "live" after the Φ function. (Here, "live" means that the value is used along some path that begins at the Φ function in question.) If a variable is not live, the result of the Φ function cannot be used and the assignment by the Φ function is dead. Construction of pruned SSA form uses live-variable information in the Φ function insertion phase to decide whether a given Φ function is needed. If the original variable name isn't live at the Φ function insertion point, the Φ function isn't inserted. Another possibility is to treat pruning as a dead-code elimination problem. Then, a Φ function is live only if any use in the input program will be rewritten to it, or if it will be used as an argument in another Φ function. When entering SSA form, each use is rewritten to the nearest definition that dominates it. A Φ function will then be considered live as long as it is the nearest definition that dominates at least one use, or at least one argument of a live Φ. Semi-pruned SSA form[12] is an attempt to reduce the number of Φ functions without incurring the relatively high cost of computing live-variable information. It is based on the following observation: if a variable is never live upon entry into a basic block, it never needs a Φ function. During SSA construction, Φ functions for any "block-local" variables are omitted. Computing the set of block-local variables is a simpler and faster procedure than full live-variable analysis, making semi-pruned SSA form more efficient to compute than pruned SSA form. On the other hand, semi-pruned SSA form will contain more Φ functions.
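The dominance-frontier computation by Cooper, Harvey, and Kennedy referenced earlier can be sketched as follows. This is a sketch under the assumption that the immediate-dominator map `idom` and the predecessor map have already been computed; the node names and example graph are illustrative.

```python
# A sketch of the dominance-frontier pass from "A Simple, Fast Dominance
# Algorithm". preds maps a node to its immediate predecessors, and idom
# maps a node to its immediate dominator; both are assumed precomputed.

def dominance_frontiers(nodes, preds, idom):
    df = {n: set() for n in nodes}
    for b in nodes:
        if len(preds.get(b, [])) >= 2:  # only join points contribute
            for p in preds[b]:
                runner = p
                # Walk up the dominator tree from each predecessor; every
                # node visited before reaching idom(b) has b in its frontier.
                while runner != idom[b]:
                    df[runner].add(b)
                    runner = idom[runner]
    return df

# Diamond CFG matching the earlier example: 1 -> 2, 1 -> 3, 2 -> 4, 3 -> 4.
nodes = [1, 2, 3, 4]
preds = {2: [1], 3: [1], 4: [2, 3]}
idom = {1: 1, 2: 1, 3: 1, 4: 1}
df = dominance_frontiers(nodes, preds, idom)  # node 4 is in DF(2) and DF(3)
```

On this diamond-shaped graph, node 4 lies in the dominance frontier of nodes 2 and 3, which is exactly where a Φ function for a variable defined in both branches would be placed.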
Block arguments are an alternative to Φ functions that is representationally identical but in practice can be more convenient during optimization. Blocks are named and take a list of block arguments, notated as function parameters. When calling a block, the block arguments are bound to specified values. MLton, Swift SIL, and LLVM's MLIR use block arguments.[13] SSA form is not normally used for direct execution (although it is possible to interpret SSA[14]), and it is frequently used "on top of" another IR with which it remains in direct correspondence. This can be accomplished by "constructing" SSA as a set of functions that map between parts of the existing IR (basic blocks, instructions, operands, etc.) and its SSA counterpart. When the SSA form is no longer needed, these mapping functions may be discarded, leaving only the now-optimized IR. Performing optimizations on SSA form usually leads to entangled SSA webs, meaning there are Φ instructions whose operands do not all have the same root operand. In such cases, color-out algorithms are used to come out of SSA. Naive algorithms introduce a copy along each predecessor path where a source of the Φ has a different root symbol than the Φ's destination. There are multiple algorithms for coming out of SSA with fewer copies; most use interference graphs or some approximation of them to do copy coalescing.[15] Extensions to SSA form can be divided into two categories. Renaming scheme extensions alter the renaming criterion. Recall that SSA form renames each variable when it is assigned a value. Alternative schemes include static single use form (which renames each variable at each statement when it is used) and static single information form (which renames each variable when it is assigned a value, and at the post-dominance frontier). Feature-specific extensions retain the single assignment property for variables, but incorporate new semantics to model additional features.
Some feature-specific extensions model high-level programming language features like arrays, objects and aliased pointers. Other feature-specific extensions model low-level architectural features like speculation and predication.
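The naive out-of-SSA strategy described earlier, inserting a copy at the end of each predecessor block for every Φ operand, can be sketched as follows. The block names, statements, and Φ encoding are all invented for this sketch.

```python
# A sketch of naive out-of-SSA translation: each Φ "dst = Φ(pred -> src)"
# becomes a copy "dst = src" at the end of the corresponding predecessor
# block. Block names, statements, and the Φ encoding are illustrative.

def eliminate_phis(blocks, phis):
    """blocks: {name: [stmts]}; phis: {block: [(dst, {pred: src})]}."""
    for _block, phi_list in phis.items():
        for dst, sources in phi_list:
            for pred, src in sources.items():
                blocks[pred].append(f"{dst} = {src}")  # copy in predecessor
    return blocks

blocks = {
    "left":  ["y1 = 1"],
    "right": ["y2 = 2"],
    "join":  ["print(y3)"],
}
# The join block conceptually holds: y3 = Φ(left -> y1, right -> y2)
phis = {"join": [("y3", {"left": "y1", "right": "y2"})]}
blocks = eliminate_phis(blocks, phis)
```

After the pass, each predecessor ends with a move into `y3`, so the join block can use `y3` directly; a real compiler would then rely on copy coalescing or register allocation to remove redundant moves.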
https://en.wikipedia.org/wiki/Static_single_assignment
In computing, a compiler is a computer program that translates computer code written in one programming language (the source language) into another language (the target language). The name "compiler" is primarily used for programs that translate source code from a high-level programming language to a low-level programming language (e.g. assembly language, object code, or machine code) to create an executable program.[1][2]: p1[3] There are many different types of compilers which produce output in different useful forms. A cross-compiler produces code for a different CPU or operating system than the one on which the cross-compiler itself runs. A bootstrap compiler is often a temporary compiler, used for compiling a more permanent or better optimised compiler for a language. Related software includes decompilers, programs that translate from low-level languages to higher level ones; programs that translate between high-level languages, usually called source-to-source compilers or transpilers; language rewriters, usually programs that translate the form of expressions without a change of language; and compiler-compilers, compilers that produce compilers (or parts of them), often in a generic and reusable way so as to be able to produce many differing compilers. A compiler is likely to perform some or all of the following operations, often called phases: preprocessing, lexical analysis, parsing, semantic analysis (syntax-directed translation), conversion of input programs to an intermediate representation, code optimization and machine specific code generation. Compilers generally implement these phases as modular components, promoting efficient design and correctness of transformations of source input to target output.
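A few of the phases listed above (lexical analysis, parsing, and code generation) can be illustrated with a toy compiler for arithmetic expressions. The token set, AST encoding, and stack-machine target are all invented for this sketch.

```python
# A toy front end and back end: lexical analysis, recursive-descent parsing
# into an AST, and code generation for an invented stack machine.

import re

def lex(src):
    """Lexical analysis: break the character stream into tokens."""
    return re.findall(r"\d+|[+*()]", src)

def parse(tokens):
    """Syntax analysis: build an AST, giving * precedence over +."""
    pos = 0
    def peek():
        return tokens[pos] if pos < len(tokens) else None
    def eat():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        return tok
    def factor():
        if peek() == "(":
            eat()                    # "("
            node = expr()
            eat()                    # ")"
            return node
        return ("num", int(eat()))
    def term():
        node = factor()
        while peek() == "*":
            eat()
            node = ("*", node, factor())
        return node
    def expr():
        node = term()
        while peek() == "+":
            eat()
            node = ("+", node, term())
        return node
    return expr()

def codegen(node, out):
    """Code generation: emit postfix instructions for a stack machine."""
    if node[0] == "num":
        out.append(("push", node[1]))
    else:
        codegen(node[1], out)
        codegen(node[2], out)
        out.append(("add",) if node[0] == "+" else ("mul",))
    return out

code = codegen(parse(lex("2+3*4")), [])  # push 2, push 3, push 4, mul, add
```

Real compilers interpose semantic analysis, an intermediate representation, and optimization between the parser and the code generator, but the pipeline shape is the same.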
Program faults caused by incorrect compiler behavior can be very difficult to track down and work around; therefore, compiler implementers invest significant effort to ensure compiler correctness.[4] With respect to making source code runnable, an interpreter provides a similar function as a compiler, but via a different mechanism. An interpreter executes code without converting it to machine code.[2]: p2 Some interpreters execute source code while others execute an intermediate form such as bytecode. A program compiled to native code tends to run faster than if interpreted. Environments with a bytecode intermediate form tend toward intermediate speed. Just-in-time compilation allows for native execution speed with a one-time startup processing time cost. Low-level programming languages, such as assembly and C, are typically compiled, especially when speed is a significant concern, rather than cross-platform support. For such languages, there are more one-to-one correspondences between the source code and the resulting machine code, making it easier for programmers to control the use of hardware. In theory, a programming language can be used via either a compiler or an interpreter, but in practice, each language tends to be used with only one or the other. Nonetheless, it is possible to write a compiler for a language that is commonly interpreted. For example, Common Lisp can be compiled to Java bytecode (then interpreted by the Java virtual machine), C code (then compiled to native machine code), or directly to native code.
In the late 1940s, assembly languages were created to offer a more workable abstraction of the computer architectures.[5] Limited memory capacity of early computers led to substantial technical challenges when the first compilers were designed. Therefore, the compilation process needed to be divided into several small programs. The front end programs produce the analysis products used by the back end programs to generate target code. As computer technology provided more resources, compiler designs could align better with the compilation process. It is usually more productive for a programmer to use a high-level language, so the development of high-level languages followed naturally from the capabilities offered by digital computers. High-level languages are formal languages that are strictly defined by their syntax and semantics which form the high-level language architecture. Elements of these formal languages include: The sentences in a language may be defined by a set of rules called a grammar.[6] Backus–Naur form (BNF) describes the syntax of "sentences" of a language. It was developed by John Backus and used for the syntax of Algol 60.[7] The ideas derive from the context-free grammar concepts by linguist Noam Chomsky.[8] "BNF and its extensions have become standard tools for describing the syntax of programming notations. In many cases, parts of compilers are generated automatically from a BNF description."[9] Between 1942 and 1945, Konrad Zuse designed the first (algorithmic) programming language for computers called Plankalkül ("Plan Calculus"). Zuse also envisioned a Planfertigungsgerät ("Plan assembly device") to automatically translate the mathematical formulation of a program into machine-readable punched film stock.[10] While no actual implementation occurred until the 1970s, it presented concepts later seen in APL designed by Ken Iverson in the late 1950s.[11] APL is a language for mathematical computations.
Between 1949 and 1951, Heinz Rutishauser proposed Superplan, a high-level language and automatic translator.[12] His ideas were later refined by Friedrich L. Bauer and Klaus Samelson.[13] High-level language design during the formative years of digital computing provided useful programming tools for a variety of applications: Compiler technology evolved from the need for a strictly defined transformation of the high-level source program into a low-level target program for the digital computer. The compiler could be viewed as a front end to deal with the analysis of the source code and a back end to synthesize the analysis into the target code. Optimization between the front end and back end could produce more efficient target code.[17] Some early milestones in the development of compiler technology: Early operating systems and software were written in assembly language. In the 1960s and early 1970s, the use of high-level languages for system programming was still controversial due to resource limitations. However, several research and industry efforts began the shift toward high-level systems programming languages, for example, BCPL, BLISS, B, and C. BCPL (Basic Combined Programming Language), designed in 1966 by Martin Richards at the University of Cambridge, was originally developed as a compiler writing tool.[30] Several compilers have been implemented; Richards' book provides insights to the language and its compiler.[31] BCPL was not only an influential systems programming language that is still used in research[32] but also provided a basis for the design of B and C languages. BLISS (Basic Language for Implementation of System Software) was developed for a Digital Equipment Corporation (DEC) PDP-10 computer by W. A. Wulf's Carnegie Mellon University (CMU) research team. The CMU team went on to develop the BLISS-11 compiler one year later in 1970.
Multics (Multiplexed Information and Computing Service), a time-sharing operating system project, involved MIT, Bell Labs, General Electric (later Honeywell) and was led by Fernando Corbató from MIT.[33] Multics was written in the PL/I language developed by IBM and IBM User Group.[34] IBM's goal was to satisfy business, scientific, and systems programming requirements. There were other languages that could have been considered but PL/I offered the most complete solution even though it had not been implemented.[35] For the first few years of the Multics project, a subset of the language could be compiled to assembly language with the Early PL/I (EPL) compiler by Doug McIlroy and Bob Morris from Bell Labs.[36] EPL supported the project until a bootstrapping compiler for the full PL/I could be developed.[37] Bell Labs left the Multics project in 1969 and developed a system programming language B based on BCPL concepts, written by Dennis Ritchie and Ken Thompson. Ritchie created a bootstrapping compiler for B and wrote the Unics (Uniplexed Information and Computing Service) operating system for a PDP-7 in B. Unics eventually became spelled Unix. Bell Labs started the development and expansion of C based on B and BCPL. The BCPL compiler had been transported to Multics by Bell Labs and BCPL was a preferred language at Bell Labs.[38] Initially, a front-end program to Bell Labs' B compiler was used while a C compiler was developed. In 1971, a new PDP-11 provided the resource to define extensions to B and rewrite the compiler. By 1973 the design of C language was essentially complete and the Unix kernel for a PDP-11 was rewritten in C. Steve Johnson started development of Portable C Compiler (PCC) to support retargeting of C compilers to new machines.[39][40] Object-oriented programming (OOP) offered some interesting possibilities for application development and maintenance.
OOP concepts go further back but were part of LISP and Simula language science.[41] Bell Labs became interested in OOP with the development of C++.[42] C++ was first used in 1980 for systems programming. The initial design leveraged C language systems programming capabilities with Simula concepts. Object-oriented facilities were added in 1983.[43] The Cfront program implemented a C++ front-end for the C84 language compiler. In subsequent years several C++ compilers were developed as C++ popularity grew. In many application domains, the idea of using a higher-level language quickly caught on. Because of the expanding functionality supported by newer programming languages and the increasing complexity of computer architectures, compilers became more complex. DARPA (Defense Advanced Research Projects Agency) sponsored a compiler project with Wulf's CMU research team in 1970. The Production Quality Compiler-Compiler (PQCC) design would produce a Production Quality Compiler (PQC) from formal definitions of the source language and the target.[44] PQCC tried to extend the term compiler-compiler beyond the traditional meaning as a parser generator (e.g., Yacc) without much success. PQCC might more properly be referred to as a compiler generator. PQCC research into the code generation process sought to build a truly automatic compiler-writing system. The effort discovered and designed the phase structure of the PQC. The BLISS-11 compiler provided the initial structure.[45] The phases included analyses (front end), intermediate translation to virtual machine (middle end), and translation to the target (back end). TCOL was developed for the PQCC research to handle language-specific constructs in the intermediate representation.[46] Variations of TCOL supported various languages. The PQCC project investigated techniques of automated compiler construction. The design concepts proved useful in optimizing compilers and compilers for the (since 1995, object-oriented) programming language Ada.
The Ada STONEMAN document[a] formalized the program support environment (APSE) along with the kernel (KAPSE) and minimal (MAPSE). An Ada interpreter NYU/ED supported development and standardization efforts with the American National Standards Institute (ANSI) and the International Standards Organization (ISO). Initial Ada compiler development by the U.S. Military Services included the compilers in a complete integrated design environment along the lines of the STONEMAN document. Army and Navy worked on the Ada Language System (ALS) project targeted to DEC/VAX architecture while the Air Force started on the Ada Integrated Environment (AIE) targeted to IBM 370 series. While the projects did not provide the desired results, they did contribute to the overall effort on Ada development.[47] Other Ada compiler efforts got underway in Britain at the University of York and in Germany at the University of Karlsruhe. In the U.S., Verdix (later acquired by Rational) delivered the Verdix Ada Development System (VADS) to the Army. VADS provided a set of development tools including a compiler. Unix/VADS could be hosted on a variety of Unix platforms such as DEC Ultrix and the Sun 3/60 Solaris targeted to Motorola 68020 in an Army CECOM evaluation.[48] There were soon many Ada compilers available that passed the Ada Validation tests. The Free Software Foundation GNU project developed the GNU Compiler Collection (GCC) which provides a core capability to support multiple languages and targets. The Ada version GNAT is one of the most widely used Ada compilers. GNAT is free but there is also commercial support; for example, AdaCore was founded in 1994 to provide commercial software solutions for Ada. GNAT Pro includes the GNU GCC based GNAT with a tool suite to provide an integrated development environment. High-level languages continued to drive compiler research and development. Focus areas included optimization and automatic code generation.
Trends in programming languages and development environments influenced compiler technology. More compilers became included in language distributions (Perl, Java Development Kit) and as a component of an IDE (VADS, Eclipse, Ada Pro). The interrelationship and interdependence of technologies grew. The advent of web services promoted growth of web languages and scripting languages. Scripts trace back to the early days of command-line interfaces (CLI) where the user could enter commands to be executed by the system. User shell concepts developed with languages to write shell programs. Early Windows designs offered a simple batch programming capability. The conventional transformation of these languages used an interpreter. While not widely used, Bash and Batch compilers have been written. More recently, sophisticated interpreted languages became part of the developer's toolkit. Modern scripting languages include PHP, Python, Ruby and Lua. (Lua is widely used in game development.) All of these have interpreter and compiler support.[49] "When the field of compiling began in the late 50s, its focus was limited to the translation of high-level language programs into machine code ... The compiler field is increasingly intertwined with other disciplines including computer architecture, programming languages, formal methods, software engineering, and computer security."[50] The "Compiler Research: The Next 50 Years" article noted the importance of object-oriented languages and Java. Security and parallel computing were cited among the future research targets. A compiler implements a formal transformation from a high-level source program to a low-level target program. Compiler design can define an end-to-end solution or tackle a defined subset that interfaces with other compilation tools, e.g. preprocessors, assemblers, linkers. Design requirements include rigorously defined interfaces both internally between compiler components and externally between supporting toolsets.
In the early days, the approach taken to compiler design was directly affected by the complexity of the computer language to be processed, the experience of the person(s) designing it, and the resources available. Resource limitations led to the need to pass through the source code more than once. A compiler for a relatively simple language written by one person might be a single, monolithic piece of software. However, as the source language grows in complexity the design may be split into a number of interdependent phases. Separate phases provide design improvements that focus development on the functions in the compilation process. Classifying compilers by number of passes has its background in the hardware resource limitations of computers. Compiling involves performing much work and early computers did not have enough memory to contain one program that did all of this work. As a result, compilers were split up into smaller programs which each made a pass over the source (or some representation of it) performing some of the required analysis and translations. The ability to compile in a single pass has classically been seen as a benefit because it simplifies the job of writing a compiler and one-pass compilers generally perform compilations faster than multi-pass compilers. Thus, partly driven by the resource limitations of early systems, many early languages were specifically designed so that they could be compiled in a single pass (e.g., Pascal). In some cases, the design of a language feature may require a compiler to perform more than one pass over the source. For instance, consider a declaration appearing on line 20 of the source which affects the translation of a statement appearing on line 10. In this case, the first pass needs to gather information about declarations appearing after statements that they affect, with the actual translation happening during a subsequent pass.
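The forward-reference situation described above can be sketched with two passes over a made-up mini-language: the first pass fills a symbol table from declarations wherever they appear, and the second pass translates uses against the completed table.

```python
# A two-pass sketch for an invented mini-language in which declarations
# may appear after the statements that use them.

source = [
    "use y",        # forward reference: y is declared later
    "decl y int",
]

# Pass 1: collect every declaration into a symbol table.
symbols = {}
for line in source:
    parts = line.split()
    if parts[0] == "decl":
        symbols[parts[1]] = parts[2]

# Pass 2: translate uses; by now every declaration is known.
translated = []
for line in source:
    parts = line.split()
    if parts[0] == "use":
        translated.append(f"load {parts[1]}:{symbols[parts[1]]}")
```

A one-pass compiler processing the same source would reach `use y` before ever seeing the declaration of `y`, which is why such language features force at least two passes (or a fix-up mechanism such as backpatching).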
The disadvantage of compiling in a single pass is that it is not possible to perform many of the sophisticated optimizations needed to generate high quality code. It can be difficult to count exactly how many passes an optimizing compiler makes. For instance, different phases of optimization may analyse one expression many times but only analyse another expression once. Splitting a compiler up into small programs is a technique used by researchers interested in producing provably correct compilers. Proving the correctness of a set of small programs often requires less effort than proving the correctness of a larger, single, equivalent program. Regardless of the exact number of phases in the compiler design, the phases can be assigned to one of three stages: a front end, a middle end, and a back end. This front/middle/back-end approach makes it possible to combine front ends for different languages with back ends for different CPUs while sharing the optimizations of the middle end.[51] Practical examples of this approach are the GNU Compiler Collection, Clang (LLVM-based C/C++ compiler),[52] and the Amsterdam Compiler Kit, which have multiple front-ends, shared optimizations and multiple back-ends. The front end analyzes the source code to build an internal representation of the program, called the intermediate representation (IR). It also manages the symbol table, a data structure mapping each symbol in the source code to associated information such as location, type and scope. While the frontend can be a single monolithic function or program, as in a scannerless parser, it was traditionally implemented and analyzed as several phases, which may execute sequentially or concurrently. This method is favored due to its modularity and separation of concerns. Most commonly, the frontend is broken into three phases: lexical analysis (also known as lexing or scanning), syntax analysis (also known as parsing), and semantic analysis.
Lexing and parsing comprise the syntactic analysis (word syntax and phrase syntax, respectively), and in simple cases, these modules (the lexer and parser) can be automatically generated from a grammar for the language, though in more complex cases these require manual modification. The lexical grammar and phrase grammar are usually context-free grammars, which simplifies analysis significantly, with context-sensitivity handled at the semantic analysis phase. The semantic analysis phase is generally more complex and written by hand, but can be partially or fully automated using attribute grammars. These phases themselves can be further broken down: lexing as scanning and evaluating, and parsing as building a concrete syntax tree (CST, parse tree) and then transforming it into an abstract syntax tree (AST, syntax tree). In some cases additional phases are used, notably line reconstruction and preprocessing, but these are rare. The main phases of the front end include the following: The middle end, also known as the optimizer, performs optimizations on the intermediate representation in order to improve the performance and the quality of the produced machine code.[56] The middle end contains those optimizations that are independent of the CPU architecture being targeted. The main phases of the middle end include the following: Compiler analysis is the prerequisite for any compiler optimization, and the two work tightly together. For example, dependence analysis is crucial for loop transformation. The scope of compiler analysis and optimizations varies greatly; it may range from operating within a basic block, to whole procedures, or even the whole program. There is a trade-off between the granularity of the optimizations and the cost of compilation. For example, peephole optimizations are fast to perform during compilation but only affect a small local fragment of the code, and can be performed independently of the context in which the code fragment appears.
In contrast, interprocedural optimization requires more compilation time and memory space, but enables optimizations that are only possible by considering the behavior of multiple functions simultaneously. Interprocedural analysis and optimizations are common in modern commercial compilers from HP, IBM, SGI, Intel, Microsoft, and Sun Microsystems. The free software GCC was criticized for a long time for lacking powerful interprocedural optimizations, but it is changing in this respect. Another open source compiler with full analysis and optimization infrastructure is Open64, which is used by many organizations for research and commercial purposes. Due to the extra time and space needed for compiler analysis and optimizations, some compilers skip them by default. Users have to use compilation options to explicitly tell the compiler which optimizations should be enabled. The back end is responsible for the CPU architecture specific optimizations and for code generation.[56] The main phases of the back end include the following: Compiler correctness is the branch of software engineering that deals with trying to show that a compiler behaves according to its language specification.[58] Techniques include developing the compiler using formal methods and using rigorous testing (often called compiler validation) on an existing compiler. Higher-level programming languages usually appear with a type of translation in mind: either designed as a compiled language or an interpreted language. However, in practice there is rarely anything about a language that requires it to be exclusively compiled or exclusively interpreted, although it is possible to design languages that rely on re-interpretation at run time. The categorization usually reflects the most popular or widespread implementations of a language – for instance, BASIC is sometimes called an interpreted language, and C a compiled one, despite the existence of BASIC compilers and C interpreters. Interpretation does not replace compilation completely.
It only hides it from the user and makes it gradual. Even though an interpreter can itself be interpreted, a set of directly executed machine instructions is needed somewhere at the bottom of the execution stack (see machine language). Furthermore, compilers can contain interpreter functionality for optimization, and interpreters may include ahead-of-time compilation techniques. For example, where an expression can be evaluated during compilation and the result inserted into the output program, it does not have to be recalculated each time the program runs, which can greatly speed up the final program. Modern trends toward just-in-time compilation and bytecode interpretation at times blur the traditional categorizations of compilers and interpreters even further. Some language specifications spell out that implementations must include a compilation facility; for example, Common Lisp. However, there is nothing inherent in the definition of Common Lisp that stops it from being interpreted. Other languages have features that are very easy to implement in an interpreter, but make writing a compiler much harder; for example, APL, SNOBOL4, and many scripting languages allow programs to construct arbitrary source code at runtime with regular string operations, and then execute that code by passing it to a special evaluation function. To implement these features in a compiled language, programs must usually be shipped with a runtime library that includes a version of the compiler itself. One classification of compilers is by the platform on which their generated code executes. This is known as the target platform. A native or hosted compiler is one whose output is intended to run directly on the same type of computer and operating system that the compiler itself runs on. The output of a cross compiler is designed to run on a different platform.
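The compile-time evaluation described above is known as constant folding. Here is a minimal sketch using Python's own ast module; the transformation and the operator set are illustrative, and production compilers fold constants on their internal IR rather than on source text:

```python
import ast
import operator

# Map AST operator nodes to the arithmetic they denote.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul}

class ConstantFolder(ast.NodeTransformer):
    def visit_BinOp(self, node):
        self.generic_visit(node)  # fold children first, bottom-up
        if (isinstance(node.left, ast.Constant) and isinstance(node.right, ast.Constant)
                and type(node.op) in OPS):
            value = OPS[type(node.op)](node.left.value, node.right.value)
            # Replace the whole subexpression with its precomputed result.
            return ast.copy_location(ast.Constant(value), node)
        return node

tree = ast.parse("x = 2 * 3 + 4")
folded = ConstantFolder().visit(tree)
print(ast.unparse(folded))  # x = 10
```

The expression is computed once, at "compile" time, so the emitted program never recalculates it (requires Python 3.9+ for ast.unparse).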
Cross compilers are often used when developing software for embedded systems that are not intended to support a software development environment. The output of a compiler that produces code for a virtual machine (VM) may or may not be executed on the same platform as the compiler that produced it. For this reason, such compilers are not usually classified as native or cross compilers. The lower-level language that is the target of a compiler may itself be a high-level programming language. C, viewed by some as a sort of portable assembly language, is frequently the target language of such compilers. For example, Cfront, the original compiler for C++, used C as its target language. The C code generated by such a compiler is usually not intended to be read and maintained by humans, so indent style and pretty C intermediate code are ignored. Some of the features of C that make it a good target language include the #line directive, which can be generated by the compiler to support debugging of the original source, and the wide platform support available with C compilers. While a common compiler type outputs machine code, there are many other types: Assemblers, which translate human-readable assembly language to the machine code instructions executed by hardware, are not considered compilers.[67][b] (The inverse program that translates machine code to assembly language is called a disassembler.)
https://en.wikipedia.org/wiki/Compiler_construction
An intermediate representation (IR) is the data structure or code used internally by a compiler or virtual machine to represent source code. An IR is designed to be conducive to further processing, such as optimization and translation.[1] A "good" IR must be accurate – capable of representing the source code without loss of information[2] – and independent of any particular source or target language.[1] An IR may take one of several forms: an in-memory data structure, or a special tuple- or stack-based code readable by the program.[3] In the latter case it is also called an intermediate language. A canonical example is found in most modern compilers. For example, the CPython interpreter transforms the linear human-readable text representing a program into an intermediate graph structure that allows flow analysis and re-arrangement before execution. Use of an intermediate representation such as this allows compiler systems like the GNU Compiler Collection and LLVM to be used by many different source languages to generate code for many different target architectures. An intermediate language is the language of an abstract machine designed to aid in the analysis of computer programs. The term comes from their use in compilers, where the source code of a program is translated into a form more suitable for code-improving transformations before being used to generate object or machine code for a target machine. The design of an intermediate language typically differs from that of a practical machine language in three fundamental ways: A popular format for intermediate languages is three-address code. The term is also used to refer to languages used as intermediates by some high-level programming languages which do not output object or machine code themselves, but output the intermediate language only. This intermediate language is submitted to a compiler for that language, which then outputs finished object or machine code.
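As an illustration of three-address code, here is a hypothetical lowering of the expression a * b + a * c: every instruction has at most one operator and writes a fresh temporary, which is the defining property of the format. The emit helper and temporary names are assumptions for the sketch:

```python
code = []      # the emitted three-address instructions
counter = 0    # fresh-temporary counter

def emit(op, left, right):
    """Append one three-address instruction and return its result temporary."""
    global counter
    counter += 1
    temp = f"t{counter}"
    code.append(f"{temp} = {left} {op} {right}")
    return temp

# a * b + a * c, lowered bottom-up:
t1 = emit("*", "a", "b")
t2 = emit("*", "a", "c")
t3 = emit("+", t1, t2)
print("\n".join(code))
# t1 = a * b
# t2 = a * c
# t3 = t1 + t2
```

Because each temporary is assigned exactly once here, the form is also easy to analyze and rewrite, which is why code-improving transformations are typically run on representations like this.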
This is usually done to ease the process of optimization or to increase portability by using an intermediate language that has compilers for many processors and operating systems, such as C. Languages used for this fall in complexity between high-level languages and low-level languages, such as assembly languages. Though not explicitly designed as an intermediate language, C's nature as an abstraction of assembly and its ubiquity as the de facto system language in Unix-like and other operating systems has made it a popular intermediate language: Eiffel, Sather, Esterel, some dialects of Lisp (Lush, Gambit), Squeak's Smalltalk subset Slang, Nim, Cython, Seed7, SystemTap, Vala, V, and others make use of C as an intermediate language. Variants of C have been designed to provide C's features as a portable assembly language, including C-- and the C Intermediate Language. Any language targeting a virtual machine or p-code machine can be considered an intermediate language. The GNU Compiler Collection (GCC) uses several intermediate languages internally to simplify portability and cross-compilation; GCC also supports generating some of these IRs as a final target. The LLVM compiler framework is based on the LLVM IR intermediate language, whose compact, binary serialized representation is also referred to as "bitcode" and has been productized by Apple.[4][5] Like GIMPLE Bytecode, LLVM Bitcode is useful in link-time optimization. Like GCC, LLVM also targets some IRs meant for direct distribution, including Google's PNaCl IR and SPIR. A further development within LLVM is the use of Multi-Level Intermediate Representation (MLIR), with the potential to generate code for different heterogeneous targets and to combine the outputs of different compilers.[6] The ILOC intermediate language[7] is used in classes on compiler design as a simple target language.[8] Static analysis tools often use an intermediate representation. For instance, Radare2 is a toolbox for binary file analysis and reverse engineering.
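CPython's bytecode, mentioned earlier, is exactly such a stack-based intermediate form, and the standard dis module makes it visible. The function below is an arbitrary example, and the exact opcodes printed vary between Python versions:

```python
import dis

def axpy(a, x, y):
    return a * x + y

# Disassemble to the stack-based intermediate representation the interpreter executes.
dis.dis(axpy)
# Typical (version-dependent) output: LOAD_FAST a, LOAD_FAST x, a BINARY multiply,
# LOAD_FAST y, a BINARY add, RETURN_VALUE
```

Operands are pushed on an evaluation stack and each BINARY instruction pops two values and pushes the result, so no named temporaries are needed, in contrast to three-address code.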
It uses the intermediate languages ESIL[9] and REIL[10] to analyze binary files.
https://en.wikipedia.org/wiki/Intermediate_representation
Glitch art is an art movement centered on the practice of using digital or analog errors, specifically glitches, for aesthetic purposes by either corrupting digital data or physically manipulating electronic devices. It has also been regarded as an increasing trend in new media art, and has retroactively been described as developing over the course of the 20th century onward.[1] As a technical word, a glitch is the unexpected result of a malfunction, especially one occurring in software, video games, images, videos, audio, and other digital artefacts. The term came to be associated with music in the mid-1990s to describe a genre of experimental electronic music, glitch music. Shortly after, as VJs and other visual artists began to embrace glitch as an aesthetic of the digital age, glitch art came to refer to a whole assembly of visual arts.[2] One such early movement was later dubbed net.art, including early work by the art collective JODI, which was started by artists Joan Heemskerk and Dirk Paesmans.
JODI's experiments in glitch art included purposely causing layout errors on their website in order to display underlying code and error messages.[3] The explorations of JODI and other net.art members would later influence visual distortion practices like databending and datamoshing (see below).[3] The history of glitch art has been regarded as ranging from crafted artworks such as the film A Colour Box (1935) by Len Lye, the video sculpture TV Magnet (1965) by Nam June Paik, and Digital TV Dinner (1978) created by Jamie Fenton and Raul Zaritsky, with audio by Dick Ainsworth (made by manipulating the Bally video game console and recording the results on videotape),[4] to more process-based contemporary work such as Panasonic TH-42PWD8UK Plasma Screen Burn (2007) by Cory Arcangel.[1] Motherboard, a tech-art collective, held the first glitch art symposium in Oslo, Norway, during January, to "bring together international artists, academics and other Glitch practitioners for a short space of time to share their work and ideas with the public and with each other."[5][3] From September 29 through October 3, Chicago played host to the first GLI.TC/H, a five-day conference organized by Nick Briz, Evan Meaney, Rosa Menkman and Jon Satrom that included workshops, lectures, performances, installations and screenings.[6] In November 2011, the second GLI.TC/H event traveled from Chicago to Amsterdam and lastly to Birmingham, UK.[7] It included workshops, screenings, lectures, performances, panel discussions and a gallery show over the course of seven days in the three cities.[8]
Run Computer, Run at the GLITCH 2013 arts festival at RuaRed, South Dublin Arts Centre, Dublin. Curated by Nora O Murchú.[9]
/'fu:bar/ 2015[10]
Glitch Art is Dead at Teatr Barakah in Krakow, Poland. Curated by Ras Alhague and Aleksandra Pienkosz.[11]
reFrag: glitch at La Gaïté Lyrique in Paris, France. Organized by the School of the Art Institute of Chicago and Parsons Paris.
/'fu:bar/ 2016[12]
/'fu:bar/ 2017[13]
Glitch Art is Dead 2 at Gamut Gallery in Minneapolis, Minnesota, US. Curated by Miles Taylor, Ras Alhague and Aleksandra Pienkosz.[14]
/'fu:bar/ 2018[15]
Blue\x80 & Nuit Blanche at Villette Makerz in Paris, France. Curated by Ras Alhague and Kaspar Ravel.[16]
Refrag #4 Cradle-to-Grave at Espace en cours in Paris, France. Curated by Benjamin Gaulon.[17]
/'fu:bar/ 2019[18]
Communication Noise exhibition, Media Mediterranea 21 festival, Pula, Croatia.[19]
/'fu:bar/ 2020[20]
An Exercise of Meaning in a Glitch Season, an exhibition at the National Gallery Singapore. Curated by Syaheedah Iskandar.[21]
Posthumanism, Epidigital, and Glitch Feminism, an exhibition at the Machida City Museum of Graphic Arts in Japan. Curated by Ryota Matsumoto.[22]
/'fu:bar/ 2021[23]
Glitch Art: Pixel Language, the first glitch art exhibition in Iran.[24]
Glitch Art in Iran. The first collective art exhibition.[25]
Glitch Art in Iran. The first collective art exhibition.[26]
Glitch: Aesthetic of the Pixels, the second glitch video art group exhibit in Iran.[27]
Glitch Art is Dead: The 3rd Expo, September 2–4, in Granite Falls, MN[28]
GLITCH The Art of Interference, Pinakothek der Moderne, Munich, Germany[29]
What is called "glitch art" typically means visual glitches, either in a still or moving image. It is made either by "capturing" an image of a glitch as it randomly happens, or more often by artists/designers manipulating their digital files, software or hardware to produce these "errors." Artists have posted a variety of tutorials online explaining how to make glitch art.[30][31] There are many approaches to making these glitches happen on demand, ranging from physical changes to the hardware to direct alterations of the digital files themselves.
Artist Michael Betancourt identified five areas of manipulation that are used to create "glitch art."[32] Betancourt notes that "glitch art" is defined by a broad range of technical approaches that can be identified with changes made to the digital file, its generative display, or the technologies used to show it (such as a video screen). He includes within this range changes made to analog technologies such as television (in video art) or the physical film strip in motion pictures. Data manipulation (also known as databending) changes the information inside the digital file to create glitches. Databending involves editing and changing the file data. There are a variety of tutorials explaining how to make these changes using programs such as HexFiend.[33] Adam Woodall explains in his tutorial:[34] Like all files, image files (.jpg .bmp .gif etc) are all made up of text. Unlike some other files, like .svg (vectors) or .html (web pages), when an image is opened in a text editor all that comes up is gobbldygook! Related processes such as datamoshing change the data in a video or picture file.[35][36] Datamoshing with software such as Avidemux is a common method for creating glitch art by manipulating different frame types in compressed digital video:[37] Datamoshing involves the removal of an encoded video's I-frames (intra-coded pictures, also known as key frames – frames that do not require information from any other frame to be decoded), leaving only the P- (predicted picture) or B- (bi-predictive picture) frames. P-frames contain information predicting the changes in the image between the current frame and the previous one, and B-frames contain information predicting the image differences between the previous, current and subsequent frames. Because P- and B-frames use data from previous and forward frames, they are more compressed than I-frames. This process of direct manipulation of the digital data is not restricted to files that only appear on digital screens.
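The databending process described above can be sketched in a few lines of Python. The filenames are hypothetical, and the 512-byte header size is a rough assumption intended to keep a JPEG openable while its scan data gets corrupted:

```python
import random

def databend(data, header=512, hits=20, seed=1):
    """Return a copy of `data` with `hits` random bytes after the header overwritten."""
    rng = random.Random(seed)          # seeded so a pleasing glitch is reproducible
    out = bytearray(data)
    for _ in range(hits):
        pos = rng.randrange(header, len(out))
        out[pos] = rng.randrange(256)  # write a random byte: the "glitch"
    return bytes(out)

# Hypothetical usage:
# with open("input.jpg", "rb") as f:
#     glitched = databend(f.read())
# with open("glitched.jpg", "wb") as f:
#     f.write(glitched)
```

This is the programmatic equivalent of mangling bytes by hand in a hex editor; leaving the header intact is what usually keeps the corrupted file decodable at all.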
"3D model glitching" refers to the purposeful corruption of the code in 3D animation programs, resulting in distorted and abstract images of 3D virtual worlds, models and even 3D-printed objects.[38] Misalignment glitches are produced by opening a digital file of one type with a program designed for a different type of file,[36] such as opening a video file as a sound file, or by using the wrong codec to decompress a file. Tools commonly used to create glitches of this type include Audacity and WordPad.[39] These glitches can depend on how Audacity handles files, even when they are not audio-encoded.[40] Hardware failure glitches are made by altering the physical wiring or other internal connections of the machine itself; such an alteration, for example a short circuit, in a process called "circuit bending", causes the machine to create glitches that produce new sounds and visuals.[41] For example, by damaging internal pieces of something like a VHS player, one can achieve different colorful visual images. Video artist Tom DeFanti explained the role of hardware failure in a voice-over for Jamie Fenton's early glitch video Digital TV Dinner, which used the Bally video game console system:[4] This piece represents the absolute cheapest one can go in home computer art. This involves taking a $300 video game system, pounding it with your fist so the cartridge pops out while it's trying to write the menu. The music here is done by Dick Ainsworth using the same system, but pounding it with your fingers instead of your fist. Physically beating the case of the game system would cause the game cartridge to pop out, interrupting the computer's operation. The glitches that resulted from this failure were a result of how the machine was set up:[4] There was ROM memory in the cartridge and ROM memory built into the console. Popping out the cartridge while executing code in the console ROM created garbage references in the stack frames and invalid pointers, which caused the strange patterns to be drawn. ...
The Bally Astrocade was unique among cartridge games in that it was designed to allow users to change game cartridges with the power on. When pressing the reset button, it was possible to remove the cartridge from the system and induce various memory dump pattern sequences. Digital TV Dinner is a collection of these curious states of silicon epilepsy set to music composed and generated upon this same platform. Misregistration is produced by the physical noise of historically analog media such as motion picture film. The dirt, scratches, smudges and markings that can distort physical media also impact the playback of digital recordings on media such as CDs and DVDs, as electronic music composer Kim Cascone explained in 2002:[42] "There are many types of digital audio 'failure.' Sometimes, it results in horrible noise, while other times it can produce wondrous tapestries of sound. (To more adventurous ears, these are quite often the same.) When the German sound experimenters known as Oval started creating music in the early 1990s by painting small images on the underside of CDs to make them skip, they were using an aspect of 'failure' in their work that revealed a subtextual layer embedded in the compact disc. Oval's investigation of 'failure' is not new. Much work had previously been done in this area, such as the optical soundtrack work of Laszlo Moholy-Nagy and Oskar Fischinger, as well as the vinyl record manipulations of John Cage and Christian Marclay, to name a few. What is new is that ideas now travel at the speed of light and can spawn entire musical genres in a relatively short period of time."
Distortion was one of the earliest types of glitch art to be produced, such as in the work of video artist Nam June Paik, who created video distortions by placing powerful magnets in close proximity to the television screen, resulting in the appearance of abstract patterns.[43] Paik's addition of physical interference to a TV set created new kinds of imagery that changed how the broadcast image was displayed:[44] The magnetic field interferes with the television's electronic signals, distorting the broadcast image into an abstract form that changes when the magnet is moved. By recording the resulting analog distortions with a camera, they can then be shown without the need for the magnet. Compression artifacts are noticeable distortions of media (including images, audio, and video) caused by the application of lossy compression. They can be intentionally used as a visual style in glitch art. Rosa Menkman's work makes use of compression artifacts,[45] particularly the discrete cosine transform blocks (DCT blocks) found in most digital media data compression formats such as JPEG digital images and MP3 digital audio.[46] Another example is Jpegs by German photographer Thomas Ruff, which uses intentional JPEG artifacts as the basis of the picture's style.[47][48] Media related to Glitch art can be found at Wikimedia Commons.
https://en.wikipedia.org/wiki/Glitch_art
Glitch removal is the elimination of glitches (unnecessary signal transitions without functionality) from electronic circuits. Power dissipation of a gate occurs in two ways: static power dissipation and dynamic power dissipation. Glitch power comes under dynamic dissipation in the circuit and is directly proportional to switching activity. Glitch power dissipation is 20%–70% of total power dissipation, and hence glitching should be eliminated for low-power design. Switching activity occurs due to signal transitions, which are of two types: functional transitions and glitches. Switching power dissipation is directly proportional to the switching activity (α), load capacitance (C), supply voltage (V), and clock frequency (f) as: P = α · C · V² · f. Switching activity means transitions between different levels. Glitches depend on signal transitions, and more glitches result in higher power dissipation. As per the above equation, switching power dissipation can be controlled by reducing switching activity (α), by voltage scaling, etc. As discussed, more transitions result in more glitches and hence more power dissipation. To minimize glitch occurrence, switching activity should be minimized. For example, Gray code could be used in counters instead of binary code, since every increment in Gray code flips only one bit. Gate freezing minimizes power dissipation by eliminating glitching. It relies on the availability of modified standard library cells such as the so-called F-Gate. This method consists of transforming high-glitch gates into modified devices which filter out the glitches when a control signal is applied. When the control signal is high, the F-Gate operates as normal, but when the control signal is low, the gate output is disconnected from the ground. As a result, it can never be discharged to logic 0 and glitches are prevented. Hazards in digital circuits are unnecessary transitions due to varying path delays in the circuit. Balanced path delay techniques can be used for resolving differing path delays.
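The Gray-code suggestion can be checked by counting bit flips directly. In the sketch below (a transition count, not a power model), one pass of a 4-bit counter flips 26 bits in binary but exactly one bit per increment in Gray code:

```python
def gray(n):
    """Convert a binary count to its reflected Gray code."""
    return n ^ (n >> 1)

def flips(a, b):
    """Number of bits that differ between two codewords."""
    return bin(a ^ b).count("1")

binary_flips = sum(flips(i, i + 1) for i in range(15))            # 4-bit binary counter, 0..15
gray_flips = sum(flips(gray(i), gray(i + 1)) for i in range(15))  # same counter, Gray-coded
print(binary_flips, gray_flips)  # 26 15
```

Since switching power is proportional to switching activity, the Gray-coded counter's lower transition count translates directly into lower dynamic dissipation.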
To make path delays equal, buffer insertion is done on the faster paths. Balanced path delay will avoid glitches in the output. Hazard filtering is another way to remove glitching. In hazard filtering, gate propagation delays are adjusted, which results in balancing all path delays at the output. Hazard filtering is preferred over path balancing because path balancing consumes more power due to the insertion of additional buffers. Gate upsizing and gate downsizing techniques are used for path balancing. A gate is replaced by a logically equivalent but differently sized cell so that the delay of the gate is changed. Because increasing the gate size also increases power dissipation, gate upsizing is used only when the power saved by glitch removal is more than the power dissipation due to the increase in size. Gate sizing affects glitching transitions but does not affect functional transitions. The delay of a gate is also a function of its threshold voltage. Non-critical paths are selected and the threshold voltage of the gates in these paths is increased. This results in balanced propagation delays along different paths converging at the receiving gate. Performance is maintained since it is determined by the time required by the critical path. A higher threshold voltage also reduces the leakage current of a path.
https://en.wikipedia.org/wiki/Glitch_removal
In digital logic, a hazard is an undesirable effect caused by either a deficiency in the system or external influences in both synchronous[citation needed] and asynchronous circuits.[1]: 43 Logic hazards are manifestations of a problem in which changes in the input variables do not change the output correctly due to some form of delay caused by logic elements (NOT, AND, OR gates, etc.). This results in the logic not performing its function properly. The three most common kinds of hazards are usually referred to as static, dynamic and function hazards. Hazards are a temporary problem, as the logic circuit will eventually settle to the desired function. Therefore, in synchronous designs, it is standard practice to register the output of a circuit before it is used in a different clock domain or routed out of the system, so that hazards do not cause any problems. If that is not the case, however, it is imperative that hazards be eliminated, as they can have an effect on other connected systems. A static hazard is a change of a signal state twice in a row when the signal is expected to stay constant.[1]: 48 When one input signal changes, the output changes momentarily before stabilizing to the correct value. There are two types of static hazards: In properly formed two-level AND-OR logic based on a sum-of-products expression, there will be no static-0 hazards (but there may still be static-1 hazards). Conversely, there will be no static-1 hazards in an OR-AND implementation of a product-of-sums expression (but there may still be static-0 hazards). The most commonly used method to eliminate static hazards is to add redundant logic (consensus terms in the logic expression). Consider an imperfect circuit that suffers from a delay in the physical logic elements (AND gates, etc.). The simple circuit performs the function: From a look at the starting diagram it is clear that if no delays were to occur, the circuit would function normally.
However, no two gates are ever manufactured exactly the same. Due to this imperfection, the delay of the first AND gate will be slightly different from that of its counterpart. Thus an error occurs when the input changes from 111 to 011, i.e. when A changes state. Now that we know roughly how the hazard occurs, for a clearer picture and the solution to this problem we look to the Karnaugh map. A theorem proved by Huffman[2] tells us that adding a redundant loop 'BC' will eliminate the hazard. The amended function is: Now we can see that even with imperfect logic elements, our example will not show signs of hazards when A changes state. This theory can be applied to any logic system. Computer programs deal with most of this work now, but for simple examples it is quicker to do the debugging by hand. When there are many input variables (say, 6 or more) it becomes quite difficult to 'see' the errors on a Karnaugh map. A dynamic hazard is a series of changes of a signal state that happen several times in a row when the signal is expected to change state only once.[1]: 48 In other words, a dynamic hazard is the possibility of an output changing more than once as a result of a single input change. Dynamic hazards often occur in larger logic circuits where there are different routes to the output (from the input). If each route has a different delay, then it quickly becomes clear that there is the potential for changing output values that differ from the required/expected output. For example, a logic circuit might be meant to change output state from 1 to 0, but instead changes from 1 to 0, then to 1, and finally rests at the correct value 0. This is a dynamic hazard. As a rule, dynamic hazards are more complex to resolve, but note that if all static hazards have been eliminated from a circuit, then dynamic hazards cannot occur. In contrast to static and dynamic hazards, function hazards are ones caused by a change applied to more than one input.
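The mechanism of a static-1 hazard and its consensus-term fix can be imitated in a unit-delay simulation. The function f = A·C + A'·B is an assumed example (the article's own circuit diagram is not reproduced here); with B = C = 1 and the inverter lagging A by one time step, dropping A from 1 to 0 produces a transient 0, while the redundant consensus term B·C holds the output high:

```python
def simulate(consensus=False):
    """Unit-delay simulation of f = A*C + A'*B with B = C = 1 while A falls 1 -> 0."""
    A_sequence = [1, 0, 0, 0]  # A falls at the second step
    not_A = 0                  # inverter output lags A by one step: initially NOT(1)
    outputs = []
    for A in A_sequence:
        f = A or not_A         # f = A*C + A'*B, with B = C = 1
        if consensus:
            f = f or 1         # redundant consensus term B*C = 1 covers the handover
        outputs.append(f)
        not_A = 1 - A          # the inverter sees the new A only on the next step
    return outputs

print(simulate(consensus=False))  # [1, 0, 1, 1] -- momentary 0: a static-1 hazard
print(simulate(consensus=True))   # [1, 1, 1, 1] -- hazard eliminated
```

The glitch appears exactly in the step where the A·C term has already turned off but the delayed A'·B term has not yet turned on; the consensus term is true throughout that handover, which is why adding it removes the hazard.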
There is no specific logical solution to eliminate them. One reliable method is to prevent inputs from changing simultaneously, but this is not applicable in some cases; therefore, circuits should be carefully designed to have equal delays in each path.[3]
https://en.wikipedia.org/wiki/Hazard_(logic)
A hardware bug is a bug in computer hardware. It is the hardware counterpart of a software bug, a defect in software. A bug is different from a glitch, which describes an undesirable behavior that is quick, transient and repeated rather than constant, and different from a quirk, which is a behavior that may be considered useful even though it was not intentionally designed. Errata, corrections to the documentation, may be published by the manufacturer to describe hardware bugs, and errata is sometimes used as a term for the bugs themselves. Sometimes users take advantage of the unintended or undocumented operation of hardware to serve some purpose, in which case a flaw may be considered a feature. This gives rise to the often ironically employed acronym INABIAF, "It's Not A Bug It's A Feature".[1] For example, undocumented instructions, known as illegal opcodes, on the MOS Technology 6510 of the Commodore 64 and the MOS Technology 6502 of the Apple II computers are sometimes utilized. Some flaws in hardware may lead to security vulnerabilities where memory protection or other features fail to work properly. Starting in 2017, a series of security vulnerabilities were found in the implementations of speculative execution on common processor architectures that allowed a violation of privilege level. In 2019, researchers discovered that a manufacturer debugging mode, known as VISA, had an undocumented feature on Intel Platform Controller Hubs (known as chipsets) which made the mode accessible from a normal motherboard, possibly leading to a security vulnerability.[2] The Intel Pentium series of CPUs had two well-known bugs discovered after it was brought to market: the FDIV bug affecting floating-point division, which resulted in a recall in 1994, and the F00F bug discovered in 1997, which causes the processor to stop operating until rebooted.
https://en.wikipedia.org/wiki/Hardware_bug
A software bug is a design defect (bug) in computer software. A computer program with many or serious bugs may be described as buggy. The effects of a software bug range from minor (such as a misspelled word in the user interface) to severe (such as frequent crashing). In 2002, a study commissioned by the US Department of Commerce's National Institute of Standards and Technology concluded that "software bugs, or errors, are so prevalent and so detrimental that they cost the US economy an estimated $59 billion annually, or about 0.6 percent of the gross domestic product".[1] Since the 1950s, some computer systems have been designed to detect or auto-correct various software errors during operations. Mistake metamorphism (from Greek meta = "change", morph = "form") refers to the evolution of a defect in the final stage of software deployment. The transformation of a mistake committed by an analyst in the early stages of the software development lifecycle, which leads to a defect in the final stage of the cycle, has been called mistake metamorphism.[2] Different stages of a mistake in the development cycle may be described as mistake,[3]: 31 anomaly,[3]: 10 fault,[3]: 31 failure,[3]: 31 error,[3]: 31 exception,[3]: 31 crash,[3]: 22 glitch, bug,[3]: 14 defect, incident,[3]: 39 or side effect. Software bugs have been linked to disasters. Sometimes the use of bug to describe the behavior of software is contentious due to perception. Some suggest that the term should be abandoned, contending that bug implies that the defect arose on its own, and push to use defect instead since it more clearly indicates that defects are caused by a human.[8] Some contend that bug may be used to cover up an intentional design decision. In 2011, after receiving scrutiny from US Senator Al Franken for recording and storing users' locations in unencrypted files,[9] Apple called the behavior a bug.
However, Justin Brookman of the Center for Democracy and Technology directly challenged that portrayal, stating "I'm glad that they are fixing what they call bugs, but I take exception with their strong denial that they track users."[10] Preventing bugs as early as possible in the software development process is a target of investment and innovation.[11][12] Newer programming languages tend to be designed to prevent common bugs based on vulnerabilities of existing languages. Lessons learned from older languages such as BASIC and C are used to inform the design of later languages such as C# and Rust. A compiled language allows for detecting some typos (such as a misspelled identifier) before runtime, which is earlier in the software development process than for an interpreted language. Languages may include features such as a static type system, restricted namespaces and modular programming. For example, in a typed, compiled language (like C), an assignment such as float x = "hello"; is syntactically correct, but fails type checking since the right side, a string, cannot be assigned to a float variable. Compilation fails, forcing this defect to be fixed before development progress can resume. With an interpreted language, a failure would not occur until later, at runtime. Some languages exclude features that easily lead to bugs, at the expense of slower performance – the principle being that it is usually better to write simpler, slower correct code than complicated, buggy code. For example, Java does not support pointer arithmetic, which is generally fast but is considered dangerous: relatively likely to cause a major bug. Some languages include features that add runtime overhead in order to prevent some bugs. For example, many languages include runtime bounds checking and a way to handle out-of-bounds conditions instead of crashing. Programming techniques such as programming style and defensive programming are intended to prevent typos. For example, a bug may be caused by a relatively minor typographical error (typo) in the code.
For example, code intended to execute a function foo only when a condition is true may, because of a minor typo such as a stray semicolon after the if, always execute foo. A convention that tends to prevent this particular issue is to require braces for a block even if it has just one line. Enforcement of conventions may be manual (i.e., via code review) or via automated tools. Some[who?] contend that writing a program specification, which states the intended behavior of a program, can prevent bugs. Others[who?], however, contend that formal specifications are impractical for anything but the shortest programs, because of problems of combinatorial explosion and indeterminacy. One goal of software testing is to find bugs. Measurements during testing can provide an estimate of the number of likely bugs remaining; this estimate becomes more reliable the longer a product is tested and developed.[citation needed] Agile software development may involve frequent software releases with relatively small changes, with defects revealed by user feedback. With test-driven development (TDD), unit tests are written while writing the production code, and the production code is not considered complete until all tests complete successfully. Tools for static code analysis help developers by inspecting the program text beyond the compiler's capabilities to spot potential problems. Although in general the problem of finding all programming errors given a specification is not solvable (see halting problem), these tools exploit the fact that human programmers tend to make certain kinds of simple mistakes often when writing software. Tools to monitor the performance of the software as it is running, either specifically to find problems such as bottlenecks or to give assurance as to correct working, may be embedded in the code explicitly (perhaps as simple as a statement saying PRINT "I AM HERE"), or provided as tools. It is often a surprise to find where most of the time is taken by a piece of code, and this removal of assumptions might cause the code to be rewritten. 
Open source development allows anyone to examine source code. A school of thought popularized by Eric S. Raymond as Linus's law says that popular open-source software has more chance of having few or no bugs than other software, because "given enough eyeballs, all bugs are shallow".[13] This assertion has been disputed, however: computer security specialist Elias Levy wrote that "it is easy to hide vulnerabilities in complex, little understood and undocumented source code," because "even if people are reviewing the code, that doesn't mean they're qualified to do so."[14] An example of an open-source software bug was the 2008 OpenSSL vulnerability in Debian. Debugging can be a significant part of the software development lifecycle. Maurice Wilkes, an early computing pioneer, described his realization in the late 1940s that "a good part of the remainder of my life was going to be spent in finding errors in my own programs".[15] A program known as a debugger can help a programmer find faulty code by examining the inner workings of a program, such as executing code line-by-line and viewing variable values. As an alternative to using a debugger, code may be instrumented with logic to output debug information to trace program execution and view values. Output is typically to a console, window, log file or a hardware output (e.g., an LED). Some contend that locating a bug is something of an art. It is not uncommon for a bug in one section of a program to cause failures in a different, apparently unrelated part of the system, making it difficult to track down;[citation needed] for example, an error in a graphics rendering routine causing a file I/O routine to fail. Sometimes, the most difficult part of debugging is finding the cause of the bug; once found, correcting the problem is sometimes easy, if not trivial. Sometimes, a bug is not an isolated flaw, but represents an error of thinking or planning on the part of the programmers. 
Often, such a logic error requires a section of the program to be overhauled or rewritten. Some contend that as a part of code review, stepping through the code and imagining or transcribing the execution process may often find errors without ever reproducing the bug as such. Typically, the first step in locating a bug is to reproduce it reliably. If unable to reproduce the issue, a programmer cannot find the cause of the bug and therefore cannot fix it. Some bugs are revealed by inputs that may be difficult for the programmer to re-create. One cause of the Therac-25 radiation machine deaths was a bug (specifically, a race condition) that occurred only when the machine operator very rapidly entered a treatment plan; it took days of practice to become able to do this, so the bug did not manifest in testing or when the manufacturer attempted to duplicate it. Other bugs may stop occurring whenever the setup is augmented to help find the bug, such as running the program with a debugger; these are called heisenbugs (humorously named after the Heisenberg uncertainty principle). Since the 1990s, particularly following the Ariane 5 Flight 501 disaster, interest in automated aids to debugging has risen, such as static code analysis by abstract interpretation.[16] Often, bugs come about during coding, but faulty design documentation may also cause a bug. In some cases, changes to the code may eliminate the problem even though the code then no longer matches the documentation. In an embedded system, the software is often modified to work around a hardware bug, since doing so is cheaper than modifying the hardware. Bugs are managed via activities like documenting, categorizing, assigning, reproducing, correcting and releasing the corrected code. Tools are often used to track bugs and other issues with software. 
Typically, different tools are used by the software development team to track their workload than by customer service to track user feedback.[17] A tracked item is often called a bug, defect, ticket, issue, feature, or, for agile software development, a story or epic. Items are often categorized by aspects such as severity, priority and version number. In a process sometimes called triage, choices are made for each bug about whether and when to fix it, based on information such as the bug's severity and priority and external factors such as development schedules. Triage generally does not include investigation into cause. Triage may occur regularly, and generally consists of reviewing new bugs since the previous triage and possibly all open bugs. Attendees may include the project manager, development manager, test manager, build manager, and technical experts.[18][19] Severity is a measure of the impact the bug has.[20] This impact may be data loss, financial loss, loss of goodwill or wasted effort. Severity levels are not standardized, but differ by context such as industry and tracking tool. For example, a crash in a video game has a different impact than a crash in a bank server. Severity levels might be crash or hang, no workaround (user cannot accomplish a task), has workaround (user can still accomplish the task), visual defect (a misspelling, for example), or documentation error. Another example set of severities: critical, high, low, blocker, trivial.[21] The severity of a bug may be a separate category from its priority for fixing, or the two may be quantified and managed separately. A bug severe enough to delay the release of the product is called a show stopper.[22][23] Priority describes the importance of resolving the bug in relation to other bugs. Priorities might be numerical, such as 1 through 5, or named, such as critical, high, low, and deferred. The values might be similar or identical to severity ratings, even though priority is a different aspect. 
Priority may be a combination of the bug's severity and the level of effort to fix it: a bug with low severity but an easy fix may get a higher priority than a bug with moderate severity that requires significantly more effort to fix. Bugs of sufficiently high priority may warrant a special release, which is sometimes called a patch. A software release that emphasizes bug fixes may be called a maintenance release, to differentiate it from a release that emphasizes new features or other changes. It is common practice to release software with known, low-priority bugs or other issues. Possible reasons include but are not limited to: The amount and type of damage a software bug may cause affects decision-making, processes and policy regarding software quality. In applications such as human spaceflight, aviation, nuclear power, health care, public transport or automotive safety, since software flaws have the potential to cause human injury or even death, such software will have far more scrutiny and quality control than, for example, an online shopping website. In applications such as banking, where software flaws have the potential to cause serious financial damage to a bank or its customers, quality control is also more important than in, say, a photo editing application. Other than the damage caused by bugs, some of their cost is due to the effort invested in fixing them. In 1978, Lientz et al. 
showed that the median project invested 17 percent of its development effort in bug fixing.[26] In 2020, research on GitHub repositories showed the median is 20%.[27] In 1994, NASA's Goddard Space Flight Center managed to reduce the average number of errors from 4.5 per 1,000 lines of code (SLOC) down to 1 per 1,000 SLOC.[28] Another study in 1990 reported that exceptionally good software development processes can achieve deployment failure rates as low as 0.1 per 1,000 SLOC.[29] This figure is cited in literature such as Code Complete by Steve McConnell[30] and the NASA study on Flight Software Complexity.[31] Some projects even attained zero defects: the firmware in the IBM Wheelwriter typewriter, which consists of 63,000 SLOC, and the Space Shuttle software, with 500,000 SLOC.[29] To facilitate reproducible research on testing and debugging, researchers use curated benchmarks of bugs: Some notable types of bugs: A bug can be caused by insufficient or incorrect design based on the specification. For example, given that the specification is to alphabetize a list of words, a design bug might occur if the design does not account for symbols, resulting in incorrect alphabetization of words with symbols. Numerical operations can result in unexpected output, slow processing, or crashing.[34] Such a bug can come from a lack of awareness of the qualities of the data storage, such as a loss of precision due to rounding, numerically unstable algorithms, arithmetic overflow and underflow, or from a lack of awareness of how calculations are handled by different programming languages, such as division by zero, which in some languages may throw an exception and in others may return a special value such as NaN or infinity. A control flow bug, a.k.a. logic error, is characterized by code that does not fail with an error but does not have the expected behavior, such as infinite looping, infinite recursion, an incorrect comparison in a conditional such as using the wrong comparison operator, and the off-by-one error. 
The Open Technology Institute, run by the group New America,[39] released a report, "Bugs in the System", in August 2016, stating that U.S. policymakers should make reforms to help researchers identify and address software bugs. The report "highlights the need for reform in the field of software vulnerability discovery and disclosure."[40] One of the report's authors said that Congress has not done enough to address cyber software vulnerability, even though Congress has passed a number of bills to combat the larger issue of cyber security.[40] Government researchers, companies, and cyber security experts are the people who typically discover software flaws. The report calls for reforming computer crime and copyright laws.[40] The Computer Fraud and Abuse Act, the Digital Millennium Copyright Act and the Electronic Communications Privacy Act criminalize and create civil penalties for actions that security researchers routinely engage in while conducting legitimate security research, the report said.[40]
https://en.wikipedia.org/wiki/Software_bug
Lazy systematic unit testing[1] is a software unit testing method based on the two notions of lazy specification, the ability to infer the evolving specification of a unit on the fly by dynamic analysis, and systematic testing, the ability to explore and test the unit's state space exhaustively to bounded depths. A testing toolkit, JWalk, exists to support lazy systematic unit testing in the Java programming language.[2] Lazy specification refers to a flexible approach to software specification, in which a specification evolves rapidly in parallel with frequently modified code.[1] The specification is inferred by a semi-automatic analysis of a prototype software unit. This can include static analysis (of the unit's interface) and dynamic analysis (of the unit's behaviour). The dynamic analysis is usually supplemented by limited interaction with the programmer. The term lazy specification is coined by analogy with lazy evaluation in functional programming. The latter describes the delayed evaluation of sub-expressions, which are only evaluated on demand. The analogy is with the late stabilization of the specification, which evolves in parallel with the changing code until the code is deemed stable. Systematic testing refers to a complete, conformance-testing approach to software testing, in which the tested unit is shown to conform exhaustively to a specification, up to the testing assumptions.[3] This contrasts with exploratory, incomplete or random forms of testing. The aim is to provide repeatable guarantees of correctness after testing is finished. Examples of systematic testing methods include the Stream X-Machine testing method[4] and equivalence partition testing with full boundary value analysis.
https://en.wikipedia.org/wiki/Lazy_systematic_unit_testing#Systematic_Testing
SystemVerilog, standardized as IEEE 1800 by the Institute of Electrical and Electronics Engineers (IEEE), is a hardware description and hardware verification language commonly used to model, design, simulate, test and implement electronic systems in the semiconductor and electronic design industry. SystemVerilog is an extension of Verilog. SystemVerilog started with the donation of the Superlog language to Accellera in 2002 by the startup company Co-Design Automation.[1] The bulk of the verification functionality is based on the OpenVera language donated by Synopsys. In 2005, SystemVerilog was adopted as IEEE Standard 1800-2005.[2] In 2009, the standard was merged with the base Verilog (IEEE 1364-2005) standard, creating IEEE Standard 1800-2009. The SystemVerilog standard was subsequently updated in 2012,[3] 2017,[4] and most recently in December 2023.[5] The feature set of SystemVerilog can be divided into two distinct roles. The remainder of this article discusses the features of SystemVerilog not present in Verilog-2005. There are two types of data lifetime specified in SystemVerilog: static and automatic. Automatic variables are created the moment program execution reaches the scope of the variable. Static variables are created at the start of the program's execution and keep the same value during the entire program's lifespan, unless assigned a new value during execution. Any variable that is declared inside a task or function without a specified lifetime is considered automatic. To specify that a variable is static, place the "static" keyword in the declaration before the type, e.g., "static int x;". The "automatic" keyword is used in the same way. Enhanced variable types add new capability to Verilog's "reg" type: Verilog-1995 and -2001 limit reg variables to behavioral statements such as RTL code. SystemVerilog extends the reg type so it can be driven by a single driver such as a gate or module. 
SystemVerilog names this type "logic" to remind users that it has this extra capability and is not a hardware register. The names "logic" and "reg" are interchangeable. A signal with more than one driver (such as a tri-state buffer for general-purpose input/output) needs to be declared a net type such as "wire" so SystemVerilog can resolve the final value. Multidimensional packed arrays unify and extend Verilog's notion of "registers" and "memories": classical Verilog permitted only one dimension to be declared to the left of the variable name, whereas SystemVerilog permits any number of such "packed" dimensions. A variable of packed array type maps 1:1 onto an integer arithmetic quantity; each element of such an array may be used in expressions as an integer of the packed width. The dimensions to the right of the name are referred to as "unpacked" dimensions; as in Verilog-2001, any number of unpacked dimensions is permitted. Enumerated data types (enums) allow numeric quantities to be assigned meaningful names. Variables declared to be of enumerated type cannot be assigned to variables of a different enumerated type without casting. This is not true of parameters, which were the preferred implementation technique for enumerated quantities in Verilog-2005. The designer can specify an underlying arithmetic type (such as logic [2:0]) which is used to represent the enumeration value. The meta-values X and Z can be used here, possibly to represent illegal states. The built-in function name() returns an ASCII string for the current enumerated value, which is useful in validation and testing. New integer types: SystemVerilog defines byte, shortint, int and longint as two-state signed integral types having 8, 16, 32, and 64 bits respectively. A bit type is a variable-width two-state type that works much like logic. Two-state types lack the X and Z metavalues of classical Verilog; working with these types may result in faster simulation. 
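The packed-array and enumeration features described above might be sketched as follows (a hedged illustration; the names and values are our own, not from the source):

```systemverilog
module types_example;
  // Two packed dimensions (left of the name) and one unpacked
  // dimension (right of the name).  Each element my_pack[i] can be
  // used in expressions as a six-bit integer.
  logic [1:0][2:0] my_pack [32];

  // Enumeration with an explicit underlying 4-state type; the
  // meta-value X can mark an illegal state.
  typedef enum logic [2:0] { IDLE = 3'b000,
                             RUN  = 3'b001,
                             BAD  = 3'bxxx } state_t;
  state_t state = IDLE;

  initial $display("state = %s", state.name());  // prints "IDLE"
endmodule
```

Because state_t is a distinct enumerated type, assigning a plain integer or another enum type to state would require an explicit cast.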
Structures and unions work much like they do in the C language. SystemVerilog enhancements include the packed attribute and the tagged attribute. The tagged attribute allows runtime tracking of which member(s) of a union are currently in use. The packed attribute causes the structure or union to be mapped 1:1 onto a packed array of bits. The contents of struct data types occupy a continuous block of memory with no gaps, similar to bit fields in C and C++. SystemVerilog also supports typedefs, as in C and C++. SystemVerilog introduces three new procedural blocks intended to model hardware: always_comb (to model combinational logic), always_ff (for flip-flops), and always_latch (for latches). Whereas Verilog used a single, general-purpose always block to model different types of hardware structures, each of SystemVerilog's new blocks is intended to model a specific type of hardware, by imposing semantic restrictions to ensure that hardware described by the blocks matches the intended usage of the model. An HDL compiler or verification program can take extra steps to ensure that only the intended type of behavior occurs. An always_comb block models combinational logic; the simulator infers the sensitivity list to be all variables from the contained statements. An always_latch block models level-sensitive latches; again, the sensitivity list is inferred from the code. An always_ff block models synchronous logic (especially edge-sensitive sequential logic). Electronic design automation (EDA) tools can verify the design's intent by checking that the hardware model does not violate any block usage semantics. For example, the new blocks restrict assignment to a variable by allowing only one source, whereas Verilog's always block permitted assignment from multiple procedural sources. For small designs, the Verilog port compactly describes a module's connectivity with the surrounding environment. But major blocks within a large design hierarchy typically possess port counts in the thousands. 
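The packed struct and the specialized procedural blocks described above might look like this (a hedged sketch; signal and type names are illustrative):

```systemverilog
// A packed struct maps 1:1 onto a 16-bit vector, with no gaps.
typedef struct packed {
  logic [7:0] addr;
  logic [7:0] data;
} bus_word_t;

module proc_blocks (input  logic clk, rst_n, a, b, d,
                    output logic y, q);
  // Combinational logic; the sensitivity list (a, b) is inferred.
  always_comb
    y = a & b;

  // Edge-sensitive sequential logic: a flip-flop with an
  // asynchronous active-low reset.
  always_ff @(posedge clk or negedge rst_n)
    if (!rst_n) q <= 1'b0;
    else        q <= d;
endmodule
```

A tool can flag, for instance, an always_comb block whose output is also driven elsewhere, since each of these blocks permits only a single source of assignment.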
SystemVerilog introduces the concept of interfaces both to reduce the redundancy of port-name declarations between connected modules and to group and abstract related signals into a user-declared bundle. An additional concept is the modport, which indicates the direction of logic connections. The following verification features are typically not synthesizable, meaning they cannot be implemented in hardware based on HDL code. Instead, they assist in the creation of extensible, flexible test benches. The string data type represents a variable-length text string. In addition to the static arrays used in design, SystemVerilog offers dynamic arrays, associative arrays and queues. A dynamic array works much like an unpacked array, but offers the advantage of being dynamically allocated at runtime. Whereas a packed array's size must be known at compile time (from a constant or expression of constants), a dynamic array's size can be initialized from another runtime variable, allowing the array to be sized and resized arbitrarily as needed. An associative array can be thought of as a binary search tree with a user-specified key type and data type. The key implies an ordering; the elements of an associative array can be read out in lexicographic order. Finally, a queue provides much of the functionality of the C++ STL deque type: elements can be added and removed from either end efficiently. These primitives allow the creation of complex data structures required for scoreboarding a large design. SystemVerilog provides an object-oriented programming model. In SystemVerilog, classes support a single-inheritance model, but may implement functionality similar to multiple inheritance through the use of so-called "interface classes" (identical in concept to the interface feature of Java). Classes can be parameterized by type, providing the basic function of C++ templates. However, template specialization and function templates are not supported. 
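An interface with modports, and the dynamic container types just described, might be sketched as follows (a hedged illustration; all names are our own):

```systemverilog
// An interface bundles related signals; modports give each side's
// view of the signal directions.
interface simple_bus;
  logic       req, gnt;
  logic [7:0] addr, data;
  modport master (output req, addr, input  gnt, data);
  modport slave  (input  req, addr, output gnt, data);
endinterface

module containers;
  int da[];        // dynamic array: sized at runtime
  int q[$];        // queue: efficient insert/remove at both ends
  int aa[string];  // associative array keyed by string

  initial begin
    da = new[4];          // allocate four elements at runtime
    q.push_back(7);       // deque-like operations
    aa["frames"] = 1;     // keys are kept in lexicographic order
  end
endmodule
```

A module can then take a simple_bus.master or simple_bus.slave port instead of re-declaring each signal individually.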
SystemVerilog's polymorphism features are similar to those of C++: the programmer may specifically write a virtual function to have a derived class gain control of the function. See virtual function for further information. Encapsulation and data hiding are accomplished using the local and protected keywords, which must be applied to any item that is to be hidden. By default, all class properties are public. Class instances are dynamically created with the new keyword. A constructor denoted by function new can be defined. SystemVerilog has automatic garbage collection, so there is no language facility to explicitly destroy instances created by the new operator. Integer quantities, defined either in a class definition or as stand-alone variables in some lexical scope, can be assigned random values based on a set of constraints. This feature is useful for creating randomized scenarios for verification. Within class definitions, the rand and randc modifiers signal variables that are to undergo randomization. randc specifies permutation-based randomization, where a variable will take on all possible values once before any value is repeated. Variables without modifiers are not randomized. For example, in a class modeling an Ethernet frame, an fcs field would not be randomized; in practice it would be computed with a CRC generator, and an fcs_corrupt field used to corrupt it to inject FCS errors, with constraints restricting the randomized fields to conforming Ethernet frames. Constraints may be selectively enabled; in such an example this feature would be required to generate corrupt frames. Constraints may be arbitrarily complex, involving interrelationships among variables, implications, and iteration. The SystemVerilog constraint solver is required to find a solution if one exists, but makes no guarantees as to the time it will require to do so, as this is in general an NP-hard problem (boolean satisfiability). In each SystemVerilog class there are three predefined methods for randomization: pre_randomize, randomize and post_randomize. 
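A constrained-random frame class along the lines described above might be sketched as follows (a hedged illustration; the field and constraint names are our own):

```systemverilog
class eth_frame;
  rand bit [47:0] dest;
  rand bit [47:0] src;
  rand bit [15:0] f_type;
  rand byte       payload[];
  bit  [31:0]     fcs;          // no rand: computed by a CRC generator
  rand bit        fcs_corrupt;  // set to inject FCS errors

  // Conforming frames carry a 46..1500-byte payload.
  constraint c_payload { payload.size() inside {[46:1500]}; }

  // Normally only legal frames; disable this constraint to allow
  // corrupt ones:  frame.c_legal.constraint_mode(0);
  constraint c_legal   { fcs_corrupt == 0; }
endclass
```

A testbench would then call `frame.randomize()` and let the solver pick values satisfying all enabled constraints.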
The randomize method is called by the user to randomize the class variables. The pre_randomize method is called by the randomize method before randomization, and the post_randomize method is called by the randomize method after randomization. The constraint_mode() and random_mode() methods are used to control randomization: constraint_mode() turns a specific constraint on and off, and random_mode() turns randomization of a specific variable on or off. Assertions are useful for verifying properties of a design that manifest themselves after a specific condition or state is reached. SystemVerilog has its own assertion specification language, similar to Property Specification Language. The subset of SystemVerilog language constructs that serves assertion is commonly called SystemVerilog Assertions, or SVA.[6] SystemVerilog assertions are built from sequences and properties. Properties are a superset of sequences; any sequence may be used as if it were a property, although this is not typically useful. Sequences consist of boolean expressions augmented with temporal operators. The simplest temporal operator is the ## operator, which performs a concatenation: for example, the sequence req ##1 gnt matches if the gnt signal goes high one clock cycle after req goes high. Note that all sequence operations are synchronous to a clock. Other sequential operators include repetition operators, as well as various conjunctions. These operators allow the designer to express complex relationships among design components. An assertion works by continually attempting to evaluate a sequence or property. An assertion fails if the property fails. The sequence above will fail whenever req is low. To accurately express the requirement that gnt follow req, a property using the implication operator |=> is required. 
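The req/gnt handshake check discussed above might be written as follows (a hedged sketch; the module and label names are our own):

```systemverilog
module handshake_sva (input logic clk, req, gnt);

  // Sequence: gnt is high one clock cycle after req is high.
  sequence s_req_gnt;
    req ##1 gnt;
  endsequence

  // Property: with |=> the left side (req) is the antecedent, so the
  // check is only attempted on cycles where req is high; gnt must
  // then be high on the following clock.
  property p_req_gnt;
    @(posedge clk) req |=> gnt;
  endproperty

  a_req_gnt: assert property (p_req_gnt);
endmodule
```

Asserting the bare sequence s_req_gnt instead would fail on every cycle where req is low, which is why the implication form is needed.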
The clause to the left of the implication is called the antecedent and the clause to the right is called the consequent. Evaluation of an implication starts through repeated attempts to evaluate the antecedent. When the antecedent succeeds, the consequent is attempted, and the success of the assertion depends on the success of the consequent. With the req/gnt property, the consequent won't be attempted until req goes high, after which the property will fail if gnt is not high on the following clock. In addition to assertions, SystemVerilog supports assumptions and coverage of properties. An assumption establishes a condition that a formal logic proving tool must assume to be true. An assertion specifies a property that must be proven true. In simulation, both assertions and assumptions are verified against test stimuli. Property coverage allows the verification engineer to verify that assertions are accurately monitoring the design.[vague] Coverage as applied to hardware verification languages refers to the collection of statistics based on sampling events within the simulation. Coverage is used to determine when the device under test (DUT) has been exposed to a sufficient variety of stimuli that there is a high confidence that the DUT is functioning correctly. Note that this differs from code coverage, which instruments the design code to ensure that all lines of code in the design have been executed. Functional coverage ensures that all desired corner and edge cases in the design space have been explored. A SystemVerilog coverage group creates a database of "bins" that store a histogram of values of an associated variable. Cross-coverage can also be defined, which creates a histogram representing the Cartesian product of multiple variables. A sampling event controls when a sample is taken. The sampling event can be a Verilog event, the entry or exit of a block of code, or a call to the sample method of the coverage group. Care is required to ensure that data are sampled only when meaningful. 
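A coverage group along these lines might be sketched as follows (a hedged illustration; the bin names and ranges are our own, not from the source):

```systemverilog
class frame_coverage;
  bit        broadcast;
  bit [15:0] f_type;
  int        payload_len;

  covergroup frame_cg;
    coverpoint broadcast;           // broadcast vs. unicast frames
    coverpoint f_type;
    coverpoint payload_len {
      bins minimum = { 46 };        // corner case: smallest frame
      bins typical = { [47:1499] };
      bins maximum = { 1500 };      // corner case: largest frame
    }
  endgroup

  function new();
    frame_cg = new();
  endfunction

  // Sample explicitly, only once a complete frame has been observed,
  // so that the data are meaningful.
  function void sample_frame();
    frame_cg.sample();
  endfunction
endclass
```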
For example, a verification engineer may be interested in the distribution of broadcast and unicast frames, the size/f_type field and the payload size; the ranges in a payload-size coverpoint can then reflect the interesting corner cases, including minimum- and maximum-size frames. A complex test environment consists of reusable verification components that must communicate with one another. Verilog's 'event' primitive allowed different blocks of procedural statements to trigger each other, but enforcing thread synchronization was up to the programmer's (clever) usage. SystemVerilog offers two primitives specifically for interthread synchronization: mailbox and semaphore. The mailbox is modeled as a FIFO message queue. Optionally, the FIFO can be type-parameterized so that only objects of the specified type may be passed through it. Typically, the objects are class instances representing transactions: elementary operations (for example, sending a frame) that are executed by the verification components. The semaphore is modeled as a counting semaphore. In addition to the new features above, SystemVerilog enhances the usability of Verilog's existing language features. Besides this, SystemVerilog allows a convenient interface to foreign languages (like C/C++) via the SystemVerilog DPI (Direct Programming Interface). In the design verification role, SystemVerilog is widely used in the chip-design industry. The three largest EDA vendors (Cadence Design Systems, Mentor Graphics, Synopsys) have incorporated SystemVerilog into their mixed-language HDL simulators. 
Although no simulator can yet claim support for the entire SystemVerilog Language Reference Manual, making testbench interoperability a challenge, efforts to promote cross-vendor compatibility are underway.[when?] In 2008, Cadence and Mentor released the Open Verification Methodology, an open-source class library and usage framework to facilitate the development of reusable testbenches and canned verification IP. Synopsys, which had been the first to publish a SystemVerilog class library (VMM), subsequently responded by opening its proprietary VMM to the general public. Many third-party providers have announced or already released SystemVerilog verification IP. In the design synthesis role (transformation of a hardware-design description into a gate netlist), SystemVerilog adoption has been slow. Many design teams use design flows which involve multiple tools from different vendors. Most design teams cannot migrate to SystemVerilog RTL design until their entire front-end tool suite (linters, formal verification and automated test structure generators) supports a common language subset.[needs update?]
https://en.wikipedia.org/wiki/SystemVerilog#Constrained_random_generation
In engineering, a corner case (or pathological case) involves a problem or situation that occurs only outside normal operating parameters, specifically one that manifests itself when multiple environmental variables or conditions are simultaneously at extreme levels, even though each parameter is within the specified range for that parameter. For example, a loudspeaker might distort audio, but only when played at maximum volume, maximum bass, and in a high-humidity environment. Or a computer server may be unreliable, but only with the maximum complement of 64 processors, 512 GB of memory, and 10,000 signed-on users. The investigation of corner cases is of extreme importance, as it can provide engineers with valuable insight into how corner-case effects can be mitigated. In the case where automotive radar fails, corner-case investigation can possibly tell engineers and investigators alike what may have occurred.[1] Corner cases form part of an engineer's lexicon, especially an engineer involved in testing or debugging a complex system. Corner cases are often harder and more expensive to reproduce, test, and optimize because they require maximal configurations in multiple dimensions. They are frequently less tested, given the belief that few product users will, in practice, exercise the product at multiple simultaneous maximum settings. Expert users of systems therefore routinely find corner-case anomalies, and in many of these, errors. The term "corner case" comes about by physical analogy with "edge case", as an extension of the "flight envelope" metaphor to a set of testing conditions whose boundaries are determined by the 2^n combinations of extreme (minimum and maximum) values for the number n of variables being tested, i.e., the total parameter space for those variables. 
Where an edge case involves pushing one variable to a minimum or maximum, putting users at the "edge" of the configuration space, a corner case involves doing so with multiple variables, which would put users at a "corner" of a multidimensional configuration space.
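The 2^n enumeration of extremes described above can be sketched in Python. The parameter names and ranges below are illustrative assumptions for the loudspeaker example, not part of any standard:

```python
from itertools import product

def corner_cases(param_ranges):
    """Enumerate every corner of the parameter space: all combinations
    of the extreme (min, max) values, giving 2**n cases for n parameters."""
    names = list(param_ranges)
    extremes = [param_ranges[name] for name in names]
    return [dict(zip(names, combo)) for combo in product(*extremes)]

# Illustrative loudspeaker-style parameters: n = 3 variables -> 2**3 = 8 corners.
ranges = {"volume": (0, 100), "bass": (0, 10), "humidity_pct": (20, 95)}
cases = corner_cases(ranges)
print(len(cases))  # 8; one corner is volume=100, bass=10, humidity_pct=95
```

A test plan that covers each of these eight configurations exercises the full set of simultaneous extremes, which is exactly the territory where corner-case defects hide.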
https://en.wikipedia.org/wiki/Corner_case
An edge case is a problem or situation that occurs only at an extreme (maximum or minimum) operating parameter. For example, a stereo speaker might noticeably distort audio when played at maximum volume, even in the absence of any other extreme setting or condition. An edge case can be expected or unexpected. In engineering, the process of planning for and gracefully addressing edge cases can be a significant task, and yet this task may be overlooked or underestimated. Non-trivial edge cases can result in the failure of an object that is being engineered. They may not have been foreseen during the design phase, and they may not have been thought possible during normal use of the object. For this reason, attempts to formalize good engineering standards often include information about edge cases. In programming, an edge case typically involves input values that require special handling in an algorithm behind a computer program. As a measure for validating the behavior of computer programs in such cases, unit tests are usually created; they test the boundary conditions of an algorithm, function or method. A series of edge cases around each "boundary" can be used to give reasonable coverage and confidence, using the assumption that if it behaves correctly at the edges, it should behave everywhere else.[2] For example, a function that divides two numbers might be tested using both very large and very small numbers. This assumes that if it works for both ends of the magnitude spectrum, it should work correctly in between.[3] Programmers may also create integration tests to address edge cases not covered by unit tests.[4] These tests cover cases which only appear when a system is tested as a whole. For example, while a unit test may ensure that a function correctly calculates a result, an integration test ensures that this function works properly when integrated with a database or an external API.
These tests are particularly relevant with increasing system complexity in distributed systems, microservices, and Internet of things (IoT) devices. With microservices in particular, testing becomes a challenge, as integration tests may not cover all microservice endpoints, resulting in uncovered edge cases.[5] Other types of testing which relate to edge cases may include load testing and negative/failure testing. Both methods aim at expanding the test coverage of a system, reducing the likelihood of unexpected edge cases. In test-driven development, edge cases may be determined by system requirements and accounted for by tests before writing code. Such documentation may go inside a product requirements document after discussions with stakeholders and other teams.
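The boundary-testing idea above (test the edges of the magnitude spectrum and the special values that need handling) can be sketched as a small unit-test example in Python; `safe_divide` is a hypothetical function invented for illustration, not from any library:

```python
import math

def safe_divide(a, b):
    """Divide a by b, handling the zero-divisor edge case explicitly."""
    if b == 0:
        raise ZeroDivisionError("divisor must be non-zero")
    return a / b

# Exercise the edges of the magnitude spectrum: if the function behaves
# correctly here, we gain confidence it behaves for values in between.
assert math.isclose(safe_divide(1e300, 1e-5), 1e305, rel_tol=1e-9)
assert safe_divide(1e-300, 1e300) == 0.0  # quotient underflows to zero
try:
    safe_divide(1.0, 0)  # the zero-divisor boundary must raise
except ZeroDivisionError:
    pass
```

In a real test suite these assertions would live in unit tests (e.g. under pytest), one per boundary, so a regression at any edge is reported individually.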
https://en.wikipedia.org/wiki/Edge_case
Information sensitivity is the control of access to information or knowledge that might result in loss of an advantage or level of security if disclosed to others.[1] Loss, misuse, modification, or unauthorized access to sensitive information can adversely affect the privacy or welfare of an individual, trade secrets of a business, or even the security and international relations of a nation, depending on the level of sensitivity and nature of the information.[2] Public information refers to information that is already a matter of public record or knowledge. With regard to government and private organizations, access to or release of such information may be requested by any member of the public, and there are often formal processes laid out for how to do so.[3] The accessibility of government-held public records is an important part of government transparency, accountability to its citizens, and the values of democracy.[4] Public records may furthermore refer to information about identifiable individuals that is not considered confidential, including but not limited to: census records, criminal records, sex offender registry files, and voter registration. Routine business information is business information that is not subjected to special protection and may be routinely shared with anyone inside or outside of the business. Confidential information is used in a general sense to mean sensitive information whose access is subject to restriction, and may refer to information about an individual as well as that which pertains to a business. However, there are situations in which the release of personal information could have a negative effect on its owner. For example, a person trying to avoid a stalker will be inclined to further restrict access to such personal information. Furthermore, a person's SSN or SIN, credit card numbers, and other financial information may be considered private if their disclosure might lead to crimes such as identity theft or fraud.
Some types of private information, including records of a person's health care, education, and employment, may be protected by privacy laws.[5] Unauthorized disclosure of private information can make the perpetrator liable for civil remedies and may in some cases be subject to criminal penalties. Even though they are often used interchangeably, personal information is sometimes distinguished from private information, or personally identifiable information. The latter is distinct from the former in that private information can be used to identify a unique individual. Personal information, on the other hand, is information belonging to the private life of an individual that cannot be used to uniquely identify that individual. This can range from an individual's favourite colour to the details of their domestic life.[6] The latter is a common example of personal information that is also regarded as sensitive, where the individual sharing these details with a trusted listener would prefer for it not to be shared with anyone else, and the sharing of which may result in unwanted consequences. Confidential business information (CBI) refers to information whose disclosure may harm the business. Such information may include trade secrets, sales and marketing plans, new product plans, notes associated with patentable inventions, customer and supplier information, financial data, and more.[7] Under TSCA, CBI is defined as proprietary information, considered confidential to the submitter, the release of which would cause substantial business injury to the owner. As of 2016, the US EPA may review and determine whether a company's claim is valid.[8] Classified information generally refers to information that is subject to special security classification regulations imposed by many national governments, the disclosure of which may cause harm to national interests and security.
The protocol of restriction imposed upon such information is categorized into a hierarchy of classification levels in almost every national government worldwide, with the most restricted levels containing information that may cause the greatest danger to national security if leaked. Authorized access is granted on a need-to-know basis to individuals who have also passed the appropriate level of security clearance. Classified information can be reclassified to a different level or declassified (made available to the public) depending on changes of situation or new intelligence. Classified information may also be further denoted with the method of communication or access: for example, Protectively Marked "Secret" Eyes Only, or Protectively Marked "Secret" Encrypted Transfer Only, indicating that the document must be physically read by the recipient and cannot be openly discussed (for example, over a telephone conversation), or that the communication can be sent only using encrypted means. "Eyes only" is often mistakenly taken to mean for the eyes of the intended recipient only;[9] the anomaly becomes apparent when the additional tag "Not within windowed area" is also used. Data privacy concerns exist in various aspects of daily life wherever personal data is stored and collected, such as on the internet, in medical records, financial records, and expressions of political opinion. In over 80 countries in the world, personally identifiable information is protected by information privacy laws, which outline limits to the collection and use of personally identifiable information by public and private entities. Such laws usually require entities to give clear and unambiguous notice to the individual of the types of data being collected, the reason for collection, and planned uses of the data. In consent-based legal frameworks, explicit consent of the individual is required as well.[10] The EU passed the General Data Protection Regulation (GDPR), replacing the earlier Data Protection Directive.
The regulation was adopted on 27 April 2016. It became enforceable from 25 May 2018 after a two-year transition period and, unlike a directive, it does not require national governments to pass any enabling legislation, and is thus directly binding and applicable.[11] "The proposed new EU data protection regime extends the scope of the EU data protection law to all foreign companies processing data of EU residents. It provides for a harmonisation of the data protection regulations throughout the EU, thereby making it easier for non-European companies to comply with these regulations; however, this comes at the cost of a strict data protection compliance regime with severe penalties of up to 4% of worldwide turnover."[12] The GDPR also brings a new set of "digital rights" for EU citizens in an age when the economic value of personal data is increasing in the digital economy. In Canada, the Personal Information Protection and Electronic Documents Act (PIPEDA) regulates the collection and use of personal data and electronic documents by public and private organizations. PIPEDA is in effect in all federal and provincial jurisdictions, except provinces where existing privacy laws are determined to be "substantially similar".[13] Even though it lacks a unified sensitive-information framework, the United States has implemented a significant amount of privacy legislation pertaining to different specific aspects of data privacy, with emphasis on privacy in the healthcare, financial, e-commerce, and educational industries, at both the federal and state levels. Whether regulated or self-regulated, these laws require establishing ways in which access to sensitive information is limited to people with appropriate roles, in essence requiring establishment of a "sensitive data domain" model[14] and mechanisms for its protection.
Some of the domains have a guideline in the form of pre-defined models, such as the "Safe Harbor" method of HIPAA,[15] based on the research of Latanya Sweeney and established privacy industry metrics. Additionally, many other countries have enacted their own legislation regarding data privacy protection, and more are still in the process of doing so.[16] The confidentiality of sensitive business information is established through non-disclosure agreements (NDAs), legally binding contracts between two parties in a professional relationship. NDAs may be one-way, such as in the case of an employee receiving confidential information about the employing organization, or two-way between businesses needing to share information with one another to accomplish a business goal. Depending on the severity of consequences, a violation of non-disclosure may result in employment loss, loss of business and client contacts, criminal charges or a civil lawsuit, and a hefty sum in damages.[17] When NDAs are signed between employer and employee at the initiation of employment, a non-compete clause may be a part of the agreement as an added protection of sensitive business information, where the employee agrees not to work for competitors or start their own competing business within a certain time or geographical limit. Unlike personal and private information, there is no internationally recognized framework protecting trade secrets, or even an agreed-upon definition of the term "trade secret".[18] However, many countries and political jurisdictions have taken the initiative to account for the violation of commercial confidentiality in their criminal or civil laws.
For example, under the US Economic Espionage Act of 1996, it is a federal crime in the United States to misappropriate trade secrets with the knowledge that it will benefit a foreign power or injure the owner of the trade secret.[19] More commonly, breach of commercial confidentiality falls under civil law, such as in the United Kingdom.[20] In some developing countries, trade secret laws are either non-existent or poorly developed and offer little substantial protection.[21] In many countries, unauthorized disclosure of classified information is a criminal offence, and may be punishable by fines, a prison sentence, or even the death penalty, depending on the severity of the violation.[22][23] For less severe violations, civil sanctions may be imposed, ranging from reprimand to revoking of security clearance and subsequent termination of employment.[24] Whistleblowing is the intentional disclosure of sensitive information to a third party with the intention of revealing alleged illegal, immoral, or otherwise harmful actions.[25] There are many examples of present and former government employees disclosing classified information regarding national government misconduct to the public and media, in spite of the criminal consequences that await them. Espionage, or spying, involves obtaining sensitive information without the permission or knowledge of its holder. The use of spies is a part of national intelligence gathering in most countries, and has been used as a political strategy by nation-states since ancient times. It is unspoken knowledge in international politics that countries are spying on one another all the time, even their allies.[26] Computer security is information security applied to computing and network technology, and is a significant and ever-growing field in computer science.
The term computer insecurity, on the other hand, refers to the concept that computer systems are inherently vulnerable to attack, and that there is therefore an evolving arms race between those who exploit existing vulnerabilities in security systems and those who must then engineer new mechanisms of security. A number of security concerns have arisen in recent years as increasing amounts of sensitive information at every level have found their primary existence in digital form. At the personal level, credit card fraud, internet fraud, and other forms of identity theft have become widespread concerns that individuals need to be aware of on a day-to-day basis. The existence of large databases of classified information on computer networks is also changing the face of domestic and international politics. Cyber-warfare and cyber espionage are becoming of increasing importance to the national security and strategy of nations around the world, and it is estimated that 120 nations are currently actively engaged in developing and deploying technology for these purposes.[27] Philosophies and internet cultures such as open-source governance, hacktivism, and the popular hacktivist slogan "information wants to be free" reflect some of the cultural shifts in perception towards political and government secrecy. The popular, controversial WikiLeaks is just one of many manifestations of a growing cultural sentiment that is becoming an additional challenge to the security and integrity of classified information.[28]
https://en.wikipedia.org/wiki/Information_sensitivity
A computer emergency response team (CERT) is an incident response team dedicated to computer security incidents. Other names used to describe CERT include cyber emergency response team, computer emergency readiness team, computer security incident response team (CSIRT), and cyber security incident response team. The name "Computer Emergency Response Team" was first used in 1988 by the CERT Coordination Center (CERT-CC) at Carnegie Mellon University (CMU). The term CERT is registered as a trade and service mark by CMU in multiple countries worldwide. CMU encourages the use of Computer Security Incident Response Team (CSIRT) as a generic term for the handling of computer security incidents. CMU licenses the CERT mark to various organizations that are performing the activities of a CSIRT. The histories of CERT and CSIRT are linked to the existence of malware, especially computer worms and viruses. Whenever a new technology arrives, its misuse is not long in following. The first worm in the IBM VNET was covered up. Shortly after, a worm hit the Internet on 3 November 1988, when the so-called Morris Worm paralysed a good percentage of it. This led to the formation of the first computer emergency response team at Carnegie Mellon University under a U.S. Government contract. With the massive growth in the use of information and communications technologies over the subsequent years, the generic term 'CSIRT' refers to an essential part of most large organisations' structures. In many organisations the CSIRT evolves into an information security operations center.
https://en.wikipedia.org/wiki/Computer_emergency_response_team
In the U.S., critical infrastructure protection (CIP) is a concept that relates to the preparedness and response to serious incidents that involve the critical infrastructure of a region or the nation. The American presidential directive PDD-63 of May 1998 set up a national program of "Critical Infrastructure Protection".[1] In 2014 the NIST Cybersecurity Framework was published after further presidential directives. The U.S. CIP is a national program to ensure the security of vulnerable and interconnected infrastructures of the United States. In May 1998, President Bill Clinton issued presidential directive PDD-63 on the subject of critical infrastructure protection.[1] This recognized certain parts of the national infrastructure as critical to the national and economic security of the United States and the well-being of its citizenry, and required steps to be taken to protect it. This was updated on December 17, 2003, by President George W. Bush through Homeland Security Presidential Directive HSPD-7 for Critical Infrastructure Identification, Prioritization, and Protection.[2] The updated directive added agriculture to the list of critical infrastructure within the country, undoing the omission of agriculture from the 1998 presidential directive. The directive describes the United States as having some critical infrastructure that is "so vital to the United States that the incapacity or destruction of such systems and assets would have a debilitating impact on security, national economic security, national public health or safety."[2] Take, for example, a computer virus that disrupts the distribution of natural gas across a region. This could lead to a consequential reduction in electrical power generation, which in turn leads to the forced shutdown of computerized controls and communications. Road traffic, air traffic, and rail transportation might then become affected. Emergency services might also be hampered.
An entire region can become debilitated because some critical elements in the infrastructure become disabled through natural disaster. While potentially in contravention of the Geneva Conventions,[3] military forces have also recognized that attacking key elements of an enemy's civilian and military infrastructure can cripple its ability to resist. The federal government has developed a standardized description of critical infrastructure in order to facilitate monitoring and preparation for disabling events, and requires private industry in each critical economic sector to meet defined protective obligations. CIP defines sectors and organizational responsibilities in a standard way; in 2003 the remit was expanded to cover additional sectors. With much of the critical infrastructure privately owned, the Department of Defense (DoD) depends on commercial infrastructure to support its normal operations. The Department of State and the Central Intelligence Agency are also involved in intelligence analysis with friendly countries. In May 2007 the DHS completed its sector-specific plans (SSP) for coordinating and dealing with critical events.[6] Continuity of government (COG) provisions allow the government to be preserved during a catastrophic event as seen fit by the president, at which point the welfare of the government can be placed above the welfare of the citizenry of the United States, ensuring that the government is preserved to rebuild the economy and country when it is deemed safe to return. On March 9, 1999, Deputy Defense Secretary John Hamre warned the United States Congress of a cyber terrorist "electronic Pearl Harbor", saying, "It is not going to be against Navy ships sitting in a Navy shipyard. It is going to be against commercial infrastructure". Later this fear was qualified by President Clinton after reports of actual cyber terrorist attacks in 2000: "I think it was an alarm. I don't think it was Pearl Harbor. We lost our Pacific fleet at Pearl Harbor.
I don't think the analogous loss was that great."[7] There are many examples of computer systems that have been hacked or made victims of extortion. One such example occurred in September 1995, when a Russian national allegedly masterminded the break-in of Citicorp's electronic funds transfer system and was ordered to stand trial in the United States.[8] A gang of hackers under his leadership had breached Citicorp's security 40 times during 1994. They were able to transfer $12 million from customer accounts and withdraw an estimated $400,000. In the past, the systems and networks of the infrastructure elements were physically and logically independent and separate. They had little interaction or connection with each other or other sectors of the infrastructure. With advances in technology, the systems within each sector became automated and interlinked through computers and communications facilities. As a result, the flow of electricity, oil, gas, and telecommunications throughout the country is linked—albeit sometimes indirectly—but the resulting linkages blur traditional security borders. While this increased reliance on interlinked capabilities helps make the economy and nation more efficient and perhaps stronger, it also makes the country more vulnerable to disruption and attack. This interdependent and interrelated infrastructure is more vulnerable to physical and cyber disruptions because it has become a complex system with single points of failure. An incident that in the past would have been an isolated failure can now cause widespread disruption because of cascading effects.[9] As an example, capabilities within the information and communication sector have enabled the United States to reshape its government and business processes, while becoming increasingly software driven. One catastrophic failure in this sector now has the potential to bring down multiple systems, including air traffic control, emergency services, banking, trains, electrical power, and dam control.
The elements of the infrastructure themselves are also considered possible targets of terrorism. For example, the 2022 attack on North Carolina's power substations near Carthage left tens of thousands of residents without power. The ordeal left residents without proper heating, hot water, and the ability to cook for days as repairs took place. Authorities noted that the attack was intentionally committed via gunfire.[10] Traditionally, critical infrastructure elements have been lucrative targets for anyone wanting to attack another country. Now, because the infrastructure has become a national lifeline, terrorists can achieve high economic and political value by attacking elements of it. Disrupting or even disabling the infrastructure may reduce the ability to defend the nation, erode public confidence in critical services, and reduce economic strength. Additionally, well-chosen terrorist attacks can become easier and less costly than traditional warfare because of the interdependence of infrastructure elements. These infrastructure elements can become easier targets where there is a low probability of detection. The elements of the infrastructure are also increasingly vulnerable to a dangerous mix of traditional and nontraditional types of threats, including equipment failures, human error, weather and natural causes, physical attacks, and cyber-attacks. For each of these threats, the cascading effect caused by single points of failure has the potential to pose dire and far-reaching consequences. There are fears from global leaders that the frequency and severity of critical infrastructure incidents will increase in the future.[11] These infrastructure failures are prone to greatly affect the country's residents, who are on high alert. One of these future potential failures can be seen within the cyber security world, as American citizens fear that their technological infrastructure is at risk.
This comes as the world becomes more technologically advanced with the introduction of AI and technology into many areas of American life.[12] Although efforts are under way, there is no unified national capability to protect the interrelated aspects of the country's infrastructure. One reason for this is that a good understanding of the inter-relationships does not exist. There is also no consensus on how the elements of the infrastructure mesh together, or how each element functions and affects the others. Securing national infrastructure depends on understanding the relationships among its elements as well as the immediate and delayed effects that failures may have on residents. Thus, when one sector scheduled a three-week drill to mimic the effects of a pandemic flu, even though two-thirds of the participants claimed to have business continuity plans in place, barely half reported that their plan was moderately effective.[13] Such failures can have drastic effects on those who do not have the necessary safeguards in place to protect themselves. Some of the most critical infrastructure in the US comprises the roads, hospitals, and infrastructure that provides food to residents. An increased risk of the collapse of these critical infrastructure points can lead to, and in most cases has led to, drastic decreases in access to water, food, medical attention, and electricity: critical needs for residents trapped in their homes and areas, residents who rely on medication or need to be transported to the nearest hospital, and residents who are heavily affected by malnutrition. This was seen during the 2005 Hurricane Katrina aftermath in New Orleans, as thousands were displaced, hundreds killed, and thousands more injured with no clear way of receiving shelter or assistance.[14] There is a current movement to improve critical infrastructure with residents in mind.
Critical infrastructure protection requires the development of a national capability to identify and monitor the critical elements and to determine when and if the elements are under attack or are the victim of destructive natural occurrences. These natural occurrences have become more of a threat over the past several years due to climate change: increased occurrences of stronger storms, longer droughts, and rising sea levels.[15] CIP's importance lies in the link between risk management and infrastructure assurance. It provides the capability needed to eliminate potential vulnerabilities in the critical infrastructure. CIP practitioners determine vulnerabilities and analyze alternatives to prepare for incidents. They focus on improving the capability to detect and warn of impending attacks on, and system failures within, the critical elements of the national infrastructure. However, there are skeptics who see certain national infrastructure methods as harmful to the communities that local and federal governments swore to protect. This is a key factor in the movement against Atlanta's "Cop City", as residents say that there are negative systemic effects as well as negative environmental effects.[16] The Atlanta Public Safety Training Center, also known as "Cop City", is one of many examples of critical infrastructure created to serve the purpose of protecting civilian populations. This form of critical infrastructure works indirectly, as the project aims to train current and incoming police officers and combat units.
The plan was put forth by the Atlanta Police Foundation in 2017 as the "Vision Safe Atlanta – Public Safety Action Plan." This plan sees the police foundation receiving upgrades and attention to the crumbling infrastructure of police buildings and necessities throughout Atlanta.[17] The largest addition to the foundation and police force's portfolio is about 85 acres of the city's greenspace to construct a state-of-the-art training facility.[17] The Atlanta Police Foundation is a private entity that works for the betterment of the city's police force. According to the foundation, the project has the backing of CEOs in the area along with public officials who support not only the foundation but also the $90 million project, which would lease and raze about 85 acres of publicly owned forests.[17] According to… the center is being privately funded by other private entities, with two-thirds of funding coming from private finance and the other third coming from taxpayer dollars.[18] The new center would replace the crumbling police academy, which, according to the "Public Safety Action Plan," needs to be replaced. The "Public Safety Action Plan" budgets a renovated and updated police academy at over $2 million,[17] a much cheaper price tag than the $30 million that the city would be contributing for the completion of the facility. These facilities include "a firing range, a burn building, and a 'kill house' designed to mimic urban combat scenarios".[17] The overall goal of the facility is to ensure that Atlanta police officers are getting better training within the city while also getting a fresh space. Although a positive outlook for the Police Foundation and the Atlanta Police Department, the facility has faced pushback from the public over the destruction of public land as well as police misconduct concerns.
Atlanta is a city that is surrounded by large and lush green foliage, earning the title of "a city in a forest" from residents and visitors; according to residents, the importance of the city's forests cannot be overstated. With such a large presence of trees and green space, residents see these spaces as a vital part of the city's natural ecosystems. Residents are concerned that the clearing of 85 acres of the forest will bring about an increase in poor air quality, decrease natural habitats in the area, and increase flooding in a vulnerable community which happens to be predominantly black.[18] For some residents, the potential ecological damage was too much of a risk to stay idle; this is in addition to the potential increase of police violence.[16] The movement to stop "cop city" came about as the calls to defund the Atlanta Police Department grew in the wake of calls to defund police departments across the country.[18][16] Numerous organizations have worked to prevent construction from beginning through acts of moving into the forest, sabotaging equipment, and seeking legal action against the city and the private companies supplying equipment for the construction of the facility.[18] Many people outside of the community worked to stop "cop city" as well. The forest that the Police Foundation seeks to build its facility on is part of native Muscogee land. Tribe members traveled to the city to demand that the city end work and retreat from Muscogee land. The movement to stop "cop city" became a group effort by individuals who wanted to see change in the justice system in Atlanta as well as individuals who wanted to protect the natural habitats and seek justice for the nonhuman species of Atlanta's forests.[18] Although a growing movement, it faces pushback from the city and the Police Foundation, who want the so-called "cop city" to go forward and will do anything to get it done.
In defending the forest and trying to be heard and recognized by the city and the state government, protesters were met with harsh punishments, both legal and physical. During a protest, a nonbinary environmental activist named Manuel Esteban Paez Terán, or Tortuguita, was killed by Georgia State Patrol on January 18, 2023, bringing to light the violent effects of police patrol and surveillance of protesters.[18][19] As much as protesters are speaking up and out about the environmental destruction and negative effects of infrastructure, many laws and policies are going into place that make it harder to exercise one’s free speech. Critical Infrastructure (CI) trespass bills are being introduced across the country to allow for the detention and prosecution of protesters who get in the way of infrastructure construction.[19] According to Jalbert et al., if these bills become law they will allow for the use of drones, excessive force, facial recognition, and community surveillance tactics.[19][20] Further, Akbar, Love, and Donoghoe argue that these bills will disproportionately affect protesters of color.[16][18] This is seen in the death of Tortuguita and the mass arrest of Indigenous, black, and brown protesters. Not only are there arrests and violent responses by law authorities, but officials are also taking actions to make CI trespassing a felony, as many cite the protest of critical infrastructure as terroristic action.[19] These laws stem from the growing systematic push to criminalize protesters who prevent the construction of new critical infrastructure; they are a response to nationwide protests against pipeline construction.
These are protests that stem from the desire to protect the climate and to protect Indigenous lands.[21] Protests tend to work to stop or slow the construction of new pipelines and aim to speak out against the local, state, and federal governments that support, and in many cases fund, the addition of oil and gas pipelines.[21] Indigenous peoples argue that pipeline construction violates tribal treaties and has the potential to jeopardize the land through pollution.[22] The intertwining of alleged environmental and systemic oppression has pushed residents of areas slated for pipeline construction to speak out against the projects. Because oil and gas pipelines fall under the energy sector, they are considered critical infrastructure, and the US government therefore aims to uphold and protect them. As a result, multiple states have deployed “anti-protest” laws to prevent the disruption of pipeline construction and the development of any project considered critical infrastructure and necessary for the advancement of the country.[23][21] These laws make it a felony to stall or prevent the construction and development of critical infrastructure projects within the states that implement them.[23][24] Amid the growing political push to prevent protesters from interfering in infrastructure projects, Georgia has become a state that is slowly using “anti-protest” laws and measures to prevent people from protesting “cop city” and other critical infrastructure projects in the state.[20] Activists have been detained and charged with felonies as they protest “cop city”.[20] Critical infrastructure projects in the US have their supporters and their protesters, and the responses to these projects are made clear from all sides.

PDD-63 mandated the formation of a national structure for critical infrastructure protection.
To accomplish this, one of the primary actions was to produce a National Infrastructure Assurance Plan, or NIAP, later renamed National Infrastructure Protection Plan or NIPP. The different entities of the national CIP structure work together as a partnership between the government and the public sectors. Each department and agency of the federal government is responsible for protecting its portion of the government's critical infrastructure. In addition, there are grants made available through the Department of Homeland Security for municipal and private entities to use for CIP and security purposes. These include grants for emergency management, water security training, rail, transit and port security, metropolitan medical response, LEA terrorism prevention programs and the Urban Areas Security Initiative.[25] PDD-63 identified certain functions related to critical infrastructure protection that must be performed chiefly by the federal government. These are national defense, foreign affairs, intelligence, and law enforcement. Each lead agency for these special functions appoints a senior official to serve as a functional coordinator for the federal government. In 2008 a mobile PDA-based Vulnerability Assessment Security Survey Tool (VASST) was introduced to speed physical security assessment of critical infrastructure by law enforcement to meet compliance requirements of PDD-63.[26] The National Infrastructure Protection Plan (NIPP) is a document called for by Homeland Security Presidential Directive 7, which aims to unify Critical Infrastructure and Key Resource (CIKR) protection efforts across the country. The latest version of the plan was produced in 2013.[27] The NIPP's goals are to protect critical infrastructure and key resources and ensure resiliency. It is generally considered unwieldy and not an actual plan to be carried out in an emergency, but it is useful as a mechanism for developing coordination between government and the private sector.
The NIPP is based on the model laid out in the 1998 Presidential Decision Directive-63, which identified critical sectors of the economy and tasked relevant government agencies to work with them on sharing information and on strengthening responses to attack. The NIPP is structured to create partnerships between Government Coordinating Councils (GCC) from the public sector and Sector Coordinating Councils (SCC) from the private sector for the eighteen sectors DHS has identified as critical. For each of the identified major sectors of the critical infrastructure, the federal government appointed a Sector Liaison Official from a designated Lead Agency. A private sector counterpart, a Sector Coordinator, was also identified. Together, the two sector representatives, one federal government and one corporate, were responsible for developing a sector NIAP. In addition, each department and agency of the federal government was responsible for developing its own CIP plan for protecting its portion of the federal government's critical infrastructure. The federal department and agency plans were assimilated with the sector NIAPs to create one comprehensive National Infrastructure Assurance Plan. Additionally, the national structure must ensure there is a national CIP program. This program includes responsibilities such as education and awareness, threat assessment and investigation, and research. The process includes a number of assessments. Examples of similar critical infrastructure protection plans are the German National Strategy for Critical Infrastructure Protection (CIP Strategy) and the Swedish STYREL (steering of electricity to prioritized users during short-term electricity shortages).[28] There have been public criticisms of the mechanisms and implementation of some security initiatives and grants, with claims they are being led by the same companies who can benefit,[29] and that they are encouraging an unnecessary culture of fear.
Commentators note that these initiatives started directly after the end of the Cold War, raising the concern that this was simply a diversion of the military-industrial complex away from a funding area which was shrinking and into a richer previously civilian arena. Grants have been distributed across the different states even though the perceived risk is not evenly spread, leading to accusations of pork barrel politics that directs money and jobs towards marginal voting areas. The Urban Areas Security Initiative grant program has been particularly controversial, with the 2006 infrastructure list covering 77,000 assets, including a popcorn factory and a hot dog stand.[30] The 2007 criteria were reduced to 2,100, and now those facilities must make a much stronger case to become eligible for grants.[31] While well-intentioned, some of the results have also been questioned regarding claims of poorly designed and intrusive security theater that distracts attention and money from more pressing issues or creates damaging side effects. An absence of comparative risk analysis and benefits tracking has made it difficult to counter such allegations with authority. In order to better understand this, and ultimately direct effort more productively, a Risk Management and Analysis Office was recently created in the National Protection and Programs Directorate at the Department of Homeland Security. The U.S. Department of Defense is responsible for protecting its portion of the government's critical infrastructure. But as part of the CIP program, DoD has responsibilities that traverse both the national and department-wide critical infrastructure. PDD-63 identified the responsibilities DoD had for critical infrastructure protection. First, DoD had to identify its own critical assets and infrastructures and provide assurance through analysis, assessment, and remediation.
DoD was also responsible for identifying and monitoring the national and international infrastructure requirements of industry and other government agencies, all of which needed to be included in the protection planning. DoD also addressed the assurance and protection of commercial assets and infrastructure services in DoD acquisitions. Other DoD responsibilities for CIP included assessing the potential impact on military operations that would result from the loss or compromise of infrastructure service. There were also requirements for monitoring DoD operations, detecting and responding to infrastructure incidents, and providing department indications and warnings as part of the national process. Ultimately, DoD was responsible for supporting national critical infrastructure protection. In response to the requirements identified in PDD-63, DoD categorized its own critical assets by sector, in a manner similar to the national CIP organization. The DoD identified a slightly different list of infrastructure sectors for those areas that specifically required protection by DoD. DoD's organizational structure for critical infrastructure protection reflects, complements, and effectively interacts with the national structure for CIP. There are ten defense critical infrastructure sectors that are protected by the DoD. These include: The DoD CIP special function components interface with the equivalent national functional coordinators and coordinate all activities related to their function within DoD. DoD's special function components currently include seven areas of focus. They include the following components: As mandated by PDD-63, the DoD must protect its portion of the federal government's critical infrastructure. For DoD, this is the Defense Infrastructure or DI. Protecting the Defense Infrastructure is a complex task involving ten defense sectors. 
Because it was deemed nearly impossible to protect every critical asset at every location, the focus was directed at protecting the critical Defense Infrastructure. The critical Defense Infrastructure comprises the critical assets essential to providing mission assurance. The six phases of the DoD CIP life cycle build on one another to create a framework for a comprehensive solution for infrastructure assurance. The life cycle phases occur before, during, and after an event that may compromise or degrade the infrastructure. A synopsis of the six phases is: Effective management of the CIP life cycle ensures that protection activities can be coordinated and reconciled among all DoD sectors. In many ways, DoD CIP is risk management at its most imperative. Achieving success means obtaining mission assurance. Missing the mark can mean mission failure as well as human and material losses. For critical infrastructure protection, risk management requires leveraging resources to address the most critical infrastructure assets that are also the most vulnerable and that have the greatest threat exposure. The most important part of the CIP life cycle is Phase 1. Because it is crucial to target the right assets for infrastructure protection, determining these assets is the first phase in the CIP life cycle. This phase, Analysis and Assessment, is the key to and foundation of the life cycle activities. Without a solid foundation, the remaining CIP life cycle phases may be flawed, resulting in a CIP plan that fails to protect the critical infrastructure and, therefore, mission assurance. Phase 1 determines which assets are important and identifies their vulnerabilities and dependencies so that decision makers have the information they need to make effective risk management choices. The Defense Infrastructure, or DI, is organized into ten sectors. Each sector is composed of assets, such as systems, programs, people, equipment, or facilities.
Assets may be simple, such as one facility within one geographic location, or complex, involving geographically dispersed links and nodes. The Analysis and Assessment phase is made up of five steps that include activities that span and encompass the ten DI sectors and their assets. On August 24, 2001, the Director of the Joint Staff requested USPACOM to serve as the lead support Combatant Command for creating the first-ever theater CIP plan, known as the “CIP Appendix 16 Plan”. The following is how USPACOM approached the task. USPACOM focused the Analysis and Assessment phase by organizing its activities to answer three major questions: To answer the question, “What is critical?”, USPACOM outlined a three-step procedure: To accomplish these steps, USPACOM adopted a methodology that focuses its CIP efforts on Tier 1 assets. Tier 1 assets are assets that could cause mission failure if they are compromised or damaged. The methodology USPACOM adopted and modified is Mission Area Analysis, or MAA. The MAA links combatant command missions to infrastructure assets that are critical to a given Operations Plan, or OPLAN, Contingency Plan, or CONPLAN, or Crisis Action Plan. Typically, the MAA process determines the assessment site priorities. USPACOM modified the process and selected the CIP assessment sites and installations prior to conducting the MAA. The following is an illustration of the USPACOM MAA process: USPACOM uses the MAA data it gathers to scope and focus its efforts on truly mission-critical assets to answer the next question in its process, “Is it vulnerable?” The first step in answering this question is to complete an installation analysis. The next step is to complete a commercial infrastructure analysis. USPACOM relied upon two different DoD organizations for CIP assessments: Balanced Survivability Assessments, or BSAs, and Mission Assurance Assessments. The BSA is a two-week mission-focused assessment at a military installation or designated site.
A Mission Assurance Assessment is unique because it uses an area assessment approach to focus on both commercial and military asset vulnerabilities and dependencies. The final step to determine vulnerabilities is to integrate the two analyses and assessments. With its critical assets and their vulnerabilities identified, USPACOM is ready to perform risk management activities to decide what can be done to protect the mission-critical assets. Booz Allen Hamilton developed this process at PACOM. The first phase of the CIP life cycle, Analysis and Assessment, identified the critical assets of DoD sector infrastructures and the vulnerabilities or weaknesses of those critical assets. The second phase is the Remediation phase. In the Remediation phase, the known weaknesses and vulnerabilities are addressed. Remediation actions are deliberate, precautionary measures designed to fix known virtual and physical vulnerabilities before an event occurs. The purpose of remediation is to improve the reliability, availability, and survivability of critical assets and infrastructures. Remediation actions apply to any type of vulnerability, regardless of its cause. They apply to acts of nature, technology failures, or deliberate malicious actions. The cost of each remediation action depends on the nature of the vulnerability it addresses. The Defense Infrastructure Sector Assurance Plan, which each infrastructure sector must develop, establishes the priorities and resources for remediation. Remediation requirements are determined by multiple factors. These are analysis and assessment, input from military planners and other DoD sectors, the National Infrastructure Assurance Plan and other plans, reports, and information on national infrastructure vulnerabilities and remediation, as well as intelligence estimates and assessments of threats.
Remediation requirements are also gathered through lessons learned from Defense Infrastructure sector monitoring and reporting and infrastructure protection operations and exercises. The CIP program tracks the status of remediation activities for critical assets. Remediation activities to protect the critical Defense Infrastructure cross multiple Department components. The need to monitor activities and warn of potential threats to the United States is not new. From conventional assaults to potential nuclear attacks, the military has been at the forefront of monitoring and warning of potential dangers since the founding of the country. Protecting the security and well-being of the United States, including the critical Defense Infrastructure, has now entered a new era. It has been deemed essential to have a coordinated ability to identify and warn of potential or actual incidents among critical infrastructure domains. The ability to detect and warn of infrastructure events is the third phase of the critical infrastructure protection life cycle, the Indications and Warnings phase. Indications and warnings are actions or infrastructure conditions that signal an event is either: Historically, DoD event indications have focused on and relied on intelligence information about foreign developments. These event indications have been expanded to include all potential infrastructure disruption or degradation, regardless of its cause. DoD CIP indications are based on four levels of input: This fusion of traditional intelligence information with sector-specific information has been determined to be essential for meaningful CIP indications. If an indication is detected, a warning notifying the appropriate asset owners of a possible or occurring event or hazard can be issued. The sector's assurance plan determines what conditions and actions are monitored and reported for each Defense Infrastructure Sector.
Each sector must develop a written Defense Sector Assurance Plan that includes a compendium of sector incidents for monitoring and reporting. The sector incident compendium is made up of three types of incidents: DoD critical asset owners, installations, and sector CIAOs determine the DoD and sector-defined incidents. Each of the reportable incidents or classes of incidents must include the following components: The National Infrastructure Protection Center (NIPC) is the primary national warning center for significant infrastructure attacks. Critical asset owners, DoD installations, and Sector CIAOs monitor the infrastructure daily. Indications of an infrastructure incident are reported to the National Military Command Center, or NMCC. If indications are on a computer network, they are also reported to the Joint Task Force Computer Network Operations (JTF-CNO). The NMCC and JTF-CNO assess the indications and pass them to the NIPC and appropriate DoD organizations. When the NIPC determines that an infrastructure event is likely to occur, is planned, or is under way, it issues a national warning. For DoD, the NIPC passes its warnings and alerts to the NMCC and JTF-CNO. These warnings and alerts are then passed to the DoD components. The warning may include guidance regarding additional protection measures DoD should take. Phase 1 of the CIP life cycle provided a layer of protection by identifying and assessing critical assets and their vulnerabilities. Phase 2 provided another layer of protection by remediating or improving the identified deficiencies and weaknesses of an asset. Even with these protections and precautions, an infrastructure incident was still possible. When one occurs, the Indications and Warnings phase goes into effect. The Mitigation phase (Phase 4) is made up of preplanned, coordinated actions in response to infrastructure warnings or incidents. Mitigation actions are taken before or during an infrastructure event.
These actions are designed to minimize the operational impact of the loss of a critical asset, facilitate incident response, and quickly restore the infrastructure service. A primary purpose of the Mitigation phase is to minimize the operational impact on other critical Defense Infrastructures and assets when a critical asset is lost or damaged. As an example, consider a U.S. installation, Site A, located in a host nation. Site A is a tier 1 asset, meaning that if it fails, the Combatant Command's mission fails. Site A has mutual Global Information Grid Command and Control (GIG/C2) information interdependencies with Sites B and C. In addition, other Defense Infrastructure sectors rely on Site A for mission capabilities. In this scenario, what could be the impact if the supply line to the commercial power plant that provides the installation's primary power were accidentally severed? Because of all the interdependencies, losing this asset is more than the loss of just one site. It means the loss of other sector capabilities. A possible mitigation action might be for Site A to go on backup power. An alternate action could be to pass complete control of Site A's functionality to another site, where redundancy has been previously arranged. These actions would limit the impact of this incident on the other sites and related sectors. In addition to lessening the operational impact of a critical infrastructure event, the Mitigation phase of the CIP life cycle supports and complements two other life cycle phases. Mitigation actions aid in the emergency, investigation, and management activities of Phase 5, Incident Response. They also facilitate the reconstitution activities of Phase 6.
During the Mitigation phase, DoD critical asset owners, DoD installations, and Sector Chief Infrastructure Assurance Officers, or CIAOs, work with the National Military Command Center (NMCC) and the Joint Task Force-Computer Network Operations (JTF-CNO) to develop, train for, and exercise mitigation responses for various scenarios. When there is a warning, emergency, or infrastructure incident, the critical asset owners, installations, and Sector CIAOs initiate mitigation actions to sustain service to the DoD. They also provide mitigation status information to the NMCC and JTF-CNO. The NMCC monitors for consequences from an event within one Defense Infrastructure sector that are significant enough to affect other sectors. For events that cross two or more sectors, the NMCC advises on the prioritization and coordination of mitigation actions. When event threats or consequences continue to escalate, the NMCC directs mitigation actions by sector to ensure a coordinated response across the DoD. The NMCC and the JTF-CNO keep the National Infrastructure Protection Center, or NIPC, apprised of any significant mitigation activities. When an event affects the Defense Infrastructure, the Incident Response phase begins. Incident Response is the fifth phase of the CIP life cycle. The purpose of the Incident Response phase is to eliminate the cause or source of an infrastructure event. For example, during the 9/11 attacks on the World Trade Center and Pentagon, all non-military airplanes were grounded over the United States to prevent further incidents. Response activities included emergency measures, not from the asset owners or operators, but from dedicated third parties such as law enforcement, medical rescue, fire rescue, hazardous material or explosives handling, and investigative agencies. Response to Defense Infrastructure incidents can take one of two paths depending on whether or not the event affects a DoD computer network.
When incidents compromise a DoD computer network, the Joint Task Force-Computer Network Operations (JTF-CNO) directs the response activities. These activities are designed to stop the computer network attack, contain and mitigate damage to a DoD information network and then restore minimum required functionality. JTF-CNO also requests and coordinates any support or assistance from other Federal agencies and civilian organizations during incidents affecting a DoD network. When incidents impact any other DoD-owned assets, installation commanders and critical asset owners follow traditional channels and procedures to coordinate responses. This includes notifying affected Sector Chief Infrastructure Assurance Officers, or CIAOs, in the initial notice and status reporting. Although third parties play a major role in the response to Defense Infrastructure events, DoD CIP personnel also have responsibilities to fulfill. After the source or cause of an infrastructure event is eliminated or contained, the infrastructure and its capabilities must be restored. Reconstitution is the last phase of the critical infrastructure protection life cycle. Reconstitution is probably the most challenging and least developed process of the life cycle. DoD critical asset owners have the major responsibility for reconstitution.
https://en.wikipedia.org/wiki/Critical_infrastructure_protection
Development testing is a software development process that involves synchronized application of a broad spectrum of defect prevention and detection strategies in order to reduce software development risks, time, and costs. Depending on the organization's expectations for software development, development testing might include static code analysis, data flow analysis, metrics analysis, peer code reviews, unit testing, code coverage analysis, traceability, and other software verification practices. Development testing is performed by the software developer or engineer during the construction phase of the software development lifecycle.[1] Rather than replacing traditional QA focuses, it augments them.[2] Development testing aims to eliminate construction errors before code is promoted to QA; this strategy is intended to increase the quality of the resulting software as well as the efficiency of the overall development and QA process.[3] Development testing is applied for the following main purposes: VDC research reports that the standardized implementation of development testing processes within an overarching standardized process not only improves software quality (by aligning development activities with proven best practices) but also increases project predictability.[4] voke research reports that development testing makes software more predictable, traceable, visible, and transparent throughout the software development lifecycle.[2] In each of the above applications, development testing starts by defining policies that express the organization's expectations for reliability, security, performance, and regulatory compliance.
Then, after the team is trained on these policies, development testing practices are implemented to align software development activities with these policies.[5] These development testing practices include: The emphasis on applying a broad spectrum of defect prevention and defect detection practices is based on the premise that different development testing techniques are tuned to expose different types of defects at different points in the software development lifecycle, so applying multiple techniques in concert decreases the risk of defects slipping through the cracks.[3] The importance of applying a broad set of practices is confirmed by Boehm and Basili in the often-referenced "Software Defect Reduction Top 10 List."[7] The term "development testing" has occasionally been used to describe the application of static analysis tools. Numerous industry leaders have taken issue with this conflation because static analysis is not technically testing; even static analysis that "covers" every line of code is incapable of validating that the code does what it is supposed to do, or of exposing certain types of defects or security vulnerabilities that manifest themselves only as software is dynamically executed. Although many warn that static analysis alone should not be considered a silver bullet or panacea, most industry experts agree that static analysis is a proven method for eliminating many security, reliability, and performance defects. In other words, while static analysis is not the same as development testing, it is commonly considered a component of development testing.[8][9] In addition to various implementations of static analysis, such as flow analysis, and unit testing, development testing also includes peer code review as a primary quality activity. Code review is widely considered one of the most effective defect detection and prevention methods in software development.[10]
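As a concrete illustration of one of the practices named above, the sketch below shows what a minimal unit test looks like in C using the standard assert macro; the clamp function and its test are hypothetical examples invented here, not drawn from the cited research.

```c
#include <assert.h>

/* Hypothetical unit under test: pin a value into the range [lo, hi]. */
int clamp(int value, int lo, int hi) {
    if (value < lo) return lo;
    if (value > hi) return hi;
    return value;
}

/* A unit test exercises the unit in isolation during the construction
   phase, before the code is promoted to QA. */
void test_clamp(void) {
    assert(clamp(5, 0, 10) == 5);    /* in range: unchanged */
    assert(clamp(-3, 0, 10) == 0);   /* below range: pinned to lo */
    assert(clamp(42, 0, 10) == 10);  /* above range: pinned to hi */
}
```

In practice such tests run on every build, and a code coverage tool then reports which lines the assertions actually exercised.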
https://en.wikipedia.org/wiki/Development_testing
BoundsChecker is a memory checking and API call validation tool used for C++ software development with Microsoft Visual C++. It was created by NuMega in the early 1990s. When NuMega was purchased by Compuware in 1997, BoundsChecker became part of a larger tool suite, DevPartner Studio. Micro Focus purchased the product line from Compuware in 2009.[1] Comparable tools include Purify, Insure++ and Valgrind. BoundsChecker may be run in two distinct modes: ActiveCheck, which will work against any application as is, or FinalCheck, which makes use of instrumentation added to the application when it is built. ActiveCheck performs a less intrusive analysis and monitors all calls by the application to the C Runtime Library, Windows API and COM objects. By monitoring memory allocations and releases, it can detect memory leaks and overruns. Monitoring API and COM calls enables ActiveCheck to check parameters, returns and exceptions and report exceptions when they occur. Thread deadlocks can also be detected by monitoring of the synchronization objects and calls, giving actual and potential deadlock detection. FinalCheck requires an instrumented build and gives a much deeper but more intrusive analysis. It provides all of the detection features of ActiveCheck plus the ability to detect buffer overflows (read and write) and uninitialized memory accesses. It monitors every scope change, and tracks pointers referencing memory objects. API calls are monitored, their input parameters verified before the function calls are actually performed, warning of possible problems. The API return codes are also monitored, and error codes are logged. Such validation is limited to such APIs as are known to BoundsChecker, currently several thousand in number. If Memory Tracking is enabled, API Call Validation can make use of the information gathered for more precise validation of memory pointers.
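To make the defect classes concrete, here is a hypothetical C fragment containing the kind of memory leak a runtime allocation monitor such as ActiveCheck is described as catching: the program behaves correctly when run, which is exactly why tool support is needed to notice that the C-runtime allocation is never released. The function names are illustrative, not taken from the product's documentation.

```c
#include <stdlib.h>
#include <string.h>

/* Allocates a copy of a string via the C Runtime Library. A tool that
   monitors allocations and releases records this malloc call. */
char *copy_name(const char *name) {
    char *buf = malloc(strlen(name) + 1);
    if (buf != NULL)
        strcpy(buf, name);
    return buf; /* caller is responsible for calling free() */
}

/* Returns the length of the copied name but never frees the buffer:
   a leak that allocation tracking would report at process exit. */
size_t leaky_length(const char *name) {
    char *buf = copy_name(name);
    return (buf != NULL) ? strlen(buf) : 0; /* buf is leaked here */
}
```

A checker that pairs each allocation with its release would flag the buffer allocated in copy_name as still reachable but never freed when the process exits.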
When both memory tracking and API validation are enabled, it becomes possible to detect many kinds of array and buffer overrun conditions. Compiler instrumentation enhances this ability. This is the feature for which the product was originally named. API, COM method and .NET Interop function calls can be logged in detail, noting the call parameter values and the resulting return values. This feature is limited in value, as non-trivial applications often result in the session log quickly becoming too large. A report can be generated analyzing .NET Interop, garbage collection and finalizer activity over the life of the process under test. Certain kinds of deadly embraces and other such lockups can be detected. The current version (12.1.40) of BoundsChecker supports 32-bit and 64-bit native applications on Windows 10 (2020 Spring Update). MS-DOS, 16-bit Windows, Windows 2000, Windows XP and Windows 7 environments are no longer supported. As part of DevPartner Studio, the product integrates with Visual Studio 2017 Update 15.9.33 and Visual Studio 2019 Update 16.9.[2] As of March 2021, the Deadlock Analysis feature is not yet supported in x64 applications.
https://en.wikipedia.org/wiki/BoundsChecker
A "Hello, World!" program is usually a simple computer program that emits (or displays) to the screen (often the console) a message similar to "Hello, World!". A small piece of code in most general-purpose programming languages, this program is used to illustrate a language's basic syntax. Such a program is often the first written by a student of a new programming language,[1] but it can also be used as a sanity check to ensure that the computer software intended to compile or run source code is correctly installed, and that its operator understands how to use it. While several small test programs have existed since the development of programmable computers, the tradition of using the phrase "Hello, World!" as a test message was influenced by an example program in the 1978 book The C Programming Language,[2] with likely earlier use in BCPL. The example program from the book prints "hello, world", and was inherited from a 1974 Bell Laboratories internal memorandum by Brian Kernighan, Programming in C: A Tutorial.[3] In Kernighan's example, the main() function defines where the program should start executing. The function body consists of a single statement, a call to the printf() function, which stands for "print formatted"; it outputs to the console whatever is passed to it as the parameter, in this case the string "hello, world". The C-language version was preceded by Kernighan's own 1972 A Tutorial Introduction to the Language B,[4] where the first known version of the program is found in an example used to illustrate external variables. That program prints hello, world! on the terminal, including a newline character. The phrase is divided into multiple variables because in B a character constant is limited to four ASCII characters. The previous example in the tutorial printed hi! on the terminal, and the phrase hello, world! was introduced as a slightly longer greeting that required several character constants for its expression.
The Jargon File reports that "hello, world" instead originated in 1967 with the language BCPL.[5] Outside computing, use of the exact phrase began over a decade prior; it was the catchphrase of New York radio disc jockey William B. Williams beginning in the 1950s.[6] "Hello, World!" programs vary in complexity between different languages. In some languages, particularly scripting languages, the "Hello, World!" program can be written as one statement, while in others (more so many low-level languages) many more statements can be required. For example, in Python, to print the string Hello, World! followed by a newline, one only needs to write print("Hello, World!"). In contrast, the equivalent code in C++[7] requires the import of the input/output (I/O) software library, the manual declaration of an entry point, and the explicit instruction that the output string should be sent to the standard output stream. The phrase "Hello, World!" has seen various deviations in casing and punctuation, such as "hello world", which lacks the capitalization of the leading H and W, and the presence of the comma or exclamation mark. Some devices limit the format to specific variations, such as all-capitalized versions on systems that support only capital letters, while some esoteric programming languages may have to print a slightly modified string. Other human languages have been used as the output; for example, a tutorial for the Go language emitted both English and Chinese or Japanese characters, demonstrating the language's built-in Unicode support.[8] Another notable example is the Rust language, whose management system automatically inserts a "Hello, World" program when creating new projects. Some languages change the function of the "Hello, World!"
program while maintaining the spirit of demonstrating a simple example. Functional programming languages, such as Lisp, ML, and Haskell, tend to substitute a factorial program for "Hello, World!", as functional programming emphasizes recursive techniques, whereas the original examples emphasize I/O, which violates the spirit of pure functional programming by producing side effects. Languages otherwise able to print "Hello, World!" (assembly language, C, VHDL) may also be used in embedded systems, where text output is either difficult (requiring added components or communication with another computer) or nonexistent. For devices such as microcontrollers, field-programmable gate arrays, and complex programmable logic devices (CPLDs), "Hello, World!" may thus be substituted with a blinking light-emitting diode (LED), which demonstrates timing and interaction between components.[9][10][11][12][13] The Debian and Ubuntu Linux distributions provide the "Hello, World!" program through their software package manager systems, which can be invoked with the command hello. It serves as a sanity check and a simple example of installing a software package. For developers, it provides an example of creating a .deb package, either traditionally or using debhelper, and the version of hello used, GNU Hello, serves as an example of writing a GNU program.[14] Variations of the "Hello, World!" program that produce a graphical output (as opposed to text output) have also been shown. Sun demonstrated a "Hello, World!" program in Java based on scalable vector graphics,[15] and the XL programming language features a spinning Earth "Hello, World!"
using 3D computer graphics.[16] Mark Guzdial and Elliot Soloway have suggested that the "hello, world" test message may be outdated now that graphics and sound can be manipulated as easily as text.[17] In computer graphics, rendering a triangle – called "Hello Triangle" – is sometimes used as an introductory example for graphics libraries.[18][19] "Time to hello world" (TTHW) is the time it takes to author a "Hello, World!" program in a given programming language. This is one measure of a programming language's ease of use. Since the program is meant as an introduction for people unfamiliar with the language, a more complex "Hello, World!" program may indicate that the programming language is less approachable.[20] For instance, the first publicly known "Hello, World!" program in Malbolge (which actually output "HEllO WORld") took two years to be announced, and it was produced not by a human but by a code generator written in Common Lisp (see § Variations, above). The concept has been extended beyond programming languages to APIs, as a measure of how simple it is for a new developer to get a basic example working; a shorter time indicates an easier API for developers to adopt.[21][22]
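The single-statement Python version mentioned above is easy to run and easy to verify, which is what makes it useful as a sanity check. A small sketch that prints the message and also captures and checks the output:

```python
import io
import contextlib

# The canonical one-statement Python "Hello, World!", wrapped in a function
# so its output can also be captured and verified -- the "sanity check"
# role the program often plays.
def hello():
    print("Hello, World!")

buffer = io.StringIO()
with contextlib.redirect_stdout(buffer):
    hello()

# print() appends a newline, matching "the string followed by a newline".
assert buffer.getvalue() == "Hello, World!\n"
hello()  # prints: Hello, World!
```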
https://en.wikipedia.org/wiki/%22Hello,_World!%22_program
A shakedown is a period of testing or a trial journey undergone by a ship, aircraft or other craft and its crew before being declared operational. Statistically, a proportion of the components will fail after a relatively short period of use, and those that survive this period can be expected to last for a much longer and, more importantly, predictable life-span. For example, if a bolt has a hidden flaw introduced during manufacturing, it will not be as reliable as other bolts of the same type. Most racing cars require a "shakedown" test before being used at a race meeting. For example, on May 3, 2006, Luca Badoer performed shakedowns on all three of Ferrari's Formula One cars at the Fiorano Circuit, in preparation for the European Grand Prix at the Nürburgring. Badoer was the Ferrari F1 team's test driver at the time, while the main drivers were Michael Schumacher and Felipe Massa. Aircraft shakedowns check avionics, flight controls, all systems, and the general airframe's airworthiness.[1] In aircraft there are two forms of shakedown testing: shakedown testing of the design as a whole with flight tests, and shakedown testing of individual aircraft. Shakedown testing of an aircraft design involves test flights of the prototypes, a process that actually starts months or years before first flight with simulator flights and hardware testing. This process often incorporates an iron bird test rig in which all the flight control systems are brought together in an engineering lab, while test articles of the physical structure will be subjected to stress and fatigue loads beyond anything the aircraft is likely to encounter in service (sometimes, although not necessarily, testing one or more articles to destruction). The aircraft systems are gradually commissioned on board the prototypes: first on external power, then, once engines are fitted, on internal power, progressing to taxi trials and eventually first flight.
Flight testing proceeds conservatively, demonstrating that each test condition can be safely achieved before proceeding to the next. Prototype aircraft are generally heavily instrumented in order to support these flight-test objectives by capturing large amounts of data, both for live analysis (which on larger aircraft such as airliners may happen at dedicated flight-test engineer stations on board) and for analysis post-flight. The ultimate aim of testing is to demonstrate that the aircraft can operate safely throughout its flight envelope and that all regulatory requirements of the relevant civil aviation authorities have been met, allowing the design to receive its Certificate of Airworthiness. Shakedown testing of production aircraft is a simplified version of prototype testing. The design has been demonstrated to be safe, and the objective is now to demonstrate that the components on an individual aircraft operate appropriately. Shakedown now comprises the general power-on trials, followed by one or more pre-delivery test flights carried out by the aircraft builder's personnel, and generally culminating in a final acceptance test also involving the purchaser's own flight crew and engineering personnel. A shakedown for a ship is generally referred to as a sea trial.[2] The maiden voyage takes place after a successful shakedown. However, for warships, the shakedown period extends post-commissioning as the new crew familiarise themselves with the ship and with operating together as a single unit, raising their proficiency until the warship can be considered operational. A shakedown hike is when a backpacker, in preparation for a long hike such as the Appalachian Trail, Pacific Crest Trail or the Continental Divide Trail, takes their selection of equipment on a shorter backpacking trip with the intention of testing its trail worthiness.
A related term, the pack shakedown, is when a novice hiker has a more experienced hiker suggest changes to the novice's equipment, often simply suggesting things to leave out.
https://en.wikipedia.org/wiki/Shakedown_(testing)
In software testing, test automation is the use of software separate from the software being tested to control the execution of tests and the comparison of actual outcomes with predicted outcomes.[1] Test automation can automate some repetitive but necessary tasks in a formalized testing process already in place, or perform additional testing that would be difficult to do manually. Test automation is critical for continuous delivery and continuous testing.[2] There are many approaches to test automation; the following are the most widely used. One way to generate test cases automatically is model-based testing, which uses a model of the system for test case generation, but research continues into a variety of alternative methodologies for doing so.[citation needed] In some cases, the model-based approach enables non-technical users to create automated business test cases in plain English so that no programming of any kind is needed in order to configure them for multiple operating systems, browsers, and smart devices.[3] Some software testing tasks (such as extensive low-level interface regression testing) can be laborious and time-consuming to do manually. In addition, a manual approach might not always be effective in finding certain classes of defects. Test automation offers a possibility to perform these types of testing effectively. Once automated tests have been developed, they can be run quickly and repeatedly many times. This can be a cost-effective method for regression testing of software products that have a long maintenance life. Even minor patches over the lifetime of the application can cause existing features that were working at an earlier point in time to break.
API testing is also widely used by software testers, as it enables them to verify requirements independent of their GUI implementation, commonly to test them earlier in development, and to make sure the test itself adheres to clean code principles, especially the single responsibility principle. It involves directly testing APIs as part of integration testing, to determine if they meet expectations for functionality, reliability, performance, and security.[4] Since APIs lack a GUI, API testing is performed at the message layer.[5] API testing is considered critical when an API serves as the primary interface to application logic.[6] Many test automation tools provide record-and-playback features that allow users to interactively record user actions and replay them back any number of times, comparing actual results to those expected. The advantage of this approach is that it requires little or no software development. This approach can be applied to any application that has a graphical user interface. However, reliance on these features poses major reliability and maintainability problems. Relabelling a button or moving it to another part of the window may require the test to be re-recorded. Record and playback also often adds irrelevant activities or incorrectly records some activities.[citation needed] A variation on this type of tool is for testing of web sites. Here, the "interface" is the web page. However, such a framework utilizes entirely different techniques because it is rendering HTML and listening to DOM events instead of operating system events. Headless browsers or solutions based on Selenium WebDriver are normally used for this purpose.[7][8][9] Another variation of this type of test automation tool is for testing mobile applications. This is very useful given the number of different sizes, resolutions, and operating systems used on mobile phones.
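Because API testing happens at the message layer, an automated API check reduces to: build a request message, invoke the API, and compare the actual reply with the predicted one. A minimal sketch follows; the handler and its message format are invented for illustration, and the "API" is called in-process rather than over a network:

```python
import json

# Hypothetical application logic exposed as an API endpoint handler.
# In a real system this would sit behind HTTP; testing it directly at the
# message layer avoids any dependence on a GUI.
def handle_request(message: str) -> str:
    request = json.loads(message)
    if request.get("op") == "add":
        return json.dumps({"status": "ok", "result": request["a"] + request["b"]})
    return json.dumps({"status": "error", "reason": "unknown op"})

# Automated checks: send a request message, then compare the actual
# outcome with the predicted outcome.
def test_add():
    reply = json.loads(handle_request('{"op": "add", "a": 2, "b": 3}'))
    assert reply == {"status": "ok", "result": 5}

def test_unknown_op():
    reply = json.loads(handle_request('{"op": "frobnicate"}'))
    assert reply["status"] == "error"

test_add()
test_unknown_op()
print("all API checks passed")
```

Each check follows the single responsibility principle noted above: one request, one expectation.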
For this variation, a framework is used to instantiate actions on the mobile device and to gather the results of those actions. Another variation is script-less test automation, which does not use record and playback but instead builds a model[clarification needed] of the application and then enables the tester to create test cases by simply inserting test parameters and conditions, requiring no scripting skills. Test automation, mostly using unit testing, is a key feature of extreme programming and agile software development, where it is known as test-driven development (TDD) or test-first development. Unit tests can be written to define the functionality before the code is written. However, these unit tests evolve and are extended as coding progresses, issues are discovered and the code is subjected to refactoring.[10] Only when all the tests for all the demanded features pass is the code considered complete. Proponents argue that it produces software that is both more reliable and less costly than code that is tested by manual exploration.[citation needed] It is considered more reliable because the code coverage is better, and because it is run constantly during development rather than once at the end of a waterfall development cycle. The developer discovers defects immediately upon making a change, when it is least expensive to fix. Finally, code refactoring is safer when unit testing is used; transforming the code into a simpler form with less code duplication, but equivalent behavior, is much less likely to introduce new defects when the refactored code is covered by unit tests.
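The test-first cycle can be sketched with plain Python assertions: the tests are written first to define the functionality, the minimal implementation follows, and the suite is re-run after every change. The slugify() helper and its behaviour below are invented for illustration:

```python
# Step 1 (written first in TDD): tests that define the expected behaviour
# of a slugify() helper before it exists. They would fail until the code
# below is written.
def test_lowercases_and_joins_words():
    assert slugify("Hello World") == "hello-world"

def test_single_word_unchanged():
    assert slugify("index") == "index"

# Step 2: the minimal implementation that makes the tests pass.
def slugify(title: str) -> str:
    """Turn a page title into a URL slug."""
    return "-".join(title.lower().split())

# Step 3: run the tests; refactoring stays safe as long as they stay green.
test_lowercases_and_joins_words()
test_single_word_unchanged()
print("all tests pass")
```

In practice the same cycle is usually driven through a unit testing framework such as unittest or pytest rather than bare assertions.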
Continuous testing is the process of executing automated tests as part of the software delivery pipeline to obtain immediate feedback on the business risks associated with a software release candidate.[11][12] For continuous testing, the scope of testing extends from validating bottom-up requirements or user stories to assessing the system requirements associated with overarching business goals.[13] What to automate, when to automate, or even whether one really needs automation are crucial decisions which the testing (or development) team must make.[14] A multi-vocal literature review of 52 practitioner and 26 academic sources found five main factors to consider in the test automation decision: 1) the system under test (SUT), 2) the types and numbers of tests, 3) the test tool, 4) human and organizational topics, and 5) cross-cutting factors. The most frequent individual factors identified in the study were the need for regression testing, economic factors, and the maturity of the SUT.[15] While the reusability of automated tests is valued by software development companies, this property can also be viewed as a disadvantage. It leads to the so-called "pesticide paradox", in which repeatedly executed scripts stop detecting errors that go beyond their scope. In such cases, manual testing may be a better investment. This ambiguity once again leads to the conclusion that the decision on test automation should be made individually, keeping in mind project requirements and peculiarities. Testing tools can help automate tasks such as product installation, test data creation, GUI interaction, problem detection (consider parsing or polling agents equipped with test oracles), defect logging, etc., without necessarily automating tests in an end-to-end fashion. A number of common requirements must still be satisfied when considering test automation. Test automation tools can be expensive and are usually employed in combination with manual testing.
Test automation can be made cost-effective in the long term, especially when used repeatedly in regression testing. A good candidate for test automation is a test case for a common flow of an application, as it must be executed (regression testing) every time an enhancement is made to the application. Test automation reduces the effort associated with manual testing, but manual effort is still needed to develop and maintain automated checks, as well as to review test results. In automated testing, the test engineer or software quality assurance person must have software coding ability, since the test cases are written in the form of source code which, when run, produces output according to the assertions that are part of it. Some test automation tools allow test authoring to be done by keywords instead of coding, which does not require programming. A strategy for deciding how many tests to automate is the test automation pyramid. This strategy suggests writing three types of tests with different granularity; the higher the level, the fewer the tests to write.[16] One conception of the testing pyramid contains unit, integration, and end-to-end tests. According to Google's testing blog, unit tests should make up the majority of a testing strategy, with fewer integration tests and only a small number of end-to-end tests.[19] A test automation framework is an integrated system that sets the rules of automation for a specific product. This system integrates the function libraries, test data sources, object details and various reusable modules. These components act as small building blocks which need to be assembled to represent a business process. The framework provides the basis of test automation and simplifies the automation effort. The main advantage of a framework of assumptions, concepts and tools that provides support for automated software testing is its low cost of maintenance.
If there is a change to any test case, then only the test case file needs to be updated; the driver script and startup script remain the same. Ideally, there is no need to update the scripts in case of changes to the application. Choosing the right framework/scripting technique helps in maintaining lower costs. The costs associated with test scripting are due to development and maintenance efforts, and the approach to scripting used during test automation affects those costs. Various framework/scripting techniques are generally used, and the testing framework carries several responsibilities.[20] A growing trend in software development is the use of unit testing frameworks such as the xUnit frameworks (for example, JUnit and NUnit) that allow the execution of unit tests to determine whether various sections of the code are acting as expected under various circumstances. Test cases describe tests that need to be run on the program to verify that the program runs as expected. Test automation interfaces are platforms that provide a single workspace for incorporating multiple testing tools and frameworks for system/integration testing of the application under test. The goal of a test automation interface is to simplify the process of mapping tests to business criteria without coding coming in the way of the process. Test automation interfaces are expected to improve the efficiency and flexibility of maintaining test scripts.[21] A test automation interface consists of several core modules: the interface engine, the interface environment, and the object repository. Interface engines are built on top of the interface environment. An interface engine consists of a parser and a test runner. The parser parses the object files coming from the object repository into the test-specific scripting language.
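The parser and test runner described here can be sketched as a toy keyword-driven framework: the parser turns plain-text test steps into executable actions, and the runner drives them against the application under test. Everything below (the keywords and the calculator "application") is invented for illustration:

```python
# Toy keyword-driven framework: a parser that reads plain-text test steps
# and a test runner that executes them against a simple in-memory
# "application", here a calculator.
class Calculator:
    def __init__(self):
        self.value = 0

    def add(self, n):
        self.value += n

    def multiply(self, n):
        self.value *= n

# Keyword library: maps keywords in a test script to application actions.
KEYWORDS = {
    "ADD": lambda app, arg: app.add(int(arg)),
    "MULTIPLY": lambda app, arg: app.multiply(int(arg)),
}

def parse(script: str):
    """Parser: turn 'KEYWORD argument' lines into (action, arg) pairs."""
    steps = []
    for line in script.strip().splitlines():
        keyword, arg = line.split()
        steps.append((KEYWORDS[keyword], arg))
    return steps

def run(script: str, expected: int) -> bool:
    """Test runner: execute the parsed steps, then check the result."""
    app = Calculator()
    for action, arg in parse(script):
        action(app, arg)
    return app.value == expected

# A tester can write test cases without scripting skills:
print(run("ADD 2\nADD 3\nMULTIPLY 4", expected=20))  # -> True
```

Updating a test case means editing only the script text; the driver code above stays the same, which is the maintenance advantage the passage describes.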
The test runner executes the test scripts using a test harness.[21] Object repositories are a collection of UI/application object data recorded by the testing tool while exploring the application under test.[21] Tools are specifically designed to target a particular test environment, such as Windows or web automation, and serve as the driving agent for an automation process. An automation framework, however, is not a tool for performing a specific task, but rather infrastructure that provides the solution in which different tools can do their job in a unified manner, providing a common platform for the automation engineer. There are various types of frameworks, categorized on the basis of the automation component they leverage.
https://en.wikipedia.org/wiki/Automated_testing
Automatic test equipment or automated test equipment (ATE) is any apparatus that performs tests on a device, known as the device under test (DUT), equipment under test (EUT) or unit under test (UUT), using automation to quickly perform measurements and evaluate the test results. An ATE can be a simple computer-controlled digital multimeter, or a complicated system containing dozens of complex test instruments (real or simulated electronic test equipment) capable of automatically testing and diagnosing faults in sophisticated electronic packaged parts or in wafer testing, including systems on chips and integrated circuits. ATE is widely used in the electronics manufacturing industry to test electronic components and systems after fabrication. ATE is also used to test avionics and the electronic modules in automobiles, and in military applications like radar and wireless communication. Semiconductor ATE, named for testing semiconductor devices, can test a wide range of electronic devices and systems, from simple components (resistors, capacitors, and inductors) to integrated circuits (ICs), printed circuit boards (PCBs), and complex, completely assembled electronic systems. For this purpose, probe cards are used. ATE systems are designed to reduce the amount of test time needed to verify that a particular device works, or to quickly find its faults before the part has a chance to be used in a final consumer product. To reduce manufacturing costs and improve yield, semiconductor devices should be tested after fabrication to prevent defective devices from ending up with the consumer. The semiconductor ATE architecture consists of a master controller (usually a computer) that synchronizes one or more source and capture instruments (described below). Historically, custom-designed controllers or relays were used by ATE systems.
The device under test (DUT) is physically connected to the ATE by another robotic machine called a handler or prober, and through a customized interface test adapter (ITA) or "fixture" that adapts the ATE's resources to the DUT. The industrial PC is a normal desktop computer packaged to 19-inch rack standards with sufficient PCI/PCIe slots to accommodate the signal stimulus/sensing cards. It takes up the role of the controller in the ATE; development of test applications and storage of results are managed on this PC. Most modern semiconductor ATEs include multiple computer-controlled instruments to source or measure a wide range of parameters. The instruments may include device power supplies (DPS),[1][2] parametric measurement units (PMU), arbitrary waveform generators (AWG), digitizers, digital I/Os, and utility supplies. The instruments perform different measurements on the DUT, and they are synchronized so that they source and measure waveforms at the proper times. Depending on response-time requirements, real-time systems are also considered for stimulation and signal capture. The mass interconnect is a connector interface between test instruments (PXI, VXI, LXI, GPIB, SCXI, and PCI) and devices/units under test (D/UUT). This section acts as a nodal point for signals going in and out between the ATE and the D/UUT. For example, to measure the voltage of a particular semiconductor device, the digital signal processing (DSP) instruments in the ATE measure the voltage directly and send the results to a computer for signal processing, where the desired value is computed. This example shows why conventional instruments, like an ammeter, may not be used in many ATEs: such an instrument can make only a limited number of measurements, and using it takes time. One key advantage of using DSP to measure the parameters is time.
If we had to measure the peak voltage of an electrical signal along with its other parameters, we would have to employ a peak detector instrument as well as further instruments for the other parameters. If DSP-based instruments are used, however, a single sample record of the signal is captured and the other parameters can be computed from that one measurement. Not all devices are tested equally. Testing adds cost, so low-cost components are rarely tested completely, whereas medical or high-cost components (where reliability is important) are frequently tested. Whether a device must be tested for all parameters depends on the device's functionality and end use. For example, if the device will be used in medical or life-saving products, then many of its parameters must be tested, and some of the parameters must be guaranteed. Deciding which parameters to test is a complex decision based on cost versus yield. If the device is a complex digital device with thousands of gates, then test fault coverage has to be calculated. Here again the decision is complex, based on test economics as driven by the frequency, number and type of I/Os in the device and the end-use application. ATE can be used on packaged parts (the typical IC 'chip') or directly on the silicon wafer. Packaged parts use a handler to place the device on a customized interface board, whereas silicon wafers are tested directly with high-precision probes. The ATE system interacts with the handler or prober to test the DUT. ATE systems typically interface with an automated placement tool, called a "handler", that physically places the device under test (DUT) on an interface test adapter (ITA) so that it can be measured by the equipment.
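The DSP advantage described above, computing several parameters from a single captured record, can be sketched numerically. The sampled waveform and sample rate here are invented for illustration; in a real ATE the sample buffer would come from a digitizer:

```python
import math

# One captured record of a test signal: a 1 kHz sine, amplitude 2 V,
# sampled at 100 kHz for 10 full cycles.
sample_rate = 100_000
freq, amplitude = 1_000, 2.0
samples = [amplitude * math.sin(2 * math.pi * freq * n / sample_rate)
           for n in range(1000)]

# Several parameters derived from the single capture -- no separate
# peak detector, RMS meter or DC voltmeter is needed.
peak = max(abs(s) for s in samples)                       # peak voltage
rms = math.sqrt(sum(s * s for s in samples) / len(samples))  # RMS voltage
mean = sum(samples) / len(samples)                        # DC offset

print(f"peak={peak:.3f} V  rms={rms:.3f} V  dc={mean:.3f} V")
```

For a pure sine the RMS value is the peak divided by the square root of two, which the computed figures reproduce.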
There may also be an interface test adapter (ITA), a device making the electronic connections between the ATE and the device under test (also called unit under test, or UUT); it may also contain additional circuitry to adapt signals between the ATE and the DUT, and it provides physical facilities to mount the DUT. Finally, a socket is used to bridge the connection between the ITA and the DUT. A socket must survive the rigorous demands of a production floor, so sockets are usually replaced frequently. Simple electrical interface diagram: ATE → ITA → DUT (package) ← Handler. Wafer-based ATEs typically use a device called a prober that moves across a silicon wafer to test the device. Simple electrical interface diagram: ATE → Prober → Wafer (DUT). One way to improve test time is to test multiple devices at once. ATE systems can now support having multiple "sites" where the ATE resources are shared by each site. Some resources can be used in parallel; others must be serialized to each DUT. The ATE computer uses modern computer languages (like C, C++, Java, VEE, Python, LabVIEW or Smalltalk) with additional statements to control the ATE equipment through standard and proprietary application programming interfaces (APIs). Some dedicated computer languages also exist, like the Abbreviated Test Language for All Systems (ATLAS). Automatic test equipment can also be automated using a test execution engine such as NI's TestStand.[3] Sometimes automatic test pattern generation is used to help design the series of tests. Many ATE platforms used in the semiconductor industry output data using the Standard Test Data Format (STDF). Automatic test equipment diagnostics is the part of an ATE test that determines the faulty components. ATE tests perform two basic functions. The first is to test whether or not the device under test is working correctly. The second, when the DUT is not working correctly, is to diagnose the reason. The diagnostic portion can be the most difficult and costly portion of the test.
It is typical for ATE to reduce a failure to a cluster or ambiguity group of components. One method of reducing these ambiguity groups is the addition of analog signature analysis testing to the ATE system. Diagnostics are often aided by the use of flying probe testing. The addition of a high-speed switching system to a test system's configuration allows for faster, more cost-effective testing of multiple devices, and is designed to reduce both test errors and costs. Designing a test system's switching configuration requires an understanding of the signals to be switched and the tests to be performed, as well as the switching hardware form factors available. Several modular electronic instrumentation platforms are currently in common use for configuring automated electronic test and measurement systems. These systems are widely employed for incoming inspection, quality assurance, and production testing of electronic devices and subassemblies. Industry-standard communication interfaces link signal sources with measurement instruments in "rack-and-stack" or chassis-/mainframe-based systems, often under the control of a custom software application running on an external PC. The General Purpose Interface Bus (GPIB) is a parallel interface standardized as IEEE-488 (a standard created by the Institute of Electrical and Electronics Engineers), used for attaching sensors and programmable instruments to a computer. GPIB is a digital 8-bit parallel communications interface capable of achieving data transfers of more than 8 MB/s. It allows daisy-chaining of up to 14 instruments to a system controller using a 24-pin connector. It is one of the most common I/O interfaces present in instruments and is designed specifically for instrument control applications. The IEEE-488 specifications standardized this bus and defined its electrical, mechanical, and functional specifications, while also defining its basic software communication rules.
GPIB works best for applications in industrial settings that require a rugged connection for instrument control. The original GPIB standard was developed in the late 1960s by Hewlett-Packard to connect and control the programmable instruments the company manufactured. The introduction of digital controllers and programmable test equipment created a need for a standard, high-speed interface for communication between instruments and controllers from various vendors. In 1975, the IEEE published ANSI/IEEE Standard 488-1975, IEEE Standard Digital Interface for Programmable Instrumentation, which contained the electrical, mechanical, and functional specifications of an interfacing system. This standard was subsequently revised in 1978 (IEEE-488.1) and 1990 (IEEE-488.2). Building on the IEEE-488.2 specification, the Standard Commands for Programmable Instrumentation (SCPI) define specific commands that each instrument class must obey, ensuring compatibility and configurability among these instruments. The IEEE-488 bus has long been popular because it is simple to use and takes advantage of a large selection of programmable instruments and stimuli. Large systems, however, run into its limitations. The LXI Standard defines the communication protocols for instrumentation and data acquisition systems using Ethernet. These systems are based on small, modular instruments using low-cost, open-standard LAN (Ethernet). LXI-compliant instruments offer the size and integration advantages of modular instruments without the cost and form factor constraints of card-cage architectures. Through the use of Ethernet communications, the LXI Standard allows for flexible packaging, high-speed I/O, and standardized use of LAN connectivity in a broad range of commercial, industrial, aerospace, and military applications.
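The SCPI commands mentioned above in connection with IEEE-488.2 are plain ASCII strings in a colon-separated hierarchy, with a trailing '?' marking a query. A hedged sketch of formatting such messages follows; no instrument is contacted here, and the simulated replies are invented for illustration:

```python
# SCPI messages are ASCII strings: colon-separated command trees, with a
# trailing '?' marking a query. This sketch formats commands and simulates
# an instrument's replies -- no real GPIB/LXI hardware is involved.
def scpi_query(subsystem: str, function: str) -> str:
    """Format a SCPI query such as ':MEAS:VOLT:DC?'."""
    return f":{subsystem}:{function}?"

def scpi_set(subsystem: str, function: str, value) -> str:
    """Format a SCPI setting command such as ':SOUR:VOLT 1.5'."""
    return f":{subsystem}:{function} {value}"

# A simulated instrument: answers the common '*IDN?' identification query
# and one hypothetical measurement query.
def fake_instrument(command: str) -> str:
    responses = {
        "*IDN?": "ACME Instruments,Model 123,serial 0001,FW 1.0",
        ":MEAS:VOLT:DC?": "+1.234E+00",
    }
    return responses.get(command, "")

print(scpi_query("MEAS:VOLT", "DC"))             # -> :MEAS:VOLT:DC?
print(scpi_set("SOUR:VOLT", "LEV", 1.5))         # -> :SOUR:VOLT:LEV 1.5
print(float(fake_instrument(":MEAS:VOLT:DC?")))  # -> 1.234
```

In a real system the formatted strings would be written to the instrument over GPIB, LXI or another transport by an I/O library, and the ASCII replies parsed the same way.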
Every LXI-compliant instrument includes an Interchangeable Virtual Instrument (IVI) driver to simplify communication with non-LXI instruments, so LXI-compliant devices can communicate with devices that are not themselves LXI compliant (i.e., instruments that employ GPIB, VXI, PXI, etc.). This simplifies building and operating hybrid configurations of instruments. LXI instruments sometimes employ scripting using embedded test script processors for configuring test and measurement applications. Script-based instruments provide architectural flexibility, improved performance, and lower cost for many applications. Scripting enhances the benefits of LXI instruments, and LXI offers features that both enable and enhance scripting. Although the current LXI standards for instrumentation do not require that instruments be programmable or implement scripting, several features in the LXI specification anticipate programmable instruments and provide useful functionality that enhances scripting's capabilities on LXI-compliant instruments.[5] TheVXIbus architecture is an open standard platform for automated test based on theVMEbus. Introduced in 1987, VXI uses all Eurocard form factors and adds trigger lines, a local bus, and other functions suited for measurement applications. VXI systems are based on a mainframe or chassis with up to 13 slots into which various VXI instrument modules can be installed.[6]The chassis also provides all the power supply and cooling requirements for the chassis and the instruments it contains. VXI bus modules are typically6Uin height. PXIis a peripheral bus specialized for data acquisition and real-time control systems. Introduced in 1997, PXI uses the CompactPCI3Uand6Uform factors and adds trigger lines, a local bus, and other functions suited for measurement applications. 
PXI hardware and software specifications are developed and maintained by the PXI Systems Alliance.[7]More than 50 manufacturers around the world produce PXI hardware.[8] USBconnects peripheral devices, such as keyboards and mice, to PCs. USB is aPlug and Playbus that can handle up to 127 devices on one port, and has a theoretical maximum throughput of 480 Mbit/s (high-speed USB defined by the USB 2.0 specification). Because USB ports are standard features of PCs, they are a natural evolution of conventional serial port technology. However, it is not widely used in building industrial test and measurement systems for a number of reasons; for example, USB cables are not industrial grade, are noise sensitive, can accidentally become detached, and the maximum distance between the controller and the device is 30 m. LikeRS-232, USB is useful for applications in a laboratory setting that do not require a rugged bus connection. RS-232 is a specification for serial communication that is popular in analytical and scientific instruments, as well for controlling peripherals such as printers. Unlike GPIB, with the RS-232 interface, it is possible to connect and control only one device at a time. RS-232 is also a relatively slow interface with typical data rates of less than 20 KB/s. RS-232 is best suited for laboratory applications compatible with a slower, less rugged connection. It works on a ±24 volt supply. Boundary scancan be implemented as a PCB-level or system-level interface bus for the purpose of controlling the pins of an IC and facilitating continuity (interconnection) tests on a test target (UUT) and also functional cluster tests on logic devices or groups of devices. It can also be used as a controlling interface for other instrumentation that can be embedded into the ICs themselves (see IEEE 1687) or instruments that are part of an external controllable test system. 
One of the most recently developed test system platforms employs instrumentation equipped with onboard test script processors combined with a high-speed bus. In this approach, one "master" instrument runs a test script (a small program) that controls the operation of the various "slave" instruments in the test system, to which it is linked via a high-speed LAN-based trigger synchronization and inter-unit communication bus. (Scripting is writing programs in a scripting language to coordinate a sequence of actions.) This approach is optimized for the small message transfers that are characteristic of test and measurement applications. With very little network overhead and a 100 Mbit/s data rate, it is significantly faster than GPIB and 100BaseT Ethernet in real applications. The advantage of this platform is that all connected instruments behave as one tightly integrated multi-channel system, so users can scale their test system to fit their required channel counts cost-effectively. A system configured on this type of platform can stand alone as a complete measurement and automation solution, with the master unit controlling sourcing, measuring, pass/fail decisions, test sequence flow control, binning, and the component handler or prober. Support for dedicated trigger lines means that synchronous operations between multiple instruments equipped with onboard test script processors linked by this high-speed bus can be achieved without the need for additional trigger connections.[9]
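The master/slave scripting model described above can be sketched in miniature. The classes below are pure simulations with hypothetical names, not a real instrument driver API; they only illustrate the control flow in which a master script arms every slave and then fires a single shared trigger.

```python
# Hypothetical sketch of a master test script coordinating "slave"
# instruments over a shared trigger bus. These classes simulate the
# control flow only; they do not model any real driver API.

class Instrument:
    def __init__(self, name):
        self.name = name
        self.triggered = False

    def arm(self):
        """Prepare the instrument to respond to the next trigger."""
        self.triggered = False

    def on_trigger(self):
        self.triggered = True
        return f"{self.name}: measurement complete"

class MasterScript:
    """Runs on the master unit; fans one trigger out to every slave."""
    def __init__(self, slaves):
        self.slaves = slaves

    def run_test(self):
        for s in self.slaves:       # arm all channels before the trigger
            s.arm()
        # A single trigger line fires every instrument synchronously.
        return [s.on_trigger() for s in self.slaves]

results = MasterScript([Instrument("SMU-1"), Instrument("DMM-2")]).run_test()
print(results)
```

The design point the sketch captures is that synchronization lives in the bus and the master's script, not in per-instrument cabling, which is why additional trigger connections are unnecessary.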
https://en.wikipedia.org/wiki/Automatic_test_equipment
Mobile-device testing assures the quality of mobile devices, such as mobile phones and PDAs. It is conducted on both hardware and software, and from the view of different procedures, the testing comprises R&D testing, factory testing and certificate testing. It involves a set of activities, from monitoring and troubleshooting mobile applications, content and services on real handsets to verification and validation of hardware devices and software applications. Tests must be conducted with multiple operating system versions, hardware configurations, device types, network capabilities, and, notably with the Android operating system, with various hardware vendor interface layers. Listed companies like Keynote Systems, Capgemini Consulting, the mobile applications and handset testing company Intertek, and QA companies like PASS Technologies AG[1] and Testdroid provide mobile testing, helping application stores, developers and mobile device manufacturers in testing and monitoring of mobile content, applications and services.[2] Static code analysis is the analysis of computer software that is performed without actually executing programs built from that software (analysis performed on executing programs is known as dynamic analysis).[3] Static analysis rules are available for code written to target various mobile development platforms. Unit testing is a test phase in which portions of mobile device development are tested, usually by the developer. It may contain hardware testing, software testing, and mechanical testing. Factory testing is a kind of sanity check on mobile devices, conducted automatically to verify that there are no defects introduced by manufacturing or assembly. Certification testing is the check before a mobile device goes to market.
Many institutes and governments require mobile devices to conform with their stated specifications and protocols to make sure the mobile device will not harm users' health and is compatible with devices from other manufacturers. Once the mobile device passes all checks, a certification is issued for it. When users submit mobile apps to application stores or marketplaces, the apps go through a certification process. Many of these vendors outsource the testing and certification to third-party vendors to increase coverage and lower costs.[4]
https://en.wikipedia.org/wiki/Mobile-device_testing
Quality control (QC) is a process by which entities review the quality of all factors involved in production. ISO 9000 defines quality control as "a part of quality management focused on fulfilling quality requirements".[1] This approach places emphasis on three aspects, enshrined in standards such as ISO 9001.[2][3] Inspection is a major component of quality control, where physical product is examined visually (or the end results of a service are analyzed). Product inspectors are provided with lists and descriptions of unacceptable product defects, such as cracks or surface blemishes.[3] Early stone tools such as anvils had no holes and were not designed as interchangeable parts. Mass production established processes for the creation of parts and systems with identical dimensions and design, but these processes were not uniform, and hence some customers were unsatisfied with the result. Quality control separates the act of testing products to uncover defects from the decision to allow or deny product release, which may be determined by fiscal constraints.[6] For contract work, particularly work awarded by government agencies, quality control issues are among the top reasons for not renewing a contract.[7] The simplest form of quality control was a sketch of the desired item: if the sketch did not match the item, the item was rejected, in a simple go/no-go procedure. However, manufacturers soon found it was difficult and costly to make parts exactly like their depiction; hence, around 1840, tolerance limits were introduced, wherein a design would function if its parts were measured to be within the limits. Quality was thus precisely defined using devices such as plug gauges and ring gauges. However, this did not address the problem of defective items; recycling or disposing of the waste adds to the cost of production, as does trying to reduce the defect rate.
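The tolerance-limit approach reduces to a simple go/no-go decision: a part passes if its measured dimension falls between the design's lower and upper limits. A sketch with made-up numbers:

```python
# Illustrative go/no-go check using tolerance limits, as introduced
# around 1840: a part passes if its measured dimension falls within
# the design's limits. The dimensions below are invented examples.

def go_no_go(measured_mm, nominal_mm, tolerance_mm):
    """Return True (go) if the measurement is within tolerance."""
    lower = nominal_mm - tolerance_mm
    upper = nominal_mm + tolerance_mm
    return lower <= measured_mm <= upper

# A nominally 10 mm shaft with a +/-0.05 mm tolerance:
print(go_no_go(10.03, 10.0, 0.05))  # within limits -> True
print(go_no_go(10.08, 10.0, 0.05))  # out of limits -> False
```

A plug gauge and a ring gauge together implement exactly this check physically: the "go" gauge must fit and the "no-go" gauge must not.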
Various methods have been proposed to prioritize quality control issues and determine whether to leave them unaddressed or use quality assurance techniques to improve and stabilize production.[6] There is a tendency for individual consultants and organizations to name their own unique approaches to quality control, a few of which have ended up in widespread use. In project management, quality control requires the project manager and/or the project team to inspect the accomplished work to ensure its alignment with the project scope.[15] In practice, projects typically have a dedicated quality control team which focuses on this area.[16]
https://en.wikipedia.org/wiki/Quality_control
In software engineering, a test case is a specification of the inputs, execution conditions, testing procedure, and expected results that define a single test to be executed to achieve a particular software testing objective, such as to exercise a particular program path or to verify compliance with a specific requirement.[1] Test cases underlie testing that is methodical rather than haphazard. A battery of test cases can be built to produce the desired coverage of the software being tested. Formally defined test cases allow the same tests to be run repeatedly against successive versions of the software, allowing for effective and consistent regression testing.[2] In order to fully test that all the requirements of an application are met, there must be at least two test cases for each requirement: one positive test and one negative test.[3] If a requirement has sub-requirements, each sub-requirement must have at least two test cases. Keeping track of the link between the requirement and the test is frequently done using a traceability matrix. Written test cases should include a description of the functionality to be tested, and the preparation required to ensure that the test can be conducted. A formal written test case is characterized by a known input and by an expected output, which is worked out before the test is executed.[4] The known input should test a precondition and the expected output should test a postcondition. For applications or systems without formal requirements, test cases can be written based on the accepted normal operation of programs of a similar class. In some schools of testing, test cases are not written at all; instead, the activities and results are reported after the tests have been run. In scenario testing, hypothetical stories are used to help the tester think through a complex problem or system. These scenarios are usually not written down in any detail.
They can be as simple as a diagram for a testing environment or they could be a description written in prose. The ideal scenario test is a story that is motivating, credible, complex, and easy to evaluate. Scenarios usually differ from test cases in that test cases are single steps while scenarios cover a number of steps.[5][6] A test case usually contains a single step or a sequence of steps to test the correct behavior, functionality, and features of an application. An expected result or expected outcome is usually given, and additional information may be included.[7] Larger test cases may also contain prerequisite states or steps, and descriptions.[7] A written test case should also contain a place for the actual result. These steps can be stored in a word processor document, spreadsheet, database or other common repository. A database system may also store past test results, who generated the results, and the system configuration used to generate them; these past results would usually be stored in a separate table. Test suites often also contain detailed instructions or goals for each collection of test cases.[8] Besides a description of the functionality to be tested and the preparation required to ensure that the test can be conducted, the most time-consuming part of working with test cases is creating the tests and modifying them when the system changes. Under special circumstances, there may be a need to run a test, produce its results, and then have a team of experts evaluate whether the results can be considered a pass. This happens often when determining performance numbers for new products. The first test is taken as the baseline for subsequent test and product release cycles.
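A formally defined test case, with its known input and pre-computed expected output, can be represented as data and executed by a small runner. The function under test and the case fields below are hypothetical; the sketch also shows the one-positive-plus-one-negative pairing per requirement described above.

```python
# Sketch of formally defined test cases: each pairs a known input with
# an expected outcome worked out before execution. The function under
# test and all field names here are hypothetical illustrations.

def normalize_username(raw):
    """The 'system under test': trims and lowercases a username."""
    if not raw.strip():
        raise ValueError("empty username")
    return raw.strip().lower()

test_cases = [
    # positive test: valid input must produce the expected output
    {"id": "TC-01", "input": "  Alice ", "expected": "alice"},
    # negative test: invalid input must be rejected
    {"id": "TC-02", "input": "   ", "expect_error": True},
]

def run(cases):
    results = {}
    for case in cases:
        try:
            actual = normalize_username(case["input"])
            ok = not case.get("expect_error") and actual == case.get("expected")
        except ValueError:
            ok = bool(case.get("expect_error"))
        results[case["id"]] = "pass" if ok else "fail"   # record actual result
    return results

print(run(test_cases))  # {'TC-01': 'pass', 'TC-02': 'pass'}
```

Recording the result per case identifier is what lets a traceability matrix link each requirement to the outcomes of its tests across successive versions.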
Acceptance tests, which use a variation of a written test case, are commonly performed by a group of end-users or clients of the system to ensure the developed system meets the specified requirements or the contract.[9][10] User acceptance tests are differentiated by their inclusion of happy path or positive test cases to the almost complete exclusion of negative test cases.[11]
https://en.wikipedia.org/wiki/Test_case
A test fixture is a device used to consistently test some item, device, or piece of software. Test fixtures are used in the testing of electronics, software and physical devices. In testing electronic equipment such as circuit boards, electronic components, and chips, a test fixture is a device or setup designed to hold the device under test in place and allow it to be tested by being subjected to controlled electronic test signals.[1] Examples are a bed of nails tester or smart fixture. Test fixtures can come in different shapes, sizes, and functions. There are several different types of test fixtures,[2] including in-circuit test fixtures, functional test fixtures, and wireless test fixtures.[3] In-circuit test (ICT) fixtures individually test each component on a PCB, while functional test fixtures assess the entire board's functionality. Functional test fixtures simulate real-world conditions, whereas ICT is more focused on detecting assembly defects like short circuits or missing components.[4] An in-circuit test fixture can come in both inline and standard variations. An inline test fixture is designed for fast, automated testing directly within a production line, ideal for high-volume manufacturing where continuous testing maximises efficiency. A standard test fixture, on the other hand, usually requires manual loading, making it well-suited to smaller-scale or specialised testing.[5] In the context of software, a test fixture (also called "test context") is used to set up the system state and input data needed for test execution.[6][7] For example, the Ruby on Rails web framework uses YAML to initialize a database with known parameters before running a test.[8] This allows for tests to be repeatable, which is one of the key features of an effective test framework.[6] In most cases, a custom test fixture will require custom test software, created to ensure optimal testing performance and seamless integration.
The custom software can be configured to carry out a number of different tests, from BIST (built-in self-test) to advanced JTAG implementations.[9] Test fixtures can be set up three different ways: in-line, delegate, and implicit. The main advantage of a test fixture is that it allows for tests to be repeatable, since each test always starts with the same setup. Test fixtures also ease test code design by allowing the developer to separate methods into different functions and reuse each function for other tests. Further, test fixtures preconfigure tests into a known initial state instead of working with whatever was left from a previous test run. A disadvantage is that in-line setup could lead to duplication of test fixtures.[6][10] It is considered bad practice when implicit test fixtures are too general, or when a test method sets up a test fixture and does not use it during the test. A more subtle issue is if the test methods ignore certain fields within the test fixture. Another bad practice is a test setup that contains more steps than needed for the test; this is a problem seen in in-line setup.[10] A test case is considered "unsafe" when it modifies its fixture(s). An unsafe test case can render subsequent tests useless by leaving the fixture in an unexpected state. It also makes the order of tests important: a modified fixture must be reset if more tests are to be run after an unsafe test.[6] Examples of fixtures include loading a database with a specific known set of data, erasing a hard disk and installing a known clean operating system installation, copying a specific known set of files, or the preparation of input data as well as set-up and creation of mock objects. Software which is used to run reproducible tests systematically on a piece of software under test is known as a test harness; part of its job is to set up suitable test fixtures.
In generic xUnit, a test fixture is all the things that must be in place in order to run a test and expect a particular outcome.[11] Frequently, fixtures are created by handling setUp() and tearDown() events of the unit testing framework. In setUp() one creates the expected state for the test, and in tearDown() one cleans up what had been set up. A test thus has four phases: fixture setup, exercising the system under test, result verification, and fixture teardown. In physical testing, a fixture is a device or apparatus to hold or support the test specimen during the test. The influence of test fixtures on test results is important and is an ongoing subject of research.[12] Many test methods detail the requirements of test fixtures in the text of the document.[13][14] Some fixtures employ clamps, wedge grips and pincer grips. Further types of construction include eccentric roller fixtures, thread grips, button head grips and rope grips. Mechanical holding apparatuses provide the clamping force via arms, wedges or eccentric wheels to the jaws. Additionally, there are pneumatic and hydraulic fixtures for tensile testing that allow very fast clamping procedures and very high clamping forces.
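Returning to software fixtures, the xUnit setUp()/tearDown() pattern can be sketched with Python's unittest framework. The stack-based example below is a minimal illustration of the four test phases, not drawn from any particular project.

```python
# Minimal xUnit-style example of the four test phases using Python's
# unittest: setUp() builds the fixture, the test method exercises the
# system under test and verifies the result, tearDown() cleans up.

import unittest

class StackTest(unittest.TestCase):
    def setUp(self):                  # phase 1: fixture setup
        self.stack = []

    def test_push_then_pop(self):
        self.stack.append(42)         # phase 2: exercise the SUT
        self.assertEqual(self.stack.pop(), 42)  # phase 3: verify result
        self.assertEqual(self.stack, [])

    def tearDown(self):               # phase 4: fixture teardown
        self.stack = None

suite = unittest.defaultTestLoader.loadTestsFromTestCase(StackTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())
```

Because setUp() runs before every test method, each test is "safe" in the sense discussed above: it starts from the same known fixture state regardless of what earlier tests did.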
https://en.wikipedia.org/wiki/Test_fixture
A test plan is a document detailing the objectives, resources, and processes for a specific test session for a software or hardware product. The plan typically contains a detailed understanding of the eventual workflow. A test plan documents the strategy that will be used to verify and ensure that a product or system meets its design specifications and other requirements. A test plan is usually prepared by or with significant input from test engineers.[1] Depending on the product and the responsibility of the organization to which the test plan applies, a test plan may include a strategy for one or more levels or stages of testing. A complex system may have a high-level test plan to address the overall requirements and supporting test plans to address the design details of subsystems and components. Test plan document formats can be as varied as the products and organizations to which they apply. There are three major elements that should be described in the test plan: test coverage, test methods, and test responsibilities. These are also used in a formal test strategy.[2] Test coverage in the test plan states what requirements will be verified during what stages of the product life. Test coverage is derived from design specifications and other requirements, such as safety standards or regulatory codes, where each requirement or specification of the design ideally will have one or more corresponding means of verification. Test coverage for different product life stages may overlap but will not necessarily be exactly the same for all stages. For example, some requirements may be verified during design verification testing, but not repeated during acceptance testing. Test coverage also feeds back into the design process, since the product may have to be designed to allow test access. Test methods in the test plan state how test coverage will be implemented. Test methods may be determined by standards, regulatory agencies, or contractual agreement, or may have to be created new.
Test methods also specify test equipment to be used in the performance of the tests and establish pass/fail criteria. Test methods used to verify hardware design requirements can range from very simple steps, such as visual inspection, to elaborate test procedures that are documented separately. Test responsibilities specify which organizations will perform the test methods at each stage of the product life. This allows test organizations to plan, acquire or develop test equipment and other resources necessary to implement the test methods for which they are responsible. Test responsibilities also include what data will be collected and how that data will be stored and reported (often referred to as "deliverables"). One outcome of a successful test plan should be a record or report of the verification of all design specifications and requirements as agreed upon by all parties. IEEE 829-2008, also known as the 829 Standard for Software Test Documentation, is an IEEE standard that specifies the form of a set of documents for use in defined stages of software testing, each stage potentially producing its own separate type of document.[3] The IEEE also publishes documents that suggest what should be contained in a test plan.
https://en.wikipedia.org/wiki/Test_plan
GUI testing tools automate the process of testing software with graphical user interfaces.
https://en.wikipedia.org/wiki/Comparison_of_GUI_testing_tools
Continuous testing is the process of executing automated tests as part of the software delivery pipeline to obtain immediate feedback on the business risks associated with a software release candidate.[1][2] Continuous testing was originally proposed as a way of reducing waiting time for feedback to developers by introducing development environment-triggered tests as well as more traditional developer/tester-triggered tests.[3] For continuous testing, the scope of testing extends from validating bottom-up requirements or user stories to assessing the system requirements associated with overarching business goals.[4] In the 2010s, software became a key business differentiator.[5] As a result, organizations now expect software development teams to deliver more, and more innovative, software within shorter delivery cycles.[6][7] To meet these demands, teams have turned to lean approaches, such as Agile, DevOps, and continuous delivery, to try to speed up the systems development life cycle (SDLC).[8] After accelerating other aspects of the delivery pipeline, teams typically find that their testing process is preventing them from achieving the expected benefits of their SDLC acceleration initiative.[9] Testing and the overall quality process remain problematic for several key reasons.[10] Organizations adopt continuous testing because they recognize that these problems are preventing them from delivering quality software at the desired speed.
They recognize the growing importance of software as well as the rising cost of software failure, and they are no longer willing to make a tradeoff between time, scope, and quality.[2][17][18] The goal of continuous testing is to provide fast and continuous feedback regarding the level of business risk in the latest build or release candidate.[2] This information can then be used to determine if the software is ready to progress through the delivery pipeline at any given time.[1][5][13][19] Since testing begins early and is executed continuously, application risks are exposed soon after they are introduced.[6] Development teams can then prevent those problems from progressing to the next stage of the SDLC. This reduces the time and effort that need to be spent finding and fixing defects. As a result, it is possible to increase the speed and frequency at which quality software (software that meets expectations for an acceptable level of risk) is delivered, as well as decrease technical debt.[4][10][20] Moreover, when software quality efforts and testing are aligned with business expectations, test execution produces a prioritized list of actionable tasks (rather than a potentially overwhelming number of findings that require manual review). This helps teams focus their efforts on the quality tasks that will have the greatest impact, based on their organization's goals and priorities.[2] Additionally, when teams are continuously executing a broad set of continuous tests throughout the SDLC, they amass metrics regarding the quality of the process as well as the state of the software. The resulting metrics can be used to re-examine and optimize the process itself, including the effectiveness of those tests.
This information can be used to establish a feedback loop that helps teams incrementally improve the process.[4][10] Frequent measurement, tight feedback loops, and continuous improvement are key principles of DevOps.[21] Continuous testing includes the validation of both functional requirements and non-functional requirements. For testing functional requirements (functional testing), continuous testing often involves unit tests, API testing, integration testing, and system testing. For testing non-functional requirements (non-functional testing, which determines whether the application meets expectations around performance, security, compliance, etc.), it involves practices such as static code analysis, security testing, and performance testing.[9][20] Tests should be designed to provide the earliest possible detection (or prevention) of the risks that are most critical for the business or organization that is releasing the software.[6] Teams often find that in order to ensure that the test suite can run continuously and effectively assess the level of risk, it is necessary to shift focus from GUI testing to API testing, because 1) APIs (the "transaction layer") are considered the most stable interface to the system under test, and 2) GUI tests require considerable rework to keep pace with the frequent changes typical of accelerated release processes; tests at the API layer are less brittle and easier to maintain.[11][22][23] Tests are executed during or alongside continuous integration—at least daily.[24] For teams practicing continuous delivery, tests are commonly executed many times a day, every time the application is updated in the version control system.[9] Ideally, all tests are executed across all non-production test environments. To ensure accuracy and consistency, testing should be performed in the most complete, production-like environment possible.
Strategies for increasing test environment stability include virtualization software (for dependencies your organization can control and image), service virtualization (for dependencies beyond your scope of control or unsuitable for imaging), and test data management.[1][4][10][25] Since modern applications are highly distributed, test suites that exercise them typically require access to dependencies that are not readily available for testing (e.g., third-party services, mainframes that are available for testing only in limited capacity or at inconvenient times, etc.). Moreover, with the growing adoption of Agile and parallel development processes, it is common for end-to-end functional tests to require access to dependencies that are still evolving or not yet implemented. This problem can be addressed by using service virtualization to simulate the application under test's (AUT's) interactions with the missing or unavailable dependencies. It can also be used to ensure that data, performance, and behavior are consistent across the various test runs.[1][7][10] One reason teams avoid continuous testing is that their infrastructure is not scalable enough to continuously execute the test suite. This problem can be addressed by focusing the tests on the business's priorities, splitting the test base, and parallelizing the testing with application release automation tools.[24] The goal of continuous testing is to apply "extreme automation" to stable, production-like test environments.
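Service virtualization can be sketched as substituting a simulated dependency for a real one behind the same interface. All names below are illustrative; real service virtualization tools simulate protocol-level traffic, but the principle of consistent, canned responses is the same.

```python
# Hedged sketch of service virtualization: the application under test
# calls a dependency through an interface, and the test substitutes a
# simulated service returning consistent, canned responses. All names
# here are illustrative, not any real tool's API.

class VirtualizedPaymentService:
    """Stands in for a third-party service unavailable in test."""
    def charge(self, amount_cents):
        # Always behaves the same way, so test runs are repeatable.
        return {"status": "approved", "amount": amount_cents}

def checkout(cart_total_cents, payment_service):
    """Application-under-test logic that depends on the service."""
    response = payment_service.charge(cart_total_cents)
    return response["status"] == "approved"

# The test wires in the virtualized dependency instead of the real one.
print(checkout(1999, VirtualizedPaymentService()))
```

Because the virtualized dependency's data and behavior never vary, end-to-end tests can run continuously even while the real third-party service is unavailable or still under development.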
Automation is essential for continuous testing.[27] But automated testing is not the same as continuous testing.[4] Automated testing involves automated, CI-driven execution of whatever set of tests the team has accumulated.[clarification needed] Moving from automated testing to continuous testing involves executing a set of tests that is specifically designed to assess the business risks associated with a release candidate, and regularly executing these tests in the context of stable, production-like test environments. Automated and continuous testing thus differ in the design, scope, and execution context of the tests. Since the 1990s, continuous test-driven development has been used to provide programmers rapid feedback on whether the code they added a) functioned properly and b) unintentionally changed or broke existing functionality. This testing, which was a key component of Extreme Programming, involves automatically executing unit tests (and sometimes acceptance tests or smoke tests) as part of the automated build, often many times a day. These tests are written prior to implementation; passing tests indicate that implementation is successful.[13][28]
https://en.wikipedia.org/wiki/Continuous_testing
A headless browser is a web browser without a graphical user interface. Headless browsers provide automated control of a web page in an environment similar to popular web browsers, but they are executed via a command-line interface or using network communication. They are particularly useful for testing web pages, as they are able to render and understand HTML the same way a browser would, including styling elements such as page layout, color, and font selection, and execution of JavaScript and Ajax, which are usually not available when using other testing methods.[1] Since version 59 of Google Chrome[2][3] and version 56[4] of Firefox,[5] there is native support for remote control of the browser. This made earlier efforts obsolete, notably PhantomJS.[6] Besides automated testing, headless browsers are also useful for web scraping. Google stated in 2009 that using a headless browser could help their search engine index content from websites that use Ajax.[7] Headless browsers can also be misused for malicious purposes. However, a study of browser traffic in 2018 found no preference by malicious actors for headless browsers.[3] There is no indication that headless browsers are used more frequently than non-headless browsers for malicious purposes, like DDoS attacks, SQL injections or cross-site scripting attacks. As several major browsers natively support headless mode through APIs, some software exists to perform browser automation through a unified interface. Some test automation software and frameworks include headless browsers as part of their testing apparatus.[3] Another approach is to use software that provides browser APIs. For example, Deno provides browser APIs as part of its design. For Node.js, jsdom[17] is the most complete provider. While most are able to support common browser features (HTML parsing, cookies, XHR, some JavaScript, etc.), they do not render the DOM and have limited support for DOM events.
They usually perform faster than full browsers, but are unable to correctly interpret many popular websites.[18][19][20] Another example is HtmlUnit, a headless browser written in Java. HtmlUnit uses the Rhino engine to provide JavaScript and Ajax support as well as partial rendering capability.[21][22] A further early effort was envjs, released in 2008 by John Resig, a simulated browser environment written in JavaScript for the Rhino engine.[29]
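The native remote-control support mentioned above can also be exercised directly from the command line. The sketch below builds the argument list for asking a Chromium-based browser to load a page headlessly and print the serialized DOM; the binary name `google-chrome` and the exact flag set vary by platform and browser version, so treat this as an illustrative assumption rather than a portable recipe.

```python
import shutil
import subprocess

def build_headless_dump_command(url, binary="google-chrome"):
    """Build a command line asking Chrome to render a page headlessly
    and print the resulting DOM to stdout (flags as of recent Chrome)."""
    return [
        binary,
        "--headless",     # run without a visible window
        "--disable-gpu",  # historically required on some platforms
        "--dump-dom",     # print the serialized DOM after page load
        url,
    ]

cmd = build_headless_dump_command("https://example.com")
# Only run it if a Chrome binary is actually installed on this machine:
if shutil.which(cmd[0]):
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout[:200])
```

Higher-level automation tools wrap the same headless mode behind a programmatic protocol instead of one-shot command-line flags.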
https://en.wikipedia.org/wiki/Headless_browser
Unit testing, a.k.a. component or module testing, is a form of software testing by which isolated source code is tested to validate expected behavior.[1] Unit testing describes tests that are run at the unit level, in contrast to testing at the integration or system level.

Unit testing, as a principle for separately testing smaller parts of large software systems, dates back to the early days of software engineering. In June 1956, at the US Navy's Symposium on Advanced Programming Methods for Digital Computers, H.D. Benington presented the SAGE project. It featured a specification-based approach in which the coding phase was followed by "parameter testing" to validate component subprograms against their specifications, followed by "assembly testing" for the parts put together.[2][3] In 1964, a similar approach was described for the software of the Mercury project, where individual units developed by different programmers underwent "unit tests" before being integrated.[4] In 1969, testing methodologies appeared more structured, with unit tests, component tests and integration tests collectively validating individual parts written separately and their progressive assembly into larger blocks.[5] Some public standards adopted in the late 1960s, such as MIL-STD-483[6] and MIL-STD-490, contributed further to the wide acceptance of unit testing in large projects. Unit testing in those times was interactive[3] or automated,[7] using either coded tests or capture-and-replay testing tools.

In 1989, Kent Beck described a testing framework for Smalltalk (later called SUnit) in "Simple Smalltalk Testing: With Patterns". In 1997, Kent Beck and Erich Gamma developed and released JUnit, a unit test framework that became popular with Java developers.[8] Google embraced automated testing around 2005–2006.[9]

A unit is defined as a single behaviour exhibited by the system under test (SUT), usually corresponding to a requirement.
While a unit may correspond to a single function or module (in procedural programming) or a single method or class (in object-oriented programming), functions/methods and modules/classes do not necessarily correspond to units. From the system-requirements perspective, only the perimeter of the system is relevant, so only entry points to externally visible system behaviours define units.[10]

Unit tests can be performed manually or via automated test execution. Automated tests bring benefits such as running tests often, running tests without staffing cost, and consistent and repeatable testing. Testing is often performed by the programmer who writes and modifies the code under test, and unit testing may be viewed as part of the process of writing code.

A parameterized test is a test that accepts a set of values, enabling the test to run with multiple, different input values. A testing framework that supports parameterized tests provides a way to encode parameter sets and to run the test with each set. Use of parameterized tests can reduce test code duplication. Parameterized tests are supported by TestNG, JUnit,[14] XUnit and NUnit, as well as by various JavaScript test frameworks. Parameters for the unit tests may be coded manually or, in some cases, automatically generated by the test framework.
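As a sketch of the idea, Python's standard unittest module can approximate a parameterized test with subTest: the parameter sets are encoded once and the same test body runs against each set. The add function and the chosen parameter values here are invented for illustration.

```python
import unittest

def add(a, b):  # hypothetical code under test
    return a + b

class TestAddParameterized(unittest.TestCase):
    # Each tuple is one parameter set: (a, b, expected).
    CASES = [(1, 2, 3), (0, 0, 0), (-1, 1, 0), (2.5, 2.5, 5.0)]

    def test_add(self):
        for a, b, expected in self.CASES:
            # subTest reports each parameter set individually,
            # so one failing case does not hide the others.
            with self.subTest(a=a, b=b):
                self.assertEqual(add(a, b), expected)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAddParameterized)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Dedicated frameworks express the same idea declaratively (for example, a parametrize decorator in pytest), generating one reported test per parameter set.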
In recent years, support was added for writing more powerful (unit) tests leveraging the concept of theories: test cases that execute the same steps but with test data generated at runtime, unlike regular parameterized tests, which use the same execution steps with pre-defined input sets.

Sometimes, in agile software development, unit testing is done per user story and comes in the latter half of the sprint, after requirements gathering and development are complete. Typically, the developers or other members of the development team, such as consultants, will write step-by-step 'test scripts' for the developers to execute in the tool. Test scripts are generally written to prove the effective and technical operation of specific developed features in the tool, as opposed to full-fledged business processes that would be exercised by the end user, which is typically done during user acceptance testing. If the test script can be fully executed from start to finish without incident, the unit test is considered to have "passed"; otherwise, errors are noted and the user story is moved back to development in an 'in-progress' state. User stories that successfully pass unit tests move on to the final steps of the sprint: code review, peer review, and lastly a 'show-back' session demonstrating the developed tool to stakeholders.

In test-driven development (TDD), unit tests are written while the production code is written. Starting with working code, the developer adds test code for a required behavior, then adds just enough code to make the test pass, then refactors the code (including the test code) as makes sense, and then repeats by adding another test.
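The test-first cycle just described can be sketched in a few steps; the slugify function and its requirements below are invented purely for illustration.

```python
# A compressed red-green-refactor walk-through.

# Step 1 (red): write a failing test for the required behavior.
# Run now, this would fail: slugify() does not exist yet.
def test_slugify_replaces_spaces():
    assert slugify("Hello World") == "hello-world"

# Step 2 (green): write just enough code to make the test pass.
def slugify(text):
    return text.lower().replace(" ", "-")

test_slugify_replaces_spaces()  # now passes

# Step 3 (repeat): add the next failing test for new behavior...
def test_slugify_strips_punctuation():
    assert slugify("Hello, World!") == "hello-world"

# ...then extend (and refactor) the implementation until it passes too.
import re

def slugify(text):  # refactored to cover both behaviors
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

test_slugify_strips_punctuation()
test_slugify_replaces_spaces()  # earlier tests still pass
```

Each iteration leaves the full suite green, so the growing test set doubles as a regression net for the refactoring step.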
Unit testing is intended to ensure that the units meet their design and behave as intended.[15] By writing tests first for the smallest testable units, then for the compound behaviors between them, one can build up comprehensive tests for complex applications.[15]

One goal of unit testing is to isolate each part of the program and show that the individual parts are correct.[1] A unit test provides a strict, written contract that the piece of code must satisfy.

Unit testing finds problems early in the development cycle. This includes both bugs in the programmer's implementation and flaws or missing parts of the specification for the unit. The process of writing a thorough set of tests forces the author to think through inputs, outputs, and error conditions, and thus to define the unit's desired behavior more crisply. The cost of finding a bug before coding begins, or when the code is first written, is considerably lower than the cost of detecting, identifying, and correcting the bug later. Bugs in released code may also cause costly problems for the end users of the software.[16][17][18] Code can be impossible or difficult to unit test if poorly written, so unit testing can force developers to structure functions and objects in better ways.

Unit testing enables more frequent releases in software development. By testing individual components in isolation, developers can quickly identify and address issues, leading to faster iteration and release cycles.[19]

Unit testing allows the programmer to refactor code or upgrade system libraries at a later date and make sure the module still works correctly (e.g., in regression testing). The procedure is to write test cases for all functions and methods so that whenever a change causes a fault, it can be identified quickly. Unit tests detect changes that may break a design contract. Unit testing may reduce uncertainty in the units themselves and can be used in a bottom-up testing approach.
By testing the parts of a program first and then testing the sum of its parts, integration testing becomes much easier.

Some programmers contend that unit tests provide a form of documentation of the code. Developers wanting to learn what functionality is provided by a unit, and how to use it, can review the unit tests to gain an understanding of it. Test cases can embody characteristics that are critical to the success of the unit. These characteristics can indicate appropriate or inappropriate use of a unit as well as negative behaviors that are to be trapped by the unit. A test case documents these critical characteristics, although many software development environments do not rely solely upon code to document the product in development.

In some processes, the act of writing tests and the code under test, plus associated refactoring, may take the place of formal design. Each unit test can be seen as a design element specifying classes, methods, and observable behavior.

Testing will not catch every error in the program, because it cannot evaluate every execution path in any but the most trivial programs. This problem is a superset of the halting problem, which is undecidable. The same is true for unit testing. Additionally, unit testing by definition only tests the functionality of the units themselves. Therefore, it will not catch integration errors or broader system-level errors (such as functions performed across multiple units, or non-functional test areas such as performance). Unit testing should be done in conjunction with other software testing activities, as tests can only show the presence or absence of particular errors; they cannot prove a complete absence of errors.
To guarantee correct behavior for every execution path and every possible input, and to ensure the absence of errors, other techniques are required, namely the application of formal methods to prove that a software component has no unexpected behavior.

An elaborate hierarchy of unit tests does not equal integration testing. Integration with peripheral units should be included in integration tests, but not in unit tests. Integration testing typically still relies heavily on humans testing manually; high-level or global-scope testing can be difficult to automate, such that manual testing often appears faster and cheaper.

Software testing is a combinatorial problem. For example, every Boolean decision statement requires at least two tests: one with an outcome of "true" and one with an outcome of "false". As a result, for every line of code written, programmers often need 3 to 5 lines of test code. This obviously takes time, and the investment may not be worth the effort. There are problems that cannot easily be tested at all – for example, those that are nondeterministic or involve multiple threads. In addition, code for a unit test is as likely to be buggy as the code it is testing. Fred Brooks in The Mythical Man-Month writes: "Never go to sea with two chronometers; take one or three."[20] Meaning: if two chronometers contradict, how do you know which one is correct?

Another challenge related to writing unit tests is the difficulty of setting up realistic and useful tests. It is necessary to create relevant initial conditions so the part of the application being tested behaves like part of the complete system.
If these initial conditions are not set correctly, the test will not exercise the code in a realistic context, which diminishes the value and accuracy of unit test results.

To obtain the intended benefits from unit testing, rigorous discipline is needed throughout the software development process. It is essential to keep careful records not only of the tests that have been performed, but also of all changes that have been made to the source code of this or any other unit in the software. Use of a version control system is essential. If a later version of the unit fails a particular test that it had previously passed, the version-control software can provide a list of the source code changes (if any) that have been applied to the unit since that time. It is also essential to implement a sustainable process for ensuring that test case failures are reviewed regularly and addressed immediately.[21] If such a process is not implemented and ingrained into the team's workflow, the application will evolve out of sync with the unit test suite, increasing false positives and reducing the effectiveness of the test suite.

Unit testing embedded system software presents a unique challenge: because the software is being developed on a different platform than the one it will eventually run on, a test program cannot readily be run in the actual deployment environment, as is possible with desktop programs.[22]

Unit tests tend to be easiest when a method has input parameters and some output. It is not as easy to create unit tests when a major function of the method is to interact with something external to the application. For example, a method that works with a database might require a mock-up of database interactions to be created, which probably won't be as comprehensive as the real database interactions.[23]

Below is an example of a JUnit test suite. It focuses on the Adder class.
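The JUnit listing itself does not survive in this text. As a stand-in, here is a rough equivalent using Python's built-in unittest; the Adder class and its sum method are reconstructed from the surrounding description, and the chosen input values are assumptions.

```python
import unittest

class Adder:
    """Hypothetical class under test, reconstructed from the description."""
    def sum(self, a, b):
        return a + b

class AdderTest(unittest.TestCase):
    def setUp(self):
        # A fresh instance for each test keeps the tests isolated.
        self.adder = Adder()

    def test_sum_positive_numbers(self):
        self.assertEqual(self.adder.sum(1, 2), 3)

    def test_sum_with_zero(self):
        self.assertEqual(self.adder.sum(0, 0), 0)

    def test_sum_negative_numbers(self):
        self.assertEqual(self.adder.sum(-1, -2), -3)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(AdderTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```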
The test suite uses assert statements to verify the expected result of various input values passed to the sum method.

Using unit tests as a design specification has one significant advantage over other design methods: the design document (the unit tests themselves) can itself be used to verify the implementation. The tests will never pass unless the developer implements a solution according to the design. Unit testing lacks some of the accessibility of a diagrammatic specification such as a UML diagram, but such diagrams may be generated from the unit tests using automated tools. Most modern languages have free tools (usually available as extensions to IDEs). Free tools, like those based on the xUnit framework, outsource the graphical rendering of a view for human consumption to another system.[24]

Unit testing is the cornerstone of extreme programming, which relies on an automated unit testing framework. This automated unit testing framework can be either third party, e.g., xUnit, or created within the development group. Extreme programming uses the creation of unit tests for test-driven development. The developer writes a unit test that exposes either a software requirement or a defect. This test will fail, either because the requirement isn't implemented yet or because it intentionally exposes a defect in the existing code. Then, the developer writes the simplest code to make the test, along with other tests, pass. Most code in a system is unit tested, but not necessarily all paths through the code. Extreme programming mandates a "test everything that can possibly break" strategy over the traditional "test every execution path" method.
This leads developers to write fewer tests than classical methods call for, but this isn't really a problem, more a restatement of fact, as classical methods have rarely been followed methodically enough for all execution paths to have been thoroughly tested. Extreme programming simply recognizes that testing is rarely exhaustive (because it is often too expensive and time-consuming to be economically viable) and provides guidance on how to effectively focus limited resources. Crucially, the test code is considered a first-class project artifact in that it is maintained at the same quality as the implementation code, with all duplication removed. Developers release unit testing code to the code repository in conjunction with the code it tests. Extreme programming's thorough unit testing allows the benefits mentioned above, such as simpler and more confident code development and refactoring, simplified code integration, accurate documentation, and more modular designs. These unit tests are also constantly run as a form of regression testing. Unit testing is also critical to the concept of emergent design: as emergent design is heavily dependent upon refactoring, unit tests are an integral component.

An automated testing framework provides features for automating test execution and can accelerate writing and running tests. Frameworks have been developed for a wide variety of programming languages. Generally, frameworks are third-party, not distributed with a compiler or integrated development environment (IDE). Tests can also be written without a framework, exercising the code under test using assertions, exception handling, and other control-flow mechanisms to verify behavior and report failure.
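A minimal sketch of framework-less testing along those lines: plain functions, assert statements, and ordinary control flow standing in for a runner. The is_palindrome function and the hand-rolled runner are invented for illustration.

```python
def is_palindrome(s):  # code under test
    s = s.lower()
    return s == s[::-1]

# Tests are just functions that raise AssertionError on failure.
def test_simple_palindrome():
    assert is_palindrome("Level")

def test_non_palindrome():
    assert not is_palindrome("unit")

def test_empty_string():
    assert is_palindrome("")

# A hand-rolled "runner": collect failures instead of stopping early.
failures = []
for test in (test_simple_palindrome, test_non_palindrome, test_empty_string):
    try:
        test()
    except AssertionError:
        failures.append(test.__name__)

print("FAILED:" if failures else "OK", failures)
```

A framework adds discovery, fixtures, and richer reporting on top of exactly this pattern.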
Some note that testing without a framework is valuable, since there is a barrier to entry for the adoption of a framework; having some tests is better than none, but once a framework is in place, adding tests can be easier.[25] In some frameworks, advanced test features are missing and must be hand-coded.

Some programming languages directly support unit testing: their grammar allows the direct declaration of unit tests without importing a library (whether third party or standard), and the Boolean conditions of the unit tests can be expressed in the same syntax as Boolean expressions used in non-test code, such as in if and while statements. Other languages provide a unit testing framework in their standard library, and still others have no built-in unit-testing support but well-established third-party unit testing libraries or frameworks.
https://en.wikipedia.org/wiki/Unit_test
A codec listening test is a scientific study designed to compare two or more lossy audio codecs, usually with respect to perceived fidelity or compression efficiency. Most tests take the form of a double-blind comparison. Commonly used methods are known as "ABX", "ABC/HR" and "MUSHRA". There are various software packages available for individuals to perform this type of testing themselves with minimal assistance.

In an ABX test, the listener has to identify an unknown sample X as being A or B, with A (usually the original) and B (usually the encoded version) available for reference. The outcome of a test must be statistically significant. This setup ensures that the listener is not biased by their expectations, and that the outcome is not likely to be the result of chance. If sample X cannot be determined reliably with a low p-value in a predetermined number of trials, then the null hypothesis cannot be rejected and it cannot be proved that there is a perceptible difference between samples A and B. This usually indicates that the encoded version will actually be transparent to the listener.

In an ABC/HR test, C is the original, which is always available for reference. A and B are the original and the encoded version in randomized order. The listener must first distinguish the encoded version from the original (which is the Hidden Reference that the "HR" in ABC/HR stands for), prior to assigning a score as a subjective judgment of the quality. Different encoded versions can be compared against each other using these scores.

In MUSHRA (MUltiple Stimuli with Hidden Reference and Anchor), the listener is presented with the reference (labeled as such), a certain number of test samples, a hidden version of the reference, and one or more anchors. The purpose of the anchor(s) is to make the scale closer to an "absolute scale", making sure that minor artifacts are not rated as having very bad quality.

Many double-blind music listening tests have been carried out.
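The significance criterion in an ABX test can be made concrete. Under the null hypothesis the listener is guessing, so the number of correct identifications in n trials follows a binomial distribution with success probability 1/2; the one-sided p-value is the chance of doing at least that well by luck. The trial counts below are illustrative.

```python
from math import comb

def abx_p_value(correct, trials):
    """One-sided p-value: probability of getting at least `correct`
    answers right in `trials` ABX presentations by pure guessing."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 12 correct out of 16 trials would rarely happen by guessing alone:
print(f"p = {abx_p_value(12, 16):.4f}")  # p ≈ 0.0384, below the common 0.05 level
```

With a p-value below the chosen significance level, the null hypothesis is rejected and a perceptible difference between A and B is accepted; otherwise the result is consistent with transparency.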
The following table lists the results of several listening tests that have been published online. To obtain meaningful results, listening tests must compare codecs' performance at similar or identical bitrates, since the audio quality produced by any lossy encoder will be trivially improved by increasing the bitrate. If listeners cannot consistently distinguish a lossy encoder's output from the uncompressed original audio, then it may be concluded that the codec has achieved transparency. Popular formats compared in these tests include MP3, AAC (and extensions), Vorbis, Musepack, and WMA. The RealAudio Gecko, ATRAC3, QDesign, and mp3PRO formats appear in some tests, despite much lower adoption as of 2007. Many encoder and decoder implementations (both proprietary and open source) exist for some formats, such as MP3, which is the oldest and best-known format still in widespread use today.

"No codec delivers the marketing plot [sic] of same quality as MP3 at half the bitrates."

"Vorbis is now – thanks to Aoyumi [creator of aoTuV] – an excellent audio format for 180 kbit/s encodings (and classical music)."
https://en.wikipedia.org/wiki/Codec_listening_test
In data compression and psychoacoustics, transparency is the result of lossy data compression accurate enough that the compressed result is perceptually indistinguishable from the uncompressed input, i.e. perceptually lossless. A transparency threshold is a given value at which transparency is reached; it is commonly used to describe compressed data bitrates. For example, the transparency threshold for MP3 compared to linear PCM audio is said to be between 175 and 245 kbit/s at 44.1 kHz, when encoded as VBR MP3 (corresponding to the -V3 and -V0 settings of the highly popular LAME MP3 encoder).[1] This means that when an MP3 encoded at those bitrates is played back, it is indistinguishable from the original PCM, and the compression is transparent to the listener.

The term transparent compression can also refer to a filesystem feature that allows compressed files to be read and written just like regular ones. In this case, the compressor is typically a general-purpose lossless compressor.

Transparency, like sound or video quality, is subjective. It depends most on the listener's familiarity with digital artifacts, their awareness that artifacts may in fact be present, and to a lesser extent on the compression method, bit rate used, input characteristics, and the listening/viewing conditions and equipment. Despite this, general consensus is sometimes formed for which compression options "should" provide transparent results for most people on most equipment. Due to this subjectivity and the changing nature of compression, recording, and playback technology, such opinions should be considered only rough estimates rather than established fact.

Judging transparency can be difficult, due to observer bias, in which subjective like or dislike of a certain compression methodology emotionally influences the judgment. This bias is commonly referred to as placebo, although this use differs slightly from the medical use of the term.
To scientifically prove that a compression method is not transparent, double-blind tests may be useful. The ABX method is normally used, with a null hypothesis that the samples tested are the same and an alternative hypothesis that the samples are in fact different.

All lossless data compression methods are transparent by nature. Both the DSC in DisplayPort and the default settings of JPEG XL[2] are regarded as visually lossless. The losslessness is usually determined by a flicker test: the display initially shows the compressed and the original side by side, switches them around for a tiny fraction of a second, and then goes back to the original. This test is more sensitive than a side-by-side comparison ("visually almost lossless"), as the human eye is highly sensitive to temporal changes in light.[3] There is also a panning test that is purportedly more representative of sensitivity in the case of moving images than the flicker test.[4]

A perceptually lossless compression is always free of compression artifacts, but the inverse is not true: it is possible for a compressor to produce a signal that appears natural but with altered contents. Such confusion is widespread in the field of radiology (specifically in the study of diagnostically acceptable irreversible compression), where visually lossless is taken to mean anywhere from artifact-free[5] to being indistinguishable in a side-by-side view,[6] neither being as stringent as the flicker test.
https://en.wikipedia.org/wiki/Transparency_(data_compression)
Psychophysics is the field of psychology which quantitatively investigates the relationship between physical stimuli and the sensations and perceptions they produce. Psychophysics has been described as "the scientific study of the relation between stimulus and sensation"[1] or, more completely, as "the analysis of perceptual processes by studying the effect on a subject's experience or behaviour of systematically varying the properties of a stimulus along one or more physical dimensions".[2] Psychophysics also refers to a general class of methods that can be applied to study a perceptual system. Modern applications rely heavily on threshold measurement,[3] ideal observer analysis, and signal detection theory.[4]

Psychophysics has widespread and important practical applications. For instance, in the realm of digital signal processing, insights from psychophysics have guided the development of models and methods for lossy compression. These models help explain why humans typically perceive minimal loss of signal quality when audio and video signals are compressed using lossy techniques.

Many of the classical techniques and theories of psychophysics were formulated in 1860, when Gustav Theodor Fechner in Leipzig published Elemente der Psychophysik (Elements of Psychophysics).[5] He coined the term "psychophysics", describing research intended to relate physical stimuli to the contents of consciousness, such as sensations (Empfindungen). As a physicist and philosopher, Fechner aimed at developing a method that relates matter to the mind, connecting the publicly observable world and a person's privately experienced impression of it.
His ideas were inspired by experimental results on the senses of touch and light obtained in the early 1830s by the German physiologist Ernst Heinrich Weber in Leipzig,[6][7] most notably those on the minimum discernible difference in intensity of stimuli of moderate strength (the just-noticeable difference; jnd), which Weber had shown to be a constant fraction of the reference intensity, and which Fechner referred to as Weber's law. From this, Fechner derived his well-known logarithmic scale, now known as the Fechner scale. Weber's and Fechner's work formed one of the bases of psychology as a science, with Wilhelm Wundt founding the first laboratory for psychological research in Leipzig (Institut für experimentelle Psychologie). Fechner's work systematised the introspectionist approach (psychology as the science of consciousness), which had to contend with the behaviorist approach, in which even verbal responses are as physical as the stimuli.

Fechner's work was studied and extended by Charles S. Peirce, who was aided by his student Joseph Jastrow, who soon became a distinguished experimental psychologist in his own right. Peirce and Jastrow largely confirmed Fechner's empirical findings, but not all. In particular, a classic experiment of Peirce and Jastrow rejected Fechner's estimation of a threshold of perception of weights. In their experiment, Peirce and Jastrow in fact invented randomized experiments: they randomly assigned volunteers to a blinded, repeated-measures design to evaluate their ability to discriminate weights.[8][9][10][11] On the basis of their results, they argued that the underlying functions were continuous, and that there is no threshold below which a difference in physical magnitude would be undetected.
Peirce's experiment inspired other researchers in psychology and education, which developed a research tradition of randomized experiments in laboratories and specialized textbooks in the 1900s.[8][9][10][11] The Peirce–Jastrow experiments were conducted as part of Peirce's application of his pragmaticism program to human perception; other studies considered the perception of light, etc.[12] Jastrow wrote the following summary: "Mr. Peirce's courses in logic gave me my first real experience of intellectual muscle. Though I promptly took to the laboratory of psychology when that was established by Stanley Hall, it was Peirce who gave me my first training in the handling of a psychological problem, and at the same time stimulated my self-esteem by entrusting me, then fairly innocent of any laboratory habits, with a real bit of research. He borrowed the apparatus for me, which I took to my room, installed at my window, and with which, when conditions of illumination were right, I took the observations. The results were published over our joint names in the Proceedings of the National Academy of Sciences. The demonstration that traces of sensory effect too slight to make any registry in consciousness could none the less influence judgment, may itself have been a persistent motive that induced me years later to undertake a book on The Subconscious." This work clearly distinguishes observable cognitive performance from the expression of consciousness.

Modern approaches to sensory perception, such as research on vision, hearing, or touch, measure what the perceiver's judgment extracts from the stimulus, often putting aside the question of what sensations are being experienced. One leading method is based on signal detection theory, developed for cases of very weak stimuli. However, the subjectivist approach persists among those in the tradition of Stanley Smith Stevens (1906–1973).
Stevens revived the idea of a power law suggested by 19th-century researchers, in contrast with Fechner's log-linear function (cf. Stevens' power law). He also advocated the assignment of numbers in ratio to the strengths of stimuli, called magnitude estimation. Stevens added techniques such as magnitude production and cross-modality matching. He opposed the assignment of stimulus strengths to points on a line that are labeled in order of strength. Nevertheless, that sort of response has remained popular in applied psychophysics. Such multiple-category layouts are often misnamed Likert scaling, after the question items used by Likert to create multi-item psychometric scales, e.g., seven phrases from "strongly agree" through "strongly disagree".

Omar Khaleefa[13] has argued that the medieval scientist Alhazen should be considered the founder of psychophysics. Although al-Haytham made many subjective reports regarding vision, there is no evidence that he used quantitative psychophysical techniques, and such claims have been rebuffed.[14]

Psychophysicists usually employ experimental stimuli that can be objectively measured, such as pure tones varying in intensity or lights varying in luminance. All the canonical senses have been studied: vision, hearing, touch (including skin and enteric perception), taste, smell, and the sense of time. Regardless of the sensory domain, there are three main areas of investigation: absolute thresholds, discrimination thresholds (e.g. the just-noticeable difference), and scaling.

A threshold (or limen) is the point of intensity at which the participant can just detect the presence of a stimulus (absolute threshold[15]) or the difference between two stimuli (difference threshold[7]). Stimuli with intensities below this threshold are not detectable and are considered subliminal.
Stimuli at values close to a threshold may be detectable on some occasions; therefore, a threshold is defined as the point at which a stimulus, or a change in a stimulus, is detected on a certain proportion p of trials.

An absolute threshold is the level of intensity at which a subject can detect the presence of a stimulus a certain proportion of the time; a p level of 50% is commonly used.[16] For example, consider the absolute threshold for tactile sensation on the back of one's hand. A participant might not feel a single hair being touched, but might detect the touch of two or three hairs, as this exceeds the threshold. The absolute threshold is also often referred to as the detection threshold. Various methods are employed to measure absolute thresholds, similar to those used for discrimination thresholds (see below).

A difference threshold (or just-noticeable difference, JND) is the magnitude of the smallest difference between two stimuli of differing intensities that a participant can detect a certain proportion of the time, with the specific percentage depending on the task. Several methods are employed to test this threshold. For instance, the subject may be asked to adjust one stimulus until it is perceived as identical to another (method of adjustment), to describe the direction and magnitude of the difference between two stimuli, or to decide whether the intensities in a pair of stimuli are the same or different (forced choice). The just-noticeable difference is not a fixed quantity; rather, it varies depending on the intensity of the stimuli and the specific sense being tested.[17] According to Weber's law, the just-noticeable difference for any stimulus is a constant proportion, regardless of variations in intensity.[18]

In discrimination experiments, the experimenter seeks to determine at what point the difference between two stimuli, such as two weights or two sounds, becomes detectable.
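Weber's law, mentioned above, makes the difference threshold quantitative: with Weber fraction k, the JND at reference intensity I is simply k·I. The value k = 0.02 below is purely illustrative, not a measured constant for any particular sense.

```python
def jnd(reference_intensity, weber_fraction=0.02):
    """Just-noticeable difference under Weber's law: a constant
    fraction of the reference intensity (illustrative k value)."""
    return weber_fraction * reference_intensity

for grams in (100, 500, 1000):
    print(f"reference {grams} g -> JND = {jnd(grams):.0f} g")
# The heavier the reference weight, the larger the absolute change
# needed before it is noticed, but the ratio JND/I stays fixed.
```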
The subject is presented with one stimulus, for example a weight, and is asked to say whether another weight is heavier or lighter. In some experiments, the subject may also indicate that the two weights are the same. At the point of subjective equality (PSE), the subject perceives both weights as identical. The just-noticeable difference, or difference limen (DL), is the magnitude of the difference in stimuli that the subject notices some proportion p of the time; typically, 50% is used for p in the comparison task.[19] Additionally, the two-alternative forced choice (2AFC) paradigm is used to assess the point at which performance reduces to chance in discriminating between two alternatives; here, p is typically 75%, as a 50% success rate corresponds to chance in the 2AFC task. Absolute and difference thresholds are sometimes considered similar in principle because background noise always interferes with our ability to detect stimuli.[6][20] In psychophysics, experiments seek to determine whether the subject can detect a stimulus, identify it, differentiate between it and another stimulus, or describe the magnitude or nature of this difference.[6][7] Software for psychophysical experimentation is reviewed by Strasburger.[21] Psychophysical experiments have traditionally used three methods for testing subjects' perception in stimulus detection and difference detection experiments: the method of limits, the method of constant stimuli, and the method of adjustment.[22] In the ascending method of limits, some property of the stimulus starts out at a level so low that the stimulus cannot be detected, and this level is gradually increased until the participant reports being aware of it. For example, if the experiment is testing the minimum amplitude of sound that can be detected, the sound begins too quietly to be perceived and is made gradually louder. In the descending method of limits, this is reversed.
In each case, the threshold is considered to be the level of the stimulus property at which the stimuli are just detected.[22] In experiments, the ascending and descending methods are used alternately and the thresholds are averaged. A possible disadvantage of these methods is that the subject may become accustomed to reporting that they perceive a stimulus and may continue reporting the same way even beyond the threshold (the error of habituation). Conversely, the subject may also anticipate that the stimulus is about to become detectable or undetectable and may make a premature judgment (the error of anticipation). To avoid these potential pitfalls, Georg von Békésy introduced the staircase procedure in 1960 in his study of auditory perception. In this method, the sound starts out audible and gets quieter after each of the subject's responses, until the subject does not report hearing it. At that point, the sound is made louder at each step until the subject reports hearing it, at which point it is made quieter in steps again. In this way the experimenter is able to "zero in" on the threshold.[22] In the method of constant stimuli, the levels of a certain property of the stimulus are not presented in ascending or descending order but randomly, unrelated from one trial to the next. This prevents the subject from predicting the level of the next stimulus, and therefore reduces errors of habituation and expectation. For absolute thresholds, the subject again reports whether they are able to detect the stimulus.[22] For difference thresholds, a constant comparison stimulus must accompany each of the varied levels. Friedrich Hegelmaier described the method of constant stimuli in an 1852 paper.[23] This method allows full sampling of the psychometric function, but can require many trials when several conditions are interleaved.
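Data from the method of constant stimuli can be reduced to a threshold estimate by finding the stimulus level at which the proportion of "detected" responses crosses the chosen p level. The following sketch uses hypothetical detection data and simple linear interpolation (real analyses usually fit a full psychometric function instead):

```python
# Minimal sketch (hypothetical data): estimate the detection threshold as the
# stimulus level where the proportion of "detected" responses reaches p = 0.5,
# interpolating between levels sampled in a constant-stimuli experiment.

def threshold_at_p(levels, proportions, p=0.5):
    """Interpolate the stimulus level where detection probability reaches p.

    `levels` must be sorted ascending and `proportions` must bracket p.
    """
    for (x0, y0), (x1, y1) in zip(zip(levels, proportions),
                                  zip(levels[1:], proportions[1:])):
        if y0 <= p <= y1:
            # Linear interpolation between the two bracketing points.
            return x0 + (p - y0) * (x1 - x0) / (y1 - y0)
    raise ValueError("p is not bracketed by the measured proportions")

# Hypothetical tone-detection data: intensity levels (dB) vs. proportion detected.
levels = [2, 4, 6, 8, 10]
proportions = [0.05, 0.20, 0.50, 0.85, 0.95]

print(threshold_at_p(levels, proportions))  # 6.0 for this data set
```

In practice a sigmoid (e.g. logistic or Weibull) would be fitted to all the data points rather than interpolating between two of them, but the idea of reading off the p = 0.5 level is the same.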
In the method of adjustment, the subject is asked to control the level of the stimulus and to alter it until it is just barely detectable against the background noise, or until it matches the level of another stimulus. The adjustment is repeated many times. This is also called the method of average error.[22] In this method, the observers themselves control the magnitude of the variable stimulus, beginning at a level distinctly greater or lesser than a standard one, and vary it until they are satisfied with the subjective equality of the two. The difference between the variable stimulus and the standard one is recorded after each adjustment, and the error is tabulated over a considerable series. At the end, the mean is calculated, giving the average error, which can be taken as a measure of sensitivity. The classic methods of experimentation are often argued to be inefficient because, in advance of testing, the psychometric threshold is usually unknown and most of the data are collected at points on the psychometric function that provide little information about the parameter of interest, usually the threshold. Adaptive staircase procedures (or the classical method of adjustment) can be used so that the points sampled are clustered around the psychometric threshold. Data points can also be spread over a slightly wider range if the slope of the psychometric function is also of interest. Adaptive methods can thus be optimized for estimating the threshold only, or both the threshold and the slope. Adaptive methods are classified into staircase procedures (see below) and Bayesian, or maximum-likelihood, methods. Staircase methods rely on the previous response only and are easier to implement. Bayesian methods take the whole set of previous stimulus-response pairs into account and are generally more robust against lapses in attention.[24] Practical examples are given in Strasburger's overview.[21] Staircases usually begin with a high-intensity stimulus, which is easy to detect.
The intensity is then reduced until the observer makes a mistake, at which point the staircase 'reverses' and the intensity is increased until the observer responds correctly, triggering another reversal. The values of the last of these 'reversals' are then averaged. There are many different types of staircase procedures, using different decision and termination rules. Step size, up/down rules, and the spread of the underlying psychometric function dictate where on the psychometric function they converge.[24] Threshold values obtained from staircases can fluctuate wildly, so care must be taken in their design. Many different staircase algorithms have been modeled, and some practical recommendations are suggested by Garcia-Perez.[25] One of the more common staircase designs (with fixed step sizes) is the 1-up-N-down staircase: if the participant makes the correct response N times in a row, the stimulus intensity is reduced by one step size; if the participant makes an incorrect response, the stimulus intensity is increased by one step size. A threshold is estimated from the mean midpoint of all runs; this estimate approaches the correct threshold asymptotically. Bayesian and maximum-likelihood (ML) adaptive procedures behave, from the observer's perspective, similarly to staircase procedures. The choice of the next intensity level works differently, however: after each observer response, the likelihood of where the threshold lies is calculated from this and all previous stimulus/response pairs. The point of maximum likelihood is then chosen as the best estimate of the threshold, and the next stimulus is presented at that level (since a decision at that level will add the most information). In a Bayesian procedure, a prior likelihood is further included in the calculation.[24] Compared with staircase procedures, Bayesian and ML procedures are more time-consuming to implement but are considered more robust.
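The fixed-step 1-up-N-down rule described above can be sketched as a short simulation. Everything here is an illustrative assumption (a hypothetical logistic observer, an arbitrary true threshold, step size, and trial count); the point is only to show the up/down bookkeeping and the averaging of reversal values. A 1-up-3-down rule of this kind converges near the 79.4%-correct point of the psychometric function:

```python
# Illustrative sketch (not from the text): a fixed-step 1-up-3-down staircase
# run against a simulated observer. Three correct responses in a row lower the
# stimulus intensity by one step; any incorrect response raises it by one step.

import random

def simulated_observer(intensity, true_threshold=10.0, rng=random):
    # Hypothetical observer: detection probability rises logistically
    # with intensity around an assumed true threshold.
    p_correct = 1.0 / (1.0 + 2.718281828 ** (-(intensity - true_threshold)))
    return rng.random() < p_correct

def one_up_three_down(start=20.0, step=1.0, n_trials=200, seed=1):
    rng = random.Random(seed)
    intensity, correct_streak = start, 0
    last_direction, reversals = 0, []
    for _ in range(n_trials):
        if simulated_observer(intensity, rng=rng):
            correct_streak += 1
            if correct_streak == 3:           # 3 correct in a row -> step down
                correct_streak = 0
                if last_direction == +1:      # direction change: record reversal
                    reversals.append(intensity)
                last_direction = -1
                intensity -= step
        else:                                 # any error -> step up
            correct_streak = 0
            if last_direction == -1:
                reversals.append(intensity)
            last_direction = +1
            intensity += step
    # Threshold estimate: mean of the last few reversal intensities.
    tail = reversals[-6:]
    return sum(tail) / len(tail)

estimate = one_up_three_down()
```

With the assumed observer, the estimate settles a little above the 50% threshold of 10, consistent with the rule targeting a higher percent-correct point.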
Well-known Bayesian and maximum-likelihood procedures include QUEST,[26] ML-PEST,[27] and Kontsevich and Tyler's method.[28] In the prototypical magnitude-estimation case, people are asked to assign numbers in proportion to the magnitude of the stimulus. The psychometric function of the geometric means of their numbers is often a power law with a stable, replicable exponent. Although context can change the law and the exponent, that change too is stable and replicable. Instead of numbers, other sensory or cognitive dimensions can be used to match a stimulus, and the method then becomes "magnitude production" or "cross-modality matching". The exponents of those dimensions found in numerical magnitude estimation predict the exponents found in magnitude production. Magnitude estimation generally finds lower exponents for the psychophysical function than multiple-category responses, because of the restricted range of the categorical anchors, such as those used by Likert as items in attitude scales.[29]
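The power-law fits mentioned above are commonly obtained by linear regression in log-log coordinates, since R = c·S^b becomes log R = log c + b·log S. The data below are hypothetical and noiseless, purely to show the mechanics:

```python
# Sketch (hypothetical data): recovering the exponent of a psychophysical power
# law, R = c * S**b, by ordinary least squares in log-log coordinates, as is
# common when analysing magnitude-estimation data.

import math

def fit_power_law(stimuli, responses):
    xs = [math.log(s) for s in stimuli]
    ys = [math.log(r) for r in responses]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Slope of the log-log regression line is the power-law exponent b.
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    c = math.exp(my - b * mx)
    return c, b

# Hypothetical magnitude estimates generated by R = 2 * S**0.6.
stimuli = [1, 2, 4, 8, 16]
responses = [2 * s ** 0.6 for s in stimuli]

c, b = fit_power_law(stimuli, responses)
print(round(c, 3), round(b, 3))  # 2.0 0.6 for this noiseless example
```

With real magnitude-estimation data the geometric means of the subjects' numbers would be used as the responses, and the fit would be noisy rather than exact.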
https://en.wikipedia.org/wiki/Psychophysics
Psychoacoustics is the branch of psychophysics involving the scientific study of the perception of sound by the human auditory system. It is the branch of science studying the psychological responses associated with sound, including noise, speech, and music. Psychoacoustics is an interdisciplinary field drawing on psychology, acoustics, electronic engineering, physics, biology, physiology, and computer science.[1] Hearing is not a purely mechanical phenomenon of wave propagation, but is also a sensory and perceptual event. When a person hears something, that something arrives at the ear as a mechanical sound wave traveling through the air, but within the ear it is transformed into neural action potentials. These nerve pulses then travel to the brain, where they are perceived. Hence, in many problems in acoustics, such as audio processing, it is advantageous to take into account not just the mechanics of the environment but also the fact that both the ear and the brain are involved in a person's listening experience. The inner ear, for example, does significant signal processing in converting sound waveforms into neural stimuli; this processing renders certain differences between waveforms imperceptible.[2] Data compression techniques, such as MP3, make use of this fact.[3] In addition, the ear has a nonlinear response to sounds of different intensity levels; this nonlinear response is called loudness. Telephone networks and audio noise reduction systems make use of this fact by nonlinearly compressing data samples before transmission and then expanding them for playback.[4] Another effect of the ear's nonlinear response is that sounds close in frequency produce phantom beat notes, or intermodulation distortion products.[5] The human ear can nominally hear sounds in the range of 20 to 20,000 Hz. The upper limit tends to decrease with age; most adults are unable to hear above 16,000 Hz.
Under ideal laboratory conditions, the lowest frequency that has been identified as a musical tone is 12 Hz.[6] Tones between 4 and 16 Hz can be perceived via the body's sense of touch. Human perception of the time separation of audio signals has been measured to be less than 10 μs. This does not mean that frequencies above 100 kHz (the reciprocal of 10 μs) are audible, but that time discrimination is not directly coupled with frequency range.[7][8] The frequency resolution of the ear is about 3.6 Hz within the octave of 1,000–2,000 Hz; that is, changes in pitch larger than 3.6 Hz can be perceived in a clinical setting.[6] However, even smaller pitch differences can be perceived through other means. For example, the interference of two pitches can often be heard as a repetitive variation in the volume of the tone. This amplitude modulation occurs with a frequency equal to the difference between the frequencies of the two tones and is known as beating. The semitone scale used in Western musical notation is not a linear frequency scale but a logarithmic one. Other scales have been derived directly from experiments on human hearing perception, such as the mel scale and the Bark scale (used in studying perception, but not usually in musical composition); these are approximately logarithmic in frequency at the high-frequency end but nearly linear at the low-frequency end. The intensity range of audible sounds is enormous. Human eardrums are sensitive to variations in sound pressure and can detect pressure changes from as small as a few micropascals (μPa) to greater than 100 kPa. For this reason, sound pressure level is also measured logarithmically, with all pressures referenced to 20 μPa (or 1.97385×10⁻¹⁰ atm). The lower limit of audibility is therefore defined as 0 dB, but the upper limit is not as clearly defined; it is more a question of the potential to cause noise-induced hearing loss.
A more rigorous exploration of the lower limits of audibility determines that the minimum threshold at which a sound can be heard is frequency dependent. By measuring this minimum intensity for test tones of various frequencies, a frequency-dependent absolute threshold of hearing (ATH) curve may be derived. Typically, the ear shows a peak of sensitivity (i.e., its lowest ATH) between 1 and 5 kHz, though the threshold changes with age, with older ears showing decreased sensitivity above 2 kHz.[9] The ATH is the lowest of the equal-loudness contours. Equal-loudness contours indicate the sound pressure levels (dB SPL), over the range of audible frequencies, that are perceived as being of equal loudness. Equal-loudness contours were first measured by Fletcher and Munson at Bell Labs in 1933 using pure tones reproduced via headphones, and the data they collected are called Fletcher–Munson curves. Because subjective loudness was difficult to measure, the Fletcher–Munson curves were averaged over many subjects. Robinson and Dadson refined the process in 1956 to obtain a new set of equal-loudness curves for a frontal sound source measured in an anechoic chamber. The Robinson–Dadson curves were standardized as ISO 226 in 1986. In 2003, ISO 226 was revised using data collected from 12 international studies. Sound localization is the process of determining the location of a sound source. The brain utilizes subtle differences in loudness, tone, and timing between the two ears to localize sound sources.[10] Localization can be described in terms of three-dimensional position: the azimuth or horizontal angle, the zenith or vertical angle, and the distance (for static sounds) or velocity (for moving sounds).[11] Humans, like most four-legged animals, are adept at detecting direction horizontally, but less so vertically, because the ears are placed symmetrically.
Some species of owls have their ears placed asymmetrically and can detect sound in all three planes, an adaptation for hunting small mammals in the dark.[12] Suppose a listener can hear a given acoustical signal under silent conditions. When the signal is played together with another sound (a masker), the signal has to be stronger for the listener to hear it. The masker does not need to have the frequency components of the original signal for masking to happen. A masked signal can be heard even though it is weaker than the masker. Masking happens when a signal and a masker are played together (for instance, when one person whispers while another person shouts), and the listener does not hear the weaker signal because it has been masked by the louder masker. Masking can also happen to a signal before a masker starts or after a masker stops. For example, a single sudden loud clap can make inaudible the sounds that immediately precede or follow it. The effect of backward masking is weaker than that of forward masking. The masking effect has been widely studied in psychoacoustical research: one can vary the level of the masker and measure the threshold, then plot a psychophysical tuning curve that reveals similar features. Masking effects are also used in lossy audio encoding, such as MP3. When presented with a harmonic series of frequencies in the relationship 2f, 3f, 4f, 5f, etc. (where f is a specific frequency), humans tend to perceive that the pitch is f. An audible example can be found on YouTube.[13] The psychoacoustic model provides for high-quality lossy signal compression by describing which parts of a given digital audio signal can be removed (or aggressively compressed) safely, that is, without significant losses in the (consciously) perceived quality of the sound. It can explain how a sharp clap of the hands might seem painfully loud in a quiet library but is hardly noticeable after a car backfires on a busy urban street.
This provides great benefit to the overall compression ratio, and psychoacoustic analysis routinely leads to compressed music files that are one-tenth to one-twelfth the size of high-quality masters, with a quality loss that is discernibly less than proportional. Such compression is a feature of nearly all modern lossy audio compression formats, including Dolby Digital (AC-3), MP3, Opus, Ogg Vorbis, AAC, WMA, MPEG-1 Layer II (used for digital audio broadcasting in several countries), and ATRAC, the compression used in MiniDisc and some Walkman models. Psychoacoustics is based heavily on human anatomy, especially the ear's limitations in perceiving sound, as outlined previously. A compression algorithm can assign a lower priority to sounds outside the range of human hearing. By carefully shifting bits away from the unimportant components and toward the important ones, the algorithm ensures that the sounds a listener is most likely to perceive are most accurately represented. Psychoacoustics includes topics and studies relevant to music psychology and music therapy. Theorists such as Benjamin Boretz consider some of the results of psychoacoustics to be meaningful only in a musical context.[14] Irv Teibel's Environments series LPs (1969–79) are an early example of commercially available sounds released expressly for enhancing psychological abilities.[15] Psychoacoustics has long enjoyed a symbiotic relationship with computer science. Internet pioneers J. C. R. Licklider and Bob Taylor both completed graduate-level work in psychoacoustics, while BBN Technologies originally specialized in consulting on acoustics issues before it began building the first packet-switched network. Licklider wrote a paper entitled "A duplex theory of pitch perception".[16] Psychoacoustics is applied within many fields of software development, where developers map proven and experimental mathematical patterns in digital signal processing.
Many audio compression codecs, such as MP3 and Opus, use a psychoacoustic model to increase compression ratios. The success of conventional audio systems for the reproduction of music in theatres and homes can be attributed to psychoacoustics,[17] and psychoacoustic considerations gave rise to novel audio systems, such as psychoacoustic sound field synthesis.[18] Furthermore, scientists have experimented, with limited success, at creating new acoustic weapons that emit frequencies which may impair, harm, or kill.[19] Psychoacoustics is also leveraged in sonification to make multiple independent data dimensions audible and easily interpretable.[20] This enables auditory guidance without the need for spatial audio, and is used in sonification computer games[21] and other applications, such as drone flying and image-guided surgery.[22] Psychoacoustics is likewise applied within music, where musicians and artists continue to create new auditory experiences by masking unwanted frequencies of instruments, causing other frequencies to be enhanced. Yet another application is in the design of small or lower-quality loudspeakers, which can use the phenomenon of missing fundamentals to give the effect of bass notes at lower frequencies than the loudspeakers are physically able to produce (see references). Automobile manufacturers engineer their engines and even doors to have a certain sound.[23]
https://en.wikipedia.org/wiki/Psychoacoustics
In the branch of experimental psychology focused on sense, sensation, and perception, called psychophysics, a just-noticeable difference (JND) is the amount something must be changed in order for a difference to be noticeable, i.e., detectable at least half the time.[1] This limen is also known as the difference limen, difference threshold, or least perceptible difference.[2] For many sensory modalities, over a wide range of stimulus magnitudes sufficiently far from the upper and lower limits of perception, the JND is a fixed proportion of the reference sensory level, so the ratio JND/reference is roughly constant (that is, the JND is a constant proportion or percentage of the reference level). Measured in physical units:

ΔI/I = k,

where I is the original intensity of the particular stimulation, ΔI is the addition to it required for the change to be perceived (the JND), and k is a constant. This rule was first discovered by Ernst Heinrich Weber (1795–1878), an anatomist and physiologist, in experiments on the thresholds of perception of lifted weights. A theoretical rationale (not universally accepted) was subsequently provided by Gustav Fechner, so the rule is known either as Weber's law or as the Weber–Fechner law; the constant k is called the Weber constant. The rule holds, at least to a good approximation, for many but not all sensory dimensions, for example the brightness of lights and the intensity and pitch of sounds. It does not hold, however, for the wavelength of light. Stanley Smith Stevens argued that it would hold only for what he called prothetic sensory continua, where change of input takes the form of an increase in intensity or something obviously analogous; it would not hold for metathetic continua, where change of input produces a qualitative rather than a quantitative change of the percept.
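Weber's law as stated above is a one-line computation: given a Weber fraction k for a modality, the smallest detectable increment scales with the reference intensity. The fraction used below is illustrative, not a measured value from the text:

```python
# Minimal sketch: Weber's law, ΔI/I = k. Given a Weber fraction k, the
# just-noticeable increment grows in proportion to the reference intensity.

def jnd(intensity, k):
    """Just-noticeable increment predicted by Weber's law."""
    return k * intensity

k_weight = 0.02  # hypothetical Weber fraction for lifted weights
print(jnd(100.0, k_weight))  # 2.0 -> a 2 g change on a 100 g reference
print(jnd(500.0, k_weight))  # 10.0 -> a 10 g change on a 500 g reference
```

The point of the example is the scaling: the same observer who just notices 2 g added to 100 g needs about 10 g added to 500 g, since the ratio, not the absolute difference, is what stays constant.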
Stevens developed his own law, called Stevens' power law, which raises the stimulus to a constant power while, like Weber's, also multiplying it by a constant factor to obtain the perceived magnitude. The JND is a statistical rather than an exact quantity: from trial to trial, the difference that a given person notices will vary somewhat, and it is therefore necessary to conduct many trials in order to determine the threshold. The JND usually reported is the difference that a person notices on 50% of trials. If a different proportion is used, this should be included in the description; for example, one might report the value of the "75% JND". Modern approaches to psychophysics, for example signal detection theory, imply that the observed JND, even in this statistical sense, is not an absolute quantity, but will depend on situational and motivational as well as perceptual factors. For example, when a researcher flashes a very dim light, a participant may report seeing it on some trials but not on others. The JND formula has an objective interpretation (implied at the start of this entry) as the disparity between levels of the presented stimulus that is detected on 50% of occasions by a particular observed response,[3] rather than what is subjectively "noticed" or a difference in the magnitudes of consciously experienced sensations. This 50%-discriminated disparity can be used as a universal unit of measurement of the psychological distance between the level of a feature in an object or situation and an internal standard of comparison in memory, such as the 'template' for a category or the 'norm' of recognition.[4] The JND-scaled distances from the norm can be combined across observed and inferred psychophysical functions to generate diagnostics among hypothesised information-transforming (mental) processes mediating observed quantitative judgments.[5] In music production, a single change in a property of sound which is below the JND does not affect perception of the sound.
For amplitude, the JND for humans is around 1 dB.[6][7] The JND for tone depends on the tone's frequency content. Below 500 Hz, the JND is about 3 Hz for sine waves; above 1,000 Hz, the JND for sine waves is about 0.6% (about 10 cents).[8] The JND is typically tested by playing two tones in quick succession and asking the listener whether there was a difference in their pitches.[9] The JND becomes smaller if the two tones are played simultaneously, as the listener is then able to discern beat frequencies. The total number of perceptible pitch steps in the range of human hearing is about 1,400; the total number of notes in the equal-tempered scale, from 16 to 16,000 Hz, is 120.[9] JND analysis frequently occurs in both music and speech, the two being related and overlapping in the analysis of speech prosody (i.e., speech melody). Although the JND varies as a function of the frequency band being tested, it has been shown that the JND for the best performers at around 1 kHz is well below 1 Hz (i.e., less than a tenth of a percent).[10][11][12] It is, however, important to be aware of the role played by critical bandwidth when performing this kind of analysis.[11] When analysing speech melody, rather than musical tones, accuracy decreases. This is not surprising given that speech does not stay at fixed intervals in the way that tones in music do. Johan 't Hart (1981) found that the JND for speech averaged between 1 and 2 semitones but concluded that "only differences of more than 3 semitones play a part in communicative situations".[13] Note that, given the logarithmic characteristics of frequency in Hz, results for both music and speech perception should be reported not in Hz but either as percentages or in semitones: 5 Hz between 20 and 25 Hz is very different from 5 Hz between 2,000 and 2,005 Hz, but an increase of about 18.9%, or 3 semitones, is perceptually the same size difference regardless of whether one starts at 20 Hz or at 2,000 Hz. Weber's law has important applications in marketing.
Manufacturers and marketers endeavor to determine the relevant JND for their products for two very different reasons. When it comes to product improvements, marketers want to meet or exceed the consumer's differential threshold; that is, they want consumers to readily perceive any improvements made to the original products. Marketers use the JND to determine the amount of improvement they should make in their products: less than the JND is wasted effort because the improvement will not be perceived; more than the JND is again wasteful because it reduces the level of repeat sales. On the other hand, when it comes to price increases, an increase of less than the JND is desirable because consumers are unlikely to notice it. Weber's law is also used in haptic devices and robotic applications. Exerting the proper amount of force on the human operator is a critical aspect of human–robot interaction and teleoperation scenarios, and can greatly improve the user's performance in accomplishing a task.[14]
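The earlier point about reporting pitch differences logarithmically rather than in Hz can be checked numerically. The conversion from a frequency ratio to semitones is 12·log₂(f₂/f₁):

```python
# Sketch: converting a frequency ratio to semitones, illustrating why pitch
# JNDs are reported logarithmically (semitones or percent) rather than in Hz.

import math

def semitones(f1, f2):
    """Musical interval between two frequencies, in equal-tempered semitones."""
    return 12 * math.log2(f2 / f1)

# The same 5 Hz step is a large interval at 20 Hz and a tiny one at 2,000 Hz.
print(round(semitones(20, 25), 2))      # ~3.86 semitones
print(round(semitones(2000, 2005), 3))  # ~0.043 semitones

# A 3-semitone interval corresponds to a frequency ratio of 2**(3/12),
# i.e. the ~18.9% increase mentioned in the text.
print(round(2 ** (3 / 12), 4))          # 1.1892
```

This is why a JND of "about 0.6%" above 1,000 Hz and one of "about 3 Hz" below 500 Hz are not directly comparable until both are expressed on the same logarithmic scale.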
https://en.wikipedia.org/wiki/Just-noticeable_difference
Acceptance sampling uses statistical sampling to determine whether to accept or reject a production lot of material. It has been a common quality control technique used in industry, usually applied as products leave the factory or, in some cases, within the factory. Most often a producer supplies a consumer with several items, and a decision to accept or reject the items is made by determining the number of defective items in a sample from the lot. The lot is accepted if the number of defects falls at or below the acceptance number; otherwise, the lot is rejected.[1] In general, acceptance sampling is employed when one or several of certain conditions hold.[2] A wide variety of acceptance sampling plans is available. For example, multiple sampling plans use more than two samples to reach a conclusion; a shorter examination period and smaller sample sizes are features of this type of plan. Although the samples are taken at random, the sampling procedure is still reliable.[3] Acceptance sampling procedures became common during World War II. Sampling plans, such as MIL-STD-105, were developed by Harold F. Dodge and others and became frequently used as standards. More recently, quality assurance has broadened its scope beyond final inspection to include all aspects of manufacturing. Broader quality management systems include methodologies such as statistical process control, HACCP, Six Sigma, and ISO 9000. Some use of acceptance sampling still remains: sampling provides one rational means of verification that a production lot conforms with the requirements of technical specifications. 100% inspection does not guarantee 100% compliance and is too time-consuming and costly. Rather than evaluating all items, a specified sample is taken, inspected or tested, and a decision is made about accepting or rejecting the entire production lot.
Plans have known risks: an acceptable quality limit (AQL) and a rejectable quality level, such as the lot tolerance percent defective (LTPD), are part of the operating characteristic curve of the sampling plan. These are primarily statistical risks and do not necessarily imply that a defective product is intentionally being made or accepted. Plans can have a known average outgoing quality limit (AOQL). A single sampling plan for attributes is a statistical method by which a lot is accepted or rejected on the basis of one sample.[4] Suppose that we have a lot of size M; a random sample of size N < M is selected from the lot, and an acceptance number B is determined. If the number of nonconforming items found is less than or equal to B, the lot is accepted; if it is greater than B, the lot is not accepted. The design of a single sampling plan requires the selection of the sample size N and the acceptance number B. MIL-STD-105 was a United States defense standard that provided procedures and tables for sampling by attributes (pass or fail characteristics). MIL-STD-105E was cancelled in 1995 but is available in related documents such as ANSI/ASQ Z1.4, "Sampling Procedures and Tables for Inspection by Attributes". Several levels of inspection are provided and can be indexed to several AQLs. The sample size is specified, and the basis for acceptance or rejection (number of defects) is provided. MIL-STD-1916 is currently the preferred method of sampling for all Department of Defense (DoD) contracts. When a measured characteristic produces a number, other sampling plans, such as those based on MIL-STD-414, are often used; compared with attribute sampling plans, these often use a smaller sample size for the same indexed AQL.
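The operating characteristic of a single sampling plan with sample size N and acceptance number B can be sketched directly. Assuming the number of defectives in the sample is approximately binomial (a common approximation when the sample is small relative to the lot; an exact treatment would use the hypergeometric distribution), the probability of accepting a lot with true defective fraction p is the cumulative binomial up to B. The plan parameters below are illustrative, not from any standard:

```python
# Sketch: operating characteristic (OC) of a single attribute sampling plan.
# P(accept) = sum over d = 0..B of C(N, d) * p**d * (1-p)**(N-d),
# using a binomial approximation for the count of defectives in the sample.

from math import comb

def p_accept(n_sample, accept_number, p_defective):
    return sum(comb(n_sample, d)
               * p_defective ** d * (1 - p_defective) ** (n_sample - d)
               for d in range(accept_number + 1))

# Hypothetical plan: sample N = 50 items, accept if at most B = 2 are defective.
for p in (0.01, 0.05, 0.10):
    print(p, round(p_accept(50, 2, p), 3))
```

Plotting P(accept) against p gives the OC curve on which the AQL (high acceptance probability) and LTPD (low acceptance probability) points are defined; as expected, the acceptance probability falls monotonically as the true defective fraction rises.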
https://en.wikipedia.org/wiki/Acceptance_sampling