An object-modeling language is a standardized set of symbols used to model a software system using an object-oriented framework. The symbols can be either informal or formal, ranging from predefined graphical templates to formal object models defined by grammars and specifications.
A modeling language is usually associated with a methodology for object-oriented development. The modeling language defines the elements of the model: for example, that a model has classes, methods, and object properties. The methodology defines the steps that developers and users need to take to develop and maintain a software system, such as defining requirements, developing code, and testing the system.
It is common to equate the modeling language with the modeling methodology. For example, the Booch method may refer to Grady Booch's standard for diagramming, his methodology, or both. Similarly, the Rumbaugh Object Modeling Technique is both a set of diagrams and a process model for developing object-oriented systems.
In the early years of the object-oriented community there were several competing modeling and methodology standards. Booch and Rumbaugh were two of the most popular; Ivar Jacobson's Objectory, Shlaer–Mellor, and Yourdon–Coad were also widely used.
However, the object-oriented community values re-use and standardization. As shown in the graphic, there were efforts starting in the mid-1990s to reconcile the leading models and focus on one unified specification. The graphic shows the evolution of one of the most important object modeling language standards: the Unified Modeling Language (UML).
The UML began as an attempt by some of the major thought leaders in the community to define a standard language at the OOPSLA '95 Conference. Originally, Grady Booch and James Rumbaugh merged their models into a unified model. This was followed by Booch's company Rational Software purchasing Ivar Jacobson's Objectory company and merging their model into the UML. At the time Rational and Objectory were two of the dominant players in the small world of independent vendors of Object-Oriented tools and methods.
The Object Management Group (OMG) then took over ownership of the UML. The OMG is one of the most influential standards organizations in the object-oriented world. The UML is both a formal metamodel and a collection of graphical templates. The metamodel defines the elements of an object-oriented model, such as classes and properties. It is essentially the same concept as the metamodel in object-oriented languages such as Smalltalk or CLOS. In those languages, however, the metamodel is meant primarily to be used by developers at run time to dynamically inspect and modify an application's object model. The UML metamodel instead provides a formal mathematical foundation for the various graphic views the modeling language uses to describe an emerging system.
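The contrast with run-time metamodels can be illustrated in a language that, like Smalltalk or CLOS, exposes its object model at run time. The following is a small sketch in Python; the class and method names are invented for the example:

```python
# Hypothetical illustration: like Smalltalk or CLOS, Python exposes its
# meta-model at run time, so a program can inspect and modify its own
# object model dynamically.

class Account:
    """A minimal example class."""
    def __init__(self, owner, balance):
        self.owner = owner
        self.balance = balance

# Inspect the meta-model: every object knows its class, and every class
# knows its place in the class hierarchy and its members.
acct = Account("Ada", 100)
print(type(acct).__name__)            # Account
print(Account.__mro__[1].__name__)    # object (the superclass)
print(sorted(vars(acct)))             # ['balance', 'owner']

# Modify the object model at run time by attaching a new method.
Account.deposit = lambda self, amount: setattr(
    self, "balance", self.balance + amount)
acct.deposit(50)
print(acct.balance)                   # 150
```

The UML metamodel plays a different role: it is a design-time foundation for diagrams rather than a run-time facility like the one above.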
The following diagram illustrates the class hierarchy of the various graphic templates defined by the UML. Structure diagrams define the static structure of an object: its place in the class hierarchy, its relation to other objects, etc. Behavior diagrams specify the dynamic aspects of the model, business process logic, coordination and timing of distributed objects, etc.
== References ==
The energy systems language, also referred to as energese, or energy circuit language, or generic systems symbols, is a modelling language used for composing energy flow diagrams in the field of systems ecology. It was developed by Howard T. Odum and colleagues in the 1950s during studies of the tropical forests funded by the United States Atomic Energy Commission.
== Design intent ==
The design intent of the energy systems language was to facilitate the generic depiction of energy flows through any scale system while encompassing the laws of physics, and in particular, the laws of thermodynamics (see energy transformation for an example).
In particular, H.T. Odum aimed to produce a language that could facilitate the intellectual analysis, engineering synthesis and management of global systems such as the geobiosphere and its many subsystems. Within this aim, H.T. Odum had a strong concern that many abstract mathematical models of such systems were not thermodynamically valid. Hence he used analog computers to make system models: electronic circuits are well suited to modelling natural systems because the circuits themselves, like natural systems, obey the known laws of energy flow, the energy form in their case being electrical. However, Odum was interested not only in the electronic circuits themselves but also in how they might be used as formal analogies for modeling other systems that also have energy flowing through them. As a result, Odum did not restrict his inquiry to the analysis and synthesis of any one system in isolation. The discipline most often associated with this kind of approach, together with the use of the energy systems language, is known as systems ecology.
== General characteristics ==
When applying the electronic circuits (and schematics) to modeling ecological and economic systems, Odum believed that generic categories, or characteristic modules, could be derived. Moreover, he felt that a general symbolic system, fully defined in electronic terms (including the mathematics thereof) would be useful for depicting real system characteristics, such as the general categories of production, storage, flow, transformation, and consumption. Central principles of electronics also therefore became central features of the energy systems language – Odum's generic symbolism.
Depicted to the left is the generic symbol for storage, which Odum named the Bertalanffy module, in honor of the general systems theorist Ludwig von Bertalanffy.
For Odum, in order to achieve a holistic understanding of how many apparently different systems actually affect each other, it was important to have a generic language with a massively scalable modeling capacity, able to model global-to-local, ecological, physical and economic systems. The intention was, and for those who still apply it remains, to make biological, physical, ecological, economic and other system models thermodynamically, and so also energetically, valid and verifiable. As a consequence, the designers of the language also aimed to include the energy metabolism of any system within the scope of inquiry.
== Pictographic icons ==
In order to aid learning, in Modeling for all Scales Odum and Odum (2000) suggested systems might first be introduced with pictographic icons, and then later defined in the generic symbolism. Pictograms have therefore been used in software programs like ExtendSim to depict the basic categories of the energy systems language. Some have argued that such an approach shares similar motivations with Otto Neurath's Isotype project, Leibniz's Characteristica Universalis, and Buckminster Fuller's works.
== See also ==
== References ==
== External links ==
Domain-specific modeling (DSM) is a software engineering methodology for designing and developing systems, such as computer software. It involves systematic use of a domain-specific language to represent the various facets of a system.
Domain-specific modeling languages tend to support higher-level abstractions than general-purpose modeling languages, so they require less effort and fewer low-level details to specify a given system.
== Overview ==
Domain-specific modeling often also includes the idea of code generation: automating the creation of executable source code directly from the domain-specific language models. Freedom from the manual creation and maintenance of source code means domain-specific modeling can significantly improve developer productivity. The reliability of automatic generation compared to manual coding also reduces the number of defects in the resulting programs, thus improving quality.
Domain-specific modeling differs from earlier code generation attempts in the CASE tools of the 1980s and the UML tools of the 1990s. In both of these, the code generators and modeling languages were built by tool vendors. While it is possible for a tool vendor to create a domain-specific language and generators, it is more normal for domain-specific modeling to occur within one organization: one or a few expert developers create the modeling language and generators, and the rest of the developers use them.
Having the modeling language and generator built by the organization that will use them allows a tight fit with its exact domain, and lets both evolve in response to changes in the domain.
Domain-specific languages can usually cover a range of abstraction levels for a particular domain. For example, a domain-specific modeling language for mobile phones could allow users to specify high-level abstractions for the user interface, as well as lower-level abstractions for storing data such as phone numbers or settings. Likewise, a domain-specific modeling language for financial services could permit users to specify high-level abstractions for clients, as well as lower-level abstractions for implementing stock and bond trading algorithms.
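As a toy illustration of the idea (not the output of any real DSM tool, and with all names invented for the example), a declarative model of a phone's settings could drive a simple code generator:

```python
# A toy sketch of domain-specific modeling plus code generation:
# a declarative "model" of a phone's settings, and a generator that
# turns it into executable source code. All names are hypothetical.

model = {
    "menu": "Settings",
    "items": [
        {"label": "Ringtone", "setting": "ringtone", "default": "classic"},
        {"label": "Volume",   "setting": "volume",   "default": 5},
    ],
}

def generate(model):
    """Emit Python source for the modeled menu (the code-generation step)."""
    lines = [f"class {model['menu']}:", "    def __init__(self):"]
    for item in model["items"]:
        lines.append(f"        self.{item['setting']} = {item['default']!r}")
    return "\n".join(lines)

source = generate(model)
print(source)

# The generated code is itself runnable:
namespace = {}
exec(source, namespace)
settings = namespace["Settings"]()
print(settings.ringtone)   # classic
print(settings.volume)     # 5
```

The point of the sketch is the division of labor: domain experts edit only the model, while the generator encapsulates the low-level coding decisions once.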
== Topics ==
=== Defining domain-specific languages ===
To define a language, one needs a language to write the definition in. The language of a model is often called a metamodel, hence the language for defining a modeling language is a meta-metamodel. Meta-metamodels can be divided into two groups: those that are derived from or customizations of existing languages, and those that have been developed specifically as meta-metamodels.
Derived meta-metamodels include entity–relationship diagrams, formal languages, extended Backus–Naur form (EBNF), ontology languages, XML schema, and Meta-Object Facility (MOF). The strengths of these languages tend to be in the familiarity and standardization of the original language.
The ethos of domain-specific modeling favors the creation of a new language for a specific task, so unsurprisingly there are also new languages designed specifically as meta-metamodels. The most widely used family of such languages is that of OPRR, GOPRR, and GOPPRR, which focus on supporting the concepts found in modeling languages with the minimum effort.
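As a hypothetical sketch of a derived meta-metamodel in use, an EBNF grammar might define the concrete syntax of a small state-machine modeling language (the grammar below is invented for illustration; letter and digit carry their usual definitions):

```ebnf
(* Invented example: EBNF as a meta-metamodel, defining the concrete
   syntax of a tiny state-machine modeling language. *)
model      = "machine", identifier, "{", { state | transition }, "}" ;
state      = "state", identifier, [ "initial" ], ";" ;
transition = identifier, "->", identifier, "on", event, ";" ;
event      = identifier ;
identifier = letter, { letter | digit } ;
```

A grammar like this defines what counts as a well-formed model, which is exactly the role the text assigns to a meta-metamodel.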
=== Tool support for domain-specific languages ===
Many general-purpose modeling languages already have tool support available in the form of CASE tools. Domain-specific modeling languages tend to have too small a market to support the construction of a bespoke CASE tool from scratch. Instead, most tool support for domain-specific modeling languages is built on existing DSM frameworks or through DSM environments.
A DSM environment may be thought of as a metamodeling tool, i.e., a modeling tool used to define a modeling tool or CASE tool. The resulting tool may either work within the DSM environment or, less commonly, be produced as a separate stand-alone program. In the more common case, the DSM environment supports an additional layer of abstraction when compared to a traditional CASE tool.
Using a DSM environment can significantly lower the cost of obtaining tool support for a domain-specific language, since a well-designed DSM environment automates the creation of program parts that are costly to build from scratch, such as domain-specific editors, browsers and components. The domain expert only needs to specify the domain-specific constructs and rules, and the DSM environment provides a modeling tool tailored for the target domain.
Most existing domain-specific modeling takes place in DSM environments, whether commercial such as MetaEdit+ or Actifsource, open source such as GEMS, or academic such as GME. The increasing popularity of DSM has also led to DSM frameworks being added to existing IDEs, e.g. the Eclipse Modeling Project (EMP) with EMF and GMF, or Microsoft's DSL Tools for Software Factories.
== DSM and UML ==
The Unified Modeling Language (UML) is a general-purpose modeling language for software-intensive systems that is designed to support mostly object-oriented programming. Consequently, in contrast to DSM languages, UML is used for a wide variety of purposes across a broad range of domains. The primitives offered by UML are those of object-oriented programming, while domain-specific languages offer primitives whose semantics are familiar to all practitioners in that domain. For example, in the domain of automotive engineering, there will be software models to represent the properties of an anti-lock braking system, a steering wheel, and so on.
UML includes a profile mechanism that allows it to be constrained and customized for specific domains and platforms. UML profiles use stereotypes, stereotype attributes (known as tagged values before UML 2.0), and constraints to restrict and extend the scope of UML to a particular domain. Perhaps the best known example of customizing UML for a specific domain is SysML, a domain specific language for systems engineering.
UML is a popular choice for various model-driven development approaches whereby technical artifacts such as source code, documentation, tests, and more are generated algorithmically from a domain model. For instance, application profiles of the legal document standard Akoma Ntoso can be developed by representing legal concepts and ontologies in UML class objects.
== See also ==
Computer-aided software engineering
Domain-driven design
Domain-specific language
Framework-specific modeling language
General-purpose modeling
Domain-specific multimodeling
Model-driven engineering
Model-driven architecture
Software factories
Discipline-Specific Modeling
== References ==
== External links ==
Domain-specific modeling for generative software development Archived 2010-01-31 at the Wayback Machine, Web-article by Martijn Iseger, 2010
Domain Specific Modeling in IoC frameworks Web-article by Ke Jin, 2007
Domain-Specific Modeling for Full Code Generation from Methods & Tools Web-article by Juha-Pekka Tolvanen, 2005
Creating a Domain-Specific Modeling Language for an Existing Framework Web-article by Juha-Pekka Tolvanen, 2006
Algebraic modeling languages (AMLs) are high-level computer programming languages for describing and solving high-complexity problems in large-scale mathematical computation (i.e. large-scale optimization problems). One particular advantage of some algebraic modeling languages, such as AIMMS, AMPL, GAMS, Gekko, MathProg, Mosel, and OPL, is the similarity of their syntax to the mathematical notation of optimization problems. This allows for a very concise and readable definition of problems in the domain of optimization, supported by language elements such as sets, indices, algebraic expressions, powerful sparse index and data handling, variables, and constraints with arbitrary names. The algebraic formulation of a model does not contain any hints as to how to process it.
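As an illustration of this similarity, a tiny linear program might be written as follows in GNU MathProg-style syntax (the model and its data are invented for the example); note how the model section stays separate from the data section:

```mathprog
# Invented example in GNU MathProg-style syntax, showing how an AML
# mirrors mathematical notation and separates model from data.
set PRODUCTS;
param profit{PRODUCTS};
param capacity;
var make{PRODUCTS} >= 0;

maximize total_profit: sum{p in PRODUCTS} profit[p] * make[p];
s.t. limit: sum{p in PRODUCTS} make[p] <= capacity;

data;
set PRODUCTS := widgets gadgets;
param profit := widgets 3, gadgets 5;
param capacity := 10;
end;
```

The objective and constraint read almost exactly like the corresponding mathematical statements, and the same model can be re-solved over different data sections unchanged.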
An AML does not solve those problems directly; instead, it calls appropriate external algorithms to obtain a solution. These algorithms, called solvers, can handle certain kinds of mathematical problems, such as:
linear problems
integer problems
(mixed integer) quadratic problems
mixed complementarity problems
mathematical programs with equilibrium constraints
constrained nonlinear systems
general nonlinear problems
non-linear programs with discontinuous derivatives
nonlinear integer problems
global optimization problems
stochastic optimization problems
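The separation between the model and the solver can be sketched in a few lines of Python (a hypothetical toy, not a real AML; the brute-force "solver" merely stands in for an external algorithm):

```python
# A minimal sketch of the division of labor: the modeling layer holds a
# declarative problem description, and a pluggable external "solver" is
# called to produce the solution. All names are hypothetical.

def brute_force_solver(problem):
    """A stand-in for an external solver: scan a coarse grid and keep the
    best feasible point. Real solvers use far better algorithms."""
    best = None
    lo, hi = problem["bounds"]
    for i in range(1001):
        x = lo + (hi - lo) * i / 1000
        if all(c(x) for c in problem["constraints"]):
            value = problem["objective"](x)
            if best is None or value > best[1]:
                best = (x, value)
    return best

# The model itself says nothing about how it is processed.
problem = {
    "objective": lambda x: 3 * x,          # maximize 3x
    "constraints": [lambda x: x <= 4],     # subject to x <= 4
    "bounds": (0, 10),
}

x, value = brute_force_solver(problem)
print(x, value)
```

Swapping in a different solver function requires no change to the problem description, which is the point of the AML/solver split.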
== Core elements ==
The core elements of an AML are:
a modeling language interpreter (the AML itself)
solver links
user interfaces (UI)
data exchange facilities
== Design principles ==
Most AML follow certain design principles:
a balanced mix of declarative and procedural elements
open architecture and interfaces to other systems
different layers with separation of:
model and data
model and solution methods
model and operating system
model and interface
=== Data driven model generation ===
Most modeling languages exploit the similarities between structured models and relational databases by providing a database access layer, which enables the modelling system to directly access data from external data sources (e.g. these table handlers for AMPL).
With the refinement of analytic technologies applied to business processes, optimization models are becoming an integral part of decision support systems; optimization models can be structured and layered to represent and support complex business processes. In such applications, the multi-dimensional data structure typical of OLAP systems can be directly mapped to the optimization models, and typical MDDB operations can be translated into aggregation and disaggregation operations on the underlying model.
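A minimal sketch of such a database access layer, using Python's standard library with an in-memory SQLite database standing in for an external data source (all table and model names are invented for the example):

```python
# Hypothetical sketch of data-driven model generation: the model structure
# is fixed, while its data is fetched from an external relational source.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE demand (region TEXT, quantity INTEGER)")
conn.executemany("INSERT INTO demand VALUES (?, ?)",
                 [("north", 120), ("south", 80)])

def instantiate_model(conn):
    """Bind the abstract model structure to whatever data the source holds."""
    rows = conn.execute(
        "SELECT region, quantity FROM demand ORDER BY region").fetchall()
    return {
        "sets": {"REGIONS": [r for r, _ in rows]},
        "params": {"demand": dict(rows)},
    }

model = instantiate_model(conn)
print(model["sets"]["REGIONS"])    # ['north', 'south']
print(model["params"]["demand"])   # {'north': 120, 'south': 80}
```

Changing the rows in the database changes the model instance without touching the model structure, mirroring the model/data separation the text describes.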
== History ==
Algebraic modelling languages find their roots in matrix-generator and report-writer programs (MGRW), developed in the late seventies. Some of these are MAGEN, MGRW (IBM), GAMMA.3, DATAFORM and MGG/RWG. These systems simplified the communication of problem instances to the solution algorithms and the generation of a readable report of the results.
An early matrix-generator for LP was developed around 1969 at the Mathematisch Centrum (now CWI), Amsterdam.
Its syntax was very close to the usual mathematical notation, using subscripts and sigmas. Input for the generator consisted of separate sections for the model and the data. It found users at universities and in industry; the main industrial user was the steel maker Hoogovens (now Tata Steel), where it was used for nearly 25 years.
A big step towards modern modelling languages came with UIMP, where the structure of mathematical programming models taken from real life was analyzed for the first time, to highlight the natural grouping of variables and constraints arising from such models. This led to data-structure features which supported structured modelling; in this paradigm, all the input and output tables, together with the decision variables, are defined in terms of these structures, in a way comparable to the use of subscripts and sets.
This is probably the single most notable feature common to all modern AMLs. It enabled, in time, a separation between the model structure and its data, and a correspondence between the entities in an MP model and data in relational databases. A model could thus be instantiated and solved over different datasets simply by supplying different data.
The correspondence between modelling entities and relational data models then made it possible to seamlessly generate model instances by fetching data from corporate databases.
This feature now accounts for much of the usability of optimization in real-life applications, and is supported by most well-known modelling languages.
While algebraic modelling languages were typically isolated, specialized and commercial languages, more recently algebraic modelling languages started to appear in the form of open-source, specialized libraries within a general-purpose language, like Gekko or Pyomo for Python or JuMP for the Julia language.
== Notable AMLs ==
=== Specialized AMLs ===
AIMMS
AMPL
GAMS
MathProg
MiniZinc
=== AML Packages in Generic Programming Languages ===
FlopC++ for C++
OptimJ for Java
JuMP for Julia
GBOML for Python
Pyomo for Python
== References ==
Object–role modeling (ORM) is used to model the semantics of a universe of discourse. ORM is often used for data modeling and software engineering.
An object–role model uses graphical symbols based on first-order predicate logic and set theory to enable the modeler to create an unambiguous definition of an arbitrary universe of discourse. Being attribute-free, the predicates of an ORM model lend themselves to the analysis and design of graph database models, even though ORM was originally conceived to benefit relational database design.
The term "object–role model" was coined in the 1970s, and ORM-based tools have been used for more than 30 years, principally for data modeling. More recently, ORM has been used to model business rules, XML schemas, data warehouses, requirements engineering and web forms.
== History ==
The roots of ORM can be traced to research into semantic modeling for information systems in Europe during the 1970s. There were many pioneers and this short summary does not by any means mention them all. An early contribution came in 1973 when Michael Senko wrote about "data structuring" in the IBM Systems Journal. In 1974 Jean-Raymond Abrial contributed an article about "Data Semantics". In June 1975, Eckhard Falkenberg's doctoral thesis was published and in 1976 one of Falkenberg's papers mentions the term "object–role model".
G.M. Nijssen made fundamental contributions by introducing the "circle-box" notation for object types and roles, and by formulating the first version of the conceptual schema design procedure. Robert Meersman extended the approach by adding subtyping, and introducing the first truly conceptual query language.
Object-role modeling also evolved from the Natural language Information Analysis Method, a methodology initially developed by the academic researcher G.M. Nijssen in the Netherlands in the mid-1970s with his research team at the Control Data Corporation Research Laboratory in Belgium, and later at the University of Queensland, Australia, in the 1980s. The acronym NIAM originally stood for "Nijssen's Information Analysis Methodology", and was later generalised to "Natural language Information Analysis Methodology" and binary relationship modeling, since G. M. Nijssen was only one of many people involved in the development of the method.
In 1989, Terry Halpin completed his PhD thesis on ORM, providing the first full formalization of the approach and incorporating several extensions.
Also in 1989, Terry Halpin and G.M. Nijssen co-authored the book "Conceptual Schema and Relational Database Design" and several joint papers, providing the first formalization of object–role modeling.
A graphical NIAM design tool which included the ability to generate database-creation scripts for Oracle, DB2 and DBQ was developed in the early 1990s in Paris. It was originally named Genesys and was marketed successfully in France and later Canada. It could also handle ER diagram design. It was ported to SCO Unix, SunOS, DEC 3151s and Windows 3.0 platforms, and was later migrated to succeeding Microsoft operating systems, utilising XVT for cross-operating-system graphical portability. The tool was renamed OORIANE and is currently being used for large data warehouse and SOA projects.
Also evolving from NIAM is "Fully Communication Oriented Information Modeling" (FCO-IM, 1992). It distinguishes itself from traditional ORM in that it takes a strictly communication-oriented perspective: rather than attempting to model the domain and its essential concepts, it models the communication in that domain (universe of discourse). Another important difference is that it does this at the instance level, deriving the type level and the object/fact level during analysis.
Another recent development is the use of ORM in combination with standardised relation types with associated roles and a standard machine-readable dictionary and taxonomy of concepts as are provided in the Gellish English dictionary. Standardisation of relation types (fact types), roles and concepts enables increased possibilities for model integration and model reuse.
== Concepts ==
=== Facts ===
Object–role models are based on elementary facts, and expressed in diagrams that can be verbalised into natural language. A fact is a proposition such as "John Smith was hired on 5 January 1995" or "Mary Jones was hired on 3 March 2010".
With ORM, propositions such as these are abstracted into "fact types", for example "Person was hired on Date", and the individual propositions are regarded as sample data. The difference between a "fact" and an "elementary fact" is that an elementary fact cannot be simplified without loss of meaning. This "fact-based" approach facilitates modeling, transforming, and querying information from any domain.
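The relationship between a fact type and its sample population, including verbalisation back into natural language, can be sketched as follows (a hypothetical illustration in Python, not part of any ORM tool):

```python
# Invented illustration: elementary facts as instances of a fact type,
# verbalised back into natural language.

fact_type = "Person was hired on Date"   # the abstraction
facts = [
    ("John Smith", "5 January 1995"),    # sample population
    ("Mary Jones", "3 March 2010"),
]

def verbalise(fact_type, fact):
    """Replace each placeholder in the fact type with a value, in order."""
    sentence = fact_type
    for placeholder, value in zip(("Person", "Date"), fact):
        sentence = sentence.replace(placeholder, value, 1)
    return sentence

for fact in facts:
    print(verbalise(fact_type, fact))
# John Smith was hired on 5 January 1995
# Mary Jones was hired on 3 March 2010
```

This round-trip between diagrams, fact types, and natural-language sentences is what makes the fact-based approach verbalisable.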
=== Attribute-free ===
ORM is attribute-free: unlike models in the entity–relationship (ER) and Unified Modeling Language (UML) methods, ORM treats all elementary facts as relationships and so treats decisions for grouping facts into structures (e.g. attribute-based entity types, classes, relation schemes, XML schemas) as implementation concerns irrelevant to semantics. By avoiding attributes, ORM improves semantic stability and enables verbalization into natural language.
=== Fact-based modeling ===
Fact-based modeling includes procedures for mapping facts to attribute-based structures, such as those of ER or UML.
Fact-based textual representations are based on formal subsets of native languages. ORM proponents argue that ORM models are easier to understand by people without a technical education. For example, proponents argue that object–role models are easier to understand than declarative languages such as Object Constraint Language (OCL) and other graphical languages such as UML class models. Fact-based graphical notations are more expressive than those of ER and UML. An object–role model can be automatically mapped to relational and deductive databases (such as datalog).
=== ORM 2 graphical notation ===
ORM2 is the latest generation of object–role modeling. The main objectives for the ORM 2 graphical notation are:
More compact display of ORM models without compromising clarity
Improved internationalization (e.g. avoid English language symbols)
Simplified drawing rules to facilitate creation of a graphical editor
Extended use of views for selectively displaying/suppressing detail
Support for new features (e.g. role path delineation, closure aspects, modalities)
=== Design procedure ===
System development typically involves several stages such as: feasibility study; requirements analysis; conceptual design of data and operations; logical design; external design; prototyping; internal design and implementation; testing and validation; and maintenance. The seven steps of the conceptual schema design procedure are:
Transform familiar information examples into elementary facts, and apply quality checks
Draw the fact types, and apply a population check
Check for entity types that should be combined, and note any arithmetic derivations
Add uniqueness constraints, and check arity of fact types
Add mandatory role constraints, and check for logical derivations
Add value, set comparison and subtyping constraints
Add other constraints and perform final checks
ORM's conceptual schema design procedure (CSDP) focuses on the analysis and design of data.
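One of the quality checks above, applying a uniqueness constraint to a sample population, can be sketched as follows (a hypothetical Python illustration; the facts and the constrained role are invented for the example):

```python
# Invented illustration of a CSDP population check: a uniqueness
# constraint over the "Person was hired on Date" fact type, here
# requiring each Person to appear at most once.

population = [
    ("John Smith", "5 January 1995"),
    ("Mary Jones", "3 March 2010"),
    ("John Smith", "1 June 2001"),   # violates uniqueness on Person
]

def check_uniqueness(population, role_index):
    """Return values of the constrained role that occur more than once."""
    seen, duplicates = set(), set()
    for fact in population:
        value = fact[role_index]
        if value in seen:
            duplicates.add(value)
        seen.add(value)
    return duplicates

print(check_uniqueness(population, 0))   # {'John Smith'}
```

Checking a proposed constraint against sample data in this way is how the procedure catches constraints that are too strong or too weak before the schema is finalised.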
== See also ==
Concept map
Conceptual schema
Enhanced entity–relationship model (EER)
Information flow diagram
Ontology double articulation
Ontology engineering
Relational algebra
Three-schema approach
== References ==
== Further reading ==
Halpin, Terry (1989), Conceptual Schema and Relational Database Design, Sydney: Prentice Hall, ISBN 978-0-13-167263-5
Rossi, Matti; Siau, Keng (April 2001), Information Modeling in the New Millennium, IGI Global, ISBN 978-1-878289-77-3
Halpin, Terry; Evans, Ken; Hallock, Pat; Maclean, Bill (September 2003), Database Modeling with Microsoft Visio for Enterprise Architects, Morgan Kaufmann, ISBN 978-1-55860-919-8
Halpin, Terry; Morgan, Tony (March 2008), Information Modeling and Relational Databases: From Conceptual Analysis to Logical Design (2nd ed.), Morgan Kaufmann, ISBN 978-0-12-373568-3
== External links == | Wikipedia/Object-Role_Modeling |
Visual modeling is the graphic representation of objects and systems of interest using graphical languages. Visual modeling gives experts and novices a common understanding of otherwise complicated ideas. By using visual models, complex ideas are less constrained by the limits of human comprehension, allowing for greater complexity without a loss of understanding. Visual modeling can also be used to bring a group to a consensus: models help designers communicate ideas effectively, allowing for quicker discussion and an eventual consensus.

Visual modeling languages may be general-purpose modeling (GPM) languages (e.g., UML, Southbeach Notation, IDEF) or domain-specific modeling (DSM) languages (e.g., SysML). Visual modeling in computer science had no standard before the 1990s, and models built with different tools were not comparable until the introduction of UML. Visual modeling languages include industry open standards (e.g., UML, SysML, Modelica) as well as proprietary standards, such as the visual languages associated with VisSim, MATLAB and Simulink, OPNET, NetSim, NI Multisim, and Reactive Blocks.

Both VisSim and Reactive Blocks provide a royalty-free, downloadable viewer that lets anyone open and interactively simulate their models. The community edition of Reactive Blocks also allows full editing of the models as well as compilation, as long as the work is published under the Eclipse Public License. Visual modeling languages are an area of active research that continues to evolve, as evidenced by increasing interest in DSM languages, visual requirements, and visual OWL (Web Ontology Language).
== See also ==
Service-oriented modeling
Domain-specific modeling
Model-driven engineering
Modeling language
== References ==
== External links ==
Visual Modeling Forum, a web community dedicated to visual modeling languages and tools.
Service Modeling Language (SML) and Service Modeling Language Interchange Format (SML-IF) are a pair of XML-based specifications created by leading information technology companies that define a set of XML instance document extensions for expressing links between elements, a set of XML Schema extensions for constraining those links, and a way to associate Schematron rules with global element declarations, global complex type definitions, and/or model documents. The SML specification defines model concepts, and the SML-IF specification describes a packaging format for exchanging SML-based models.
SML and SML-IF were standardized in a W3C working group chartered to produce W3C Recommendations for the Service Modeling Language by refining the "Service Modeling Language" (SML) Member Submission, addressing implementation experience and feedback on the specifications. The submission came from an industry group consisting of representatives from BEA Systems, BMC, CA, Cisco, Dell, EMC, HP, IBM, Intel, Microsoft, and Sun Microsystems. The specifications were published as W3C Recommendations on May 12, 2009. In the market and in vendor adoption, SML is seen as a successor to and replacement for earlier standards such as DCML and Microsoft's (in hindsight proprietary) System Definition Model (SDM); the joint press release announcing SML noted, in its Microsoft section, SML's sequel role to SDM.
== Fast Formal Facts about SML ==
SML is a language for building a rich set of constructs for creating and constraining models of complex IT services and systems. SML-based models could include information about configuration, deployment, monitoring, policy, health, capacity planning, target operating range, service level agreements, and so on.
An SML model is a set of interrelated XML documents. An SML model could contain information about the parts of an IT service, as well as the constraints that each part must satisfy for the IT service to function properly. Constraints are captured in two ways:
XML Schema documents: constrain the structure and content of the XML instance documents in a model. SML uses XML Schema 1.0 but allows later versions as well. SML also defines a set of extensions to XML Schema for constraining references, as well as identity constraints (key, unique, etc.) that apply across sets of documents.
Rule documents: constrain the structure and content of documents in a model. SML uses Schematron and XPath 1.0 for rules, but allows later versions as well.
Once a model is defined, one of the important operations on the model is to establish its validity. This involves checking whether all model documents satisfy the XML Schema and rule document constraints.
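The validity check described above can be sketched in miniature. The snippet below is illustrative only: real SML validation uses XML Schema and Schematron, and the `id`/`ref` attributes here are invented stand-ins for SML's reference mechanism. It treats a model as a set of XML documents and checks one class of constraint, namely that every inter-document reference resolves to a defined element.

```python
# Minimal sketch of SML-style model validity checking (illustrative;
# the "id"/"ref" attributes are invented, not real SML syntax).
import xml.etree.ElementTree as ET

# A hypothetical two-document model: one document references the other.
DOCS = {
    "server.xml": '<server id="web1"><uses ref="db1"/></server>',
    "database.xml": '<database id="db1"/>',
}

def unresolved_refs(docs):
    """Return reference targets that resolve to no id anywhere in the model."""
    roots = [ET.fromstring(text) for text in docs.values()]
    # Collect every id defined in any document of the model.
    ids = {el.get("id") for root in roots
           for el in root.iter() if el.get("id")}
    # The constraint: every ref must point at some defined id.
    return [el.get("ref") for root in roots
            for el in root.iter()
            if el.get("ref") and el.get("ref") not in ids]

print("model valid:", not unresolved_refs(DOCS))  # model valid: True
```

In a real SML processor this cross-document check is expressed declaratively through SML's reference and identity-constraint extensions rather than hand-written code.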
== SML-Based Models ==
Models provide value in several important ways:
Models focus on capturing all invariant aspects of a service/system that must be maintained for the service/system to be functional. They capture as much detail as is necessary, and no more.
Models are units of communication and collaboration between designers, implementers, operators, and users; and can easily be shared, tracked, and revision controlled. This is important because complex services are often built and maintained by a variety of people playing different roles.
Models drive modularity, re-use, and standardization. Most real-world complex services and systems are composed of parts that are themselves complex. Re-use and standardization of services/systems and their parts is a key factor in reducing overall production and operation cost and in increasing reliability.
Models represent a powerful mechanism for validating changes before applying them to a service/system. Also, when changes happen in a running service/system, they can be validated against the intended state described in the model. The actual service/system and its model together enable a self-healing service/system – the ultimate objective. To create this control loop, the model of the intended state necessarily stays distinct from the live service/system against which it is compared.
Models enable increased automation of management tasks. Automation facilities exposed by the majority of IT services/systems today could be driven by software – not people – for reliable initial realization of a service/system as well as for ongoing lifecycle management.
== References ==
== External links ==
W3C Service Modeling Language Working Group home page
W3C public working drafts of SML/SML-IF specification
Extended Enterprise Modeling Language (EEML) in software engineering is a modelling language used for enterprise modelling across a number of layers.
== Overview ==
Extended Enterprise Modeling Language (EEML) is a modelling language which combines structural modelling, business process modelling, goal modelling with goal hierarchies, and resource modelling. It was intended to bridge the gap between goal modelling and other modelling approaches. According to Johannesson and Söderström (2008), "the process logic in EEML is mainly expressed through nested structures of tasks and decision points. The sequencing of tasks is expressed by the flow relation between decision points. Each task has an input port and an output port, which are decision points for modeling process logic".
EEML was designed as a simple language, making it easy to update models. In addition to capturing tasks and their interdependencies, models show which roles perform each task, and the tools, services and information they apply.
== History ==
Extended Enterprise Modeling Language (EEML) is from the late 1990s, developed in the EU project EXTERNAL as extension of the Action Port Model (APM) by S. Carlsen (1998). The EXTERNAL project aimed to "facilitate inter-organisational cooperation in knowledge intensive industries. The project worked on the hypothesis that interactive process models form a suitable framework for tools and methodologies for dynamically networked organisations. In the project EEML (Extended Enterprise Modelling Language) was first constructed as a common metamodel, designed to enable syntactic and semantic interoperability".
It was further developed in the EU projects Unified Enterprise Modelling Language (UEML) from 2002 to 2003 and the ongoing ATHENA project.
The objectives of the UEML Working group were to "define, to validate and to disseminate a set of core language constructs to support a Unified Language for Enterprise Modelling, named UEML, to serve as a basis for interoperability within a smart organisation or a network of enterprises".
== Topics ==
=== Modeling domains ===
The EEML-language is divided into 4 sub-languages, with well-defined links across these languages:
Process modelling
Data modelling
Resource modelling
Goal modelling
Process modelling in EEML, according to Krogstie (2006), "supports the modeling of process logic which is mainly expressed through nested structures of tasks and decision points. The sequencing of the tasks is expressed by the flow relation between decision points. Each task has at minimum an input port and an output port, which are decision points for modeling process logic. Resource roles are used to connect resources of various kinds (persons, organisations, information, material objects, software tools and manual tools) to the tasks. In addition, data modeling (using UML class diagrams), goal modeling and competency modeling (skill requirements and skills possessed) can be integrated with the process models".
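The task-and-decision-point structure described above can be sketched as data. EEML itself is a graphical language, so the Python encoding below (class names, port identifiers, task names) is entirely invented for illustration: tasks carry an input and an output decision point, and the flow relation between decision points determines sequencing.

```python
# Illustrative sketch of EEML-style process logic; all names are invented.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    input_port: str   # in EEML, a decision point entering the task
    output_port: str  # in EEML, a decision point leaving the task

# A linear flow: each task's output decision point feeds the next task.
tasks = [Task("Receive order", "dp1", "dp2"),
         Task("Check stock", "dp2", "dp3"),
         Task("Ship goods", "dp3", "dp4")]

def successors(task, tasks):
    """Tasks whose input decision point is this task's output decision point."""
    return [t.name for t in tasks if t.input_port == task.output_port]

print(successors(tasks[0], tasks))  # ['Check stock']
```

In a full EEML model the decision points could also branch, and resource roles would attach persons, tools and information to each task.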
=== Layers ===
EEML has four layers of interest:
Generic Task Type: This layer identifies the constituent tasks of generic, repetitive processes and the logical dependencies between these tasks.
Specific Task Type: At this layer, we deal with process modelling at a different scale, more linked to the concretisation, decomposition and specialisation phases. Here process models are expanded and elaborated to facilitate business solutions. From an integration viewpoint, this layer aims at uncovering more efficiently the dependencies between the sub-activities, with regard to the resources required for actual performance.
Manage Task Instances: The purpose of this layer is to provide constraints as well as useful resources (in the form of process templates) for the planning and performance of an enterprise process. The performance of organisational, information, and tool resources in their environment is highlighted through concrete resource-allocation management.
Perform Task Instances: This layer covers the actual execution of tasks, with regard to issues of empowerment and decentralisation. At this layer, resources are utilised or consumed in an exclusive or shared manner.
These layers are tied together through a further layer called Manage Task Knowledge, which achieves global interaction across the different layers by maintaining consistency between them. According to the EEML 2005 Guide, Manage Task Knowledge can be defined as the collection of processes necessary for the innovation, dissemination, and exploitation of knowledge in a co-operating ensemble, where knowledge seekers and knowledge sources interact by means of a shared knowledge base.
=== Goal modelling ===
Goal modelling is one of the four EEML modelling domains. A goal expresses the wanted (or unwanted) state of affairs (either current or future) in a certain context. An example of a goal model is depicted below. It shows goals and the relationships between them. It is possible to model advanced goal relationships in EEML by using goal connectors. A goal connector is used when one needs to link several goals.
In goal modelling, to fulfil Goal1 one must achieve two other goals: both Goal2 and Goal3 (a goal connector with "and" as the logical relation going out). If Goal2 and Goal3 are two alternative ways of achieving Goal1, the relationship should instead be "xor". The opposite situation is also possible: both Goal2 and Goal3 need to be fulfilled, and to achieve them one must fulfil Goal1. In this case Goal2 and Goal3 are linked to a goal connector, and this goal connector has a link to Goal1 with an "and" logical relationship.
The table indicates the different types of connecting relationships in EEML goal modelling. A goal model can also be interlinked with a process model.
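The connector logic above lends itself to a small executable sketch. EEML expresses this graphically; the dictionary encoding and goal names below are invented purely for illustration, with an "and" connector requiring all subgoals and an "xor" connector requiring exactly one.

```python
# Hedged sketch of EEML goal connectors ("and"/"xor"); the data
# structures are invented, not part of EEML itself.
def fulfilled(goal, achieved, connectors):
    """connectors maps a goal to (logic, subgoals); leaf goals are absent."""
    if goal not in connectors:
        return goal in achieved          # a leaf goal is simply achieved or not
    logic, subgoals = connectors[goal]
    hits = sum(fulfilled(g, achieved, connectors) for g in subgoals)
    # "and": every subgoal fulfilled; "xor": exactly one subgoal fulfilled.
    return hits == len(subgoals) if logic == "and" else hits == 1

connectors = {"Goal1": ("and", ["Goal2", "Goal3"])}
print(fulfilled("Goal1", {"Goal2", "Goal3"}, connectors))  # True
```

Swapping `"and"` for `"xor"` in the connector models the alternative-ways-of-achieving case from the paragraph above.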
== Goal and process oriented modelling ==
We can describe process models as models that comprise a set of activities, where an activity can be decomposed into sub-activities; these activities have relationships among themselves. A goal describes the expected state of operation in a business enterprise. It can be linked to a whole process model or to a process-model fragment, and each activity at any level of a process model can be considered a goal.
Goals are related hierarchically: some goals depend on sub-goals, all of which must be achieved for the main goal to be achieved, while for other goals only one of the sub-goals needs to be fulfilled. Goal modelling uses a deontic operator, which falls between the context and the achieved state. Goals apply to tasks, milestones, resource roles and resources as well, and can be considered an action rule for a task. Rules could also be modelled in EEML, although goal modelling requires considerably more consultation to find the connections between rules at the different levels. Goal-oriented analysis focuses on the description and evaluation of alternatives and their relationship to the organisational objectives.
== Resource modeling ==
Resources have specific roles during the execution of various processes in an organisation. The following icons represent the various resources required in modelling.
The relations of these resources can be of different types:
a. Is Filled By – the assignment relation between roles and resources; one role can be filled by many resources (one-to-many).
b. Is Candidate For – indicates that a resource is a possible filler of the role.
c. Has Member – a relation between an organisation and a person, denoting that the person is a member of the organisation (many-to-many).
d. Provide Support To – a support pattern between resources and roles.
e. Communicates With – a communication pattern between resources and roles.
f. Has Supervision Over – shows which role or resource supervises another role or resource.
g. Is Rating Of – describes the relation between a skill and a person or organisation.
h. Is Required By – the primary skill required by a role.
i. Has Access To – captures access rights in the created models.
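Two of the relations from the list above can be sketched to make their cardinalities concrete. The Python encoding (dictionaries, tuples, names) is invented purely for illustration; EEML represents these relations graphically.

```python
# Sketch of two EEML resource relations and their cardinalities;
# the data structures and names are invented for illustration.
roles = {}           # "Is Filled By": role -> resources (one-to-many)
memberships = set()  # "Has Member": organisation <-> person (many-to-many)

def fill_role(role, resource):
    """One role may be filled by many resources."""
    roles.setdefault(role, []).append(resource)

def add_member(organisation, person):
    """A person may belong to many organisations, and vice versa."""
    memberships.add((organisation, person))

fill_role("Project manager", "Alice")
fill_role("Project manager", "Bob")   # one role, several fillers
add_member("ACME", "Alice")
add_member("Globex", "Alice")         # one person, several organisations
print(roles["Project manager"])       # ['Alice', 'Bob']
```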
== Benefits ==
From a general point of view, EEML can be used like any other modelling language in numerous cases. However, we can highlight the virtual enterprise example, which can be considered a direct field of application for EEML with regard to Extended Enterprise planning, operation, and management.
Knowledge sharing: Create and maintain a shared understanding of the scope and purpose of the enterprise, as well as viewpoints on how to fulfil the purpose.
Dynamically networked organisations: Make knowledge as available as possible within the organisation.
Heterogeneous infrastructures: Achieve a relevant knowledge sharing process through heterogeneous infrastructures.
Process knowledge management: Integrate the different business processes levels of abstraction.
Motivation: Create enthusiasm and commitment among members of an organisation to follow up on the various actions that are necessary to restructure the enterprise.
EEML can help organisations meet these challenges by modelling all the manufacturing and logistics processes in the extended enterprise. This model allows capturing a rich set of relationships between the organisation, people, processes and resources of the virtual enterprise. It also aims at making people understand, communicate, develop and cultivate solutions to business problems.
According to J. Krogstie (2008), enterprise models can be created to serve various purposes which include:
Human sense making and communication – the main purpose of enterprise modelling is to make sense of the real-world aspects of an enterprise in order to facilitate communication with the parties involved.
Computer assisted analysis – the main purpose of enterprise modelling is to gain knowledge about the enterprise through simulation and computation of various parameters.
Model deployment and activation – the main purpose of enterprise modelling is to integrate the model in an enterprise-wide information system and enabling on-line information retrieval and direct work process guidance.
EEML enables Extended Enterprises to build up their operation based on standard processes by allowing the modelling of all actors, processes and tasks in the Extended Enterprise, and thereby to have a clear description of the Extended Enterprise. Finally, the models developed can be used to measure and evaluate the Extended Enterprise.
== See also ==
i*
Modeling language
Semantic parameterization
Software design
Software development methodology
== References ==
== Further reading ==
Bolchini, D., Paolini, P.: "Goal-Driven Requirements Analysis for Hypermedia-intensive Web Applications", Requirements Engineering Journal, Springer, RE03 Special Issue (9) 2004: 85-103.
Jørgensen, Håvard D.: "Process-Integrated eLearning"
Kramberg, V.: "Goal-oriented Business Processes with WS-BPEL", Master Thesis, University of Stuttgart, 2008.
John Krogstie (2005). EEML2005: Extended Enterprise Modeling Language
John Krogstie (2001). "A Semiotic Approach to Quality in Requirements Specifications" (Proc. IFIP 8.1) IFIP 8.1. Working Conference on Organizational Semiotics.
Lin Liu, Eric Yu. "Designing information systems in social context: a goal and scenario modelling approach"
== External links ==
Description of EEML
GRL web site, University of Toronto
"The Business Motivation Model: Business Governance in a Volatile World", Release 1.3, Business Rules Group, 2007.
General-purpose modeling (GPM) is the systematic use of a general-purpose modeling language to represent the various facets of an object or a system. Examples of GPM languages are:
The Unified Modeling Language (UML), an industry standard for modeling software-intensive systems
EXPRESS, a data modeling language for product data, standardized as ISO 10303-11
IDEF, a group of languages from the 1970s that aimed to be neutral, generic and reusable
Gellish, an industry standard natural language oriented modeling language for storage and exchange of data and knowledge, published in 2005
XML, a data modeling language now beginning to be used to model code (e.g. MetaL, Microsoft .NET)
GPM languages are in contrast with domain-specific modeling languages (DSMs).
== See also ==
Model-driven engineering (MDE)
A transformation language is a computer language designed to transform some input text in a certain formal language into a modified output text that meets some specific goal.
Program transformation systems such as Stratego/XT, TXL, Tom, DMS, and ASF+SDF all have transformation languages as a major component. The transformation languages for these systems are driven by declarative descriptions of the structure of the input text (typically a grammar), allowing them to be applied to a wide variety of formal languages and documents.
Macro languages are a kind of transformation language, transforming a metalanguage into a specific higher-level programming language such as Java, C++ or Fortran, or into a lower-level assembly language.
In the model-driven engineering technical space, there are model transformation languages (MTLs) that take as input models conforming to a given metamodel and produce as output models conforming to a different metamodel. An example of such a language is the QVT OMG standard.
There are also low-level languages such as the Lx family, implemented by the bootstrapping method. The L0 language may be considered an assembler for transformation languages. There is also a high-level graphical language built upon Lx, called MOLA.
There are a number of XML transformation languages. These include Tritium, XSLT, XQuery, STX, FXT, XDuce, CDuce, HaXml, XMLambda, and FleXML.
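The core idea shared by these languages, namely declaratively matching parts of an input tree and emitting a transformed output tree, can be sketched in a few lines. The example below is illustrative only: a real system would use XSLT or XQuery, and the element names here are invented.

```python
# The essence of an XML transformation: match elements in the input
# tree, emit a rewritten output tree. (Illustrative; not real XSLT.)
import xml.etree.ElementTree as ET

def transform(src_xml):
    """Rewrite every <item name="..."/> into a <li> entry of a <ul>."""
    src = ET.fromstring(src_xml)
    out = ET.Element("ul")
    for item in src.iter("item"):       # "template rule": match each item
        li = ET.SubElement(out, "li")
        li.text = item.get("name")
    return ET.tostring(out, encoding="unicode")

print(transform('<list><item name="a"/><item name="b"/></list>'))
# <ul><li>a</li><li>b</li></ul>
```

An XSLT stylesheet expresses the same match-and-emit rule declaratively, without the explicit loop.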
== See also ==
== References ==
A framework-specific modeling language (FSML) is a kind of domain-specific modeling language which is designed for an object-oriented application framework.
FSMLs define framework-provided abstractions as FSML concepts and decompose the abstractions into features. The features represent implementation steps or choices.
An FSML concept can be configured by selecting features and providing values for them.
Such a concept configuration represents how the concept should be implemented in the code. In other words, a concept configuration describes how the framework should be completed in order to create the implementation of the concept.
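The mapping from a concept configuration to framework-completion code can be sketched as follows. Everything here (the `completion_code` helper, the `FrameworkView` base class, the feature names) is invented for illustration; real FSML tooling generates completion code for a concrete framework such as Eclipse.

```python
# Sketch of the FSML idea: a configured concept (selected features plus
# values) maps to framework-completion code. All names are invented.
def completion_code(concept, features):
    """Emit a code skeleton implementing the configured concept."""
    lines = [f"class My{concept}(Framework{concept}):"]
    for name, value in features.items():
        # Each selected feature becomes an implementation choice in code.
        lines.append(f"    {name} = {value!r}")
    return "\n".join(lines)

print(completion_code("View", {"title": "Tasks", "closeable": True}))
```

Reverse engineering runs the same mapping backwards: parsing completion code recovers which features were selected.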
== Applications ==
FSMLs are used in model-driven development for creating models or specifications of software to be built.
FSMLs enable
the creation of the models from the framework completion code (that is, automated reverse engineering)
the creation of the framework completion code from the models (that is, automated forward engineering)
code verification through constraint checking on the model
automated round-trip engineering
== Examples ==
Eclipse Workbench Part Interaction FSML
An example FSML for modeling Eclipse Parts (that is, editors and views) and Part Interactions (for example listens to parts, requires adapter, provides selection).
The prototype implementation supports automated round-trip engineering of Eclipse plug-ins that implement workbench parts and part interactions.
== See also ==
General-purpose modeling (GPM)
Model-driven engineering (MDE)
Domain-specific language (DSL)
Model-driven architecture (MDA)
Meta-Object Facility (MOF)
== References ==
Rebeca (acronym for Reactive Objects Language) is an actor-based modeling language with a formal foundation, designed in an effort to bridge the gap between formal verification approaches and real applications. It can be considered as a reference model for concurrent computation, based on an operational interpretation of the actor model. It is also a platform for developing object-based concurrent systems in practice.
Besides having an appropriate and efficient way for modeling concurrent and distributed systems, one needs a formal verification approach to ensure their correctness. Rebeca is supported by a set of verification tools. Earlier tools provided a front-end to work with Rebeca code, and to translate the Rebeca code into input languages of well-known and mature model checkers (like SPIN and NuSMV) and thus, were able to verify their properties.
Since 2005, Rebeca has been supported by a direct model checker, Modere (the Model checking Engine of Rebeca).
Modular verification and abstraction techniques are used to reduce the state space and make it possible to verify complicated reactive systems.
Besides these techniques, Modere supports partial order reduction and symmetry reduction.
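The actor-model foundation described above can be sketched in miniature: each reactive object (rebec) owns its state and a message queue, and reacts to each message by running the corresponding message server. The class and method names below are invented; this is not Rebeca syntax, only an illustration of the computational model.

```python
# Illustrative actor-style sketch in the spirit of Rebeca's reactive
# objects; names are invented and this is not Rebeca syntax.
from collections import deque

class Rebec:
    def __init__(self, name):
        self.name = name
        self.queue = deque()   # each rebec has its own message queue
        self.count = 0         # private state; no shared variables

    def send(self, msg):
        """Asynchronous message passing: just enqueue, never block."""
        self.queue.append(msg)

    def step(self):
        """Take one message and run the matching 'message server'."""
        if self.queue:
            msg = self.queue.popleft()
            if msg == "ping":
                self.count += 1

r = Rebec("r1")
r.send("ping"); r.send("ping")
while r.queue:
    r.step()
print(r.count)  # 2
```

Model checking a Rebeca model explores all interleavings of such steps across rebecs, which is where reduction techniques like partial order and symmetry reduction pay off.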
== See also ==
Formal methods
Model checking
SPIN model checker
== References ==
M. Sirjani. Formal Specification and Verification of Concurrent and Reactive Systems, PhD Thesis, Department of Computer Engineering, Sharif University of Technology, December 2004.
M. Sirjani, A. Movaghar. An Object-Based Model for Agents, in Proceedings of Workshop on Agents for Information Management, Austrian Computer Society, October 2002.
== External links ==
Rebeca Home Page
Formal Methods Laboratory, University of Tehran
Business process modeling (BPM) is the action of capturing and representing processes of an enterprise (i.e. modeling them), so that the current business processes may be analyzed, applied securely and consistently, improved, and automated.
BPM is typically performed by business analysts, with subject matter experts collaborating with these teams to accurately model processes. It is primarily used in business process management, software development, or systems engineering.
Alternatively, process models can be directly modeled from IT systems, such as event logs.
== Overview ==
According to the Association of Business Process Management Professionals (ABPMP), business process modeling is one of the five key disciplines within Business Process Management (BPM) (Chapter 1.4, CBOK® structure). The five disciplines are:
Process modeling: Creating visual or structured representations of business processes to better understand how they work.
Process analysis: Understanding the as-is processes and their alignment with the company's objectives – analysis of business activities.
Process design: Redesign of business processes, whether radical (business process reengineering) or incremental (business process optimization).
Process performance measurement: Can focus on the factors of time, cost, capacity, and quality, or on the overarching view of waste.
Process transformation: Planned, structured development, technical realization, and transfer to ongoing operations.
However, these disciplines cannot be considered in isolation: Business process modeling always requires a business process analysis for modeling the as-is processes (see section Analysis of business activities) or specifications from process design for modeling the to-be processes (see sections Business process reengineering and Business process optimization).
The focus of business process modeling is on the representation of the flow of actions (activities), according to Hermann J. Schmelzer and Wolfgang Sesselmann consisting "of the cross-functional identification of value-adding activities that generate specific services expected by the customer and whose results have strategic significance for the company. They can extend beyond company boundaries and involve activities of customers, suppliers, or even competitors." (Chapter 2.1, Differences between processes and business processes)
Other qualities (facts) can also be modeled, such as data and business objects (as inputs/outputs), formal organizations and roles (responsible/accountable/consulted/informed persons; see RACI), resources and IT systems, guidelines/instructions (work equipment), requirements, key figures, etc.
Incorporating more of these characteristics into business process modeling enhances the accuracy of abstraction but also increases model complexity. "To reduce complexity and improve the comprehensibility and transparency of the models, the use of a view concept is recommended." (Chapter 2.4, Views of process modeling) There is also a brief comparison of the view concepts of five relevant German-speaking schools of business informatics: 1) August W. Scheer, 2) Hubert Österle, 3) Otto K. Ferstl and Elmar J. Sinz, 4) Hermann Gehring and 5) Andreas Gadatsch.
The term views (August W. Scheer, Otto K. Ferstl and Elmar J. Sinz, Hermann Gehring and Andreas Gadatsch) is not used uniformly in all schools of business informatics – alternative terms are design dimensions (Hubert Österle) or perspectives (Zachman).
M. Rosemann, A. Schwegmann, and P. Delfmann also see disadvantages in the concept of views: "It is conceivable to create information models for each perspective separately and thus partially redundantly. However, redundancies always mean increased maintenance effort and jeopardize the consistency of the models." (Chapter 3.2.1, Relevant perspectives on process models)
According to Andreas Gadatsch, business process modeling is understood as a part of business process management, alongside process definition and process management (Chapter 1.1, Process management).
Business process modeling is also a central aspect of holistic company mapping – which also deals with the mapping of the corporate mission statement, corporate policy/corporate governance, organizational structure, process organization, application architecture, regulations and interest groups as well as the market.
According to the European Association of Business Process Management EABPM, there are three different types of end-to-end business processes:
Leadership processes;
Execution processes and
Support processes (Chapter 2.4, Process types).
These three process types can be identified in every company and are used in practice almost without exception as the top level for structuring business process models. Instead of the term leadership processes, the term management processes is typically used; instead of execution processes, the term core processes has become widely accepted (Chapter 6.2.1, Objectives and concept; Chapter 1.3, The concept of process; Chapter 4.12.2, Differentiation between core and support objectives; Chapter 6.2.2, Identification and rough draft).
If the core processes are then organized/decomposed at the next level in supply chain management (SCM), customer relationship management (CRM), and product lifecycle management (PLM), standard models of large organizations and industry associations such as the SCOR model can also be integrated into business process modeling.
== History ==
Techniques to model business processes such as the flow chart, functional flow block diagram, control flow diagram, Gantt chart, PERT diagram, and IDEF have emerged since the beginning of the 20th century. Gantt charts were among the first to arrive, around 1899; flow charts followed in the 1920s, functional flow block diagrams and PERT in the 1950s, and data-flow diagrams and IDEF in the 1970s. Among the modern methods are the Unified Modeling Language and Business Process Model and Notation. Still, these represent just a fraction of the methodologies used over the years to document business processes. The term business process modeling was coined in the 1960s in the field of systems engineering by S. Williams in his 1967 article "Business Process Modelling Improves Administrative Control". His idea was that techniques for obtaining a better understanding of physical control systems could be used in a similar way for business processes. It was not until the 1990s that the term became popular.
In the 1990s, the term process became a new productivity paradigm. Companies were encouraged to think in processes instead of functions and procedures. Process thinking looks at the chain of events in the company from purchase to supply, from order retrieval to sales, etc. The traditional modeling tools were developed to illustrate time and cost, while modern tools focus on cross-functional activities. These cross-functional activities have increased significantly in number and importance, due to the growth of complexity and dependence. New methodologies include business process redesign, business process innovation, business process management, integrated business planning, among others, all "aiming at improving processes across the traditional functions that comprise a company".
In the field of software engineering, the term business process modeling opposed the common software process modeling, aiming to focus more on the state of the practice during software development. In that time (the early 1990s) all existing and new modeling techniques to illustrate business processes were consolidated as 'business process modeling languages'. In the Object Oriented approach, it was considered to be an essential step in the specification of business application systems. Business process modeling became the base of new methodologies, for instance, those that supported data collection, data flow analysis, process flow diagrams, and reporting facilities. Around 1995, the first visually oriented tools for business process modeling and implementation were presented.
== Objectives of business process modeling ==
The objective of business process modeling is a – usually graphical – representation of end-to-end processes, whereby complex facts of reality are documented using a uniform (systematized) representation and reduced to the substantial (qualities). Regulatory requirements for the documentation of processes often also play a role here (e.g. document control, traceability, or integrity), for example from quality management, information security management or data protection.
Business process modeling typically begins with determining the environmental requirements: First, the goal of the modeling (the applications of business process modeling) must be determined; business process models are now often used in a multifunctional way (see above). Second, the model addressees must be determined, since the properties of the model to be created must meet their requirements. This is followed by the determination of the business processes to be modeled.
The qualities of the business process that are to be represented in the model are specified in accordance with the goal of the modeling. As a rule, these are not only the functions constituting the process, including the relationships between them, but also a number of other qualities, such as formal organization, input, output, resources, information, media, transactions, events, states, conditions, operations and methods.
The objectives of business process modeling may include (compare: Association of Business Process Management Professionals (ABPMP), Chapter 3.1.2, Process characteristics and properties):
Documentation of the company's business processes
to gain knowledge of the business processes
to map business unit(s) with the applicable regulations
to transfer business processes to other locations
to determine the requirements of new business activities
to provide an external framework for the set of rules from procedures and work instructions
to meet the requirements of business partners or associations (e.g. certifications)
to gain advantages over competitors (e.g. in tenders)
to comply with legal regulations (e.g. for operators of critical infrastructures, banks or producers of armaments)
to check the fulfillment of standards and compliance requirements
to create the basis for communication and discussion
to train or familiarize employees
to avoid loss of knowledge (e.g. due to staff leaving)
to support quality and environmental management
Definition of process performance indicators and monitoring of process performance
to increase process speed
to reduce cycle time
to increase quality
to reduce costs, such as labor, materials, scrap, or capital costs
Preparation/Implementation of a business process optimization (which usually begins with an analysis of the current situation)
to support the analysis of the current situation
to develop alternative processes
to introduce new organizational structures
to outsource company tasks
to redesign, streamline, or improve company processes (e.g. with the help of the CMM)
Preparation of an information technology project
to support a software evaluation/software selection
to support the customizing of commercial off-the-shelf software
to introduce automation or IT support with a workflow management system
Definition of interfaces and SLAs
Modularization of company processes
Benchmarking between parts of the company, partners and competitors
Performing activity-based costing and simulations
to understand how the process reacts to different stress scenarios or expected changes
to evaluate the effectiveness of measures for business process optimization and compare alternatives
Finding the best practice
Accompanying organizational changes
such as the sale or partial sale of a company
such as the acquisition and integration of companies or parts of companies
such as the introduction or change of IT systems or organizational structures
Participation in competitions (such as EFQM).
== Applications of business process modeling ==
Since business process modeling in itself makes no direct contribution to a company's financial success, the motivation to engage in it does not follow from the primary corporate goal of making a profit, but always from the respective purpose. Michael Rosemann, Ansgar Schwegmann and Patrick Delfmann list a number of purposes as motivation for business process modeling:
Organizational documentation, with the "objective of increasing transparency about the processes in order to increase the efficiency of communication about the processes" (Chapter 3.2.1 Relevant perspectives on process models), including the ability to create process templates in order to relocate or replicate business functions, or the objective of creating a complete company model (Chapter 2.5.4 Areas of application for process modeling in practice)
Process-oriented re-organization, both in the sense of "(revolutionary) business process re-engineering and in the sense of continual (evolutionary) process improvement" (Chapter 3.2.1), with the objective of a vulnerability assessment (Chapter 2.5.4), process optimization (e.g. by controlling and reducing total cycle time (TCT), through Kaizen, Six Sigma, etc.) or process standardization
Continuous process management, as "planning, implementation and control of processes geared towards sustainability" (Chapter 3.2.1)
Certifications according to ISO 9001 (or also according to ISO 14001, ISO/IEC 27001, etc.)
Benchmarking, defined as the "comparison of company-specific structures and performance with selected internal or external references. In the context of process modeling, this can include the comparison of process models (structural benchmarking) or the comparison of process key figures" (Chapter 3.2.1)
Knowledge management, with the "aim of increasing transparency about the company's knowledge resource in order to improve the process of identifying, acquiring, utilizing, developing and distributing knowledge" (Chapter 3.2.1)
Selection of ERP software, which "often documents its functionality in the form of (software-specific) reference models, so that it makes sense to also use a comparison of the company-specific process models with these software-specific models for software selection" (Chapter 3.2.1, Chapter 2.5.4)
Model-based customization, i.e. "the configuration of commercial off-the-shelf software", often by means of "parameterization of the software through configuration of reference models" (Chapter 3.2.1, Chapter 2.5.4)
Software development, using the processes for "the description of the requirements for the software to be developed at a conceptual level as part of requirements engineering" (Chapter 3.2.1, Chapter 3 The path to a process-oriented application landscape, Chapter 2.5.4)
Workflow management, for which the process models are "the basis for the creation of instantiable workflow models" (Chapter 3.2.1)
Simulation, with the aim of "investigating the system behavior over time" and the "identification of weak points that would not be revealed by a pure model view" (Chapter 3.2.1)
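The simulation purpose named last, "investigating the system behavior over time", can be illustrated with a minimal Monte Carlo sketch. The single-server FIFO queue, the rates and the job counts below are illustrative assumptions, not taken from any of the cited works: compressing the interarrival times shows how average flow times react when a process step comes under load.

```python
import random
from statistics import mean

def simulate_flow_times(n_jobs, mean_interarrival, mean_service, seed=42):
    """Single-server FIFO queue: returns per-job flow times (wait + service)."""
    rng = random.Random(seed)
    clock = 0.0          # arrival time of the current job
    server_free = 0.0    # time at which the server next becomes idle
    flow_times = []
    for _ in range(n_jobs):
        clock += rng.expovariate(1.0 / mean_interarrival)
        start = max(clock, server_free)            # wait if the server is busy
        service = rng.expovariate(1.0 / mean_service)
        server_free = start + service
        flow_times.append(server_free - clock)     # time from arrival to completion
    return flow_times

baseline = mean(simulate_flow_times(2000, mean_interarrival=1.25, mean_service=1.0))
stressed = mean(simulate_flow_times(2000, mean_interarrival=1.05, mean_service=1.0))
print(f"baseline: {baseline:.2f}  stressed: {stressed:.2f}")
```

Because both runs use the same seed, the stressed scenario reuses the same service times, so the increase in average flow time is attributable purely to the tighter arrivals.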
=== Business process re-engineering (BPR) ===
The approach of process re-engineering emerged in the early 1990s from an extensive research program titled "Management in the 1990s", initiated at MIT in 1984. The program was designed to explore the impact of information technology on the way organizations would be able to survive and thrive in the competitive environment of the 1990s and beyond. In the final report, N. Venkatraman summarized the result as follows: the greatest increases in productivity can be achieved when new processes are planned in parallel with information technologies.
This approach was taken up by Thomas H. Davenport (Part I: A Framework For Process Innovation, Chapter: Introduction) as well as Michael M. Hammer and James A. Champy, who developed it into business process re-engineering (BPR) as we understand it today: business processes are fundamentally restructured in order to achieve an improvement in measurable performance indicators such as cost, quality, service and time.
Business process re-engineering has been criticized in part for starting from a "green field" and therefore not being directly implementable for established companies. Hermann J. Schmelzer and Wolfgang Sesselmann assess this as follows: "The criticism of BPR has an academic character in many respects. ... Some of the points of criticism raised are justified from a practical perspective. This includes pointing out that an overly radical approach carries the risk of failure. It is particularly problematic if the organization and employees are not adequately prepared for BPR." (Chapter 6.2.1 Objectives and concept)
The high-level approach to BPR according to Thomas H. Davenport consists of:
Identifying Processes for Innovation
Identifying Change Levers
Developing Process Visions
Understanding Existing Processes
Designing and Prototyping the New Process
=== Certification of the management system according to ISO ===
Since ISO/IEC 27001:2022, the requirements for management systems have been standardized across all major ISO management system standards and have a process character.
==== General standard requirements for management systems with regard to processes ====
In the ISO 9001, ISO 14001 and ISO/IEC 27001 standards, this is anchored in clause 4.4 in each case:
Each of these standards requires the organization to establish, implement, maintain and continually improve an appropriate management system, "including the processes needed and their interactions".
In defining the requirements for the processes needed and their interactions, ISO 9001 is more specific in clause 4.4.1 than any other ISO management system standard: it requires that "the organization shall determine and apply the processes needed for" an appropriate management system throughout the organization, and lists detailed requirements with regard to processes:
Determine the inputs required and the outputs expected
Determine the sequence and interaction
Define and apply the criteria and methods (including monitoring, measurement, and related performance indicators) for effective operation and control
Determine the resources needed
Assign the responsibilities and authorities
Address the risks and opportunities
Evaluate these processes and implement any changes needed for effective operation and control
Improve
In addition, clause 4.4.2 of ISO 9001 lists further detailed requirements with regard to processes:
Maintain documented information to support the operation of the processes
Retain documented information to have confidence that the processes are being carried out as planned
The standard requirements for documented information are also relevant for business process modeling as part of an ISO management system.
==== Specific standard requirements for management systems with regard to documented information ====
In the standards ISO 9001, ISO 14001 and ISO/IEC 27001, the requirements with regard to documented information are anchored in clause 7.5 (detailed in each standard in clauses "7.5.1 General", "7.5.2 Creating and updating" and "7.5.3 Control of documented information").
The standard requirements of ISO 9001, used here as an example, include in clause "7.5.1 General"
Documented information required by the standard; and
Documented information determined by the organization as being necessary for the effectiveness of the management system;
demand in clause "7.5.2 Creating and updating"
Identification and description (e.g. title, date, author or reference number);
Suitable format (e.g. language, software version, graphics) and medium (e.g. paper, electronic); and
Review and approval;
and require in clause "7.5.3 Control of documented information"
To ensure that documented information is available and suitable for use, where and when it is needed;
To ensure that it is adequately protected (e.g. against loss of confidentiality, improper use or loss of integrity);
To address distribution, access, retrieval and use;
To address storage and preservation, including preservation of legibility;
To control changes (e.g. version control); and
To address retention and disposition.
Based on the standard requirements, it is therefore necessary:
To determine and continuously improve the required processes and their interactions
To determine and maintain the content of the documented information deemed necessary and
To ensure the secure handling of documented information (protection, access, monitoring, and maintenance)
Preparing for ISO certification of a management system is a very good opportunity to establish or promote business process modeling in the organization.
=== Business process optimization ===
Hermann J. Schmelzer and Wolfgang Sesselmann point out that the object of improvement in the three methods they cite as examples of process optimization (control and reduction of total cycle time (TCT), Kaizen and Six Sigma) is processes: for total cycle time (TCT) it is the business processes (end-to-end processes) and sub-processes, for Kaizen the process steps and activities, and for Six Sigma the sub-processes, process steps and activities. (Chapter 6.3.1 Total Cycle Time (TCT), KAIZEN and Six Sigma in comparison)
For the total cycle time (TCT), Hermann J. Schmelzer and Wolfgang Sesselmann list the following key features: (Chapter 6.3.2 Total Cycle Time (TCT))
Identify barriers that hinder the process flow
Eliminate barriers and substitute processes
Measure the effects of barrier removal
Comparison of the measured variables with the targets
Consequently, business process modeling for TCT must support adequate documentation of barriers, barrier handling, and measurement.
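What "documentation of barriers, barrier handling, and measurement" could mean in data terms can be sketched as follows. This is a hypothetical structure, not taken from Schmelzer and Sesselmann, and all figures are invented:

```python
from dataclasses import dataclass

@dataclass
class Barrier:
    """One documented flow barrier, with total cycle time measured in days
    before and after its removal (all figures hypothetical)."""
    name: str
    cycle_time_before: float
    cycle_time_after: float

    def effect(self) -> float:
        """Measured cycle-time reduction attributable to removing this barrier."""
        return self.cycle_time_before - self.cycle_time_after

barriers = [
    Barrier("Manual approval loop", 12.0, 8.5),
    Barrier("Batch handover to QA", 8.5, 6.0),
]

target = 7.0                                  # target total cycle time in days
achieved = barriers[-1].cycle_time_after      # cycle time after the last removal
print(f"reduced by {sum(b.effect() for b in barriers):.1f} days; "
      f"target {'met' if achieved <= target else 'missed'}")
```

Comparing `achieved` against `target` corresponds to the fourth TCT feature above, the comparison of measured variables with the targets.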
When examining Kaizen tools, there is initially no direct connection to business processes or business process modeling. However, Kaizen and business process management can mutually reinforce each other: in business process management, Kaizen's objectives are derived directly from the objectives for business processes and sub-processes. This linkage ensures that Kaizen measures effectively support the overarching business objectives. (Chapter 6.3.3 KAIZEN)
Six Sigma is designed to prevent errors and improve process capability so that the proportion of process outcomes that meet the requirements corresponds to 6σ – in other words, only 3.4 errors occur per million process outcomes. Hermann J. Schmelzer and Wolfgang Sesselmann explain: "Companies often encounter considerable resistance at a level of 4σ, which makes it necessary to redesign business processes in the sense of business process re-engineering (design for Six Sigma)." (Chapter 6.3.4 Six Sigma) A reproducible measurement of process capability requires precise knowledge of the business processes, and business process modeling is a suitable tool for design for Six Sigma. Six Sigma therefore uses business process modeling according to SIPOC as an essential part of the methodology, and SIPOC has established itself as a standard tool for Six Sigma.
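The figure of 3.4 errors per million follows from the one-sided tail of the normal distribution at the stated sigma level minus the conventional 1.5σ long-term shift. A short sketch of the arithmetic (the function name is ours, not Six Sigma terminology):

```python
from math import erfc, sqrt

def dpmo(sigma_level: float, shift: float = 1.5) -> float:
    """Defects per million opportunities for a given sigma level,
    applying the conventional 1.5-sigma long-term shift."""
    z = sigma_level - shift
    tail = 0.5 * erfc(z / sqrt(2))   # one-sided upper tail of the standard normal
    return tail * 1_000_000

print(f"4 sigma: {dpmo(4):.0f} DPMO")   # roughly 6210 defects per million
print(f"6 sigma: {dpmo(6):.1f} DPMO")   # the well-known 3.4 defects per million
```

The jump from about 6,210 defects per million at 4σ to 3.4 at 6σ illustrates why reaching 6σ usually cannot be achieved by incremental tuning alone, which is the motivation for design for Six Sigma mentioned above.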
=== Inter-company business process modeling ===
The aim of inter-company business process modeling is to include the influences of external stakeholders in the analysis or to achieve inter-company comparability of business processes, e.g. to enable benchmarking.
Martin Kugler lists the following requirements for business process modeling in this context: (Chapter 14.2.1 Requirements for inter-company business process modeling)
Employees from different companies must be able to comprehend the business process models, which makes familiarity with the modeling technique critically important.
Acceptance of business process modeling is bolstered by simplicity of representation; models should be clear, easy to understand, and as self-explanatory as possible.
The presentation of inter-company business process models must be standardized across the participating companies to ensure consistent comprehensibility and acceptance, particularly given the varied representations used within different organizations.
An industry-neutral modeling technique must be employed, because the companies along the value chain (supplier, manufacturer, retailer, customer) typically come from different industries.
== Topics ==
=== Analysis of business activities ===
==== Define framework conditions ====
The analysis of business activities determines and defines the framework conditions for successful business process modeling. This is where the company should start:
define the relevant applications of business process modeling on the basis of the business model and the company's position in the value chain, and
derive the strategy for the long-term success of business process modeling from the business strategy and develop an approach for structuring the business process models. Both the relevant purposes and the strategy directly influence the process map.
This strategy for the long-term success of business process modeling can be characterized by the market-oriented view and/or the resource-based view. Jörg Becker and Volker Meise explain: "Whereas in the market view, the industry and the behavior of competitors directly determine a company's strategy, the resource-oriented approach takes an internal view by analyzing the strengths and weaknesses of the company and deriving the direction of development of the strategy from this." (Chapter 4.6 The resource-based view) And further: "The alternative character initially formulated in the literature between the market-based and resource-based view has now given way to a differentiated perspective. The core competence approach is seen as an important contribution to the explanation of success potential, which is used alongside the existing, market-oriented approaches." (Chapter 4.7 Combination of views) Depending on the company's strategy, the process map will therefore balance the view toward market development with the view toward resource optimization.
==== Identify business processes ====
In this phase, a company's business processes are identified and distinguished from one another through an analysis of their respective business activities (see also business process analysis). A business process is a set of interconnected, organized actions (activities) geared towards delivering a specific service or product (fulfilling a specific goal) for a particular customer or customer group.
According to the European Association of Business Process Management (EABPM), establishing a common understanding of the current process and its alignment with the objectives is the first step in process design or re-engineering. (Chapter 4 Process analysis)
The effort involved in analyzing the as-is processes is repeatedly criticized in the literature, especially by proponents of business process re-engineering (BPR), who suggest beginning immediately with the definition of the target state.
Hermann J. Schmelzer and Wolfgang Sesselmann, on the other hand, discuss and evaluate the criticism levelled at the radical approach of business process re-engineering (BPR) in the literature and "recommend carrying out as-is analyses. A reorganization must know the current weak points in order to be able to eliminate them. The results of the analyses also provide arguments as to why a process re-engineering is necessary. It is also important to know the initial situation for the transition from the current to the target state. However, the analysis effort should be kept within narrow limits. The results of the analyses should also not influence the redesign too strongly." (Chapter 6.2.2 Critical assessment of the BPR)
==== Structure business processes – building a process map ====
Timo Füermann explains: "Once the business processes have been identified and named, they are now compiled in an overview. Such overviews are referred to as process maps." (Chapter 2.4 Creating the process map)
Jörg Becker and Volker Meise provide the following list of activities for structuring business processes:
Enumeration of the main processes,
Definition of the process boundaries,
Determining the strategic relevance of each process,
Analysis of the need for improvement of a process and
Determining the political and cultural significance of the process (Chapter 4.10 Defining the process structure)
The structuring of business processes generally begins with a distinction between management, core, and support processes.
Management processes govern the operation of a company. Typical management processes include corporate governance and strategic management. They define corporate objectives and monitor the achievement of objectives.
Core processes constitute the core business and create the primary value stream. Typical operational processes are purchasing, manufacturing, marketing, and sales. They generate visible, direct customer benefits.
Support processes provide and manage operational resources. They support the core and management processes by ensuring the smooth running of business operations. Examples include accounting, recruitment, and technical support.
==== Structure core processes based on the strategy for the long-term success of business process modeling ====
As the core business processes clearly make up the majority of a company's identified business processes, it has become common practice to subdivide the core processes once again. There are different approaches to this depending on the type of company and business activity. These approaches are significantly influenced by the defined application of business process modeling and the strategy for the long-term success of business process modeling.
In the case of a primarily market-based strategy, end-to-end core business processes are often defined from the customer or supplier to the retailer or customer (e.g. "from offer to order", "from order to invoice", "from order to delivery", "from idea to product", etc.). In the case of a strategy based on resources, the core business processes are often defined on the basis of the central corporate functions ("gaining orders", "procuring and providing materials", "developing products", "providing services", etc.).
In a differentiated view without a clear focus on the market view or the resource view, the core business processes are typically divided into CRM, PLM and SCM.
CRM (customer relationship management) describes the business processes for customer acquisition, quotation and order creation as well as support and maintenance
PLM (product lifecycle management) describes the business processes from product portfolio planning, product planning, product development and product maintenance to product discontinuation and individual developments
SCM (supply chain management) describes the business processes from supplier management through purchasing and all production stages to delivery to the customer, including installation and commissioning where applicable
However, other approaches to structuring core business processes are also common, for example from the perspective of customers, products or sales channels.
"Customers" describes the business processes that can be assigned to specific customer groups (e.g. private customer, business customer, investor, institutional customer)
"Products" describes the business processes that are product-specific (e.g. current account, securities account, loan, issue)
"Sales channels" describe the business processes that are typical for the type of customer acquisition and support (e.g. direct sales, partner sales, online).
The result of structuring a company's business processes is the process map (shown, for example, as a value chain diagram). Hermann J. Schmelzer and Wolfgang Sesselmann add: "There are connections and dependencies between the business processes. They are based on the transfer of services and information. It is important to know these interrelationships in order to understand, manage, and control the business processes." (Chapter 2.4.3 Process map)
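Such connections and dependencies can be sketched as a directed graph over the process map. The following minimal example (all process names and edges are hypothetical, chosen only to echo the management/core/support distinction above) computes which processes depend, directly or indirectly, on a given one:

```python
# Edges point from a supplying process to the process that consumes
# its services or information (all names and edges hypothetical).
process_map = {
    "Strategic management": ["Purchasing", "Manufacturing", "Sales"],
    "Purchasing":           ["Manufacturing"],
    "Manufacturing":        ["Sales"],
    "Sales":                [],
    "Accounting":           ["Purchasing", "Sales"],
}

def downstream(process: str, graph: dict) -> set:
    """All processes that directly or indirectly depend on `process`."""
    seen, stack = set(), [process]
    while stack:
        for nxt in graph[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

print(downstream("Purchasing", process_map))
```

Knowing the downstream set of a process makes explicit which other business processes are affected when it is changed, which is exactly the kind of interrelationship the quotation above asks process owners to understand.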
=== Definition of business processes ===
The definition of business processes often begins with the company's core processes, because they
Fulfill their own market requirements,
Operate largely autonomously and independently of other business areas, and
Contribute to the business success of the company,
and because, for the company, they
Have a strong external impact,
Can be easily differentiated from other business processes, and
Offer the greatest potential for business process optimization, both by improving process performance or productivity and by reducing costs.
The scope of a business process should be selected in such a way that it contains a manageable number of sub-processes, while at the same time keeping the total number of business processes within reasonable limits. Five to eight business processes per business unit usually cover the performance range of a company.
Each business process should be independent – but the processes are interlinked.
The definition of a business process includes: What result should be achieved on completion? What activities are necessary to achieve this? Which objects should be processed (orders, raw materials, purchases, products, ...)?
Depending on the prevailing corporate culture, which may be more open to change or more protective of the status quo, and on the effectiveness of communication, defining business processes can prove either straightforward or challenging. This hinges on the willingness of key stakeholders within the organization, such as department heads, to support the endeavor; effective communication therefore plays a pivotal role.
Jörg Becker and Volker Meise explain that the communication strategy within an organizational design initiative should aim to win the support of the organization's members for the intended structural changes. Business process modeling typically precedes business process optimization, which entails a reconfiguration of the process organization, and the parties involved know this. The communication strategy must therefore focus on persuading organizational members to endorse the planned structural adjustments. (Chapter 4.15 Influencing the design of the regulatory framework) In the event of considerable resistance, however, external knowledge can also be used to define the business processes.
==== General process identification and individual process identification ====
Jörg Becker and Volker Meise mention two approaches (general process identification and individual process identification) and state the following about general process identification: "In the general process definition, it is assumed that basic, generally valid processes exist that are the same in all companies." It goes on to say: "Detailed reference models can also be used for general process identification. They describe industry- or application system-specific processes of an organization that still need to be adapted to the individual case, but are already coordinated in their structure." (Chapter 4.11 General process identification)
Jörg Becker and Volker Meise state the following about individual process identification: "In individual or singular process identification, it is assumed that the processes in each company are different according to customer needs and the competitive situation and can be identified inductively based on the individual problem situation." (Chapter 4.12 Individual process identification)
The result of the definition of the business processes is usually a rough structure of the business processes as a value chain diagram.
=== Further structuring of business processes ===
The rough structure of the business processes created so far will now be decomposed – by breaking it down into sub-processes that have their own attributes but also contribute to achieving the goal of the business process. This decomposition should be significantly influenced by the application and strategy for the long-term success of business process modeling and should be continued as long as the tailoring of the sub-processes defined this way contributes to the implementation of the purpose and strategy.
A sub-process created in this way uses a model to describe the way in which procedures are carried out in order to achieve the intended operating goals of the company. The model is an abstraction of reality (or a target state) and its concrete form depends on the intended use (application).
A further decomposition of the sub-processes can then take place during business process modeling if necessary. If the business process can be represented as a sequence of phases, separated by milestones, the decomposition into phases is common. Where possible, the transfer of milestones to the next level of decomposition contributes to general understanding.
The result of the further structuring of business processes is usually a hierarchy of sub-processes, represented in value chain diagrams. It is common that not all business processes are decomposed to the same depth. In particular, business processes that are not safety-relevant, not cost-intensive and not central to the operating goal are broken down to a much lesser depth. Similarly, as a preliminary stage of a decomposition planned for (much) later, a common understanding can first be developed using simpler, less complex means than value chain diagrams, e.g. a textual description or a turtle diagram (Chapter 3.1 Defining process details) (not to be confused with turtle graphics!).
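The resulting hierarchy, with its deliberately uneven decomposition depth, can be sketched as a simple tree structure. The process names and depths below are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Process:
    """A node in a process hierarchy; a leaf is not decomposed further."""
    name: str
    subprocesses: list = field(default_factory=list)

    def depth(self) -> int:
        """Decomposition depth below this node (0 for a leaf)."""
        return 1 + max(p.depth() for p in self.subprocesses) if self.subprocesses else 0

# Hypothetical, unevenly decomposed hierarchy: order entry is left shallow,
# production is broken down further because it matters more to the operating goal.
order_to_delivery = Process("Order to delivery", [
    Process("Order entry"),
    Process("Production", [
        Process("Planning"),
        Process("Assembly", [Process("Pre-assembly"), Process("Final assembly")]),
    ]),
])

print(order_to_delivery.depth())
```

The uneven `depth()` values of sibling nodes mirror the point above: decomposition is continued only as long as it serves the purpose and strategy of the modeling effort.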
=== Assigning the process responsibility ===
Complete, self-contained processes are summarized and handed over to a responsible person or team. The process owner is responsible for success, creates the framework conditions, and coordinates his or her approach with that of the other process owners. Furthermore, he/she is responsible for the exchange of information between the business processes. This coordination is necessary in order to achieve the overall goal orientation.
=== Modeling business processes ===
==== Design of the process chains ====
If business processes are documented using a specific IT system and form of representation, e.g. graphically, this is generally referred to as modeling. The result of the documentation is the business process model.
As-is modeling and to-be modeling
Whether the business process model should be created through as-is modeling or to-be modeling is significantly influenced by the defined application and the strategy for the long-term success of business process modeling. The preceding procedure, with analysis of business activities, definition of business processes and further structuring of business processes, is advisable in any case.
As-is modeling
Ansgar Schwegmann and Michael Laske explain: "Determining the current status is the basis for identifying weaknesses and localizing potential for improvement. For example, weak points such as organizational breaks or insufficient IT penetration can be identified." (Chapter 5.1 Intention of the as-is modeling)
The following disadvantages speak against as-is modeling:
The creativity of project participants in developing optimal target processes is stifled, as old structures and processes may be adopted without reflection in the downstream target modeling, and
The creation of detailed as-is models represents a considerable effort, which is increased by the effort required to reach a consensus between project participants at interfaces and transitions of responsibility
These arguments weigh particularly heavily if business process re-engineering (BPR) is planned anyway.
Ansgar Schwegmann and Michael Laske also list a number of advantages of as-is modeling: (Chapter 5.1 Intention of as-is modeling)
Modeling the current situation is the basis for identifying weaknesses and potential for improvement
Knowledge of the current state is a prerequisite for developing migration strategies to the target state
Modeling the current state provides an overview of the existing situation, which can be particularly valuable for newly involved and external project participants
The as-is modeling can be a starting point for training and for introducing project participants to the tools and methods
The as-is model can serve as a checklist for the later target modeling so that no relevant issues are overlooked
The as-is models can be used as starting models for target modeling if the target state is very similar to the current situation, at least in some areas
Other advantages can also be found, such as:
The as-is model is suitable for supporting certification of the management system
The as-is model can serve as a basis for organizational documentation (written rules, specifications and regulations of the organization, ...)
The requirements for workflow management can be checked on the basis of the as-is model (definition of processes, repetition rate, ...)
Key figures can be collected on the basis of the as-is model in order to compare them with the key figures achieved after a reorganization and thus measure the success of the measures.
To-be modeling
Mario Speck and Norbert Schnetgöke define the objective of to-be modeling as follows: "The target processes are based on the strategic goals of the company. This means that all sub-processes and individual activities of a company must be analyzed with regard to their target contribution. Sub-processes or activities that cannot be identified as value-adding and do not serve at least one non-monetary corporate objective must therefore be eliminated from the business processes." (Chapter 6.2.3 Capturing and documenting to-be models)
They also list five basic principles that have proven their worth in the creation of to-be models:
Parallel processing of sub-processes and individual activities is preferable to sequential processing – it contains the greater potential for optimization.
The development of a sub-process should be carried out as consistently as possible by one person or group – this allows the best model quality to be achieved.
Self-monitoring should be made possible for individual sub-processes and individual activities during processing – this reduces quality assurance costs.
If not otherwise possible, at least one internal customer/user should be defined for each process – this strengthens customer awareness and improves the assessability of process performance.
Learning effects that arise during the introduction of the target processes should be taken into account – this strengthens the employees' awareness of value creation.
The business process model created by as is modeling or to be modeling consists of:
==== Sub-processes ====
Delimitation
August W. Scheer is said to have said in his lectures: A process is a process is a process. This is intended to express the recursiveness of the term, because almost every process can be broken down into smaller processes (sub-processes). In this respect, terms such as business process, main process, sub-process or elementary process are only a desperate attempt to name the level of process decomposition. As there is no universally valid agreement on the granularity of a business process, main process, sub-process or elementary process, the terms are not universally defined, but can only be understood in the context of the respective business process model.
In addition, some German-speaking schools of business informatics do not distinguish between the terms process (in the sense of representing the sequence of actions) and function (in the sense of a delimited corporate function/action (activity) area that is clearly assigned to a corporate function owner).
For example, in August W. Scheer's ARIS it is possible to use functions from the function view as processes in the control view and vice versa. Although this has the advantage that already defined processes or functions can be reused across the board, it also means that the proper purpose of the function view is diluted and the ARIS user is no longer able to separate processes and functions from one another.
The first image shows as a value chain diagram how the business process Edit sales pipeline has been broken down into sub-processes (in the sense of representing the sequence of actions (activities)) based on its phases.
The second image shows an excerpt of typical functions (in the sense of delimited corporate function/action (activity) areas, which are assigned to a corporate function owner), which are structured based on the areas of competence and responsibility hierarchy. The corporate functions that support the business process Edit sales pipeline are marked in the function tree.
Utilization
A business process can be decomposed into sub-processes until further decomposition is no longer meaningful or possible (the smallest meaningful sub-process is the elementary process). Usually, all levels of decomposition of a business process are documented with the same methodology and process symbols. The process symbols used when modeling one level of decomposition then usually refer to the sub-processes of the next level, until the level of elementary processes is reached. Value chain diagrams are often used to represent business processes, main processes, sub-processes and elementary processes.
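The recursive decomposition described above can be sketched as a tree whose leaves are the elementary processes. The process names and the decomposition chosen here are purely illustrative:

```python
# Sketch: "a process is a process is a process" - every node may be
# decomposed into sub-processes until an elementary process is reached.
class Process:
    def __init__(self, name, subprocesses=None):
        self.name = name
        self.subprocesses = subprocesses or []

    def is_elementary(self):
        # An elementary process has no further decomposition.
        return not self.subprocesses

    def elementary_processes(self):
        # Walk the tree and collect the leaves (elementary processes).
        if self.is_elementary():
            return [self.name]
        result = []
        for sub in self.subprocesses:
            result.extend(sub.elementary_processes())
        return result

# Hypothetical decomposition of the business process "Edit sales pipeline".
pipeline = Process("Edit sales pipeline", [
    Process("Qualify lead", [Process("Check contact data"), Process("Score lead")]),
    Process("Prepare offer"),
])
print(pipeline.elementary_processes())
# ['Check contact data', 'Score lead', 'Prepare offer']
```

The terms business process, main process and sub-process then simply name levels of this tree, which only have meaning relative to the model at hand.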
Workflow
A workflow is a representation of a sequence of tasks, declared as work of a person, of a simple or complex mechanism, of a group of persons, of an organization of staff, or of machines (including IT-systems). A workflow is therefore always located at the elementary process level. The workflow may be seen as any abstraction of real work, segregated into workshare, work split, or other types of ordering. For control purposes, the workflow may be a view of real work under a chosen aspect.
==== Functions (Tasks) ====
Delimitation
The term function is often used synonymously both for a delimited corporate function/action (activity) area, which is assigned to a corporate function owner, and for the atomic activity (task) at the level of the elementary processes. To avoid this double meaning, the term task can be used for the atomic activities at the level of the elementary processes, in accordance with the naming in BPMN. Modern tools also offer automatic conversion of a task into a process, so that a further level of process decomposition can be created at any time, in which the task is then upgraded to an elementary process.
Utilization
The graphical elements used at the level of elementary processes then describe the (temporal-logical) sequence with the help of functions (tasks). The sequence of the functions (tasks) within the elementary processes is determined by their logical linking with each other (by logical operators or Gateways), provided it is not already specified by input/output relationships or Milestones. It is common to use additional graphical elements to illustrate interfaces, states (events), conditions (rules), milestones, etc. in order to better clarify the process. Depending on the modeling tool used, very different graphical representations (models) are used.
Furthermore, the functions (tasks) can be supplemented with graphical elements to describe inputs, outputs, systems, roles, etc. with the aim of improving the accuracy of the description and/or increasing the number of details. However, these additions quickly make the model confusing. To resolve the contradiction between accuracy of description and clarity, there are two main solutions: Outsourcing the additional graphical elements for describing inputs, outputs, systems, roles, etc. to a Function Allocation Diagram (FAD) or selectively showing/hiding these elements depending on the question/application.
The function allocation diagram shown in the image illustrates the addition of graphical elements for the description of inputs, outputs, systems, roles, etc. to functions (tasks) very well.
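As a rough sketch of the FAD idea (all names invented), the control flow and the allocations can be kept in separate structures, with the annotations shown or hidden on demand:

```python
# Sketch: the main flow stays readable because inputs, outputs, systems
# and roles live in a separate allocation table (FAD-style), which can be
# shown or hidden per task depending on the question at hand.
flow = ["Receive order", "Check credit", "Confirm order"]  # temporal-logical order

allocations = {  # per-task annotations, kept out of the main diagram
    "Check credit": {"role": "Credit officer", "system": "ERP", "input": "Order data"},
}

def render(task, show_details=False):
    details = allocations.get(task, {})
    if show_details and details:
        extras = ", ".join(f"{k}={v}" for k, v in sorted(details.items()))
        return f"{task} [{extras}]"
    return task

print([render(t) for t in flow])           # plain, readable flow
print(render("Check credit", show_details=True))
# Check credit [input=Order data, role=Credit officer, system=ERP]
```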
==== Master data (artifacts) ====
The term master data is defined neither by The Open Group (The Open Group Architecture Framework, TOGAF) nor by John A. Zachman (Zachman Framework), nor by any of the five relevant German-speaking schools of business informatics: 1) August W. Scheer, 2) Hubert Österle, 3) Otto K. Ferstl and Elmar J. Sinz, 4) Hermann Gehring and 5) Andreas Gadatsch. It is commonly used in the absence of a suitable term in the literature. It is based on the general term for data that represents basic information about operationally relevant objects, and here refers to basic information that is not primary information of the business process.
For August W. Scheer in ARIS, this would be the basic information of the organization view, data view, function view and performance view. (Chapter 1 The vision: A common language for IT and management) ← automatic translation from German
For Andreas Gadatsch in GPM (Ganzheitliche Prozessmodellierung (German), means holistic process modelling), this would be the basic information of the organizational structure view, activity structure view, data structure view, and application structure view. (Chapter 3.2 GPM – Holistic process modelling) ← automatic translation from German
For Otto K. Ferstl and Elmar J. Sinz in SOM (Semantisches Objektmodell (German), means semantic object model), this would be the basic information of the levels Business plan and Resources.
Master data can be, for example:
The business unit in whose area of responsibility a process takes place
The business object whose information is required to execute the process
The product that is produced by the process
The policy to be observed when executing the process
The risk that occurs in a process
The measure that is carried out to increase the process capability
The control that is performed to ensure the governance of the process
The IT-system that supports the execution of the business process
The milestone that divides processes into process phases
etc.
By adding master data to the business process modeling, the same business process model can be used for different applications, and with the resulting synergy a return on investment for the business process modeling can be achieved more quickly.
Depending on how much value is placed on master data in business process modeling, the master data can either be embedded in the process model, provided this does not negatively affect the readability of the model, or outsourced to a separate view, e.g. a Function Allocation Diagram.
If master data is systematically added to the business process model, this is referred to as an artifact-centric business process model.
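A minimal sketch of this reuse, with invented attributes: the same annotated model is projected onto different master-data views, e.g. a risk view and an IT-system view:

```python
# Sketch: one process model, annotated with master data, answers different
# questions, so the same model serves several applications.
tasks = [
    {"name": "Check credit", "risk": "Credit default", "system": "ERP"},
    {"name": "Ship goods", "risk": None, "system": "WMS"},
]

def view(tasks, attribute):
    # Project the model onto a single master-data attribute.
    return {t["name"]: t[attribute] for t in tasks if t[attribute] is not None}

print(view(tasks, "risk"))    # risk view of the same model
print(view(tasks, "system"))  # IT-system view of the same model
```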
Artifact-centric business process
The artifact-centric business process model has emerged as a holistic approach for modeling business processes, as it provides a highly flexible solution to capture operational specifications of business processes. It particularly focuses on describing the data of business processes, known as "artifacts", by characterizing business-relevant data objects, their life-cycles, and related services. The artifact-centric process modelling approach fosters the automation of the business operations and supports the flexibility of the workflow enactment and evolution.
==== Integration of external documents and IT-systems ====
The integration of external documents and IT-systems can significantly increase the added value of a business process model.
For example, direct access to objects in a knowledge database or documents in a rule framework can significantly increase the benefits of the business process model in everyday life and thus the acceptance of business process modeling. All IT-systems involved can exploit their specific advantages and cross-fertilize each other (e.g. link to each other or standardize the filing structure):
Short response times of the knowledge database; characterized by a relatively high number of auditors, very fast adaptation of content, and low requirements for the publication of content – e.g. realized with a wiki
Legally compliant documents of the rule framework; characterized by a very small number of specially trained auditors, validated adaptation of content, and high requirements for the release of content – e.g. implemented with a document management system
Integrating graphical representation of processes by a BPM system; characterized by a medium number of auditors, moderately fast adaptation of content, and modest requirements for the release of content
If all relevant objects of the knowledge database and / or documents of the rule framework are connected to the processes, the end users have context-related access to this information and do not need to be familiar with the respective filing structure of the connected systems.
The direct connection of external systems can also be used to integrate current measurement results or system statuses into the processes (and, for example, to display the current operating status of the processes), to display widgets and show output from external systems or to jump to external systems and initiate a transaction there with a preconfigured dialog.
Further connections to external systems can be used, for example, for electronic data interchange (EDI).
=== Model consolidation ===
Model consolidation checks whether there are any redundancies. If so, the relevant sub-processes are combined, or sub-processes that are used more than once are outsourced to support processes. For a successful model consolidation, it may be necessary to revise the original decomposition of the sub-processes.
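The redundancy check can be sketched as counting how often each sub-process occurs across business processes (names invented; a real consolidation would compare model content, not just names):

```python
from collections import Counter

# Sketch: sub-processes that appear in more than one business process are
# candidates for being outsourced to shared support processes.
processes = {
    "Order to cash": ["Check credit", "Ship goods", "Invoice"],
    "Subscription renewal": ["Check credit", "Invoice"],
}

usage = Counter(sub for subs in processes.values() for sub in subs)
support_candidates = sorted(s for s, n in usage.items() if n > 1)
print(support_candidates)  # ['Check credit', 'Invoice']
```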
Ansgar Schwegmann and Michael Laske explain: "A consolidation of the models of different modeling complexes is necessary in order to obtain an integrated ... model." (Chapter 5.2.4 Model consolidation) ← automatic translation from German They also list a number of aspects for which model consolidation is important:
"Modeling teams need to drive harmonization of models during model creation to facilitate later consolidation."
"If an object-oriented decomposition of the problem domain is carried out, it must be analyzed at an early stage whether similar structures and processes of different objects exist."
"If a function-oriented decomposition of the problem domain is undertaken, the interfaces between the modelled areas in particular must be harmonized."
"In general, a uniform level of detail of the models" (in each decomposition level) "should be aimed for during modeling in order to facilitate the comparability of the submodels and the precise definition of interfaces."
"After completion of the modeling activities in the teams of the individual modeling complexes, [the] created partial models are to be integrated into an overall model."
"In order to facilitate the traceability of the mapped processes, it makes sense to explicitly model selected business transactions that are particularly important for the company and to map them at the top level. ... Colour coding, for example, can also be used to differentiate between associated organizational units." (Chapter 5.2.4 Model consolidation) ← automatic translation from German
=== Process chaining and control flow patterns ===
The chaining of the sub-processes with each other and the chaining of the functions (tasks) in the sub-processes is modeled using Control Flow Patterns.
Material details of the chaining (What does the predecessor deliver to the successor?) are specified in the process interfaces if intended.
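A few of the basic control-flow patterns (sequence, parallel split/AND, exclusive choice/XOR) can be sketched as nested data; the model below is a hypothetical illustration:

```python
# Sketch: chaining sub-processes with a few basic control-flow patterns.
# A node is either a task name (str) or a tuple (pattern, branches), where
# pattern is "seq" (sequence), "and" (parallel split) or "xor" (exclusive choice).
model = ("seq", [
    "Receive order",
    ("and", ["Check stock", "Check credit"]),    # both branches are executed
    ("xor", ["Confirm order", "Reject order"]),  # exactly one branch is executed
])

def variants(node):
    # Count distinct process variants: XOR adds up the alternatives,
    # sequence and AND multiply over their parts (all parts are executed).
    if isinstance(node, str):
        return 1
    pattern, branches = node
    if pattern == "xor":
        return sum(variants(b) for b in branches)
    result = 1
    for b in branches:
        result *= variants(b)
    return result

print(variants(model))  # 2: the order is either confirmed or rejected
```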
=== Process interfaces ===
Process interfaces are defined in order to
Show the relationships between the sub-processes after the decomposition of business processes or
Determine what the business processes or their sub-processes must 'pass on' to each other.
As a rule, this 'what' and its structure are determined by the requirements of the subsequent process.
Process interfaces represent the exit from the current business process/sub-process and the entry into the subsequent business process/sub-process.
Process interfaces are therefore description elements for linking processes section by section. A process interface can
Represent a business process model/sub-process model without the business process model referenced by it already being defined.
Represent a business process model/sub-process model that is referenced from two/multiple superordinate or neighboring business process models.
Represent two/multiple variants of the same business process model/sub-process model.
Process interfaces are agreed between the participants of superordinate/subordinate or neighboring business process models. They are defined and linked once and used as often as required in process models.
Interfaces can be defined by:
Transfer of responsibility/accountability from one business unit to another,
Transfer of data from one IT-system to another,
Original input (information / materials at the beginning of the business process),
Transfer of intermediate results between sub-processes (output at the predecessor and input at the successor are usually identical) or
Final output (the actual result / goal of the business process).
In real terms, the transferred inputs/outputs are often data or information, but any other business objects are also conceivable (material, products in their final or semi-finished state, documents such as a delivery bill). They are provided via suitable transport media (e.g. data storage in the case of data).
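A process interface can be sketched as a small record that is defined once and reused at every chaining point; the mismatch check and all names below are a hypothetical illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProcessInterface:
    # Defined and linked once, used as often as required in process models.
    name: str
    business_object: str  # what is passed on (data, material, document, ...)
    medium: str           # transport medium, e.g. data storage

def chain(predecessor_output: ProcessInterface,
          successor_input: ProcessInterface) -> ProcessInterface:
    # Output of the predecessor and input of the successor are usually identical.
    if predecessor_output != successor_input:
        raise ValueError("interface mismatch between predecessor and successor")
    return predecessor_output

handover = ProcessInterface("Order handover", "confirmed order", "data storage")
print(chain(handover, handover).name)  # Order handover
```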
=== Business process management ===
See article Business process management.
In order to put improved business processes into practice, change management programs are usually required. With advances in software design, the vision of BPM models being fully executable (enabling simulations and round-trip engineering) is getting closer to reality.
==== Adaptation of process models ====
In business process management, process flows are regularly reviewed and optimized (adapted) if necessary. Regardless of whether this adaptation of process flows is triggered by continuous process improvement or by process reorganization (business process re-engineering), it entails an update of individual sub-processes or an entire business process.
== Representation type and notation ==
In practice, combinations of informal, semiformal and formal models are common: informal textual descriptions for explanation, semiformal graphical representation for visualization, and formal language representation to support simulation and transfer into executable code.
=== Modelling techniques ===
There are various standards for notations; the most common are:
Business Process Model and Notation (BPMN), proposed in 2002 by Stephen A. White and published by the Business Process Management Initiative, which merged with the Object Management Group in June 2005
Event-driven process chain (EPC), proposed in 1992 by a working group under the leadership of August-Wilhelm Scheer
Value-added chain diagram (VAD), for visualizing processes mainly at a high level of abstraction
Petri net, developed by Carl Adam Petri in 1962
Follow-up plans (e.g. in the specific form of a Flowchart), proposed in 1997 by Fischermanns and Liebelt
HIPO model, developed by IBM around 1970 as a design aid and documentation technology for software (in a non-technical, but business-oriented form)
Lifecycle Modeling Language (LML), originally designed by the LML steering committee and published in 2013
Subject-oriented business process management (S-BPM)
Cognition enhanced Natural language Information Analysis Method (CogNIAM)
SIPOC diagram, invented in the 1980s as part of the Total Quality Management movement and then adopted by Lean Management and Six Sigma practitioners
Unified Modelling Language (UML), proposed in 1996 by Grady Booch, Ivar Jacobson, and James Rumbaugh, continuously revised under the aegis of the OMG (provides extensions for business processes)
ICAM DEFinition (IDEF0), developed for the US Air Force in the early 1980s
Formalized Administrative Notation (FAN), created by Pablo Iacub and Leonardo Mayo in the 1990s
Harbarian process modeling (HPM)
Business Process Execution Language (BPEL), an XML-based language developed in 2002 by OASIS for the description and automation of business processes
Turtle diagram (also turtle method, turtle model, 8W method), a simple, clear and easy-to-understand graphical representation of facts about the process
Furthermore:
Communication structure analysis, proposed in 1989 by Prof. Hermann Krallmann at the Systems Analysis Department of the TU Berlin.
Extended Business Modelling Language (xBML) (seems to be outdated, as the founding company is no longer online)
Notation from OMEGA (object-oriented method for business process modeling and analysis, Objektorientierte Methode zur Geschäftsprozessmodellierung und -analyse in German), presented by Uta Fahrwinkel in 1995
Semantic object model (SOM), proposed in 1990 by Otto K. Ferstl and Elmar J. Sinz
PICTURE-Methode for the documentation and modeling of business processes in public administration
Data-flow diagram, a way of representing a flow of data through a process or a system
Swimlane technique, mainly known through BPMN but also SIPOC, the Process chain diagram (PCD) and other methods use this technique
ProMet, a method set for business engineering
State diagram, used to describe the behavior of systems
In addition, representation types from software architecture can also be used:
Flowchart, standardized in DIN 66001 from September 1966 and last revised in December 1983 or standardized in ISO 5807 from 1985
Nassi-Shneiderman diagram or structure diagram, proposed in 1972/73 by Isaac Nassi and Ben Shneiderman, standardized in DIN 66261.
==== Business Process Model and Notation (BPMN) ====
==== Event-driven process chain (EPC) ====
==== Petri net ====
==== Flowchart ====
==== Hierarchical input process output model (HIPO) ====
==== Lifecycle Modeling Language (LML) ====
==== Subject-oriented business process management ====
==== Cognition enhanced Natural language Information Analysis Method ====
==== SIPOC (suppliers, inputs, process, outputs and customers) ====
==== Unified Modelling Language (UML) ====
==== Integration Definition (IDEF) ====
==== Formalized Administrative Notation (FAN) ====
==== Harbarian process modeling (HPM) ====
==== Business Process Execution Language (BPEL) ====
=== Tools ===
Business process modelling tools provide business users with the ability to model their business processes, implement and execute those models, and refine the models based on as-executed data. As a result, business process modelling tools can provide transparency into business processes, as well as the centralization of corporate business process models and execution metrics. Modelling tools may also enable collaborative modelling of complex processes by users working in teams, where users can share and simulate models together. Business process modelling tools should not be confused with business process automation systems: both practices start by modeling the process, but process automation produces an 'executable diagram', which is drastically different from the output of traditional graphical business process modelling tools.
=== Programming language tools ===
BPM suite software provides programming interfaces (web services, application program interfaces (APIs)) which allow enterprise applications to be built to leverage the BPM engine. This component is often referred to as the engine of the BPM suite.
Programming languages that are being introduced for BPM include:
Business Process Execution Language (BPEL),
Web Services Choreography Description Language (WS-CDL).
XML Process Definition Language (XPDL),
Some vendor-specific languages:
Architecture of Integrated Information Systems (ARIS) supports EPC,
jBPM Process Definition Language (jPDL),
Other technologies related to business process modelling include model-driven architecture and service-oriented architecture.
=== Simulation ===
The simulation functionality of such tools allows for pre-execution "what-if" modelling (which has particular requirements for this application) and simulation. Post-execution optimization is available based on the analysis of actual as-performed metrics.
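Pre-execution "what-if" modelling can be sketched as a toy comparison of two process variants, e.g. sequential versus parallel checks (all task names and durations are invented):

```python
import random

# Sketch: compare the cycle time of an as-is sequence with a to-be variant
# in which the two checks run in parallel.
durations = {"Check stock": 2.0, "Check credit": 3.0, "Confirm": 1.0}

def simulate(parallel, jitter=0.0, rng=None):
    # With jitter > 0 each run draws randomized task durations ("what-if").
    rng = rng or random.Random(0)
    d = {t: base + rng.uniform(-jitter, jitter) for t, base in durations.items()}
    if parallel:
        checks = max(d["Check stock"], d["Check credit"])  # run side by side
    else:
        checks = d["Check stock"] + d["Check credit"]      # run one after another
    return checks + d["Confirm"]

print(simulate(parallel=False))  # as-is: 6.0 with jitter=0
print(simulate(parallel=True))   # to-be: 4.0 with jitter=0
```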
Use case diagrams created by Ivar Jacobson, 1992 (integrated into UML)
Activity diagrams (also adopted by UML)
== Related concepts ==
=== Business reference model ===
A business reference model is a reference model, concentrating on the functional and organizational aspects of an enterprise, service organization, or government agency. In general, a reference model is a model of something that embodies the basic goal or idea of something and can then be looked at as a reference for various purposes. A business reference model is a means to describe the business operations of an organization, independent of the organizational structure that performs them. Other types of business reference models can also depict the relationship between the business processes, business functions, and the business area's business reference model. These reference models can be constructed in layers, and offer a foundation for the analysis of service components, technology, data, and performance.
The most familiar business reference model is the Business Reference Model of the US federal government. That model is a function-driven framework for describing the business operations of the federal government independent of the agencies that perform them. The Business Reference Model provides an organized, hierarchical construct for describing the day-to-day business operations of the federal government. While many models exist for describing organizations – organizational charts, location maps, etc. – this model presents the business using a functionally driven approach.
=== Business process integration ===
A business model, which may be considered an elaboration of a business process model, typically shows business data and business organizations as well as business processes. By showing business processes and their information flows, a business model allows business stakeholders to define, understand, and validate their business enterprise. The data model part of the business model shows how business information is stored, which is useful for developing software code. See the figure on the right for an example of the interaction between business process models and data models.
Usually, a business model is created after conducting an interview, which is part of the business analysis process. The interview consists of a facilitator asking a series of questions to extract information about the subject business process. The interviewer is referred to as a facilitator to emphasize that it is the participants, not the facilitator, who provide the business process information. The facilitator should have some knowledge of the subject business process, but this is not as important as mastery of a pragmatic and rigorous method of interviewing business experts. The method is important because for most enterprises a team of facilitators is needed to collect information across the enterprise, and the findings of all the interviewers must be compiled and integrated once completed.
Business models are developed to define either the current state of the process, resulting in the 'as is' snapshot model, or a vision of what the process should evolve into, leading to a 'to be' model. By comparing and contrasting the 'as is' and 'to be' models, business analysts can determine if existing business processes and information systems require minor modifications or if reengineering is necessary to enhance efficiency. As a result, business process modeling and subsequent analysis can fundamentally reshape the way an enterprise conducts its operations.
=== Business process re-engineering ===
Business process reengineering (BPR) aims to improve the efficiency and effectiveness of the processes that exist within and across organizations. It examines business processes from a "clean slate" perspective to determine how best to construct them.
Business process re-engineering (BPR) began as a private sector technique to help organizations fundamentally rethink how they do their work. A key stimulus for re-engineering has been the development and deployment of sophisticated information systems and networks. Leading organizations use this technology to support innovative business processes, rather than refining current ways of doing work.
=== Business process management ===
Change management programs are typically involved to put any improved business processes into practice. With advances in software design, the vision of BPM models becoming fully executable (and capable of simulations and round-trip engineering) is coming closer to reality.
==== Adaptation of process models ====
In business process management, process flows are regularly reviewed and, if necessary, optimized (adapted). Regardless of whether this adaptation of process flows is triggered by continual improvement process or business process re-engineering, it entails updating individual sub-processes or an entire business process.
== See also ==
Business architecture
Business Model Canvas
Business plan
Business process mapping
Capability Maturity Model Integration
Drakon-chart
Generalised Enterprise Reference Architecture and Methodology
Model Driven Engineering
Outline of consulting
Value Stream Mapping
== References ==
== Further reading ==
Aguilar-Saven, Ruth Sara. "Business process modelling: Review and framework Archived 2020-08-07 at the Wayback Machine." International Journal of production economics 90.2 (2004): 129–149.
Barjis, Joseph (2008). "The importance of business process modeling in software systems design". Science of Computer Programming. 71: 73–87. doi:10.1016/j.scico.2008.01.002.
Becker, Jörg, Michael Rosemann, and Christoph von Uthmann. "Guidelines of business process modelling." Business Process Management. Springer Berlin Heidelberg, 2000. 30–49.
Hommes, L.J. The Evaluation of Business Process Modelling Techniques. Doctoral thesis. Technische Universiteit Delft.
Håvard D. Jørgensen (2004). Interactive Process Models. Thesis Norwegian University of Science and Technology Trondheim, Norway.
Manuel Laguna, Johan Marklund (2004). Business Process Modeling, Simulation, and Design. Pearson/Prentice Hall, 2004.
Ovidiu S. Noran (2000). Business Modelling: UML vs. IDEF Paper Griffh University
Jan Recker (2005). "Process Modelling in the 21st Century". In: BP Trends, May 2005.
Ryan K. L. Ko, Stephen S. G. Lee, Eng Wah Lee (2009) Business Process Management (BPM) Standards: A Survey. In: Business Process Management Journal, Emerald Group Publishing Limited. Volume 15 Issue 5. ISSN 1463-7154.
Jan Vanthienen, S. Goedertier and R. Haesen (2007). "EM-BrA2CE v0.1: A vocabulary and execution model for declarative business process modelling". DTEW – KBI_0728.
== External links ==
Media related to Business process modeling at Wikimedia Commons
The Architecture Analysis & Design Language (AADL) is an architecture description language standardized by SAE. AADL was first developed in the field of avionics, and was known formerly as the Avionics Architecture Description Language. It was funded in part by the US Army.
The Architecture Analysis & Design Language is derived from MetaH, an architecture description language made by the Advanced Technology Center of Honeywell. AADL is used to model the software and hardware architecture of an embedded, real-time system. Due to its emphasis on the embedded domain, AADL contains constructs for modeling both software and hardware components (with the hardware components named "execution platform" components within the standard). This architecture model can then be used as design documentation, for analyses (such as schedulability and flow control), or for code generation (of the software portion), like UML.
== AADL ecosystem ==
AADL is defined by a core language with a single notation for both system and software aspects. Having one model with a single representation of the system eases the work of analysis tools. The language specifies system-specific characteristics using properties.
The language can be extended with the following methods:
user-defined properties: users can extend the set of applicable properties and add their own to specify their own requirements
language annexes: the core language is enhanced by annex languages that enrich the architecture description. So far, the following annexes have been defined:
Behavior annex: add components behavior with state machines
Error-model annex: specifies fault and propagation concerns
ARINC653 annex: defines modelling patterns for avionics systems
Data-Model annex: describes the modelling of specific data constraints with AADL
== AADL tools ==
AADL is supported by a wide range of tools:
MASIW, an open-source Eclipse-based IDE for the development and analysis of AADL models, developed by ISP RAS
OSATE, an open-source tool that includes a modeling platform, a graphical viewer and a constraint query language. More information is available at the OSATE website.
Ocarina, an AADL toolchain for generating code from models
TASTE toolchain, supported by the European Space Agency
A complete list of the tool set can be found on the AADL public wiki
== Related projects ==
AADL has been used for the following research projects:
AVSI/SAVI: an initiative that leverages AADL (among other languages) to perform virtual integration of aerospace and defense systems
META: a DARPA project for improving software engineering methods
PARSEC: a French initiative to validate and implement avionics systems from architecture models
TASTE: a platform for designing safety-critical systems from models
A complete list of past and current projects and initiatives was formerly maintained on the AADL public wiki, which has since been retired; as of December 2020, no replacement has been provided.
== References ==
== External links ==
AADL.info
AADL public wiki
AADL tools
AADL at Axlog
AADL at Ecole Nationale Supérieure des Télécommunications de Paris (ENST) Archived 2006-11-27 at the Wayback Machine
AADL performance analysis with Cheddar, Univ. of Brest (real time scheduling and queueing system analysis) Archived 2011-02-28 at the Wayback Machine
Industrial project support using Stood for AADL
AADL In Practice, a book dedicated to the use of the language and its related modeling tools
WSML or Web Service Modeling Language is a formal language that provides a syntax and semantics for the Web Service Modeling Ontology (WSMO).
In other words, WSML provides the means to formally describe the WSMO elements: ontologies, Semantic Web services, goals, and mediators.
WSML is based on logical formalisms such as description logic, first-order logic and logic programming.
== Language variants of WSML ==
WSML-Core, defined as the intersection of description logic and Horn logic. It supports modeling of classes, attributes, binary relations and instances.
WSML-DL, an extension of WSML-Core, fully captures the description logic SHIQ(D).
WSML-Flight, an extension of WSML-Core, provides features such as meta-modeling, constraints and nonmonotonic negation.
WSML-Rule, an extension of WSML-Flight, provides logic programming capabilities.
WSML-Full, a unification of WSML-DL and WSML-Rule.
== See also ==
Ontology (computer science)
Semantic Web
Semantic Web Services
Web Ontology Language (OWL), OWL-S, WSDL
WSMO
== References ==
== External links ==
WSML Home Web Site
WSML syntax
WSML submission in W3C
WSMO Working Group Web Site
Model-driven architecture (MDA) is a software design approach for the development of software systems. It provides a set of guidelines for the structuring of specifications, which are expressed as models. Model Driven Architecture is a kind of domain engineering, and supports model-driven engineering of software systems. It was launched by the Object Management Group (OMG) in 2001.
== Overview ==
Model Driven Architecture® (MDA®) "provides an approach for deriving value from models and architecture in support of the full life cycle of physical, organizational and I.T. systems". A model is a representation of an abstraction of a system. MDA® provides value by producing models at varying levels of abstraction, from a conceptual view down to the smallest implementation detail. OMG literature speaks of three such levels of abstraction, or architectural viewpoints: the Computation-independent Model (CIM), the Platform-independent model (PIM), and the Platform-specific model (PSM). The CIM describes a system conceptually, the PIM describes the computational aspects of a system without reference to the technologies that may be used to implement it, and the PSM provides the technical details necessary to implement the system. The OMG Guide notes, though, that these three architectural viewpoints are useful, but are just three of many possible viewpoints.
The OMG organization provides specifications rather than implementations, often as answers to Requests for Proposals (RFPs). Implementations come from private companies or open source groups.
=== Related standards ===
The MDA model is related to multiple standards, including the Unified Modeling Language (UML), the Meta-Object Facility (MOF), XML Metadata Interchange (XMI), Enterprise Distributed Object Computing (EDOC), the Software Process Engineering Metamodel (SPEM), and the Common Warehouse Metamodel (CWM). Note that the term “architecture” in Model Driven Architecture does not refer to the architecture of the system being modeled, but rather to the architecture of the various standards and model forms that serve as the technology basis for MDA.
Executable UML was the UML profile used when MDA was born. Now, the OMG is promoting fUML, instead. (The action language for fUML is ALF.)
=== Trademark ===
The Object Management Group holds registered trademarks on the term Model Driven Architecture and its acronym MDA, as well as trademarks for terms such as: Model Based Application Development, Model Driven Application Development, Model Based Programming, Model Driven Systems, and others.
== Model Driven Architecture topics ==
=== MDA approach ===
OMG focuses Model Driven Architecture® on forward engineering, i.e. producing code from abstract, human-elaborated modeling diagrams (e.g. class diagrams). OMG's ADTF (Analysis and Design Task Force) group leads this effort. With some humour, the group chose ADM (MDA backwards) to name the study of reverse engineering. ADM decodes to Architecture-Driven Modernization. The objective of ADM is to produce standards for model-based reverse engineering of legacy systems. Knowledge Discovery Metamodel (KDM) is the furthest along of these efforts, and describes information systems in terms of various assets (programs, specifications, data, test files, database schemas, etc.).
As the concepts and technologies used to realize designs and the concepts and technologies used to realize architectures have changed at their own pace, decoupling them allows system developers to choose from the best and most fitting in both domains. The design addresses the functional (use case) requirements while architecture provides the infrastructure through which non-functional requirements like scalability, reliability and performance are realized. MDA envisages that the platform independent model (PIM), which represents a conceptual design realizing the functional requirements, will survive changes in realization technologies and software architectures.
Of particular importance to Model Driven Architecture is the notion of model transformation. A specific standard language for model transformation has been defined by OMG called QVT.
=== MDA tools ===
The OMG organization provides rough specifications rather than implementations, often as answers to Requests for Proposals (RFPs). The OMG documents the overall process in a document called the MDA Guide.
Basically, an MDA tool is a tool used to develop, interpret, compare, align, measure, verify, transform, etc. models or metamodels. In the following section "model" is interpreted as meaning any kind of model (e.g. a UML model) or metamodel (e.g. the CWM metamodel). In any MDA approach there are essentially two kinds of models: initial models are created manually by human agents, while derived models are created automatically by programs. For example, an analyst may create a UML initial model from their observation of some loose business situation, while a Java model may be automatically derived from this UML model by a model transformation operation.
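As an illustration only (not an OMG-specified transformation), the step from an initial model to a derived model can be sketched as a small program; here a toy UML-like class model is mapped to Java source text (the model contents and the mapping rule are invented):

```python
# A toy platform-independent model: class name -> list of (attribute, type) pairs.
uml_model = {
    "Customer": [("name", "String"), ("balance", "double")],
}

def to_java(model):
    """Derive a platform-specific artifact (Java source text) from the
    initial model by one fixed, mechanical mapping rule."""
    units = []
    for cls, attrs in model.items():
        fields = "\n".join(f"    private {t} {n};" for n, t in attrs)
        units.append(f"public class {cls} {{\n{fields}\n}}")
    return "\n\n".join(units)

print(to_java(uml_model))
```

A real MDA toolchain would express this rule in a transformation language such as QVT rather than hand-written code, but the derived-model idea is the same.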
An MDA tool may be a tool used to check models for completeness, inconsistencies, or error and warning conditions.
Some tools perform more than one of the functions listed above. For example, some creation tools may also have transformation and test capabilities. There are other tools that are solely for creation, solely for graphical presentation, solely for transformation, etc.
Implementations of the OMG specifications come from private companies or open source groups. One important source of implementations for OMG specifications is the Eclipse Foundation (EF). Many implementations of OMG modeling standards may be found in the Eclipse Modeling Framework (EMF) or Graphical Modeling Framework (GMF); the Eclipse Foundation also develops other tools of various profiles, such as GMT. Eclipse's compliance to OMG specifications is often not strict. This is true for example for OMG's EMOF standard, which EMF approximates with its Ecore implementation. More examples may be found in the M2M project implementing the QVT standard or in the M2T project implementing the MOF2Text standard.
One should be careful not to confuse the List of MDA Tools and the List of UML tools, the former being much broader. This distinction can be made more general by distinguishing 'variable metamodel tools' and 'fixed metamodel tools'. A UML CASE tool is typically a 'fixed metamodel tool' since it has been hard-wired to work only with a given version of the UML metamodel (e.g. UML 2.1). On the contrary, other tools have internal generic capabilities allowing them to adapt to arbitrary metamodels or to a particular kind of metamodels.
Usually MDA tools focus on rudimentary architecture specification, although in some cases the tools are architecture-independent (or platform-independent).
Simple examples of architecture specifications include:
Selecting one of a number of supported reference architectures like Java EE or Microsoft .NET,
Specifying the architecture at a finer level including the choice of presentation layer technology, business logic layer technology, persistence technology and persistence mapping technology (e.g. object-relational mapper).
Metadata: information about data.
=== MDA concerns ===
Some key concepts that underpin the MDA approach (launched in 2001) were first elucidated by the Shlaer–Mellor method during the late 1980s. Indeed, a key technical standard absent from the MDA approach (an action language syntax for Executable UML) has been bridged by some vendors by adapting the original Shlaer–Mellor Action Language (modified for UML). However, during this period the MDA approach has not gained mainstream industry acceptance: the Gartner Group still identified MDA as an "on the rise" technology in its 2006 "Hype Cycle", and Forrester Research declared MDA to be "D.O.A." in 2006. Potential concerns that have been raised with the OMG MDA approach include:
Incomplete Standards: The MDA approach is underpinned by a variety of technical standards, some of which are yet to be specified (e.g. an action semantic language for xtUML), or are yet to be implemented in a standard manner (e.g. a QVT transformation engine or a PIM with a virtual execution environment).
Vendor Lock-in: Although MDA was conceived as an approach for achieving (technical) platform independence, current MDA vendors have been reluctant to engineer their MDA toolsets to be interoperable. Such an outcome could result in vendor lock-in for those pursuing an MDA approach.
Idealistic: MDA is conceived as a forward engineering approach in which models that incorporate Action Language programming are transformed into implementation artifacts (e.g. executable code, database schema) in one direction via a fully or partially automated "generation" step. This aligns with OMG's vision that MDA should allow modelling of a problem domain's full complexity in UML (and related standards) with subsequent transformation to a complete (executable) application. This approach does, however, imply that changes to implementation artifacts (e.g. database schema tuning) are not supported. This constitutes a problem in situations where such post-transformation "adapting" of implementation artifacts is seen to be necessary. Evidence that the full MDA approach may be too idealistic for some real world deployments has been seen in the rise of so-called "pragmatic MDA". Pragmatic MDA blends the literal standards from OMG's MDA with more traditional model driven approaches such as round-trip engineering that provides support for adapting implementation artifacts (though not without substantial disadvantages).
Specialised Skillsets: Practitioners of MDA based software engineering are (as with other toolsets) required to have a high level of expertise in their field. Current expert MDA practitioners (often referred to as Modeller/Architects) are scarce relative to the availability of traditional developers.
OMG Track Record: The OMG consortium, which sponsors the MDA approach (and owns the MDA trademark), also introduced and sponsored the CORBA standard, which itself failed to materialise as a widely utilised standard.
Uncertain Value Proposition (UVP): As discussed, the vision of MDA allows for the specification of a system as an abstract model, which may be realized as a concrete implementation (program) for a particular computing platform (e.g. .NET). Thus an application that has been successfully developed via a pure MDA approach could theoretically be ported to a newer release .NET platform (or even a Java platform) in a deterministic manner – although significant questions remain as to real-world practicalities during translation (such as user interface implementation). Whether this capability represents a significant value proposition remains a question for particular adopters. Regardless, adopters of MDA who are seeking value via an "alternative to programming" should be very careful when assessing this approach. The complexity of any given problem domain will always remain, and the programming of business logic needs to be undertaken in MDA as with any other approach. The difference with MDA is that the programming language used (e.g. xtUML) is more abstract (than, say, Java or C#) and exists interwoven with traditional UML artifacts (e.g. class diagrams). Whether programming in a language that is more abstract than mainstream 3GL languages will result in systems of better quality, cheaper cost or faster delivery, is a question that has yet to be adequately answered.
MDA was recognized as a possible way to bring various independently developed standardized solutions together. For the simulation community, it was recommended as a business- and industry-based alternative to yet another US DoD-mandated standard.
== See also ==
== References ==
== Further reading ==
Kevin Lano. "Model-Driven Software Development With UML and Java". CENGAGE Learning, ISBN 978-1-84480-952-3
David S. Frankel. Model Driven Architecture: Applying MDA to Enterprise Computing. John Wiley & Sons, ISBN 0-471-31920-1
Meghan Kiffer. The MDA Journal: Model Driven Architecture Straight From The Masters. ISBN 0-929652-25-8
Anneke Kleppe (2003). MDA Explained, The Model Driven Architecture: Practice and Promise. Addison-Wesley. ISBN 0-321-19442-X
Stephen J. Mellor (2004). MDA Distilled, Principles of Model Driven Architecture. Addison-Wesley Professional. ISBN 0-201-78891-8
Chris Raistrick. Model Driven Architecture With Executable UML. Cambridge University Press, ISBN 0-521-53771-1
Marco Brambilla, Jordi Cabot, Manuel Wimmer, Model Driven Software Engineering in Practice, foreword by Richard Soley (OMG Chairman), Morgan & Claypool, USA, 2012, Synthesis Lectures on Software Engineering #1. 182 pages. ISBN 9781608458820 (paperback), ISBN 9781608458837 (ebook). http://www.mdse-book.com
Stanley J. Sewall. Executive Justification for MDA
Soylu A., De Causmaecker Patrick. Merging model driven and ontology driven system development approaches pervasive computing perspective, in Proc 24th Intl Symposium on Computer and Information Sciences. 2009, pp 730–735.
== External links ==
OMG's MDA Web site
Model-Driven Software Development Course, B. Tekinerdogan, Bilkent University
Domain-specific modeling (DSM) is a software engineering methodology for designing and developing systems, such as computer software. It involves systematic use of a domain-specific language to represent the various facets of a system.
Domain-specific modeling languages tend to support higher-level abstractions than general-purpose modeling languages, so they require less effort and fewer low-level details to specify a given system.
== Overview ==
Domain-specific modeling often also includes the idea of code generation: automating the creation of executable source code directly from the domain-specific models. Being freed from the manual creation and maintenance of source code means domain-specific modeling can significantly improve developer productivity. The reliability of automatic generation compared to manual coding will also reduce the number of defects in the resulting programs, thus improving quality.
Domain-specific modeling differs from earlier code generation attempts in the CASE tools of the 1980s or UML tools of the 1990s. In both of these, the code generators and modeling languages were built by tool vendors. While it is possible for a tool vendor to create a domain-specific language and generators, it is more normal for domain-specific modeling to occur within one organization. One or a few expert developers create the modeling language and generators, and the rest of the developers use them.
Having the modeling language and generator built by the organization that will use them allows a tight fit with their exact domain and a rapid response to changes in the domain.
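The in-house generator idea can be sketched as follows (Python; the phone-menu model and all names are invented for illustration). An expert developer defines the model format and the generator once; other developers then only edit the model:

```python
# A tiny in-house domain-specific model for phone menus: menu name -> entries.
menu_model = {
    "Main": ["Messages", "Contacts", "Settings"],
}

def generate_menu_code(model):
    """Generate the repetitive UI dispatch code that developers would
    otherwise write and maintain by hand for every menu."""
    lines = []
    for menu, items in model.items():
        lines.append(f"def show_{menu.lower()}():")
        for i, item in enumerate(items, 1):
            lines.append(f"    print('{i}. {item}')")
    return "\n".join(lines)

code = generate_menu_code(menu_model)
ns = {}
exec(code, ns)      # the generated function becomes available in ns
ns["show_main"]()   # prints the numbered menu entries
```

Changing the domain (say, adding a menu) means editing the model, not the generated code, which is exactly the tight-fit advantage described above.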
Domain-specific languages can usually cover a range of abstraction levels for a particular domain. For example, a domain-specific modeling language for mobile phones could allow users to specify high-level abstractions for the user interface, as well as lower-level abstractions for storing data such as phone numbers or settings. Likewise, a domain-specific modeling language for financial services could permit users to specify high-level abstractions for clients, as well as lower-level abstractions for implementing stock and bond trading algorithms.
== Topics ==
=== Defining domain-specific languages ===
To define a language, one needs a language to write the definition in. The language of a model is often called a metamodel, hence the language for defining a modeling language is a meta-metamodel. Meta-metamodels can be divided into two groups: those that are derived from or customizations of existing languages, and those that have been developed specifically as meta-metamodels.
Derived meta-metamodels include entity–relationship diagrams, formal languages, extended Backus–Naur form (EBNF), ontology languages, XML schema, and Meta-Object Facility (MOF). The strengths of these languages tend to be in the familiarity and standardization of the original language.
The ethos of domain-specific modeling favors the creation of a new language for a specific task, and so there are unsurprisingly new languages designed as meta-metamodels. The most widely used family of such languages is that of OPRR, GOPRR, and GOPPRR, which focus on supporting things found in modeling languages with the minimum effort.
=== Tool support for domain-specific languages ===
Many general-purpose modeling languages already have tool support available in the form of CASE tools. Domain-specific modeling languages tend to have too small a market size to support the construction of a bespoke CASE tool from scratch. Instead, most tool support for domain-specific modeling languages is built based on existing frameworks or through domain-specific modeling environments.
A domain-specific modeling environment may be thought of as a metamodeling tool, i.e., a modeling tool used to define a modeling tool or CASE tool. The resulting tool may either work within the domain-specific modeling environment, or less commonly be produced as a separate stand-alone program. In the more common case, the domain-specific modeling environment supports an additional layer of abstraction when compared to a traditional CASE tool.
Using a domain-specific modeling environment can significantly lower the cost of obtaining tool support for a domain-specific language, since a well-designed environment will automate the creation of program parts that are costly to build from scratch, such as domain-specific editors, browsers and components. The domain expert only needs to specify the domain-specific constructs and rules, and the environment provides a modeling tool tailored for the target domain.
Most existing domain-specific modeling takes place in domain-specific modeling environments, either commercial such as MetaEdit+ or Actifsource, open source such as GEMS, or academic such as GME. The increasing popularity of domain-specific modeling has led to such frameworks being added to existing IDEs, e.g. the Eclipse Modeling Project (EMP) with EMF and GMF, or Microsoft's DSL Tools for Software Factories.
== Domain-specific modeling and UML ==
The Unified Modeling Language (UML) is a general-purpose modeling language for software-intensive systems that is designed to support mostly object-oriented programming. Consequently, in contrast to domain-specific modeling languages, UML is used for a wide variety of purposes across a broad range of domains. The primitives offered by UML are those of object-oriented programming, while domain-specific languages offer primitives whose semantics are familiar to all practitioners in that domain. For example, in the domain of automotive engineering, there will be software models to represent the properties of an anti-lock braking system, or a steering wheel, etc.
UML includes a profile mechanism that allows it to be constrained and customized for specific domains and platforms. UML profiles use stereotypes, stereotype attributes (known as tagged values before UML 2.0), and constraints to restrict and extend the scope of UML to a particular domain. Perhaps the best known example of customizing UML for a specific domain is SysML, a domain specific language for systems engineering.
UML is a popular choice for various model-driven development approaches whereby technical artifacts such as source code, documentation, tests, and more are generated algorithmically from a domain model. For instance, application profiles of the legal document standard Akoma Ntoso can be developed by representing legal concepts and ontologies in UML class objects.
== See also ==
Computer-aided software engineering
Domain-driven design
Domain-specific language
Framework-specific modeling language
General-purpose modeling
Domain-specific multimodeling
Model-driven engineering
Model-driven architecture
Software factories
Discipline-Specific Modeling
== References ==
== External links ==
Domain-specific modeling for generative software development Archived 2010-01-31 at the Wayback Machine, Web-article by Martijn Iseger, 2010
Domain Specific Modeling in IoC frameworks Web-article by Ke Jin, 2007
Domain-Specific Modeling for Full Code Generation from Methods & Tools Web-article by Juha-Pekka Tolvanen, 2005
Creating a Domain-Specific Modeling Language for an Existing Framework Web-article by Juha-Pekka Tolvanen, 2006
Model-based testing is an application of model-based design for designing and optionally also executing artifacts to perform software testing or system testing. Models can be used to represent the desired behavior of a system under test (SUT), or to represent testing strategies and a test environment. The picture on the right depicts the former approach.
A model describing a SUT is usually an abstract, partial presentation of the SUT's desired behavior.
Test cases derived from such a model are functional tests on the same level of abstraction as the model.
These test cases are collectively known as an abstract test suite.
An abstract test suite cannot be directly executed against an SUT because the suite is on the wrong level of abstraction.
An executable test suite needs to be derived from a corresponding abstract test suite.
The executable test suite can communicate directly with the system under test.
This is achieved by mapping the abstract test cases to concrete test cases suitable for execution. In some model-based testing environments, models contain enough information to generate executable test suites directly. In others, elements in the abstract test suite must be mapped to specific statements or method calls in the software to create a concrete test suite. This is called solving the "mapping problem".
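The mapping step can be sketched as follows (Python; the Account system under test and the action names are hypothetical): abstract actions are bound to concrete method calls through an explicit mapping table.

```python
# A hypothetical system under test.
class Account:
    def __init__(self):
        self.balance = 0
    def deposit(self, amount):
        self.balance += amount
    def withdraw(self, amount):
        self.balance -= amount

# An abstract test case: implementation-independent (action, argument) steps.
abstract_test = [("deposit", 100), ("withdraw", 30)]

# Solving the "mapping problem": bind each abstract action to a concrete call.
mapping = {"deposit": Account.deposit, "withdraw": Account.withdraw}

def run_concrete(abstract_case):
    sut = Account()
    for action, arg in abstract_case:
        mapping[action](sut, arg)   # concrete call derived from the abstract step
    return sut.balance

print(run_concrete(abstract_test))  # 70
```

The abstract suite stays stable while only the mapping table changes when the implementation's API changes.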
In the case of online testing (see below), abstract test suites exist only conceptually but not as explicit artifacts.
Tests can be derived from models in different ways. Because testing is usually experimental and based on heuristics, there is no known single best approach for test derivation. It is common to consolidate all test-derivation-related parameters into a package that is often known as "test requirements", "test purpose" or even "use case(s)". This package can contain information about those parts of a model that should be focused on, or the conditions for finishing testing (test stopping criteria).
Because test suites are derived from models and not from source code, model-based testing is usually seen as one form of black-box testing.
Model-based testing for complex software systems is still an evolving field.
== Models ==
Especially in Model Driven Engineering or in Object Management Group's (OMG's) model-driven architecture, models are built before or in parallel with the corresponding systems. Models can also be constructed from completed systems. Typical modeling languages for test generation include UML, SysML, mainstream programming languages, finite-state machine notations, and mathematical formalisms such as Z, B (Event-B), Alloy or Coq.
== Deploying model-based testing ==
There are various known ways to deploy model-based testing, which include online testing, offline generation of executable tests, and offline generation of manually deployable tests.
Online testing means that a model-based testing tool connects directly to an SUT and tests it dynamically.
Offline generation of executable tests means that a model-based testing tool generates test cases as computer-readable assets that can be later run automatically; for example, a collection of Python classes that embodies the generated testing logic.
Offline generation of manually deployable tests means that a model-based testing tool generates test cases as human-readable assets that can later assist in manual testing; for instance, a PDF document in a human language describing the generated test steps.
== Deriving tests algorithmically ==
The effectiveness of model-based testing is primarily due to the potential for automation it offers. If a model is machine-readable and formal to the extent that it has a well-defined behavioral interpretation, test cases can in principle be derived mechanically.
=== From finite-state machines ===
Often the model is translated to or interpreted as a finite-state automaton or a state transition system. This automaton represents the possible configurations of the system under test. To find test cases, the automaton is searched for executable paths. A possible execution path can serve as a test case. This method works if the model is deterministic or can be transformed into a deterministic one. Valuable off-nominal test cases may be obtained by leveraging unspecified transitions in these models.
Depending on the complexity of the system under test and the corresponding model the number of paths can be very large, because of the huge amount of possible configurations of the system. To find test cases that can cover an appropriate, but finite, number of paths, test criteria are needed to guide the selection. This technique was first proposed by Offutt and Abdurazik in the paper that started model-based testing. Multiple techniques for test case generation have been developed and are surveyed by Rushby. Test criteria are described in terms of general graphs in the testing textbook.
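A minimal sketch of path-based derivation (Python; the vending-machine FSM is invented for illustration): a breadth-first search enumerates input sequences up to a bounded length, and each executable path becomes one abstract test case. The length bound plays the role of a simple test selection criterion.

```python
from collections import deque

# A small deterministic FSM for a SUT: state -> {input: next_state}.
fsm = {
    "idle":       {"insert_coin": "paid"},
    "paid":       {"press_button": "dispensing", "refund": "idle"},
    "dispensing": {"take_item": "idle"},
}

def derive_tests(fsm, start, max_len=4):
    """Breadth-first enumeration of input sequences up to max_len;
    every path through the automaton is returned as an abstract test case."""
    tests, queue = [], deque([(start, [])])
    while queue:
        state, path = queue.popleft()
        for inp, nxt in fsm.get(state, {}).items():
            new_path = path + [inp]
            tests.append(new_path)
            if len(new_path) < max_len:
                queue.append((nxt, new_path))
    return tests

tests = derive_tests(fsm, "idle")
```

A real tool would prune this set with coverage criteria (e.g. all-transitions) instead of keeping every bounded path.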
=== Theorem proving ===
Theorem proving was originally used for automated proving of logical formulas. For model-based testing approaches, the system is modeled by a set of predicates, specifying the system's behavior. To derive test cases, the model is partitioned into equivalence classes over the valid interpretation of the set of the predicates describing the system under test. Each class describes a certain system behavior, and, therefore, can serve as a test case. The simplest partitioning is with the disjunctive normal form approach wherein the logical expressions describing the system's behavior are transformed into the disjunctive normal form.
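A truth-table sketch of the equivalence-class idea (Python; the predicates and the access rule are invented): each combination of predicate truth values forms one class, and the satisfied combinations correspond exactly to the disjuncts of a disjunctive normal form.

```python
from itertools import product

# Hypothetical system predicates over two boolean inputs.
predicates = ["logged_in", "is_admin"]

def access_granted(logged_in, is_admin):
    # Specification: access is granted iff the user is a logged-in admin.
    return logged_in and is_admin

# Each truth-value combination of the predicates is one equivalence class;
# enumerating them is the truth-table route to a DNF partitioning.
classes = []
for values in product([False, True], repeat=len(predicates)):
    assignment = dict(zip(predicates, values))
    classes.append((assignment, access_granted(*values)))

# One test case per class: inputs from the assignment, oracle from the spec.
for assignment, expected in classes:
    print(assignment, "->", expected)
```

Real approaches partition symbolically rather than enumerating, which matters once the predicate set is large.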
=== Constraint logic programming and symbolic execution ===
Constraint programming can be used to select test cases satisfying specific constraints by solving a set of constraints over a set of variables. The system is described by means of constraints. Solving the set of constraints can be done by Boolean solvers (e.g. SAT solvers based on the Boolean satisfiability problem) or by numerical analysis, such as Gaussian elimination. A solution found by solving the set of constraint formulas can serve as a test case for the corresponding system.
Constraint programming can be combined with symbolic execution. In this approach a system model is executed symbolically, i.e. collecting data constraints over different control paths, and then using the constraint programming method for solving the constraints and producing test cases.
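A sketch of the combined approach (Python; the path constraints are assumed to have been collected by symbolic execution of a hypothetical two-branch program, and a brute-force search over a small domain stands in for a real constraint or SMT solver):

```python
from itertools import product

# Path constraints collected by (hypothetical) symbolic execution of:
#   if x > 5:
#       if y < x: ...   # <- target path
path_constraints = [lambda x, y: x > 5, lambda x, y: y < x]

def solve(constraints, domain=range(0, 10)):
    """Brute-force constraint solving over a small finite domain;
    a production tool would hand the constraints to a SAT/SMT solver."""
    for x, y in product(domain, repeat=2):
        if all(c(x, y) for c in constraints):
            return {"x": x, "y": y}   # a concrete test input driving the target path
    return None

print(solve(path_constraints))  # {'x': 6, 'y': 0}
```

The returned assignment is a test input guaranteed to exercise the control path whose constraints were collected.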
=== Model checking ===
Model checkers can also be used for test case generation. Originally model checking was developed as a technique to check whether a property of a specification is valid in a model. When used for testing, a model of the system under test and a property to test are provided to the model checker. During the proof procedure, the model checker detects witnesses and counterexamples. A witness is a path where the property is satisfied, whereas a counterexample is a path in the execution of the model where the property is violated. These paths can again be used as test cases.
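The witness/counterexample idea can be sketched with a hand-rolled reachability check (Python; the transition system and state names are invented). A real model checker handles temporal-logic properties over much larger state spaces, but the returned path plays the same role as a generated test case:

```python
from collections import deque

# Transition system of the model under test (acyclic for simplicity).
transitions = {"init": ["running"], "running": ["done", "error"], "done": [], "error": []}

def counterexample(start, bad_states):
    """Check the safety property 'no bad state is reachable'. If it is
    violated, return the violating path; that path can be replayed
    against the SUT as a test case."""
    queue = deque([[start]])
    while queue:
        path = queue.popleft()
        if path[-1] in bad_states:
            return path
        for nxt in transitions[path[-1]]:
            queue.append(path + [nxt])
    return None   # property holds: no counterexample

print(counterexample("init", {"error"}))  # ['init', 'running', 'error']
```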
=== Test case generation by using a Markov chain test model ===
Markov chains are an efficient way to handle model-based testing. Test models realized with Markov chains can be understood as usage models: this is referred to as usage/statistical model-based testing. Usage models, i.e. Markov chains, are mainly constructed from two artifacts: the finite-state machine (FSM), which represents all possible usage scenarios of the tested system, and the operational profiles (OP), which qualify the FSM to represent how the system is or will be used statistically. The first (FSM) helps to know what can be or has been tested, and the second (OP) helps to derive operational test cases.
Usage/statistical model-based testing starts from the facts that it is not possible to exhaustively test a system and that failures can appear at a very low rate. This approach offers a pragmatic way to statistically derive test cases focused on improving the reliability of the system under test. Usage/statistical model-based testing was recently extended to be applicable to embedded software systems.
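A sketch of statistical test derivation (Python; the usage model and its probabilities are invented): the FSM and the operational profile are combined into one weighted transition table, and a random walk through the chain yields test cases that exercise frequent usage scenarios more often.

```python
import random

# Usage model: state -> list of (next_state, probability) from the operational profile.
usage_model = {
    "start":  [("browse", 0.7), ("search", 0.3)],
    "browse": [("buy", 0.2), ("exit", 0.8)],
    "search": [("buy", 0.5), ("exit", 0.5)],
    "buy":    [("exit", 1.0)],
}

def draw_test_case(rng, start="start", end="exit"):
    """Random walk through the Markov chain, weighted by the operational
    profile; each walk from start to end is one statistically drawn test case."""
    state, path = start, [start]
    while state != end:
        states, probs = zip(*usage_model[state])
        state = rng.choices(states, weights=probs)[0]
        path.append(state)
    return path

rng = random.Random(0)
print(draw_test_case(rng))
```

Drawing many such walks approximates the operational distribution, which is what ties test effort to expected field reliability.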
== See also ==
Domain-specific language
Domain-specific modeling
Model-driven architecture
Model-driven engineering
Object-oriented analysis and design
Time partition testing
== References ==
== Further reading ==
OMG UML 2 Testing Profile; [2]
Bringmann, E.; Krämer, A. (2008). "2008 International Conference on Software Testing, Verification, and Validation". 2008 International Conference on Software Testing, Verification, and Validation. International Conference on Software Testing, Verification, and Validation (ICST). pp. 485–493. CiteSeerX 10.1.1.729.8107. doi:10.1109/ICST.2008.45. ISBN 978-0-7695-3127-4.
Practical Model-Based Testing: A Tools Approach, Mark Utting and Bruno Legeard, ISBN 978-0-12-372501-1, Morgan-Kaufmann 2007.
Model-Based Software Testing and Analysis with C#, Jonathan Jacky, Margus Veanes, Colin Campbell, and Wolfram Schulte, ISBN 978-0-521-68761-4, Cambridge University Press 2008.
Model-Based Testing of Reactive Systems Advanced Lecture Series, LNCS 3472, Springer-Verlag, 2005. ISBN 978-3-540-26278-7.
Hong Zhu; et al. (2008). AST '08: Proceedings of the 3rd International Workshop on Automation of Software Test. ACM Press. ISBN 978-1-60558-030-2.
Santos-Neto, P.; Resende, R.; Pádua, C. (2007). "Proceedings of the 2007 ACM symposium on Applied computing - SAC '07". Proceedings of the 2007 ACM symposium on Applied computing - SAC '07. Symposium on Applied Computing. pp. 1409–1415. doi:10.1145/1244002.1244306. ISBN 978-1-59593-480-2.
Roodenrijs, E. (Spring 2010). "Model-Based Testing Adds Value". Methods & Tools. 18 (1): 33–39. ISSN 1661-402X.
A Systematic Review of Model Based Testing Tool Support, Muhammad Shafique, Yvan Labiche, Carleton University, Technical Report, May 2010.
Zander, Justyna; Schieferdecker, Ina; Mosterman, Pieter J., eds. (2011). Model-Based Testing for Embedded Systems. Computational Analysis, Synthesis, and Design of Dynamic Systems. Vol. 13. Boca Raton: CRC Press. ISBN 978-1-4398-1845-9.
2011/2012 Model-based Testing User Survey: Results and Analysis. Robert V. Binder. System Verification Associates, February 2012
EXPRESS is a standard data modeling language for product data. EXPRESS is formalized in the ISO Standard for the Exchange of Product model data, STEP (ISO 10303), and is standardized as ISO 10303-11.
== Overview ==
Data models formally define data objects and relationships among data objects for a domain of interest. Some typical applications of data models include supporting the development of databases and enabling the exchange of data for a particular area of interest. Data models are specified in a data modeling language. EXPRESS is a data modeling language defined in ISO 10303-11, the EXPRESS Language Reference Manual.
An EXPRESS data model can be defined in two ways, textually and graphically. For formal verification and as input for tools such as SDAI the textual representation within an ASCII file is the most important one. The graphical representation on the other hand is often more suitable for human use such as explanation and tutorials. The graphical representation, called EXPRESS-G, is not able to represent all details that can be formulated in the textual form.
EXPRESS is similar to programming languages such as Pascal. Within a SCHEMA various datatypes can be defined together with structural constraints and algorithmic rules. A main feature of EXPRESS is the possibility to formally validate a population of datatypes, that is, to check it against all the structural and algorithmic rules.
=== EXPRESS-G ===
EXPRESS-G is a standard graphical notation for information models. It is a companion to the EXPRESS language for displaying entity and type definitions, relationships and cardinality. This graphical notation supports a subset of the EXPRESS language. One of the advantages of using EXPRESS-G over EXPRESS is that the structure of a data model can be presented in a more understandable manner. A disadvantage of EXPRESS-G is that complex constraints cannot be formally specified. Figure 1 is an example. The data model presented in the figure could be used to specify the requirements of a database for an audio compact disc (CD) collection.
== Simple example ==
A simple EXPRESS data model looks like fig 2, and the code like this:
SCHEMA Family;
ENTITY Person
ABSTRACT SUPERTYPE OF (ONEOF (Male, Female));
name: STRING;
mother: OPTIONAL Female;
father: OPTIONAL Male;
END_ENTITY;
ENTITY Female
SUBTYPE OF (Person);
END_ENTITY;
ENTITY Male
SUBTYPE of (Person);
END_ENTITY;
END_SCHEMA;
The data model is enclosed within the EXPRESS schema Family. It contains a supertype entity Person with the two subtypes Male and Female. Since Person is declared to be ABSTRACT only occurrences of either (ONEOF) the subtype Male or Female can exist. Every occurrence of a person has a mandatory name attribute and optionally attributes mother and father. There is a fixed style of reading for attributes of some entity type:
a Female can play the role of mother for a Person
a Male can play the role of father for a Person
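One way to read the schema is to map it onto classes in a programming language. The following Python sketch mirrors the abstract supertype and the optional attributes; it is an illustration of the structure only, not a standard EXPRESS language binding:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Person:
    """Supertype: declared ABSTRACT, so only Male or Female may be created."""
    name: str                            # mandatory attribute
    mother: Optional["Female"] = None    # OPTIONAL attribute
    father: Optional["Male"] = None      # OPTIONAL attribute

    def __post_init__(self):
        if type(self) is Person:         # emulate ABSTRACT SUPERTYPE
            raise TypeError("Person is abstract; instantiate Male or Female")

@dataclass
class Female(Person):
    pass

@dataclass
class Male(Person):
    pass

# A Female plays the role of mother for a Person,
# a Male plays the role of father for a Person:
eve = Female(name="Eve")
adam = Male(name="Adam")
cain = Male(name="Cain", mother=eve, father=adam)
```

Note that the ONEOF constraint (an instance is either Male or Female, never both) falls out of single inheritance here; in EXPRESS it must be stated explicitly because complex combinations of subtypes are otherwise allowed.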
== EXPRESS Building blocks ==
=== Datatypes ===
EXPRESS offers a series of datatypes, with specific data type symbols of the EXPRESS-G notation:
Entity data type: This is the most important datatype in EXPRESS. It is covered below in more detail. Entity datatypes can be related in two ways, in a sub-supertype tree and/or by attributes.
Enumeration data type: Enumeration values are simple strings such as red, green, and blue for an rgb-enumeration. In the case that an enumeration type is declared extensible it can be extended in other schemas.
Defined data type: This further specializes other datatypes—e.g., define a datatype positive that is of type integer with a value > 0.
Select data type: Selects define a choice or an alternative between different options. Most commonly used are selects between different entity types. More rare are selects that include defined types. In the case that a select type is declared extensible, it can be extended in other schemas.
Simple data type
String: This is the most often used simple type. EXPRESS strings can be of any length and can contain any character (ISO 10646/Unicode).
Binary: This data type is only very rarely used. It covers a number of bits (not bytes). For some implementations the size is limited to 32 bit.
Logical: Similar to the Boolean datatype a logical has the possible values TRUE and FALSE and in addition UNKNOWN.
Boolean: With the Boolean values TRUE and FALSE.
Number: The number data type is a supertype of both integer and real. Most implementations use a double type to represent a real_type, even if the actual value is an integer.
Integer: EXPRESS integers can in principle have any length, but most implementations restrict them to a signed 32 bit value.
Real: Ideally an EXPRESS real value is unlimited in accuracy and size. But in practice a real value is represented by a floating point value of type double.
Aggregation data type: The possible kinds of aggregation_types are SET, BAG, LIST and ARRAY. While SET and BAG are unordered, LIST and ARRAY are ordered. A BAG may contain a particular value more than once, this is not allowed for SET. An ARRAY is the only aggregate that may contain unset members. This is not possible for SET, LIST, BAG. The members of an aggregate may be of any other data type.
A few general remarks apply to datatypes.
Constructed datatypes can be defined within an EXPRESS schema. They are mainly used to define entities, and to specify the type of entity attributes and aggregate members.
Datatypes can be used in a recursive way to build up more and more complex data types. E.g. it is possible to define a LIST of an ARRAY of a SELECT of either some entities or other datatypes. Whether it makes sense to define such datatypes is a different question.
EXPRESS defines a couple of rules specifying how a datatype can be further specialized. This is important for re-declared attributes of entities.
GENERIC data types can be used for procedures, functions and abstract entities.
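A few of these datatypes can be illustrated in a short schema fragment, written in the style of the Family example above. This is a hypothetical sketch, not taken from any STEP part:

```
SCHEMA datatype_examples;

TYPE positive = INTEGER;                          -- defined data type
WHERE
  not_negative : SELF > 0;
END_TYPE;

TYPE colour = ENUMERATION OF (red, green, blue);  -- enumeration data type
END_TYPE;

TYPE label = STRING;                              -- defined data type
END_TYPE;

TYPE person_or_label = SELECT (Person, label);    -- select data type
END_TYPE;

ENTITY Person;
  name    : label;
  friends : SET [0:?] OF Person;                  -- aggregation data type
END_ENTITY;

END_SCHEMA;
```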
=== Entity-Attribute ===
Entity attributes allow adding "properties" to entities and relating one entity with another in a specific role. The name of the attribute specifies the role. Most datatypes can directly serve as the type of an attribute. This includes aggregation as well.
There are three different kinds of attributes: explicit, derived and inverse attributes. All of these can be re-declared in a subtype. In addition an explicit attribute can be re-declared as derived in a subtype. No other change of the kind of an attribute is possible.
Explicit attributes are those with direct values visible in a STEP-File.
Derived attributes get their values from an expression. In most cases the expression refers to other attributes of THIS instance. The expression may also use EXPRESS functions.
Inverse attributes do not add "information" to an entity, but only name and constrain an explicit attribute to an entity from the other end.
Specific attribute symbols of the EXPRESS-G notation:
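The three kinds of attributes can be shown together in a small fragment. This is a hypothetical sketch following the style of the Family schema; the entity and attribute names are invented:

```
ENTITY circle;
  centre : point;                             -- explicit attribute
  radius : REAL;                              -- explicit attribute
DERIVE
  area   : REAL := PI * radius ** 2;          -- derived attribute
END_ENTITY;

ENTITY point;
  x, y : REAL;
INVERSE
  used_by : SET [0:?] OF circle FOR centre;   -- inverse attribute
END_ENTITY;
```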
=== Supertypes and subtypes ===
An entity can be defined to be a subtype of one or several other entities (multiple inheritance is allowed!). A supertype can have any number of subtypes. It is very common practice in STEP to build very complex sub-supertype graphs. Some graphs relate 100 and more entities with each other.
An entity instance can be constructed for either a single entity (if not abstract) or for a complex combination of entities in such a sub-supertype graph. For the big graphs the number of possible combinations is likely to grow astronomically. To restrict the possible combinations, special supertype constraints were introduced, such as ONEOF and TOTALOVER. Furthermore, an entity can be declared to be abstract to enforce that no instance can be constructed of just this entity, but only of a combination that contains a non-abstract subtype.
=== Algorithmic constraints ===
Entities and defined data types may be further constrained with WHERE rules. WHERE rules are also part of global rules. A WHERE rule is an expression which must evaluate to TRUE, otherwise a population of an EXPRESS schema is not valid. Like derived attributes, these expressions may invoke EXPRESS functions, which may further invoke EXPRESS procedures. The functions and procedures allow formulating complex statements with local variables, parameters and constants, very similar to a programming language.
The EXPRESS language can describe local and global rules.
For example, a WHERE rule on the area_unit entity requires that an area unit have the dimension of a squared length: the attribute dimensions.length_exponent must be equal to 2 and all the other exponents of the basic SI units must be 0. Another example is a rule on a day-of-week type stating that its value cannot exceed 7. In this way, rules can be attached to entities and defined types. More details on these examples can be found in ISO 10303-41.
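A defined type with such a rule might look like the following sketch, modeled on ISO 10303-41 (the exact form in the standard may differ in detail):

```
TYPE day_in_week_number = INTEGER;
WHERE
  wr1 : (1 <= SELF) AND (SELF <= 7);
END_TYPE;
```

A population containing a day_in_week_number of 9 would violate wr1 and therefore be invalid.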
== See also ==
ISO related subjects
ISO 10303: ISO standard for the computer-interpretable representation and exchange of industrial product data.
ISO 10303-21: Data exchange form of STEP with an ASCII structure
ISO 10303-22: Standard data access interface, part of the implementation methods of STEP
ISO 10303-28: STEP-XML specifies the use of the Extensible Markup Language (XML) to represent EXPRESS schema
ISO 13584-24: The logical model of PLIB is specified in EXPRESS
ISO 13399: ISO standard for cutting tool data representation and exchange
ISO/PAS 16739: Industry Foundation Classes is specified in EXPRESS
List of STEP (ISO 10303) parts
Other related subjects
CAD data exchange
EDIF: Electronic Design Interchange Format
Diagram
General-purpose modeling
Modeling language
Wirth syntax notation
DOT (graph description language)
== References ==
This article incorporates public domain material from the National Institute of Standards and Technology
== Further reading ==
ISO 10303, the main page for STEP, the Standard for the Exchange of Product model data
Douglas A. Schenck and Peter R. Wilson, Information Modeling the EXPRESS Way, Oxford University Press, 1993, ISBN 978-0-19-508714-7
EXPRESS Language Foundation, an organization devoted to promoting the EXPRESS language family
A model transformation, in model-driven engineering, is an automated way of modifying and creating models, for example deriving platform-specific models from platform-independent ones. An example use of model transformation is ensuring that a family of models is consistent, in a precise sense which the software engineer can define. The aim of using a model transformation is to save effort and reduce errors by automating the building and modification of models where possible.
== Overview ==
Model transformations can be thought of as programs that take models as input. There is a wide variety of kinds of model transformation and uses of them, which differ in their inputs and outputs and also in the way they are expressed.
A model transformation usually specifies which models are acceptable as input, and if appropriate what models it may produce as output, by specifying the metamodel to which a model must conform.
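Conformance checking can be sketched with models and a metamodel represented as plain dictionaries. This is a deliberately simplified illustration; real tools work against richer metamodelling frameworks such as MOF or Ecore:

```python
# Toy metamodel: each metaclass lists its required attributes and their types.
METAMODEL = {
    "Class":     {"name": str, "attributes": list},
    "Attribute": {"name": str, "type": str},
}

def conforms(element, metamodel):
    """Check that a model element conforms to its declared metaclass:
    the metaclass must exist, and every required attribute must be
    present with a value of the expected type."""
    kind = element.get("kind")
    if kind not in metamodel:
        return False
    return all(isinstance(element.get(attr), ty)
               for attr, ty in metamodel[kind].items())

# A transformation would accept the first element as input and reject the
# second, whose "name" value does not have the type the metamodel requires.
ok  = conforms({"kind": "Class", "name": "Person", "attributes": []}, METAMODEL)
bad = conforms({"kind": "Class", "name": 42,       "attributes": []}, METAMODEL)
```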
== Classification of model transformations ==
Model transformations and languages for them have been classified in many ways.
Some of the more common distinctions drawn are:
=== Number and type of inputs and outputs ===
In principle a model transformation may have many inputs and outputs of various types; the only absolute limitation is that a model transformation will take at least one model as input. However, a model transformation that did not produce any model as output would more commonly be called a model analysis or model query.
=== Endogenous versus exogenous ===
Endogenous transformations are transformations between models expressed in the same language. Exogenous transformations are transformations between models expressed using different languages. For example, in a process conforming to the OMG Model Driven Architecture, a platform-independent model might be transformed into a platform-specific model by an exogenous model transformation.
=== Unidirectional versus bidirectional ===
A unidirectional model transformation has only one mode of execution: that is, it always takes the same type of input and produces the same type of output. Unidirectional model transformations are useful in compilation-like situations, where any output model is read-only. The relevant notion of consistency is then very simple: the input model is consistent only with the model that the transformation would produce from it as output.
For a bidirectional model transformation, the same type of model can sometimes be input and other times be output. Bidirectional transformations are necessary in situations where people are working on more than one model and the models must be kept consistent. Then a change to either model might necessitate a change to the other, in order to maintain consistency between the models. Because each model can incorporate information which is not reflected in the other, there may be many models which are consistent with a given model. Important special cases are:
bijective transformations, in which there is exactly one model which is consistent with any given model; that is, the consistency relation is bijective. A pair of models is consistent if and only if it is related by the consistency bijection. Both models contain the same information, but presented differently.
view transformations, in which a concrete model determines a single view model, but the same view model might be produced from many different concrete models. The view model is an abstraction of the concrete model. If the view may be updated, a bidirectional transformation is needed. This situation is known in the database field as view-update. Any concrete model is consistent with its view.
It is particularly important that a bidirectional model transformation has appropriate properties to make it behave sensibly: for example, not making changes unnecessarily, or discarding deliberately made changes.
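The view-transformation case can be sketched as a get/put pair of functions, in the style of a lens. The model shapes here are invented for the illustration:

```python
# Concrete model: the full record; view: the abstraction one user works on.
def get(concrete):
    """Forward direction: compute the view from the concrete model."""
    return {"name": concrete["name"]}

def put(concrete, view):
    """Backward direction: fold an edited view back into the concrete
    model, preserving the information the view does not show."""
    updated = dict(concrete)
    updated["name"] = view["name"]
    return updated

c = {"name": "Order", "internal_id": 17}
v = get(c)

# Well-behavedness: putting back an unchanged view changes nothing...
assert put(c, v) == c
# ...and the view of an updated model reflects exactly the edit made.
assert get(put(c, {"name": "Invoice"})) == {"name": "Invoice"}
```

The two assertions correspond to the "not making changes unnecessarily" and "not discarding deliberately made changes" properties mentioned above.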
== Languages for model transformations ==
A model transformation may be written in a general purpose programming language, but specialised model transformation languages are also available. Bidirectional transformations, in particular, are best written in a language that ensures the directions are appropriately related. The OMG-standardised model transformation languages are collectively known as QVT.
In some model transformation languages, for example the QVT languages, a model transformation is itself a model, that is, it conforms to a metamodel which is part of the model transformation language's definition. This facilitates the definition of Higher Order Transformations (HOTs), i.e. transformations which have other transformations as input and/or output.
== See also ==
Model-driven engineering (MDE)
Model-driven architecture (MDA)
Domain-specific language (DSL)
Model transformation language
Refinement
Transformation (disambiguation)
Program transformation
Data transformation
Graph transformation
== References ==
== Further reading ==
Model Driven Software Engineering in Practice, Marco Brambilla, Jordi Cabot, Manuel Wimmer, foreword by Richard Soley (OMG Chairman), Morgan & Claypool, USA, 2012, Synthesis Lectures on Software Engineering #1. 182 pages. ISBN 9781608458820 (paperback), ISBN 9781608458837 (ebook) http://www.mdse-book.com
The general algebraic modeling system (GAMS) is a high-level modeling system for mathematical optimization. GAMS is designed for modeling and solving linear, nonlinear, and mixed-integer optimization problems. The system is tailored for complex, large-scale modeling applications and allows the user to build large maintainable models that can be adapted to new situations. The system is available for use on various computer platforms. Models are portable from one platform to another.
GAMS was the first algebraic modeling language (AML) and is formally similar to commonly used fourth-generation programming languages. GAMS contains an integrated development environment (IDE) and is connected to a group of third-party optimization solvers. Among these solvers are BARON, COIN-OR solvers, CONOPT, COPT Cardinal Optimizer, CPLEX, DICOPT, IPOPT, MOSEK, SNOPT, and XPRESS.
GAMS allows users to implement hybrid algorithms combining different solvers. Models are described in concise, human-readable algebraic statements. GAMS is among the most popular input formats for the NEOS Server. Although initially designed for applications related to economics and management science, it has a community of users from various backgrounds of engineering and science.
== Timeline ==
1976 GAMS idea is presented at the International Symposium on Mathematical Programming (ISMP), Budapest
1978 Phase I: GAMS supports linear programming. Supported platforms: Mainframes and Unix Workstations
1979 Phase II: GAMS supports nonlinear programming.
1987 GAMS becomes a commercial product
1988 First PC System (16 bit)
1988 Alex Meeraus, the initiator of GAMS and founder of GAMS Development Corporation, is awarded INFORMS Computing Society Prize
1990 32 bit Dos Extender
1990 GAMS moves to Georgetown, Washington, D.C.
1991 Mixed Integer Non-Linear Programs capability (DICOPT)
1994 GAMS supports mixed complementarity problems
1995 MPSGE language is added for CGE modeling
1996 European branch opens in Germany
1998 32 bit native Windows
1998 Stochastic programming capability (OSL/SE, DECIS)
1999 Introduction of the GAMS Integrated development environment (IDE)
2000 End of support for DOS & Win 3.11
2000 GAMS World initiative started
2001 GAMS Data Exchange (GDX) is introduced
2002 GAMS is listed in OR/MS 50th Anniversary list of milestones
2003 Conic programming is added
2003 Global optimization in GAMS
2004 Quality assurance initiative starts
2004 Support for Quadratic Constrained programs
2005 Support for 64 bit PC Operating systems (Mac PowerPC / Linux / Win)
2006 GAMS supports parallel grid computing
2007 GAMS supports open-source solvers from COIN-OR
2007 Support for Solaris on Sparc64
2008 Support for 32 and 64 bit Mac OS X
2009 GAMS available on the Amazon Elastic Compute Cloud
2009 GAMS supports extended mathematical programs (EMP)
2010 GAMS is awarded the company award of the German Society of Operations Research (GOR)
2010 GDXMRW interface between GAMS and Matlab
2010 End of support for Mac PowerPC / Dec Alpha / SGI IRIX / HP-9000/HP-UX
2011 Support for Extrinsic Function Libraries
2011 End of support for Win95 / 98 / ME, and Win2000
2012 The Winners of the 2012 INFORMS Impact Prize included Alexander Meeraus. The prize was awarded to the originators of the five most important algebraic modeling languages.
2012 Introduction of Object Oriented API for .NET, Java, and Python
2012 The winners of the 2012 Coin OR Cup included Michael Bussieck, Steven Dirkse, & Stefan Vigerske for GAMSlinks
2012 End of support for 32 bit on Mac OS X
2013 Support for distributed MIP (Cplex)
2013 Stochastic programming extension of GAMS EMP
2013 GDXRRW interface between GAMS and R
2014 Local search solver LocalSolver added to solver portfolio
2014 End of support for 32 bit Linux and 32 bit Solaris
2015 LaTeX documentation from GAMS source (Model2TeX)
2015 End of support for Win XP
2016 New Management Team
2017 EmbeddedCode Facility
2017 C++ API
2017 Introduction of "Core" and "Peripheral" platforms
2018 GAMS Studio (Beta)
2018 End of support for x86-64 Solaris
2019 GAMS MIRO - Model Interface with Rapid Orchestration (Beta)
2019 End of support for Win7, moved 32 bit Windows to peripheral platforms
2019 Altered versioning scheme to XX.Y.Z
2020 Introduction of demo and community licensing scheme
2020 Official release of GAMS MIRO (Model Interface with Rapid Orchestration) for deployment of GAMS models as interactive applications
2021 Official release of GAMS Engine, the new solution for running GAMS jobs in cloud environments
2022 Official release of GAMS Engine SaaS, the hosted version of GAMS Engine
2023 Release of GAMSPy, a Python package that allows algebraic modelling in Python, using GAMS as a backend
2024 ISO27001 certification
2024 Purchase of CONOPT non-linear Solver IP by GAMS
== Background ==
The driving forces behind the development of GAMS were the users of mathematical programming who believed in optimization as a powerful and elegant framework for solving real life problems in science and engineering. At the same time, these users were frustrated by high costs, skill requirements, and an overall low reliability of applying optimization tools. Most of the system's initiatives and support for new development arose in response to problems in the fields of economics, finance, and chemical engineering, since these disciplines view and understand the world as a mathematical program.
GAMS’s impetus for development arose from the frustrating experience of a large economic modeling group at the World Bank. In hindsight, one may call it a historic accident that in the 1970s mathematical economists and statisticians were assembled to address problems of development. They used the best techniques available at that time to solve multi-sector economy-wide models and large simulation and optimization models in agriculture, steel, fertilizer, power, water use, and other sectors. Although the group produced impressive research, initial success was difficult to reproduce outside their well functioning research environment. The existing techniques to construct, manipulate, and solve such models required several manual, time-consuming, and error-prone translations into different, problem-specific representations required by each solution method. During seminar presentations, modelers had to defend the existing versions of their models, sometimes quite irrationally, because of time and money considerations. Their models just could not be moved to other environments, because special programming knowledge was needed, and data formats and solution methods were not portable.
The idea of an algebraic approach to represent, manipulate, and solve large-scale mathematical models fused old and new paradigms into a consistent and computationally tractable system. Using generator matrices for linear programs revealed the importance of naming rows and columns in a consistent manner. The connection to the emerging relational data model became evident. Experience using traditional programming languages to manage those name spaces naturally led one to think in terms of sets and tuples, and this led to the relational data model.
Combining multi-dimensional algebraic notation with the relational data model was the obvious answer. Compiler writing techniques were by now widespread, and languages like GAMS could be implemented relatively quickly. However, translating this rigorous mathematical representation into the algorithm-specific format required the computation of partial derivatives on very large systems. In the 1970s, TRW developed a system called PROSE that took the ideas of chemical engineers to compute point derivatives that were exact derivatives at a given point, and to embed them in a consistent, Fortran-style calculus modeling language. The resulting system allowed the user to use automatically generated exact first and second order derivatives. This was a pioneering system and an important demonstration of a concept. However, PROSE had a number of shortcomings: it could not handle large systems, problem representation was tied to an array-type data structure that required address calculations, and the system did not provide access to state-of-the art solution methods. From linear programming, GAMS learned that exploitation of sparsity was key to solving large problems. Thus, the final piece of the puzzle was the use of sparse data structures.
Lines starting with an * in column one are treated as comments.
== A sample model ==
A transportation problem from George Dantzig is used to provide a sample GAMS model. This model is part of the model library which contains many more complete GAMS models. This problem finds a least cost shipping schedule that meets requirements at markets and supplies at factories.
Dantzig, G B, Chapter 3.3. In Linear Programming and Extensions. Princeton University Press, Princeton, New Jersey, 1963.
Sets
i canning plants / seattle, san-diego /
j markets / new-york, chicago, topeka / ;
Parameters
a(i) capacity of plant i in cases
/ seattle 350
san-diego 600 /
b(j) demand at market j in cases
/ new-york 325
chicago 300
topeka 275 / ;
Table d(i,j) distance in thousands of miles
new-york chicago topeka
seattle 2.5 1.7 1.8
san-diego 2.5 1.8 1.4 ;
Scalar f freight in dollars per case per thousand miles /90/ ;
Parameter c(i,j) transport cost in thousands of dollars per case ;
c(i,j) = f * d(i,j) / 1000 ;
Variables
x(i,j) shipment quantities in cases
z total transportation costs in thousands of dollars ;
Positive Variable x ;
Equations
cost define objective function
supply(i) observe supply limit at plant i
demand(j) satisfy demand at market j ;
cost .. z =e= sum((i,j), c(i,j)*x(i,j)) ;
supply(i) .. sum(j, x(i,j)) =l= a(i) ;
demand(j) .. sum(i, x(i,j)) =g= b(j) ;
Model transport /all/ ;
Solve transport using lp minimizing z ;
Display x.l, x.m ;
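For comparison, the same transportation problem can be solved outside GAMS with a general LP routine. The following Python sketch assumes SciPy is installed; the flattened variable ordering is an arbitrary choice of this example, not anything prescribed by GAMS:

```python
# Cross-check of the GAMS trnsport model using scipy.optimize.linprog.
from scipy.optimize import linprog

plants  = ["seattle", "san-diego"]
markets = ["new-york", "chicago", "topeka"]
a = [350, 600]                       # capacity of plant i in cases
b = [325, 300, 275]                  # demand at market j in cases
d = [[2.5, 1.7, 1.8],                # distance in thousands of miles
     [2.5, 1.8, 1.4]]
f = 90                               # freight in dollars per case per thousand miles

# Transport cost in thousands of dollars per case, flattened row by row,
# so x = [x(seattle,new-york), x(seattle,chicago), ..., x(san-diego,topeka)].
c = [f * dij / 1000 for di in d for dij in di]

A_ub, b_ub = [], []
for i in range(len(plants)):         # supply(i): sum_j x(i,j) <= a(i)
    A_ub.append([1 if k // 3 == i else 0 for k in range(6)])
    b_ub.append(a[i])
for j in range(len(markets)):        # demand(j): sum_i x(i,j) >= b(j)
    A_ub.append([-1 if k % 3 == j else 0 for k in range(6)])
    b_ub.append(-b[j])

res = linprog(c, A_ub=A_ub, b_ub=b_ub)   # x >= 0 is linprog's default bound
print(round(res.fun, 3))             # total cost z in thousands of dollars
```

The objective value agrees with the z reported by GAMS for this model.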
== Subsystems ==
The Mathematical Programming System for General Equilibrium analysis (MPSGE) is a language used for formulating and solving Arrow–Debreu economic equilibrium models and exists as a subsystem within GAMS.
== See also ==
Extended Mathematical Programming (EMP) – an extension to mathematical programming languages available within GAMS
GNU MathProg – an open-source mathematical programming language based on AMPL
== References ==
== External links ==
GAMS Development Corporation
GAMS Software GmbH
GAMS World
Service-oriented modeling is the discipline of modeling business and software systems, for the purpose of designing and specifying service-oriented business systems within a variety of architectural styles and paradigms, such as application architecture, service-oriented architecture, microservices, and cloud computing.
Any service-oriented modeling method typically includes a modeling language that can be employed by both the "problem domain organization" (the business), and "solution domain organization" (the information technology department), whose unique perspectives typically influence the service development life-cycle strategy and the projects implemented using that strategy.
Service-oriented modeling typically strives to create models that provide a comprehensive view of the analysis, design, and architecture of all software entities in an organization, which can be understood by individuals with diverse levels of business and technical understanding. Service-oriented modeling typically encourages viewing software entities as "assets" (service-oriented assets), and refers to these assets collectively as "services." A key service design concern is to find the right service granularity both on the business (domain) level and on a technical (interface contract) level.
== Popular approaches ==
Several approaches have been proposed specifically for designing and modeling services, including SDDM, SOMA and SOMF.
=== Service-oriented design and development methodology ===
Service-oriented design and development methodology (SDDM) is a fusion method created and compiled by M. Papazoglou and W.J. van den Heuvel. The paper argues that SOA designers and service developers cannot be expected to oversee a complex service-oriented development project without relying on a sound design and development methodology. It provides an overview of the methods and techniques used in service-oriented design, approaches the service development methodology from the point of view of both service producers and requesters, and reviews the range of SDDM elements that are available to these roles.
An update to SDDM was later published in Web Services and SOA: Principles and Technology by M. Papazoglou.
=== Service-oriented modeling and architecture ===
IBM announced service-oriented modeling and architecture (SOMA) as its SOA-related methodology in 2004 and published parts of it subsequently. SOMA refers to the more general domain of service modeling necessary to design and create SOA. SOMA covers a broader scope and implements service-oriented analysis and design (SOAD) through the identification, specification and realization of services, components that realize those services (a.k.a. "service components"), and flows that can be used to compose services.
SOMA includes an analysis and design method that extends traditional object-oriented and component-based analysis and design methods to include concerns relevant to and supporting SOA.
SOMA is an end-to-end SOA method for the identification, specification, realization and implementation of services (including information services), components, and flows (processes/composition). SOMA builds on current techniques in areas such as domain analysis, functional area grouping, variability-oriented analysis (VOA), process modeling, component-based development, object-oriented analysis and design, and use case modeling. SOMA introduces new techniques such as goal-service modeling, service model creation and a service litmus test to help determine the granularity of a service.
SOMA identifies services, component boundaries, flows, compositions, and information through complementary techniques which include domain decomposition, goal-service modeling and existing asset analysis.
The service lifecycle in SOMA consists of the phases of identification, specification, realization, implementation, deployment and management in which the fundamental building blocks of SOA are identified then refined and implemented in each phase. The fundamental building blocks of SOA consist of services, components, flows and related to them, information, policy and contracts.
=== Service-oriented modeling framework ===
Service-oriented modeling framework (SOMF) has been devised by author Michael Bell as a holistic and anthropomorphic modeling language for software development that employs disciplines and a universal language to provide tactical and strategic solutions to enterprise problems. The term "holistic language" pertains to a modeling language that can be employed to design any application, business and technological environment, either local distributed, or federated. This universality may include design of application-level and enterprise-level solutions, including SOA landscapes, cloud computing, or big data environments. The term "anthropomorphic", on the other hand, affiliates the SOMF language with intuitiveness of implementation and simplicity of usage.
==== Discipline-specific modeling process ====
SOMF is a service-oriented development life cycle methodology, a discipline-specific modeling process. It offers a number of modeling practices and related disciplines that contribute to successful service-oriented life cycle development and modeling during a project. Its major elements identify the "what to do" aspects of a service development scheme: these are the modeling pillars that enable practitioners to craft an effective project plan and to identify the milestones of a service-oriented initiative, whether a small- or large-scale business or technological venture.
==== SOMF building blocks ====
Three SOMF building blocks drive the service-oriented modeling process:
Practices and Modeling Environments. These are the two overlapping Abstraction and Realization Practices that are implemented in three service-oriented modeling environments: Conceptual Environment, Analysis Environment, and Logical Environment.
Modeling Disciplines. Each service-oriented modeling environment is driven by a related discipline: Conceptual Architecture Discipline, Service Discovery & Analysis Discipline, and Logical Architecture Discipline.
Artifacts. This SOMF segment identifies the chief artifacts required for each modeling environment.
== See also ==
== References ==
== Further reading ==
Ali Arsanjani et al. (2008). "SOMA: A method for developing service-oriented solutions". IBM Systems Journal, October 2008
Michael Bell (2008). Service-Oriented Modeling: Service Analysis, Design, and Architecture. Wiley.
Birol Berkem (2008). "From The Business Motivation Model (BMM) To Service Oriented Architecture (SOA)". In: Journal of Object Technology, Vol. 7, No. 8
M. Brian Blake (2007). "Decomposing Composition: Service-Oriented Software Engineers". In: IEEE Software. Nov/Dec 2007. pp. 68–77.
Michael P. Papazoglou, Web Services - Principles and Technology. Prentice Hall 2008, ISBN 978-0-321-15555-9
Dick A. Quartel, Maarten W. Steen, Stanislav Pokraev, Marten J. Sinderen, COSMO: A conceptual framework for service modelling and refinement, Information Systems Frontiers, v.9 n.2-3, p. 225–244, July 2007
Luba Cherbakov et al. (2006). "SOA in action inside IBM, Part 1: SOA case studies". IBM developerWorks
== External links ==
Elements of Service-Oriented Analysis and Design, IBM developerWorks Web services zone, June 2004
"Service-Oriented Design and Development Methodology" (IJWET paper). Inderscience Enterprises Ltd.
"Service-oriented modeling and architecture: How to identify, specify, and realize services for your SOA" (Softcopy). IBM Corporation.
"SOMF 2.1 Service-Oriented Conceptualization Model Specifications" (PDF). Methodologies Corporation. Archived from the original (Softcopy) on 2012-04-17. Retrieved 2011-02-08.
"SOMF Examples & Language Notation" (Softcopy). Methodologies Corporation. | Wikipedia/Service-oriented_modeling |
Business Process Model and Notation (BPMN) is a graphical representation for specifying business processes in a business process model.
Originally developed by the Business Process Management Initiative (BPMI), BPMN has been maintained by the Object Management Group (OMG) since the two organizations merged in 2005. Version 2.0 of BPMN was released in January 2011, at which point the name was amended to Business Process Model and Notation to reflect the addition of execution semantics alongside the existing notational and diagramming elements. Though it is an OMG specification, BPMN is also ratified as ISO 19510. The latest version is BPMN 2.0.2, published in January 2014.
== Overview ==
Business Process Model and Notation (BPMN) is a standard for business process modeling that provides a graphical notation for specifying business processes in a Business Process Diagram (BPD), based on a flowcharting technique very similar to activity diagrams from Unified Modeling Language (UML). The objective of BPMN is to support business process management, for both technical users and business users, by providing a notation that is intuitive to business users, yet able to represent complex process semantics. The BPMN specification also provides a mapping between the graphics of the notation and the underlying constructs of execution languages, particularly Business Process Execution Language (BPEL).
BPMN has been designed to provide a standard notation readily understandable by all business stakeholders, typically including business analysts, technical developers and business managers. BPMN can therefore be used to support the generally desirable aim of all stakeholders on a project adopting a common language to describe processes, helping to avoid communication gaps that can arise between business process design and implementation.
BPMN is one of a number of business process modeling language standards used by modeling tools and processes. While the current variety of languages may suit different modeling environments, there are those who advocate for the development or emergence of a single, comprehensive standard, combining the strengths of different existing languages. It is suggested that in time, this could help to unify the expression of basic business process concepts (e.g., public and private processes, choreographies), as well as advanced process concepts (e.g., exception handling, transaction compensation).
Two new standards, using a similar approach to BPMN have been developed, addressing case management modeling (Case Management Model and Notation) and decision modeling (Decision Model and Notation).
== Topics ==
=== Scope ===
BPMN is constrained to support only the concepts of modeling applicable to business processes. Other types of modeling done by organizations for non-process purposes are out of scope for BPMN. Examples of modeling excluded from BPMN are:
Organizational structures
Functional breakdowns
Data models
In addition, while BPMN shows the flow of data (messages), and the association of data artifacts to activities, it is not a data flow diagram.
=== Elements ===
BPMN models are expressed by simple diagrams constructed from a limited set of graphical elements. For both business users and developers, they simplify understanding of business activities' flow and process.
BPMN's four basic element categories are:
Flow objects
Events, activities, gateways
Connecting objects
Sequence flow, message flow, association
Swim lanes
Pool, lane
Artifacts
Data object, group, annotation
These four categories enable creation of simple business process diagrams (BPDs). BPDs also permit making new types of flow object or artifact, to make the diagram more understandable.
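The element categories above map directly onto the BPMN 2.0 XML interchange format. As an illustrative sketch (a hand-written minimal process with made-up ids such as "order" and "approve"; only the namespace URI is taken from the OMG specification), the following Python snippet parses a tiny diagram containing a start event, a task, and an end event joined by sequence flows:

```python
import xml.etree.ElementTree as ET

# Namespace URI defined by the OMG BPMN 2.0 specification.
BPMN_NS = "http://www.omg.org/spec/BPMN/20100524/MODEL"

# Minimal hand-written BPMN process: start event -> task -> end event.
bpmn_xml = f"""
<definitions xmlns="{BPMN_NS}" id="defs1" targetNamespace="http://example.org/bpmn">
  <process id="order" isExecutable="false">
    <startEvent id="start"/>
    <task id="approve" name="Approve order"/>
    <endEvent id="end"/>
    <sequenceFlow id="f1" sourceRef="start" targetRef="approve"/>
    <sequenceFlow id="f2" sourceRef="approve" targetRef="end"/>
  </process>
</definitions>
"""

root = ET.fromstring(bpmn_xml)
process = root.find(f"{{{BPMN_NS}}}process")

# Flow objects (events, tasks) are connected by connecting objects (sequence flows).
flows = process.findall(f"{{{BPMN_NS}}}sequenceFlow")
print(len(flows))         # 2
print(process.get("id"))  # order
```

Real BPMN files also carry a diagram-interchange section describing shapes and coordinates; it is omitted here for brevity.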
=== Flow objects and connecting objects ===
Flow objects are the main describing elements within BPMN, and consist of three core elements: events, activities, and gateways.
==== Event ====
An Event is represented with a circle and denotes something that happens (compared with an activity, which is something that is done). Icons within the circle denote the type of event (e.g., an envelope representing a message, or a clock representing time). Events are also classified as Catching (for example, if catching an incoming message starts a process) or Throwing (such as throwing a completion message when a process ends).
Start event
Acts as a process trigger; indicated by a single narrow border, and can only be Catch, so is shown with an open (outline) icon.
Intermediate event
Represents something that happens between the start and end events; is indicated by a double border, and can Throw or Catch (using solid or open icons as appropriate). For example, a task could flow to an event that throws a message across to another pool, where a subsequent event waits to catch the response before continuing.
End event
Represents the result of a process; indicated by a single thick or bold border, and can only Throw, so is shown with a solid icon.
==== Activity ====
An activity is represented with a rounded-corner rectangle and describes the kind of work which must be done. An activity is a generic term for work that a company performs. It can be atomic or compound.
Task
A task represents a single unit of work that is not or cannot be broken down to a further level of business process detail. It is referred to as an atomic activity. A task is the lowest level activity illustrated on a process diagram. A set of tasks may represent a high-level procedure.
Sub-process
Used to hide or reveal additional levels of business process detail. When collapsed, a sub-process is indicated by a plus sign against the bottom line of the rectangle; when expanded, the rounded rectangle expands to show all flow objects, connecting objects, and artifacts. A sub-process is referred to as a compound activity.
Has its own self-contained start and end events; sequence flows from the parent process must not cross the boundary.
Transaction
A form of sub-process in which all contained activities must be treated as a whole; i.e., they must all be completed to meet an objective, and if any one of them fails, they must all be compensated (undone). Transactions are differentiated from expanded sub-processes by being surrounded by a double border.
Call Activity
A point in the process where a global process or a global Task is reused. A call activity is differentiated from other activity types by a bolded border around the activity area.
==== Gateway ====
A gateway is represented with a diamond shape and determines forking and merging of paths, depending on the conditions expressed.
Exclusive
Used to create alternative flows in a process. Because only one of the paths can be taken, it is called exclusive.
Event Based
The condition determining the path of a process is based on an evaluated event.
Parallel
Used to create parallel paths without evaluating any conditions.
Inclusive
Used to create alternative flows where all paths are evaluated.
Exclusive Event Based
An event is evaluated to determine which of the mutually exclusive paths will be taken.
Complex
Used to model complex synchronization behavior.
Parallel Event Based
Two parallel processes are started based on an event, but there is no evaluation of the event.
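The difference between the exclusive and parallel cases can be illustrated with a toy token-routing sketch in Python (a simplification for intuition, not the specification's execution semantics; the order data and condition names are invented):

```python
# Toy gateway semantics: an exclusive gateway routes a token down exactly
# one outgoing path; a parallel gateway routes it down all of them.

def exclusive_gateway(token, conditions):
    """Return the first outgoing path whose condition holds (XOR gateway)."""
    for path, condition in conditions:
        if condition(token):
            return [path]
    return []

def parallel_gateway(token, paths):
    """Return every outgoing path (AND gateway): no conditions are evaluated."""
    return list(paths)

order = {"amount": 500}
routes = [("manual_review", lambda t: t["amount"] > 100),
          ("auto_approve",  lambda t: True)]

print(exclusive_gateway(order, routes))           # ['manual_review']
print(parallel_gateway(order, ["ship", "bill"]))  # ['ship', 'bill']
```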
==== Connections ====
Flow objects are connected to each other using Connecting objects, which are of three types: sequences, messages, and associations.
Sequence Flow
A Sequence Flow is represented with a solid line and arrowhead, and shows in which order the activities are performed. The sequence flow may also have a symbol at its start: a small diamond indicates one of a number of conditional flows from an activity, while a diagonal slash indicates the default flow from a decision or activity with conditional flows.
Message Flow
A Message Flow is represented with a dashed line, an open circle at the start, and an open arrowhead at the end. It tells us what messages flow across organizational boundaries (i.e., between pools). A message flow can never be used to connect activities or events within the same pool.
Association
An Association is represented with a dotted line. It is used to associate an Artifact or text to a Flow Object, and can indicate some directionality using an open arrowhead (toward the artifact to represent a result, from the artifact to represent an input, and both to indicate it is read and updated). No directionality is used when the Artifact or text is associated with a sequence or message flow (as that flow already shows the direction).
=== Pools, Lanes, and artifacts ===
Swim lanes are a visual mechanism of organising and categorising activities, based on cross functional flowcharting, and in BPMN consist of two types:
Pool
Represents major participants in a process, typically separating different organisations. A pool contains one or more lanes (like a real swimming pool). A pool can be open (i.e., showing internal detail) when it is depicted as a large rectangle showing one or more lanes, or collapsed (i.e., hiding internal detail) when it is depicted as an empty rectangle stretching the width or height of the diagram.
Lane
Used to organise and categorise activities within a pool according to function or role, and depicted as a rectangle stretching the width or height of the pool. A lane contains the flow objects, connecting objects and artifacts.
Artifacts allow developers to bring some more information into the model/diagram. In this way the model/diagram becomes more readable. There are three pre-defined Artifacts, and they are:
Data objects: Data objects show the reader which data is required or produced in an activity.
Group: A Group is represented with a rounded-corner rectangle and dashed lines. The group is used to group different activities but does not affect the flow in the diagram.
Annotation: An annotation is used to attach explanatory text to the model/diagram for the reader.
=== Examples of business process diagrams ===
=== BPMN 2.0.2 ===
The vision of BPMN 2.0.2 is to have a single specification for a new Business Process Model and Notation that defines the notation, metamodel and interchange format, but with a modified name that still preserves the "BPMN" brand. Its features include:
Formalizes the execution semantics for all BPMN elements.
Defines an extensibility mechanism for both Process model extensions and graphical extensions.
Refines Event composition and correlation.
Extends the definition of human interactions.
Defines a Choreography model.
The current version of the specification was released in January 2014.
== Comparison of BPMN versions ==
== Types of BPMN sub-model ==
Business process modeling is used to communicate a wide variety of information to a wide variety of audiences. BPMN is designed to cover this wide range of usage and allows modeling of end-to-end business processes, so that the viewer of a diagram can easily differentiate between its sections. There are three basic types of sub-models within an end-to-end BPMN model: Private (internal) business processes, Abstract (public) processes, and Collaboration (global) processes:
Private (internal) business processes
Private business processes are those internal to a specific organization and are the type of processes that have been generally called workflow or BPM processes. If swim lanes are used then a private business process will be contained within a single Pool. The Sequence Flow of the Process is therefore contained within the Pool and cannot cross the boundaries of the Pool. Message Flow can cross the Pool boundary to show the interactions that exist between separate private business processes.
Abstract (public) processes
This represents the interactions between a private business process and another process or participant. Only those activities that communicate outside the private business process are included in the abstract process. All other “internal” activities of the private business process are not shown in the abstract process. Thus, the abstract process shows to the outside world the sequence of messages that are required to interact with that business process. Abstract processes are contained within a Pool and can be modeled separately or within a larger BPMN Diagram to show the Message Flow between the abstract process activities and other entities. If the abstract process is in the same Diagram as its corresponding private business process, then the activities that are common to both processes can be associated.
Collaboration (global) processes
A collaboration process depicts the interactions between two or more business entities. These interactions are defined as a sequence of activities that represent the message exchange patterns between the entities involved. Collaboration processes may be contained within a Pool and the different participant business interactions are shown as Lanes within the Pool. In this situation, each Lane would represent two participants and a direction of travel between them. They may also be shown as two or more Abstract Processes interacting through Message Flow (as described in the previous section). These processes can be modeled separately or within a larger BPMN Diagram to show the Associations between the collaboration process activities and other entities. If the collaboration process is in the same Diagram as one of its corresponding private business process, then the activities that are common to both processes can be associated.
Within and between these three BPMN sub-models, many types of Diagrams can be created. The following are the types of business processes that can be modeled with BPMN (those with asterisks may not map to an executable language):
High-level private process activities (not functional breakdown)*
Detailed private business process
As-is or old business process*
To-be or new business process
Detailed private business process with interactions to one or more external entities (or “Black Box” processes)
Two or more detailed private business processes interacting
Detailed private business process relationship to Abstract Process
Detailed private business process relationship to Collaboration Process
Two or more Abstract Processes*
Abstract Process relationship to Collaboration Process*
Collaboration Process only (e.g., ebXML BPSS or RosettaNet)*
Two or more detailed private business processes interacting through their Abstract Processes and/or a Collaboration Process
BPMN is designed to allow all the above types of Diagrams. However, it should be cautioned that if too many types of sub-models are combined, such as three or more private processes with message flow between each of them, then the Diagram may become difficult to understand. Thus, the OMG recommends that the modeler pick a focused purpose for the BPD, such as a private or collaboration process.
== Comparison with other process modeling notations ==
Event-driven process chains (EPC) and BPMN are two notations with similar expressivity when process modeling is concerned. A BPMN model can be transformed into an EPC model. Conversely, an EPC model can be transformed into a BPMN model with only a slight loss of information. A study showed that for the same process, the BPMN model may need around 40% fewer elements than the corresponding EPC model, but with a slightly larger set of symbols. The BPMN model would therefore be easier to read. The conversion between the two notations can be automated.
UML activity diagrams and BPMN are two notations that can be used to model the same processes: a subset of the activity diagram elements have semantics similar to those of BPMN elements, despite the smaller and less expressive set of symbols. A study showed that both types of process models appear to have the same level of readability for inexperienced users, despite the higher formal constraints of an activity diagram.
== BPM Certifications ==
The Business Process Management (BPM) world acknowledges the critical importance of modeling standards for optimizing and standardizing business processes. Version 2 of the Business Process Model and Notation (BPMN) has brought significant improvements in event and subprocess modeling, enriching the capabilities for documenting, analyzing, and optimizing business processes.
=== OMG OCEB certification ===
The Object Management Group (OMG), the international consortium behind the BPMN standard, offers the OCEB certification (OMG Certified Expert in BPM). This certification specifically targets business process modeling with particular emphasis on BPMN 2. The OCEB certification is structured into five levels: Fundamental, Business Intermediate (BUS INT), Technical Intermediate (TECH INT), Business Advanced (BUS ADV), and Technical Advanced (TECH ADV), thus providing a comprehensive pathway for BPM professionals.
=== Other BPM certifications ===
Beyond the OCEB, there are other recognized certifications in the BPM field:
CBPA (Certified Business Process Associate): Offered by the ABPMP (Association of Business Process Management Professionals), this certification is aimed at professionals starting in BPM.
CBPP (Certified Business Process Professional): Also awarded by the ABPMP, the CBPP certification targets experienced professionals, offering validation of their global expertise in BPM.
=== The interest of a BPMN certification ===
While BPMN 2 has established itself as an essential standard in business process modeling, a specific certification for BPMN could provide an additional guarantee regarding the quality and compliance of the models used. This becomes particularly relevant when companies employ external providers for the modeling of their business processes.
=== BPM certifying training with BPMN 2 ===
Although OMG does not offer a certification exclusively dedicated to BPMN 2, various organizations provide certifying training that encompasses this standard. These trainings cover not just BPMN but also the principles of management, automation, and digitization of business processes. They enable learners to master process mapping and modeling using BPMN 2, essential for optimizing business operations.
== See also ==
DRAKON
Business process management
Business process modeling
Comparison of Business Process Model and Notation modeling tools
CMMN (Case Management Model and Notation)
Process Driven Messaging Service
Function model
Functional software architecture
Workflow patterns
Service Component Architecture
XPDL
YAWL
== References ==
== Further reading ==
Grosskopf, Decker and Weske. (Feb 28, 2009). The Process: Business Process Modeling using BPMN. Meghan Kiffer Press. ISBN 978-0-929652-26-9. Archived from the original on April 30, 2019. Retrieved July 9, 2020.
Ryan K. L. Ko, Stephen S. G. Lee, Eng Wah Lee (2009) Business Process Management (BPM) Standards: A Survey. In: Business Process Management Journal, Emerald Group Publishing Limited. Volume 15 Issue 5. ISSN 1463-7154. PDF
Stephen A. White; Conrad Bock (2011). BPMN 2.0 Handbook Second Edition: Methods, Concepts, Case Studies and Standards in Business Process Management Notation. Future Strategies Inc. ISBN 978-0-9849764-0-9.
== External links ==
OMG BPMN Specification
BPMN Tool Matrix
BPMN Information Home Page OMG information page for BPMN. | Wikipedia/Business_Process_Modeling_Notation |
In computer science, function-level programming refers to one of the two contrasting programming paradigms identified by John Backus in his work on programs as mathematical objects, the other being value-level programming.
In his 1977 Turing Award lecture, Backus set forth what he considered to be the need to switch to a different philosophy in programming language design:
Programming languages appear to be in trouble. Each successive language incorporates, with a little cleaning up, all the features of its predecessors plus a few more. [...] Each new language claims new and fashionable features... but the plain fact is that few languages make programming sufficiently cheaper or more reliable to justify the cost of producing and learning to use them.
He designed FP to be the first programming language to specifically support the function-level programming style.
A function-level program is variable-free (cf. point-free programming), since program variables, which are essential in value-level definitions, are not needed in function-level programs.
== Introduction ==
In the function-level style of programming, a program is built directly from programs that are given at the outset, by combining them with program-forming operations or functionals. Thus, in contrast with the value-level approach that applies the given programs to values to form a succession of values culminating in the desired result value, the function-level approach applies program-forming operations to the given programs to form a succession of programs culminating in the desired result program.
As a result, the function-level approach to programming invites study of the space of programs under program-forming operations, looking to derive useful algebraic properties of these program-forming operations. The function-level approach offers the possibility of making the set of programs a mathematical space by emphasizing the algebraic properties of the program-forming operations over the space of programs.
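For illustration, the program-forming operations can be approximated in Python (a sketch; names such as compose, apply_to_all, and insert are descriptive choices rather than FP's actual syntax), reconstructing Backus's well-known variable-free definition of inner product:

```python
from functools import reduce

# Python approximations of Backus's program-forming operations
# ("functional forms"); new functions arise only by combining old ones.

def compose(f, g):                 # f . g
    return lambda x: f(g(x))

def construction(*fs):             # [f1, ..., fn]: x -> (f1(x), ..., fn(x))
    return lambda x: tuple(f(x) for f in fs)

def apply_to_all(f):               # alpha f: apply f to every list element
    return lambda xs: [f(x) for x in xs]

def insert(f):                     # /f: fold a pair-taking function over a list
    return lambda xs: reduce(lambda a, b: f((a, b)), xs)

# FP functions take a single object, so "+" and "x" act on pairs:
add = lambda p: p[0] + p[1]
mul = lambda p: p[0] * p[1]
trans = lambda p: list(zip(p[0], p[1]))   # transpose a pair of lists

# Backus's variable-free inner product:  IP = (/+) . (alpha x) . trans
inner_product = compose(insert(add), compose(apply_to_all(mul), trans))
print(inner_product(([1, 2, 3], [4, 5, 6])))   # 32
```

Note that no program variables appear in the definition of inner_product itself: it is assembled entirely from given programs and functional forms.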
Another potential advantage of the function-level view is the ability to use only strict functions and thereby have bottom-up semantics, which are the simplest kind of all. Yet another is the existence of function-level definitions that are not the lifted (that is, lifted from a lower value-level to a higher function-level) image of any existing value-level one: these (often terse) function-level definitions represent a more powerful style of programming not available at the value-level.
== Contrast to functional programming ==
When Backus studied and publicized his function-level style of programming, his message was largely misunderstood as support for the traditional functional programming languages rather than for his own FP and its successor FL.
Backus calls functional programming applicative programming; his function-level programming is a particular, constrained type.
A key distinction from functional languages is that Backus' language has the following hierarchy of types:
atoms
functions, which take atoms to atoms
higher-order functions (which he calls "functional forms"), which take one or two functions to functions
...and the only way to generate new functions is to use one of the functional forms, which are fixed: you cannot build your own functional form (at least not within FP; you can within FFP (Formal FP)).
This restriction means that functions in FP are a module (generated by the built-in functions) over the algebra of functional forms, and are thus algebraically tractable. For instance, the general question of equality of two functions is equivalent to the halting problem, and is undecidable, but equality of two functions in FP is just equality in the algebra, and thus (Backus imagines) easier.
Even today, many users of lambda style languages often misinterpret Backus' function-level approach as a restrictive variant of the lambda style, which is a de facto value-level style. In fact, Backus would not have disagreed with the 'restrictive' accusation: he argued that it was precisely due to such restrictions that a well-formed mathematical space could arise, in a manner analogous to the way structured programming limits programming to a restricted version of all the control-flow possibilities available in plain, unrestricted unstructured programs.
The value-free style of FP is closely related to the equational logic of a cartesian-closed category.
== Example languages ==
The canonical function-level programming language is FP. Others include FL and J.
== See also ==
Concatenative programming language
Functional programming, declarative programming (compare)
Tacit programming
Value-level programming, imperative programming (contrast)
== References ==
== External links ==
Function Level Programs As Mathematical Objects from John Backus
From Function Level Semantics to Program Transformation and Optimization SpringerLink see point 1.2 and 1.3
Closed applicative languages, FP and FL, in John W. Backus (Publications) or the original Programming Language Semantics and Closed Applicative Languages
Instance variables, a way out of the variable abstinence | Wikipedia/Function-level_programming |
In computer science, Programming Computable Functions (PCF), or Programming with Computable Functions, or Programming language for Computable Functions, is a programming language which is typed and based on functional programming, introduced by Gordon Plotkin in 1977, based on prior unpublished material by Dana Scott. It can be considered as an extended version of the typed lambda calculus, or a simplified version of modern typed functional languages such as ML or Haskell.
A fully abstract model for PCF was first given by Robin Milner. However, since Milner's model was essentially based on the syntax of PCF it was considered less than satisfactory. The first two fully abstract models not employing syntax were formulated during the 1990s. These models are based on game semantics and Kripke logical relations. For a time it was felt that neither of these models was completely satisfactory, since they were not effectively presentable. However, Ralph Loader demonstrated that no effectively presentable fully abstract model could exist, since the question of program equivalence in the finitary fragment of PCF is not decidable.
== Syntax ==
The data types of PCF are inductively defined as
nat is a type
For types σ and τ, there is a type σ → τ
A context is a list of pairs x : σ, where x is a variable name and σ is a type, such that no variable name is duplicated. One then defines typing judgments of terms-in-context in the usual way for the following syntactical constructs:
Variables (if x : σ is part of a context Γ, then Γ ⊢ x : σ)
Application (of a term of type σ → τ to a term of type σ)
λ-abstraction
The Y fixed point combinator (making terms of type σ out of terms of type σ → σ)
The successor (succ) and predecessor (pred) operations on nat and the constant 0
The conditional if with the typing rule:
{\displaystyle {\frac {\Gamma \;\vdash \;t\;:{\textbf {nat}},\quad \quad \Gamma \;\vdash \;s_{0}\;:\sigma ,\quad \quad \Gamma \;\vdash \;s_{1}\;:\sigma }{\Gamma \;\vdash \;{\textbf {if}}(t,s_{0},s_{1})\;:\sigma }}}
(values of type nat are used as booleans here, with the convention that zero denotes truth and any other number denotes falsity)
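These constructs can be made concrete with a minimal evaluator (an illustrative Python sketch, using a call-by-name unfolding of Y so that recursion terminates; the tuple encoding of terms is an arbitrary choice, not part of PCF):

```python
# Illustrative evaluator for a fragment of PCF.  Terms are nested tuples;
# Y is unfolded lazily, mirroring its fixed-point typing rule Y f = f (Y f).

def ev(t, env):
    tag = t[0]
    if tag == "var":  return env[t[1]]
    if tag == "zero": return 0
    if tag == "succ": return ev(t[1], env) + 1
    if tag == "pred": return max(ev(t[1], env) - 1, 0)
    if tag == "lam":  # ("lam", x, body)
        return lambda v: ev(t[2], {**env, t[1]: v})
    if tag == "app":  return ev(t[1], env)(ev(t[2], env))
    if tag == "if":   # if(t, s0, s1): zero selects the first branch
        return ev(t[2], env) if ev(t[1], env) == 0 else ev(t[3], env)
    if tag == "Y":    # unfold Y f on demand to avoid infinite regress
        f = ev(t[1], env)
        return f(lambda v: ev(("Y", t[1]), env)(v))
    raise ValueError(tag)

# double = Y (lam f. lam n. if n then 0 else succ (succ (f (pred n))))
double = ("Y", ("lam", "f", ("lam", "n",
          ("if", ("var", "n"), ("zero",),
                 ("succ", ("succ", ("app", ("var", "f"),
                                           ("pred", ("var", "n")))))))))
print(ev(("app", double, ("succ", ("succ", ("zero",)))), {}))   # 4
```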
== Semantics ==
=== Denotational semantics ===
A relatively straightforward semantics for the language is the Scott model. In this model,
Types are interpreted as certain domains.
{\displaystyle [\![{\textbf {nat}}]\!]:=\mathbb {N} _{\bot }} (the natural numbers with a bottom element adjoined, with the flat ordering)
{\displaystyle [\![\sigma \to \tau \,]\!]} is interpreted as the domain of Scott-continuous functions from {\displaystyle [\![\sigma ]\!]} to {\displaystyle [\![\tau ]\!]}, with the pointwise ordering.
A context {\displaystyle x_{1}:\sigma _{1},\;\dots ,\;x_{n}:\sigma _{n}} is interpreted as the product {\displaystyle [\![\sigma _{1}]\!]\times \;\dots \;\times [\![\sigma _{n}]\!]}
Terms in context {\displaystyle \Gamma \;\vdash \;x\;:\;\sigma } are interpreted as continuous functions {\displaystyle [\![\Gamma ]\!]\;\to \;[\![\sigma ]\!]}
Variable terms are interpreted as projections
Lambda abstraction and application are interpreted by making use of the cartesian closed structure of the category of domains and continuous functions
Y is interpreted by taking the least fixed point of the argument
This model is not fully abstract for PCF, but it is fully abstract for the language obtained by adding a "parallel or" operator to PCF.
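The interpretation of Y as a least fixed point can be made concrete by Kleene iteration: starting from the wholly undefined function (bottom) and repeatedly applying the functional yields an ascending chain of partial functions whose limit is the least fixed point. A hedged Python sketch (partial functions on the naturals modelled as dicts, with a missing key playing the role of bottom; the factorial functional is chosen only as a familiar example):

```python
# Kleene iteration: the least fixed point of a continuous functional F is
# the limit of the chain  bottom <= F(bottom) <= F(F(bottom)) <= ...

def F(f):
    """Functional whose least fixed point is the factorial function."""
    g = {0: 1}                      # base case is always defined
    for n, v in f.items():
        g[n + 1] = (n + 1) * v      # defined only where f already was
    return g

approx = {}                         # bottom: defined nowhere
for _ in range(6):
    approx = F(approx)              # each step defines one more input

print(sorted(approx.items()))
# [(0, 1), (1, 1), (2, 2), (3, 6), (4, 24), (5, 120)]
```

Each approximation agrees with the previous one wherever both are defined, which is exactly the ascending-chain condition in the domain of partial functions.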
== Notes ==
== References ==
Scott, Dana S. (1969). "A type-theoretic alternative to CUCH, ISWIM, OWHY" (PDF). Unpublished Manuscript. Appeared as Scott, Dana S. (1993). "A type-theoretic alternative to CUCH, ISWIM, OWHY". Theoretical Computer Science. 121: 411–440. doi:10.1016/0304-3975(93)90095-b.
Mitchell, John C. (1996). "The Language PCF". Foundations for Programming Languages. MIT Press. ISBN 9780262133210.
== External links ==
Introduction to RealPCF
Lexer and Parser for PCF written in SML | Wikipedia/Programming_Computable_Functions |
The ACM Transactions on Programming Languages and Systems (TOPLAS) is a bimonthly, open access, peer-reviewed scientific journal on the topic of programming languages published by the Association for Computing Machinery.
== Background ==
Published since 1979, the journal's scope includes programming language design, implementation, and semantics of programming languages, compilers and interpreters, run-time systems, storage allocation and garbage collection, and formal specification, testing, and verification of software. It is indexed in Scopus and SCImago.
The editor-in-chief is Andrew Myers (Cornell University). According to the Journal Citation Reports, the journal had a 2020 impact factor of 0.410.
== References ==
== External links ==
Official website
TOPLAS at ACM Digital Library
TOPLAS at DBLP | Wikipedia/ACM_Transactions_on_Programming_Languages_and_Systems |
A program transformation is any operation that takes a computer program and generates another program. In many cases the transformed program is required to be semantically equivalent to the original, relative to a particular formal semantics; in fewer cases the transformations result in programs that semantically differ from the original in predictable ways.
While the transformations can be performed manually, it is often more practical to use a program transformation system that applies specifications of the required transformations. Program transformations may be specified as automated procedures that modify compiler data structures (e.g. abstract syntax trees) representing the program text, or may be specified more conveniently using patterns or templates representing parameterized source code fragments.
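As an illustration of the approach that modifies abstract syntax trees, a minimal source-to-source transformation can be written with Python's standard ast module (Python 3.9+; the constant-folding rule shown is a toy example, not a production compiler pass):

```python
import ast

class ConstantFolder(ast.NodeTransformer):
    """Rewrite `a + b` to its value when both operands are numeric literals."""
    def visit_BinOp(self, node):
        self.generic_visit(node)  # transform children first (bottom-up)
        if (isinstance(node.op, ast.Add)
                and isinstance(node.left, ast.Constant)
                and isinstance(node.right, ast.Constant)):
            folded = ast.Constant(node.left.value + node.right.value)
            return ast.copy_location(folded, node)
        return node

tree = ast.parse("total = 2 + 3 + x")
tree = ast.fix_missing_locations(ConstantFolder().visit(tree))
print(ast.unparse(tree))  # total = 5 + x
```

Note that the transformation regenerates valid source code from the modified tree, the same parse–transform–unparse pipeline described above.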
A practical requirement for source code transformation systems is that they be able to effectively process programs written in a programming language. This usually requires integration of a full front-end for the programming language of interest, including source code parsing, building internal program representations of code structures, the meaning of program symbols, useful static analyses, and regeneration of valid source code from transformed program representations. The problem of building and integrating adequate front ends for conventional languages (Java, C++, PHP, etc.) may be as difficult as building the program transformation system itself, because of the complexity of such languages. To be widely useful, a transformation system must be able to handle many target programming languages, and must provide some means of specifying such front ends.
A generalisation of semantic equivalence is the notion of program refinement: one program is a refinement of another if it terminates on all the initial states for which the original program terminates, and for each such state it is guaranteed to terminate in a possible final state for the original program. In other words, a refinement of a program is more defined and more deterministic than the original program. If two programs are refinements of each other, then the programs are equivalent.
== See also ==
List of program transformation systems
Metaprogramming
Program synthesis
Source-to-source compiler
Source code generation
Transformation language
Transformational grammar
Dynamic recompilation
Operation reduction for low power
== References ==
== External links ==
The Program transformation Wiki
Papers on program transformation theory and practice
Transformation Technology Bibliography
DMS Software Reengineering Toolkit: A Program Transformation System for DSLs and modern (C++, Java, ...) and legacy (COBOL, RPG) computer languages
Spoon: A library to analyze, transform, rewrite, and transpile Java source code. It parses source files to build a well-designed AST with powerful analysis and transformation API.
JavaParser: The JavaParser library provides you with an Abstract Syntax Tree of your Java code. The AST structure then allows you to work with your Java code in an easy programmatic way. | Wikipedia/Program_transformation
In computer science and software programming, a value is the representation of some entity that can be manipulated by a program. The members of a type are the values of that type.
The "value of a variable" is given by the corresponding mapping in the environment. In languages with assignable variables, it becomes necessary to distinguish between the r-value (or contents) and the l-value (or location) of a variable.
In declarative (high-level) languages, values have to be referentially transparent. This means that the resulting value is independent of the location of the expression needed to compute the value. Only the contents of the location (the bits, whether they are 1 or 0) and their interpretation are significant.
== Value category ==
Despite its name, in the C++ language standards this terminology is used to categorize expressions, not values.: 8.2.1
=== Assignment: l-values and r-values ===
Some languages use the idea of l-values and r-values, deriving from the typical mode of evaluation on the left and right-hand side of an assignment statement. An l-value refers to an object that persists beyond a single expression. An r-value is a temporary value that does not persist beyond the expression that uses it.
The notion of l-values and r-values was introduced by Combined Programming Language (CPL). The notions in an expression of r-value, l-value, and r-value/l-value are analogous to the parameter modes of input parameter (has a value), output parameter (can be assigned), and input/output parameter (has a value and can be assigned), though the technical details differ between contexts and languages.
=== R-values and addresses ===
In many languages, notably the C family, l-values have storage addresses that are programmatically accessible to the running program (e.g., via some address-of operator like "&" in C/C++), meaning that they are variables or de-referenced references to a certain memory location. R-values can be l-values (see below) or non-l-values—a term only used to distinguish from l-values. Consider the C expression 4 + 9. When executed, the computer generates an integer value of 13, but because the program has not explicitly designated where in the computer this 13 is stored, the expression is a non l-value. On the other hand, if a C program declares a variable x and assigns the value of 13 to x, then the expression x has a value of 13 and is an l-value.
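A rough analogue of this distinction can be observed from Python via the ctypes module (Python itself has no l-value/r-value distinction; the example only illustrates "having a programmatically accessible storage address"):

```python
import ctypes

x = ctypes.c_int(13)        # an object with designated storage, like an l-value
addr = ctypes.addressof(x)  # its address is accessible, like &x in C
x.value = 4 + 9             # the r-value 13 is computed, then stored at addr
print(hex(addr), x.value)
```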
In C, the term l-value originally meant something that could be assigned to (hence the name, indicating it appears on the left side of the assignment operator), but since the reserved word const (constant) was added to the language, the term is now 'modifiable l-value'. In C++11 a special semantic glyph && exists (not to be confused with the && operator used for logical operations), to denote the use/access of the expression's address for the compiler only; i.e., the address cannot be retrieved using the address-of operator & during the run-time of the program (see the use of move semantics). The addition of move semantics complicated the value classification taxonomy by adding the concept of an xvalue (expiring value), which refers to an object near the end of its lifetime whose resources can be reused (typically by moving them). This also led to the creation of the categories glvalue (generalized lvalue), comprising lvalues and xvalues, and prvalue (pure rvalue), comprising rvalues that are not xvalues.
This type of reference can be applied to all r-values, including non-l-values as well as l-values. Some processors provide one or more instructions which take an immediate value, sometimes referred to as "immediate" for short. An immediate value is stored as part of the instruction which employs it, usually to load into, add to, or subtract from, a register. The other parts of the instruction are the opcode and the destination. The latter may be implicit. (A non-immediate value may reside in a register, or be stored elsewhere in memory, requiring the instruction to contain a direct or indirect address [e.g., index register address] to the value.)
The l-value expression designates (refers to) an object. A non-modifiable l-value is addressable, but not assignable. A modifiable l-value allows the designated object to be changed as well as examined. An r-value is any expression; a non-l-value is any expression that is not an l-value. One example is an "immediate value" (see above), which is consequently not addressable.
== In assembly language ==
A value can be virtually any kind of data of a given data type, for instance a string, a digit, or a single letter.
Processors often support more than one size of immediate data, e.g. 8 or 16 bit, employing a unique opcode and mnemonic for each instruction variant. If a programmer supplies a data value that will not fit, the assembler issues an "Out of range" error message. Most assemblers allow an immediate value to be expressed as ASCII, decimal, hexadecimal, octal, or binary data. Thus, the ASCII character 'A' is the same as 65 or 0x41. The byte order of strings may differ between processors, depending on the assembler and computer architecture.
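The equivalence of these notations is easy to check, e.g. in Python:

```python
# The ASCII character 'A', the decimal literal 65, and the hexadecimal
# literal 0x41 all denote the same byte value.
assert ord('A') == 65 == 0x41
print(chr(0x41))  # A
```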
== Notes ==
== References ==
Mitchell, John C. (1996). Foundations for Programming Languages. The MIT Press. ISBN 0-262-13321-0.
Strachey, Christopher (2000). "Fundamental Concepts in Programming Languages". Higher-Order and Symbolic Computation. 13: 11–49. doi:10.1023/A:1010000313106. S2CID 14124601.
== External links ==
Value Object
Transfer Object Pattern | Wikipedia/Value_(computer_science) |
Programming language theory (PLT) is a branch of computer science that deals with the design, implementation, analysis, characterization, and classification of formal languages known as programming languages. Programming language theory is closely related to other fields including linguistics, mathematics, and software engineering.
== History ==
In some ways, the history of programming language theory predates even the development of programming languages. The lambda calculus, developed by Alonzo Church and Stephen Cole Kleene in the 1930s, is considered by some to be the world's first programming language, even though it was intended to model computation rather than being a means for programmers to describe algorithms to a computer system. Many modern functional programming languages have been described as providing a "thin veneer" over the lambda calculus, and many are described easily in terms of it.
The first programming language to be invented was Plankalkül, which was designed by Konrad Zuse in the 1940s, but not publicly known until 1972, and not implemented until 1998. The first widely known and successful high-level programming language was FORTRAN (for Formula Translation), developed from 1954 to 1957 by a team of IBM researchers led by John Backus. The success of FORTRAN led to the formation of a committee of scientists to develop a "universal" computer language; the result of their effort was ALGOL 58. Separately, John McCarthy of Massachusetts Institute of Technology (MIT) developed Lisp, the first language with origins in academia to be successful. With the success of these initial efforts, programming languages became an active topic of research in the 1960s and beyond.
=== Timeline ===
Some other key events in the history of programming language theory since then:
1950s
Noam Chomsky developed the Chomsky hierarchy in the field of linguistics, a discovery which has directly impacted programming language theory and other branches of computer science.
1960s
In 1962, the Simula language was developed by Ole-Johan Dahl and Kristen Nygaard; it is widely considered to be the first example of an object-oriented programming language; Simula also introduced the concept of coroutines.
In 1964, Peter Landin is the first to realize Church's lambda calculus can be used to model programming languages. He introduces the SECD machine which "interprets" lambda expressions.
In 1965, Landin introduces the J operator, essentially a form of continuation.
In 1966, Landin introduces ISWIM, an abstract computer programming language in his article The Next 700 Programming Languages. It is influential in the design of languages leading to the Haskell language.
In 1966, Corrado Böhm introduced the language CUCH (Curry-Church).
In 1967, Christopher Strachey publishes his influential set of lecture notes Fundamental Concepts in Programming Languages, introducing the terminology R-values, L-values, parametric polymorphism, and ad hoc polymorphism.
In 1969, J. Roger Hindley publishes The Principal Type-Scheme of an Object in Combinatory Logic, later generalized into the Hindley–Milner type inference algorithm.
In 1969, Tony Hoare introduces the Hoare logic, a form of axiomatic semantics.
In 1969, William Alvin Howard observed that a "high-level" proof system, referred to as natural deduction, can be directly interpreted in its intuitionistic version as a typed variant of the model of computation known as lambda calculus. This became known as the Curry–Howard correspondence.
1970s
In 1970, Dana Scott first publishes his work on denotational semantics.
In 1972, logic programming and Prolog were developed thus allowing computer programs to be expressed as mathematical logic.
A team of scientists at Xerox PARC led by Alan Kay develop Smalltalk, an object-oriented language widely known for its innovative development environment.
In 1974, John C. Reynolds discovers System F. It had already been discovered in 1971 by the mathematical logician Jean-Yves Girard.
From 1975, Gerald Jay Sussman and Guy Steele develop the Scheme language, a Lisp dialect incorporating lexical scoping, a unified namespace, and elements from the actor model including first-class continuations.
Backus, at the 1977 Turing Award lecture, assailed the current state of industrial languages and proposed a new class of programming languages now known as function-level programming languages.
In 1977, Gordon Plotkin introduces Programming Computable Functions, an abstract typed functional language.
In 1978, Robin Milner introduces the Hindley–Milner type system inference algorithm for the ML language. Type theory became applied as a discipline to programming languages; this application has led to great advances in type theory over the years.
1980s
In 1981, Gordon Plotkin publishes his paper on structured operational semantics.
In 1988, Gilles Kahn published his paper on natural semantics.
There emerged process calculi, such as the Calculus of Communicating Systems of Robin Milner, and the Communicating sequential processes model of C. A. R. Hoare, as well as similar models of concurrency such as the actor model of Carl Hewitt.
In 1985, the release of Miranda sparks an academic interest in lazy-evaluated purely functional programming languages. A committee was formed to define an open standard resulting in the release of the Haskell 1.0 standard in 1990.
Bertrand Meyer created the methodology design by contract and incorporated it into the Eiffel language.
1990s
Gregor Kiczales, Jim Des Rivieres and Daniel G. Bobrow published the book The Art of the Metaobject Protocol.
Eugenio Moggi and Philip Wadler introduced the use of monads for structuring programs written in functional programming languages.
== Sub-disciplines and related fields ==
There are several fields of study that either lie within programming language theory, or which have a profound influence on it; many of these have considerable overlap. In addition, PLT makes use of many other branches of mathematics, including computability theory, category theory, and set theory.
=== Formal semantics ===
Formal semantics is the formal specification of the behaviour of computer programs and programming languages. Three common approaches to describe the semantics or "meaning" of a computer program are denotational semantics, operational semantics and axiomatic semantics.
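For a flavour of the operational style, a big-step ("natural semantics") evaluator for a toy arithmetic language can be written down directly; a sketch in Python (the tuple encoding of terms is an illustrative choice, not a standard):

```python
# Terms: an int n, or ("add", e1, e2), or ("mul", e1, e2).
def eval_expr(e):
    if isinstance(e, int):
        return e  # axiom: a numeral evaluates to itself
    op, e1, e2 = e
    v1, v2 = eval_expr(e1), eval_expr(e2)  # premises: evaluate the subterms
    # conclusion: combine the subterm values according to the operator
    return v1 + v2 if op == "add" else v1 * v2

print(eval_expr(("add", 2, ("mul", 3, 4))))  # 14
```

Each clause of the function corresponds to one inference rule of the big-step semantics.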
=== Type theory ===
Type theory is the study of type systems; which are "a tractable syntactic method for proving the absence of certain program behaviors by classifying phrases according to the kinds of values they compute". Many programming languages are distinguished by the characteristics of their type systems.
=== Program analysis and transformation ===
Program analysis is the general problem of examining a program and determining key characteristics (such as the absence of classes of program errors). Program transformation is the process of transforming a program in one form (language) to another form.
=== Comparative programming language analysis ===
Comparative programming language analysis seeks to classify languages into different types based on their characteristics; broad categories of languages are often known as programming paradigms.
=== Generic and metaprogramming ===
Metaprogramming is the generation of higher-order programs which, when executed, produce programs (possibly in a different language, or in a subset of the original language) as a result.
=== Domain-specific languages ===
Domain-specific languages are those constructed to efficiently solve problems in a given domain, or part of such.
=== Compiler construction ===
Compiler theory is the theory of writing compilers (or more generally, translators); programs that translate a program written in one language into another form. The actions of a compiler are traditionally broken up into syntax analysis (scanning and parsing), semantic analysis (determining what a program should do), optimization (improving the performance of a program as indicated by some metric; typically execution speed) and code generation (generation and output of an equivalent program in some target language; often the instruction set architecture of a central processing unit (CPU)).
=== Run-time systems ===
Run-time systems refer to the development of programming language runtime environments and their components, including virtual machines, garbage collection, and foreign function interfaces.
== Journals, publications, and conferences ==
Conferences are the primary venue for presenting research in programming languages. The most well known conferences include the Symposium on Principles of Programming Languages (POPL), Programming Language Design and Implementation (PLDI), the International Conference on Functional Programming (ICFP), the international conference on Object-Oriented Programming, Systems, Languages & Applications (OOPSLA) and the International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS).
Notable journals that publish PLT research include the ACM Transactions on Programming Languages and Systems (TOPLAS), Journal of Functional Programming (JFP), Journal of Functional and Logic Programming, and Higher-Order and Symbolic Computation.
== See also ==
SIGPLAN
Very high-level programming language
== References ==
== Further reading ==
Abadi, Martín and Cardelli, Luca. A Theory of Objects. Springer-Verlag.
Michael J. C. Gordon. Programming Language Theory and Its Implementation. Prentice Hall.
Gunter, Carl and Mitchell, John C. (eds.). Theoretical Aspects of Object Oriented Programming Languages: Types, Semantics, and Language Design. MIT Press.
Harper, Robert. Practical Foundations for Programming Languages. Draft version.
Knuth, Donald E. (2003). Selected Papers on Computer Languages. Stanford, California: Center for the Study of Language and Information.
Mitchell, John C. Foundations for Programming Languages.
Mitchell, John C. Introduction to Programming Language Theory.
O'Hearn, Peter. W. and Tennent, Robert. D. (1997). ALGOL-like Languages. Progress in Theoretical Computer Science. Birkhauser, Boston.
Pierce, Benjamin C. (2002). Types and Programming Languages. MIT Press.
Pierce, Benjamin C. Advanced Topics in Types and Programming Languages.
Pierce, Benjamin C. et al. (2010). Software Foundations.
== External links ==
Lambda the Ultimate, a community weblog for professional discussion and repository of documents on programming language theory.
Great Works in Programming Languages. Collected by Benjamin C. Pierce (University of Pennsylvania).
Classic Papers in Programming Languages and Logic. Collected by Karl Crary (Carnegie Mellon University).
Programming Language Research. Directory by Mark Leone.
λ-Calculus: Then & Now by Dana S. Scott for the ACM Turing Centenary Celebration
Grand Challenges in Programming Languages. Panel session at POPL 2009. | Wikipedia/Theory_of_programming |
A foreign function interface (FFI) is a mechanism by which a program written in one programming language can call routines or make use of services written or compiled in another one. An FFI is often used in contexts where calls are made into a binary dynamic-link library.
== Naming ==
The term comes from the specification for Common Lisp, which explicitly refers to the language feature enabling inter-language calls as such; the term is also often used officially in the interpreter and compiler documentation for Haskell, Rust, PHP, Python, and LuaJIT (Lua).: 35  Other languages use other terminology: Ada has language bindings, while Java has the Java Native Interface (JNI) or Java Native Access (JNA). Foreign function interface has become generic terminology for mechanisms which provide such services.
== Operation ==
The primary function of a foreign function interface is to mate the semantics and calling conventions of one programming language (the host language, or the language which defines the FFI), with the semantics and conventions of another (the guest language). This process must also take into consideration the runtime environments and application binary interfaces of both. This can be done in several ways:
Requiring that guest-language functions which are to be host-language callable be specified or implemented in a particular way, often using a compatibility library of some sort.
Use of a tool to automatically wrap guest-language functions with appropriate glue code, which performs any necessary translation.
Use of a wrapper library
Restricting the set of host language abilities which can be used cross-language. For example, C++ functions called from C may not (in general) include reference parameters or throw exceptions.
FFIs may be complicated by the following considerations:
If one language supports garbage collection (GC) and the other does not, care must be taken that the non-GC language code does nothing to cause GC in the other to fail. In JNI, for example, C code which "holds on to" object references that it receives from Java must communicate this information successfully to the Java virtual machine or Java Runtime Environment (JRE), otherwise, Java may delete objects before C finishes with them. (The C code must also explicitly release its link to any such object once C has no further need of that object.)
Complicated or non-trivial objects or datatypes may be difficult to map from one environment to another.
It may not be possible for both languages to maintain references to the same instance of a mutable object, due to the mapping issue above.
One or both languages may be running on a virtual machine (VM); moreover, if both are, these are often different VMs.
Cross-language inheritance and other differences, such as between type systems or between object composition models, may be especially difficult.
Examples of FFIs include:
Ada language bindings, allowing not only to call foreign functions but also to export its functions and methods to be called from non-Ada code.
C++ has a trivial FFI with C, as the languages share a significant common subset. The primary effect of the extern "C" declaration in C++ is to disable C++ name mangling. With other languages, separate utils or middleware are used, examples include:
GNOME project: GObject Introspection
SWIG
Chromium project: Blink and V8 engine use an interface description language (IDL) compiler for standard JavaScript interfaces
Other IDL compilers
Clean provides a bidirectional FFI with all languages following C or the stdcall calling convention.
Common Lisp
Compiled Native Interface (CNI), alternative to JNI used in the GNU compiler environment.
One of the bases of the Component Object Model is a common interface format, which natively uses the same types as Visual Basic for strings and arrays.
D does it the same way as C++ does, with extern "C" through extern (C++)
Dart includes dart:ffi library to call native C code for mobile, command-line, and server applications
Dynamic programming languages, such as Python, Perl, Tcl, and Ruby, all provide easy access to native code written in C, C++, or any other language obeying C/C++ calling conventions.
Factor has FFIs for C, Fortran, Objective-C, and Windows COM; all of these enable importing and calling arbitrary shared libraries dynamically.
Fortran 2003 has a module ISO_C_BINDING which provides interoperable data types (both intrinsic types and POD structs), interoperable pointers, interoperable global data stores, and mechanisms for calling C from Fortran and for calling Fortran from C. It has been improved in the Fortran 2018 standard.
Go can call C code directly via the "C" pseudo-package.
Google Web Toolkit (GWT), in which Java is compiled to JavaScript, has an FFI named JSNI which allows Java source code to call arbitrary JavaScript functions, and for JavaScript to call back into Java.
Haskell
Java Native Interface (JNI), which provides an interface between Java and C/C++, the preferred systems languages on most systems where Java is deployed. Java Native Access (JNA) provides an interface with native libraries without having to write glue code. Another example is JNR.
LuaJIT, a just-in-time implementation of Lua, has an FFI that allows "calling external C functions and using C data structures from pure Lua code".: 35
.NET has an FFI through its LibraryImport attribute.
Nim has an FFI which enables it to use source from C, C++, and Objective-C. It can also interface with JavaScript.
JavaScript usually runs inside web browser runtimes that don't provide direct access to system libraries or commands to run, but there are a few exceptions:
Node.js provides functions to open precompiled .node modules that in turn may provide access to non-builtin resources.
Deno provides a kind of FFI interface via its dlopen(...) functions.
Bun provides a built-in module, bun:ffi, to efficiently call native libraries directly from JavaScript.
Julia has the ccall keyword to call C (and other languages, e.g., Fortran); packages providing similar no-boilerplate support are available for some languages, e.g., for Python (providing OO and GC support), Java (also supporting other JDK languages, such as Scala), and R. Interactive use with C++ is also possible with the Cxx.jl package.
PhoneGap (was named Apache Callback, but is now Apache Cordova) is a platform for building native mobile applications using HTML, CSS and JavaScript. Also, it has FFIs via JavaScript callback functions for access to methods and properties of a mobile phone's native features including accelerometer, camera (also PhotoLibrary and SavedPhotoAlbum), compass, storage (SQL database and localStorage), notification, media and capture (playing and recording of audio and video), file, contacts (address book), events, device, and connection information.
PHP provides FFI to C.
Pony supports integration with C via its FFI.
Python provides the ctypes and cffi modules. For example, the ctypes module can load C functions from a shared library, or dynamic-link library (DLL) on-the-fly and translate simple data types automatically between Python and C semantics as follows:
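For example, a minimal sketch of loading libc and calling one of its functions (the function chosen and the library-lookup step are illustrative; names differ per platform):

```python
import ctypes
import ctypes.util

# Locate and load the platform's C standard library.
libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Declare the C prototype of int abs(int) so that arguments and the
# return value are converted between Python and C semantics.
libc.abs.argtypes = [ctypes.c_int]
libc.abs.restype = ctypes.c_int

print(libc.abs(-5))  # 5
```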
P/Invoke, which provides an interface between the Microsoft Common Language Runtime and native code.
Racket has a native FFI based heavily on macros that enables importing arbitrary shared libraries dynamically.
Raku can call Ruby, Python, Perl, Brainfuck, Lua, C, C++, Go, Scheme (Guile, Gambit), and Rust.
Ruby provides FFI either through the ffi gem, or through the standard library fiddle.
Rust defines a foreign function interface to functions with various standard application binary interfaces (ABIs). There is also a library for interfacing with Elixir, Rustler.
V (Vlang) can include and supports the use of C source code and libraries.
Visual Basic has a declarative syntax that allows it to call non-Unicode C functions.
Wolfram Language provides a technology named Wolfram Symbolic Transfer Protocol (WSTP) which enables bidirectional calling of code between other languages with bindings for C++, Java, .NET. and other languages.
Zig provides FFI to C using the builtin cImport function.
In addition, many FFIs can be generated automatically: for example, SWIG. However, in the case of an extension language a semantic inversion of the relationship of guest and host can occur, when a smaller body of extension language is the guest invoking services in the larger body of host language, such as writing a small plugin for GIMP.
Some FFIs are restricted to free standing functions, while others also allow calls of functions embedded in an object or class (often called method calls); some even permit migration of complex datatypes or objects across the language boundary.
In most cases, an FFI is defined by a higher-level language, so that it may employ services defined and implemented in a lower-level language, typically a system programming language like C or C++. This is typically done to either access operating system (OS) services in the language in which the OS API is defined, or for performance goals.
Many FFIs also provide the means for the called language to invoke services in the host language also.
The term foreign function interface is generally not used to describe multi-lingual runtimes such as the Microsoft Common Language Runtime, where a common substrate is provided which enables any CLR-compliant language to use services defined in any other. (However, in this case the CLR does include an FFI, P/Invoke, to call outside the runtime.) In addition, many distributed computing architectures such as the Java remote method invocation (RMI), RPC, CORBA, SOAP and D-Bus permit different services to be written in different languages; such architectures are generally not considered FFIs.
== Special cases ==
There are some special cases in which the languages compile into the same bytecode VM, like Clojure and Java, or Elixir and Erlang. Since there is no interface, it is not an FFI, strictly speaking, although it offers the same functionality to the user.
== See also ==
Language interoperability
Interface definition language
Calling convention
Name mangling
Application programming interface
Application binary interface
Comparison of application virtual machines
SWIG
Remote procedure call
libffi
== References ==
== External links ==
c2.com: Foreign function interface
Haskell 98 Foreign Function Interface
Allegro Common Lisp FFI | Wikipedia/Foreign_function_interface |
In computing, external memory algorithms or out-of-core algorithms are algorithms that are designed to process data that are too large to fit into a computer's main memory at once. Such algorithms must be optimized to efficiently fetch and access data stored in slow bulk memory (auxiliary memory) such as hard drives or tape drives, or when memory is on a computer network. External memory algorithms are analyzed in the external memory model.
== Model ==
External memory algorithms are analyzed in an idealized model of computation called the external memory model (or I/O model, or disk access model). The external memory model is an abstract machine similar to the RAM machine model, but with a cache in addition to main memory. The model captures the fact that read and write operations are much faster in a cache than in main memory, and that reading long contiguous blocks is faster than reading randomly using a disk read-and-write head. The running time of an algorithm in the external memory model is defined by the number of reads and writes to memory required. The model was introduced by Alok Aggarwal and Jeffrey Vitter in 1988. The external memory model is related to the cache-oblivious model, but algorithms in the external memory model may know both the block size and the cache size. For this reason, the model is sometimes referred to as the cache-aware model.
The model consists of a processor with an internal memory or cache of size M, connected to an unbounded external memory. Both the internal and external memory are divided into blocks of size B. One input/output or memory transfer operation consists of moving a block of B contiguous elements from external to internal memory, and the running time of an algorithm is determined by the number of these input/output operations.
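The cost measure above can be made concrete with a small sketch (Python, with illustrative function names): only block transfers between internal and external memory are counted, so a contiguous scan of N elements costs about N/B transfers, while N random accesses may each cost a full transfer.

```python
import math

# Cost of two access patterns in the external memory model:
# only block transfers of size B are charged, not CPU work.
def scan_ios(N, B):
    # Reading N contiguous elements touches ceil(N / B) blocks.
    return math.ceil(N / B)

def random_access_ios(N):
    # N accesses at arbitrary positions may each cost one transfer.
    return N

print(scan_ios(10**6, 1024))  # 977 transfers for a contiguous scan
```

The gap between the two functions is exactly the locality advantage the model is designed to expose.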
== Algorithms ==
Algorithms in the external memory model take advantage of the fact that retrieving one object from external memory retrieves an entire block of size B. This property is sometimes referred to as locality.
Searching for an element among N objects is possible in the external memory model using a B-tree with branching factor B. Using a B-tree, searching, insertion, and deletion can be achieved in $O(\log _{B}N)$ time (in Big O notation). Information-theoretically, this is the minimum running time possible for these operations, so using a B-tree is asymptotically optimal.
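The search cost can be checked concretely: one block read per tree level gives a probe count equal to the height of a B-ary tree over N keys. The sketch below (illustrative, not a full B-tree implementation) computes that height with exact integer arithmetic:

```python
def btree_search_ios(N, B):
    # Number of node (block) reads to search among N keys in a B-tree
    # with branching factor B: one read per level, i.e. about log_B N.
    ios, keys = 0, N
    while keys > 1:
        keys = -(-keys // B)  # ceiling division: move one level up
        ios += 1
    return ios

print(btree_search_ios(10**9, 1000))  # 3: a billion keys, three block reads
```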
External sorting is sorting in an external memory setting. External sorting can be done via distribution sort, which is similar to quicksort, or via a $\tfrac {M}{B}$-way merge sort. Both variants achieve the asymptotically optimal runtime of $O\left({\frac {N}{B}}\log _{\frac {M}{B}}{\frac {N}{B}}\right)$ to sort N objects. This bound also applies to the fast Fourier transform in the external memory model.
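The merge-sort variant can be sketched in a few lines. The simulation below (an assumption-laden sketch: it runs entirely in memory, takes M and B as parameters, and assumes the sorted runs fit in a single merge pass) counts block transfers the way the model does:

```python
import heapq

def external_merge_sort(data, M, B):
    # Phase 1: read runs of M elements, sort each in internal memory,
    # and write them back; each run costs its blocks read plus written.
    ios, runs = 0, []
    for i in range(0, len(data), M):
        run = sorted(data[i:i + M])
        runs.append(run)
        ios += 2 * -(-len(run) // B)  # ceil(blocks) read + written
    # Phase 2: one (M/B)-way merge; this sketch assumes the number of
    # runs fits in a single merge pass (len(runs) <= M / B).
    merged = list(heapq.merge(*runs))
    ios += 2 * -(-len(merged) // B)
    return merged, ios

out, ios = external_merge_sort([5, 3, 8, 1, 9, 2, 7, 4], M=4, B=2)
print(out)  # [1, 2, 3, 4, 5, 7, 8, 9]
```

With more runs than M/B, real implementations repeat the merge phase, which is where the log_{M/B} factor in the bound comes from.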
The permutation problem is to rearrange N elements into a specific permutation. This can be done either by sorting, which requires the above sorting runtime, or by inserting each element in order, ignoring the benefit of locality. Thus, permutation can be done in $O\left(\min \left(N,{\frac {N}{B}}\log _{\frac {M}{B}}{\frac {N}{B}}\right)\right)$ time.
== Applications ==
The external memory model captures the memory hierarchy, which is not modeled in other common models used in analyzing data structures, such as the random-access machine, and is useful for proving lower bounds for data structures. The model is also useful for analyzing algorithms that work on datasets too big to fit in internal memory.
A typical example is geographic information systems, especially digital elevation models, where the full data set easily exceeds several gigabytes or even terabytes of data.
This methodology extends beyond general purpose CPUs and also includes GPU computing as well as classical digital signal processing. In general-purpose computing on graphics processing units (GPGPU), powerful graphics cards (GPUs) with little memory (compared with the more familiar system memory, which is most often referred to simply as RAM) are utilized with relatively slow CPU-to-GPU memory transfer (when compared with computation bandwidth).
== History ==
An early use of the term "out-of-core" as an adjective is in 1962 in reference to devices that are other than the core memory of an IBM 360. An early use of the term "out-of-core" with respect to algorithms appears in 1971.
== See also ==
Cache-oblivious algorithm
External memory graph traversal
Online algorithm
Parallel external memory
Streaming algorithm
== References ==
== External links ==
Out of Core SVD and QR
Out of core graphics
Scalapack design
A Kahn process network (KPN, or process network) is a distributed model of computation in which a group of deterministic sequential processes communicate through unbounded first in, first out channels. The model requires that reading from a channel is blocking while writing is non-blocking. Due to these key restrictions, the resulting process network exhibits deterministic behavior that does not depend on the timing of computation nor on communication delays.
Kahn process networks were originally developed for modeling parallel programs, but have proven convenient for modeling embedded systems, high-performance computing systems, signal processing systems, stream processing systems, dataflow programming languages, and other computational tasks. KPNs were introduced by Gilles Kahn in 1974.
== Execution model ==
KPN is a common model for describing signal processing systems in which infinite streams of data are incrementally transformed by processes executing in sequence or in parallel. Although the model describes parallel processes, multitasking or parallelism is not required to execute it.
In a KPN, processes communicate via unbounded FIFO channels. Processes read and write atomic data elements, alternatively called tokens, from and to channels. Writing to a channel is non-blocking, i.e. it always succeeds and does not stall the process, while reading from a channel is blocking, i.e. a process that reads from an empty channel will stall and can only continue when the channel contains sufficient data items (tokens). Processes are not allowed to test an input channel for existence of tokens without consuming them. A FIFO cannot be consumed by multiple processes, nor can multiple processes write to a single FIFO. Given a specific input (token) history for a process, the process must be deterministic so that it always produces the same outputs (tokens). Timing or execution order of processes must not affect the result and therefore testing input channels for tokens is forbidden.
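These rules map naturally onto threads and FIFO queues. In the sketch below (Python, illustrative), an unbounded `queue.Queue` plays the channel: `get()` is the blocking read and `put()` the non-blocking write. The `None` end-of-stream sentinel is an assumption of this sketch, not part of the KPN model:

```python
import queue
import threading

# A two-process KPN sketch: `doubler` reads tokens from channel `a`
# (blocking) and writes doubled tokens to channel `b` (non-blocking,
# since the queues are unbounded).
def doubler(a, b):
    while True:
        tok = a.get()          # blocking read: stalls on an empty channel
        if tok is None:        # end-of-stream sentinel (sketch-only)
            b.put(None)
            return
        b.put(2 * tok)         # non-blocking write: always succeeds

a, b = queue.Queue(), queue.Queue()
t = threading.Thread(target=doubler, args=(a, b))
t.start()
for tok in (1, 2, 3, None):
    a.put(tok)
t.join()

out = []
while (tok := b.get()) is not None:
    out.append(tok)
print(out)  # [2, 4, 6]
```

Because the process is deterministic and never tests channels for emptiness, the output is the same regardless of how the scheduler interleaves the two threads.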
=== Notes on processes ===
A process need not read any input or have any input channels as it may act as a pure data source
A process need not write any output or have any output channels
Testing input channels for emptiness (or non-blocking reads) could be allowed for optimization purposes, but it should not affect outputs. It can be beneficial and/or possible to do something in advance rather than wait for a channel. For example, assume there were two reads from different channels. If the first read would stall (wait for a token) but the second read could succeed directly, it could be beneficial to read the second one first to save time, because the reading itself often consumes some time (e.g. time for memory allocation or copying).
=== Process firing semantics as Petri nets ===
Assuming process P in the KPN above is constructed so that it first reads data from channel A, then channel B, computes something and then writes data to channel C, the execution model of the process can be modeled with the Petri net shown on the right. The single token in the PE resource place forbids the process from executing simultaneously for different input data. When data arrives at channel A or B, tokens are placed into places FIFO A and FIFO B respectively. The transitions of the Petri net are associated with the respective I/O operations and computation. When the data has been written to channel C, PE resource is filled with its initial marking again allowing new data to be read.
=== Process as a finite state machine ===
A process can be modeled as a finite state machine that is in one of two states:
Active; the process computes or writes data
Wait; the process is blocked (waiting) for data
Assuming the finite state machine reads program elements associated with the process, it may read three kinds of tokens, which are "Compute", "Read" and "Write token". Additionally, in the Wait state it can only come back to Active state by reading a special "Get token" which means the communication channel associated with the wait contains readable data.
== Properties ==
=== Boundedness of channels ===
A channel is strictly bounded by $b$ if it has at most $b$ unconsumed tokens for any possible execution. A KPN is strictly bounded by $b$ if all channels are strictly bounded by $b$.
The number of unconsumed tokens depends on the execution order (scheduling) of processes. A spontaneous data source could produce arbitrarily many tokens into a channel if the scheduler would not execute processes consuming those tokens.
A real application can not have unbounded FIFOs and therefore scheduling and maximum capacity of FIFOs must be designed into a practical implementation. The maximum capacity of FIFOs can be handled in several ways:
FIFO bounds can be mathematically derived in design to avoid FIFO overflows. This is, however, not possible for all KPNs: it is an undecidable problem to test whether a KPN is strictly bounded by $b$. Moreover, in practical situations, the bound may be data-dependent.
FIFO bounds can be grown on demand.
Blocking writes can be used so that a process blocks if a FIFO is full. This approach may unfortunately lead to an artificial deadlock unless the designer properly derives safe bounds for the FIFOs (Parks, 1995). Local detection of artificial deadlock at run-time may be necessary to guarantee the production of the correct output.
=== Closed and open systems ===
A closed KPN has no external input or output channels. Processes that have no input channels act as data sources and processes that have no output channels act as data sinks. In an open KPN each process has at least one input and output channel.
=== Determinism ===
Processes of a KPN are deterministic: for the same input history, they must always produce exactly the same output. Processes can be modeled as sequential programs that perform reads and writes to ports in any order or quantity, as long as the determinism property is preserved. As a consequence, the KPN model is deterministic, so that the following factors entirely determine the outputs of the system:
processes
the network
initial tokens
Hence, timing of the processes does not affect outputs of the system.
=== Monotonicity ===
KPN processes are monotonic. Reading more tokens can only lead to writing more tokens. Tokens read in the future can only affect tokens written in the future. In a KPN there is a total order of events inside a signal. However, there is no order relation between events in different signals. Thus, KPNs are only partly ordered, which classifies them as an untimed model.
== Applications ==
Owing to their high expressiveness and succinctness, KPNs are used as the underlying model of computation in several academic modeling tools to represent streaming applications that have certain properties (e.g., dataflow-oriented, stream-based).
The open source Daedalus framework maintained by Leiden Embedded Research Center at Leiden University accepts sequential programs written in C and generates a corresponding KPN. This KPN could, for example, be used to map the KPN onto an FPGA-based platform systematically.
The Ambric Am2045 massively parallel processor array is a KPN implemented in actual silicon. Its 336 32-bit processors are connected by a programmable interconnect of dedicated FIFOs. Thus its channels are strictly bounded with blocking writes.
The AI Engines in some AMD Xilinx Versal devices are building blocks of a Kahn process network.
== See also ==
Synchronous Data Flow
Communicating sequential processes
Flow-based programming
Dataflow programming
== References ==
== Further reading ==
Lee, E.A.; Parks, T.M. (1995). "Dataflow process networks" (PDF). Proceedings of the IEEE. 83 (5): 773–801. doi:10.1109/5.381846. ISSN 0018-9219. Retrieved 2019-02-13.
Josephs, Mark B. (2005). "Models for Data-Flow Sequential Processes". In Abdallah, Ali E.; Jones, Cliff B.; Sanders, Jeff W. (eds.). Communicating Sequential Processes. The First 25 Years: Symposium on the Occasion of 25 Years of CSP, London, UK, July 7-8, 2004. Revised Invited Papers. Lecture Notes in Computer Science. Vol. 3525. Berlin, Heidelberg: Springer Berlin Heidelberg. pp. 85–97. CiteSeerX 10.1.1.60.5694. doi:10.1007/11423348_6. ISBN 978-3-540-32265-8.
In theoretical computer science, a nondeterministic Turing machine (NTM) is a theoretical model of computation whose governing rules specify more than one possible action when in some given situations. That is, an NTM's next state is not completely determined by its action and the current symbol it sees, unlike a deterministic Turing machine.
NTMs are sometimes used in thought experiments to examine the abilities and limits of computers. One of the most important open problems in theoretical computer science is the P versus NP problem, which (among other equivalent formulations) concerns the question of how difficult it is to simulate nondeterministic computation with a deterministic computer.
== Background ==
In essence, a Turing machine is imagined to be a simple computer that reads and writes symbols one at a time on an endless tape by strictly following a set of rules. It determines what action it should perform next according to its internal state and what symbol it currently sees. An example of one of a Turing Machine's rules might thus be: "If you are in state 2 and you see an 'A', then change it to 'B', move left, and switch to state 3."
=== Deterministic Turing machine ===
In a deterministic Turing machine (DTM), the set of rules prescribes at most one action to be performed for any given situation.
A deterministic Turing machine has a transition function that, for a given state and symbol under the tape head, specifies three things:
the symbol to be written to the tape (it may be the same as the symbol currently in that position, or not even write at all, resulting in no practical change),
the direction (left, right or neither) in which the head should move, and
the subsequent state of the finite control.
For example, an X on the tape in state 3 might make the DTM write a Y on the tape, move the head one position to the right, and switch to state 5.
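That rule can be written down directly as one entry of a transition function (a hypothetical encoding, with +1 meaning a move to the right, −1 to the left, and 0 staying put):

```python
# delta: (state, symbol) -> (new_state, symbol_to_write, head_move)
delta = {(3, "X"): (5, "Y", +1)}

state, symbol = 3, "X"
new_state, write_symbol, head_move = delta[(state, symbol)]
print(new_state, write_symbol, head_move)  # 5 Y 1
```

Determinism is visible in the data structure itself: a dictionary maps each (state, symbol) pair to at most one action.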
== Description ==
In contrast to a deterministic Turing machine, in a nondeterministic Turing machine (NTM) the set of rules may prescribe more than one action to be performed for any given situation. For example, an X on the tape in state 3 might allow the NTM to:
Write a Y, move right, and switch to state 5
or
Write an X, move left, and stay in state 3.
Because there can be multiple actions that can follow from a given situation, there can be multiple possible sequences of steps that the NTM can take starting from a given input. If at least one of these possible sequences leads to an "accept" state, the NTM is said to accept the input. While a DTM has a single "computation path" that it follows, an NTM has a "computation tree".
== Formal definition ==
A nondeterministic Turing machine can be formally defined as a six-tuple $M=(Q,\Sigma ,\iota ,\sqcup ,A,\delta )$, where
$Q$ is a finite set of states
$\Sigma $ is a finite set of symbols (the tape alphabet)
$\iota \in Q$ is the initial state
$\sqcup \in \Sigma $ is the blank symbol
$A\subseteq Q$ is the set of accepting (final) states
$\delta \subseteq \left((Q\setminus A)\times \Sigma \right)\times \left(Q\times \Sigma \times \{L,S,R\}\right)$ is a relation on states and symbols called the transition relation, where $L$ is a movement to the left, $S$ is no movement, and $R$ is a movement to the right.
The difference with a standard (deterministic) Turing machine is that, for deterministic Turing machines, the transition relation is a function rather than just a relation.
Configurations and the yields relation on configurations, which describes the possible actions of the Turing machine given any possible contents of the tape, are as for standard Turing machines, except that the yields relation is no longer single-valued. (If the machine is deterministic, the possible computations are all prefixes of a single, possibly infinite, path.)
The input for an NTM is provided in the same manner as for a deterministic Turing machine: the machine is started in the configuration in which the tape head is on the first character of the string (if any), and the tape is all blank otherwise.
An NTM accepts an input string if and only if at least one of the possible computational paths starting from that string puts the machine into an accepting state. When simulating the many branching paths of an NTM on a deterministic machine, we can stop the entire simulation as soon as any branch reaches an accepting state.
=== Alternative definitions ===
As a mathematical construction used primarily in proofs, there are a variety of minor variations on the definition of an NTM, but these variations all accept equivalent languages.
The head movement in the output of the transition relation is often encoded numerically instead of using letters to represent moving the head Left (−1), Stationary (0), and Right (+1), giving a transition function output of $\left(Q\times \Sigma \times \{-1,0,+1\}\right)$. It is common to omit the stationary (0) output, and instead insert the transitive closure of any desired stationary transitions.
Some authors add an explicit reject state, which causes the NTM to halt without accepting. This definition still retains the asymmetry that any nondeterministic branch can accept, but every branch must reject for the string to be rejected.
== Computational equivalence with DTMs ==
Any computational problem that can be solved by a DTM can also be solved by an NTM, and vice versa. However, it is believed that in general the time complexity may not be the same.
=== DTM as a special case of NTM ===
NTMs include DTMs as special cases, so every computation that can be carried out by a DTM can also be carried out by the equivalent NTM.
=== DTM simulation of NTM ===
It might seem that NTMs are more powerful than DTMs, since they can allow trees of possible computations arising from the same initial configuration, accepting a string if any one branch in the tree accepts it. However, it is possible to simulate NTMs with DTMs, and in fact this can be done in more than one way.
==== Multiplicity of configuration states ====
One approach is to use a DTM of which the configurations represent multiple configurations of the NTM, and the DTM's operation consists of visiting each of them in turn, executing a single step at each visit, and spawning new configurations whenever the transition relation defines multiple continuations.
==== Multiplicity of tapes ====
Another construction simulates NTMs with 3-tape DTMs, of which the first tape always holds the original input string, the second is used to simulate a particular computation of the NTM, and the third encodes a path in the NTM's computation tree. The 3-tape DTMs are easily simulated with a normal single-tape DTM.
==== Time complexity and P versus NP ====
In the second construction, the constructed DTM effectively performs a breadth-first search of the NTM's computation tree, visiting all possible computations of the NTM in order of increasing length until it finds an accepting one. Therefore, the length of an accepting computation of the DTM is, in general, exponential in the length of the shortest accepting computation of the NTM. This is believed to be a general property of simulations of NTMs by DTMs. The P = NP problem, the most famous unresolved question in computer science, concerns one case of this issue: whether or not every problem solvable by an NTM in polynomial time is necessarily also solvable by a DTM in polynomial time.
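A breadth-first simulation of this kind is short enough to sketch directly (an illustrative encoding: the transition relation maps a (state, symbol) pair to a list of alternatives, and a step bound stands in for the unbounded search):

```python
from collections import deque

def ntm_accepts(delta, start, accept, tape, blank="_", max_steps=100):
    # Breadth-first search over the NTM's computation tree: accept as
    # soon as any branch reaches an accepting state.
    frontier = deque([(start, 0, tuple(tape) or (blank,))])
    for _ in range(max_steps):
        nxt = deque()
        while frontier:
            state, pos, t = frontier.popleft()
            if state in accept:
                return True
            for ns, ws, mv in delta.get((state, t[pos]), []):
                cells = list(t)
                cells[pos] = ws
                p = pos + mv
                if p < 0:
                    cells.insert(0, blank); p = 0
                elif p == len(cells):
                    cells.append(blank)
                nxt.append((ns, p, tuple(cells)))
        frontier = nxt
        if not frontier:
            return False  # every branch has died: reject
    return False

# A toy NTM accepting exactly the strings that contain a '1'.
delta = {("q0", "0"): [("q0", "0", +1)],
         ("q0", "1"): [("qa", "1", 0)]}
print(ntm_accepts(delta, "q0", {"qa"}, "001"))  # True
print(ntm_accepts(delta, "q0", {"qa"}, "000"))  # False
```

The exponential blow-up shows up as the frontier: in the worst case it grows by the branching factor of the transition relation at every level.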
== Bounded nondeterminism ==
An NTM has the property of bounded nondeterminism. That is, if an NTM always halts on a given input tape T then it halts in a bounded number of steps, and therefore can only have a bounded number of possible configurations.
== Comparison with quantum computers ==
Because quantum computers use quantum bits, which can be in superpositions of states, rather than conventional bits, there is sometimes a misconception that quantum computers are NTMs. However, it is believed by experts (but has not been proven) that the power of quantum computers is, in fact, incomparable to that of NTMs; that is, problems likely exist that an NTM could efficiently solve that a quantum computer cannot and vice versa. In particular, it is likely that NP-complete problems are solvable by NTMs but not by quantum computers in polynomial time.
Intuitively speaking, while a quantum computer can indeed be in a superposition state corresponding to all possible computational branches having been executed at the same time (similar to an NTM), the final measurement will collapse the quantum computer into a randomly selected branch. This branch then does not, in general, represent the sought-for solution, unlike the NTM, which is allowed to pick the right solution among the exponentially many branches.
== See also ==
Probabilistic Turing machine
== References ==
=== General ===
Martin, John C. (1997). "Section 9.6: Nondeterministic Turing machines". Introduction to Languages and the Theory of Computation (2nd ed.). McGraw-Hill. pp. 277–281. ISBN 978-0073191461.
Papadimitriou, Christos (1993). "Section 2.7: Nondeterministic machines". Computational Complexity (1st ed.). Addison-Wesley. pp. 45–50. ISBN 978-0201530827.
== External links ==
C++ Simulator of a Nondeterministic Multitape Turing Machine (free software).
C++ Simulator of a Nondeterministic Multitape Turing Machine download link from sourceforge.net
In mathematics, computer science and physics, a deterministic system is a system in which no randomness is involved in the development of future states of the system. A deterministic model will thus always produce the same output from a given starting condition or initial state.
== In physics ==
Physical laws that are described by differential equations represent deterministic systems, even though the state of the system at a given point in time may be difficult to describe explicitly.
In quantum mechanics, the Schrödinger equation, which describes the continuous time evolution of a system's wave function, is deterministic. However, the relationship between a system's wave function and the observable properties of the system appears to be non-deterministic.
== In mathematics ==
The systems studied in chaos theory are deterministic. If the initial state were known exactly, then the future state of such a system could theoretically be predicted. However, in practice, knowledge about the future state is limited by the precision with which the initial state can be measured, and chaotic systems are characterized by a strong dependence on the initial conditions. This sensitivity to initial conditions can be measured with Lyapunov exponents.
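This sensitivity is easy to demonstrate numerically. The logistic map below is fully deterministic, yet two initial states differing by 10⁻¹⁰ diverge within a few dozen iterations (an illustrative experiment; the map and parameter are standard but not taken from the text):

```python
def iterate(x, r=4.0, steps=60):
    # Deterministic update rule of the logistic map at r = 4 (chaotic).
    traj = []
    for _ in range(steps):
        x = r * x * (1 - x)
        traj.append(x)
    return traj

a = iterate(0.2)
b = iterate(0.2 + 1e-10)  # almost identical initial condition
max_gap = max(abs(p - q) for p, q in zip(a, b))
print(max_gap > 0.01)      # True: a tiny initial error grows large
```

The average exponential rate of that divergence is precisely what a Lyapunov exponent measures.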
Markov chains and other random walks are not deterministic systems, because their development depends on random choices.
== In computer science ==
A deterministic model of computation, for example a deterministic Turing machine, is a model of computation such that the successive states of the machine and the operations to be performed are completely determined by the preceding state.
A deterministic algorithm is an algorithm which, given a particular input, will always produce the same output, with the underlying machine always passing through the same sequence of states. There may be non-deterministic algorithms that run on a deterministic machine, for example, an algorithm that relies on random choices. Generally, for such random choices, one uses a pseudorandom number generator, but one may also use some external physical process, such as the last digits of the time given by the computer clock.
A pseudorandom number generator is a deterministic algorithm, that is designed to produce sequences of numbers that behave as random sequences. A hardware random number generator, however, may be non-deterministic.
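For example, Python's `random` module is a deterministic pseudorandom generator: seeding two instances with the same value reproduces the sequence exactly:

```python
import random

# Two generators with the same seed are in the same internal state,
# so they produce identical "random" sequences.
r1, r2 = random.Random(42), random.Random(42)
seq1 = [r1.randint(0, 99) for _ in range(5)]
seq2 = [r2.randint(0, 99) for _ in range(5)]
print(seq1 == seq2)  # True: same seed, same deterministic output
```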
== Others ==
In economics, the Ramsey–Cass–Koopmans model is deterministic. The stochastic equivalent is known as real business-cycle theory.
As determinism relates to modeling in the natural sciences, a deterministic model uses existing data to model the future behavior of a system. The deterministic model is useful for systems that do not experience frequent or unexpected behavior, unless that behavior is already present in the system via existing data. This type of modeling is distinct from stochastic modeling or forward modeling. Stochastic modeling uses random data in the model, while forward modeling uses a given model to predict future behavior in a system. Deterministic models are used across the natural sciences, including geology, oceanography, physics, and other disciplines.
== See also ==
Deterministic system (philosophy)
Dynamical system
Scientific modelling
Statistical model
Stochastic process
== References ==
In computer science, the cell-probe model is a model of computation similar to the random-access machine, except that all operations are free except memory access. This model is useful for proving lower bounds of algorithms for data structure problems.
== Overview ==
The cell-probe model is a modification of the random-access machine model, in which computational cost is only assigned to accessing memory cells.
The model is intended for proving lower bounds on the complexity of data structure problems.
One type of such problems has two phases: the preprocessing phase and the query phase. The input to the first phase, the preprocessing phase, is a set of data from which to build some structure from memory cells. The input to the second phase, the query phase, is a query parameter. The query has to consult the data structure in order to compute its result; for example, a query may be asked to determine if the query parameter was included in the original input data set.
Another type of problem involves both update operations, that modify the data structure, and query operations. For example, an update may add an element to the structure, or remove one. In both cases, the cell-probe complexity of the data structure is characterized by the number of memory cells accessed during preprocessing, query and (if relevant) update.
The cell probe complexity is a lower bound on the time complexity of the corresponding operations on a random-access machine, where memory transfers are part of the operations counted in measuring time.
An example of such a problem is the dynamic partial sum problem.
== History ==
Andrew Yao's 1981 paper "Should Tables Be Sorted?" is considered as the introduction of the cell-probe model. Yao used it to give a minimum number of memory cell "probes" or accesses necessary to determine whether a given query datum exists within a table stored in memory. In 1989, Fredman and Saks initiated the study of cell probe lower bounds for dynamic data-structure problems (i.e., involving updates and queries), and introduced the notation CPROBE(b) for the cell-probe model assuming that a memory cell (word) consists of b bits.
== Notable results ==
=== Searching Tables ===
Yao considered a static data-structure problem where one has to build a data structure ("table") to represent a set $S$ of $n$ elements out of $1,\dots ,m$. The query parameter is a number $x\leq m$, and the query has to report whether $x$ is in the table. A crucial requirement is that the table consist of exactly $n$ entries, where each entry is an integer between $1$ and $m$. Yao showed that as long as the table size is bounded independently of $m$ and $m$ is large enough, a query must perform $\lceil \log(n+1)\rceil$ probes in the worst case. This shows that a sorted table together with binary search for queries is an optimal scheme, in this restricted setting.
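The matching upper bound is ordinary binary search over the sorted table. The sketch below charges only memory reads, as the cell-probe model does (an illustrative wrapper, not from the paper):

```python
class ProbeCountingTable:
    # A sorted table whose reads are counted: in the cell-probe model,
    # these probes are the only operations that cost anything.
    def __init__(self, cells):
        self.cells = cells
        self.probes = 0

    def read(self, i):
        self.probes += 1
        return self.cells[i]

def member(table, n, x):
    # Binary search: at most ceil(log2(n + 1)) probes.
    lo, hi = 0, n - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        v = table.read(mid)
        if v == x:
            return True
        lo, hi = (mid + 1, hi) if v < x else (lo, mid - 1)
    return False

t = ProbeCountingTable([1, 3, 5, 7, 9, 11, 13])
found = member(t, 7, 9)
print(found, t.probes)  # True 3
```

Here n = 7, and the query uses 3 probes, matching the ⌈log(n+1)⌉ bound.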
In the same paper, Yao also showed that if the problem is relaxed to allow the data structure to store $n+1$ entries, then queries can be performed using only two probes. Like the lower bound above, this upper bound also requires $m$ to be sufficiently large as a function of $n$. Remarkably, it uses only one more table entry than the setting for which the lower bound applies.
=== Dynamic Partial Sums ===
The dynamic partial sum problem defines two operations: Update(k, v), which sets the value in an array A at index k to v, and Sum(k), which returns the sum of the values in A at indices 0 through k. A naïve implementation takes $O(1)$ time for Update and $O(n)$ time for Sum.
Instead, values can be stored as leaves in a tree whose inner nodes store the sum over the subtree rooted at that node. In this structure, Update requires $O(\log n)$ time to update each node on the leaf-to-root path, and Sum similarly requires $O(\log n)$ time to traverse the tree from leaf to root, summing the values of all subtrees to the left of the query index.
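One compact realization of this tree scheme is a Fenwick (binary indexed) tree, sketched below; both operations touch O(log n) cells:

```python
class FenwickTree:
    # A binary indexed tree: a concrete form of the tree scheme above,
    # with O(log n) cell accesses for both Update and Sum.
    def __init__(self, n):
        self.n = n
        self.tree = [0] * (n + 1)
        self.vals = [0] * n

    def update(self, k, v):
        # Set A[k] = v by pushing the delta up the implicit tree.
        delta, self.vals[k] = v - self.vals[k], v
        i = k + 1
        while i <= self.n:
            self.tree[i] += delta
            i += i & -i

    def sum(self, k):
        # Return A[0] + ... + A[k] by descending partial-sum nodes.
        total, i = 0, k + 1
        while i > 0:
            total += self.tree[i]
            i -= i & -i
        return total

ft = FenwickTree(8)
ft.update(2, 5)
ft.update(5, 3)
print(ft.sum(4), ft.sum(7))  # 5 8
```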
Improving on a result of Fredman and Saks, Mihai Pătraşcu used the cell-probe model and an information-transfer argument to show that the partial sums problem requires $\Omega \left(\log n\right)$ time per operation in the worst case (i.e., the worst of query and update must consume such time), assuming $b=\Omega (\log n)$ bits per word. He further exhibited the trade-off curve between update time and query time, and investigated the case in which updates are restricted to small numbers (of $\delta =o(b)$ bits).
=== Disjoint Set Maintenance (Union-Find) ===
In the disjoint-set data structure, the structure represents a collection of disjoint sets; there is an update operation, called Union, which unites two sets, and a query operation, called Find, which identifies the set to which a given element belongs. Fredman and Saks proved that in the model CPROBE(log n), any solution for this problem requires $\Omega (m\alpha (m,n))$ probes in the worst case (even in expectation) to execute $n-1$ unions and $m\geq n$ finds, where $\alpha$ is the inverse Ackermann function. This shows that the classic data structure described in the article on disjoint-set data structures is optimal.
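The classic structure itself is short. The sketch below uses union by rank with path halving, one of the standard variants achieving the O(m α(m, n)) total cost that the lower bound shows is necessary:

```python
class DisjointSet:
    # Union-Find with union by rank and path halving: m operations on
    # n elements take O(m * alpha(m, n)) time in total.
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra        # attach the shorter tree to the taller
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1

ds = DisjointSet(5)
ds.union(0, 1)
ds.union(1, 2)
print(ds.find(0) == ds.find(2))  # True
```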
=== Approximate Nearest Neighbor Searching ===
The exact nearest neighbor search problem is to determine the closest in a set of input points to a given query point. An approximate version of this problem is often considered since many applications of this problem are in very high dimension spaces and solving the problem in high dimensions requires exponential time or space with respect to the dimension.
Chakrabarti and Regev proved that the approximate nearest neighbor search problem on the Hamming cube using polynomial storage and word size $d^{O(1)}$ requires a worst-case query time of $\Omega \left({\frac {\log \log d}{\log \log \log d}}\right)$. This proof used the cell-probe model and information-theoretic techniques from communication complexity.
== The Cell-Probe Model versus Random Access Machines ==
In the cell probe model, limiting the range of values that can be stored in a cell is paramount (otherwise one could encode the whole data structure in one cell). The idealized random access machine used as a computational model in Computer Science does not impose a limit on the contents of each cell (in contrast to the word RAM). Thus cell probe lower bounds apply to the word RAM, but do not apply to the idealized RAM. Certain techniques for cell-probe lower bounds can, however, be carried over to the idealized RAM with an algebraic instruction set and similar lower bounds result.
== External links ==
NIST's Dictionary of Algorithms and Data Structures entry on the cell-probe model
== References ==
=== Notes ===
=== Citations ===
A computational model uses computer programs to simulate and study complex systems using an algorithmic or mechanistic approach and is widely used in a diverse range of fields spanning from physics, engineering, chemistry and biology to economics, psychology, cognitive science and computer science.
The system under study is often a complex nonlinear system for which simple, intuitive analytical solutions are not readily available. Rather than deriving a mathematical analytical solution to the problem, experimentation with the model is done by adjusting the parameters of the system in the computer, and studying the differences in the outcome of the experiments. Operation theories of the model can be derived/deduced from these computational experiments.
Examples of common computational models are weather forecasting models, earth simulator models, flight simulator models, molecular protein folding models, Computational Engineering Models (CEM), and neural network models.
== See also ==
Computational engineering
Computational cognition
Reversible computing
Agent-based model
Artificial neural network
Computational linguistics
Data-driven model
Decision field theory
Dynamical systems model of cognition
Membrane computing
Ontology (information science)
Programming language theory
Microscale and macroscale models
== References ==
In computational complexity theory, the decision tree model is the model of computation in which an algorithm can be considered to be a decision tree, i.e. a sequence of queries or tests that are done adaptively, so the outcome of previous tests can influence the tests performed next.
Typically, these tests have a small number of outcomes (such as a yes–no question) and can be performed quickly (say, with unit computational cost), so the worst-case time complexity of an algorithm in the decision tree model corresponds to the depth of the corresponding tree. This notion of computational complexity of a problem or an algorithm in the decision tree model is called its decision tree complexity or query complexity.
Decision tree models are instrumental in establishing lower bounds for the complexity of certain classes of computational problems and algorithms. Several variants of decision tree models have been introduced, depending on the computational model and type of query algorithms are allowed to perform.
For example, a decision tree argument is used to show that a comparison sort of
n items must make Ω(n log n) comparisons in the worst case. For comparison sorts, a query is a comparison of two items a, b, with two outcomes (assuming no items are equal): either a < b or a > b. Comparison sorts can be expressed as decision trees in this model, since such sorting algorithms only perform these types of queries.
== Comparison trees and lower bounds for sorting ==
Decision trees are often employed to understand algorithms for sorting and other similar problems; this was first done by Ford and Johnson.
For example, many sorting algorithms are comparison sorts, which means that they only gain information about an input sequence
x1, x2, …, xn via local comparisons: testing whether xi < xj, xi = xj, or xi > xj. Assuming that the items to be sorted are all distinct and comparable, this can be rephrased as a yes-or-no question: is xi > xj?
These algorithms can be modeled as binary decision trees, where the queries are comparisons: an internal node corresponds to a query, and the node's children correspond to the next query when the answer to the question is yes or no. For leaf nodes, the output corresponds to a permutation
π that describes how the input sequence was scrambled from the fully ordered list of items. (The inverse of this permutation, π⁻¹, re-orders the input sequence.)
One can show that comparison sorts must use
Ω(n log n) comparisons through a simple argument: for an algorithm to be correct, it must be able to output every possible permutation of n elements; otherwise, the algorithm would fail for that particular permutation as input. So, its corresponding decision tree must have at least as many leaves as permutations: n! leaves. Any binary tree with at least n! leaves has depth at least log2(n!) = Ω(n log2(n)), so this is a lower bound on the run time of a comparison sorting algorithm. In this case, the existence of numerous comparison-sorting algorithms having this time complexity, such as mergesort and heapsort, demonstrates that the bound is tight.
This argument does not use anything about the type of query, so it in fact proves a lower bound for any sorting algorithm that can be modeled as a binary decision tree. In essence, this is a rephrasing of the information-theoretic argument that a correct sorting algorithm must learn at least
log2(n!) bits of information about the input sequence. As a result, it also applies to randomized decision trees.
Other decision tree lower bounds do use that the query is a comparison. For example, consider the task of only using comparisons to find the smallest number among
n numbers. Before the smallest number can be determined, every number except the smallest must "lose" (compare greater) in at least one comparison. So, it takes at least n − 1 comparisons to find the minimum. (The information-theoretic argument here only gives a lower bound of log(n).) A similar argument works for general lower bounds for computing order statistics.
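Both bounds can be checked empirically for small n. The sketch below (illustrative, not from the source) counts comparisons: a linear scan finds the minimum in exactly n − 1 comparisons, and the worst case of Python's built-in comparison sort over all permutations of 5 items must be at least ceil(log2(5!)) = 7, as any correct comparison sort's worst case is bounded below by ceil(log2(n!)):

```python
import math
from functools import cmp_to_key
from itertools import permutations

def find_min(xs):
    """Linear-scan minimum: uses exactly len(xs) - 1 comparisons."""
    comparisons = 0
    best = xs[0]
    for x in xs[1:]:
        comparisons += 1
        if x < best:
            best = x
    return best, comparisons

def comparisons_used(xs):
    """Sort xs with a counting comparator and return the count."""
    count = 0
    def cmp(a, b):
        nonlocal count
        count += 1
        return (a > b) - (a < b)
    sorted(xs, key=cmp_to_key(cmp))
    return count

best, c = find_min([4, 2, 7, 1, 9])
assert (best, c) == (1, 4)          # minimum found in n - 1 = 4 comparisons

# The worst case over all 5! inputs is at least ceil(log2(5!)) = 7.
n = 5
worst = max(comparisons_used(list(p)) for p in permutations(range(n)))
assert worst >= math.ceil(math.log2(math.factorial(n)))
```
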
== Linear and algebraic decision trees ==
Linear decision trees generalize the above comparison decision trees to computing functions that take real vectors
x ∈ R^n as input. The tests in linear decision trees are linear functions: for a particular choice of real numbers a0, …, an, output the sign of a0 + a1x1 + ⋯ + anxn. (Algorithms in this model can only depend on the sign of the output.) Comparison trees are linear decision trees, because the comparison between xi and xj corresponds to the linear function xi − xj. From its definition, linear decision trees can only specify functions f whose fibers can be constructed by taking unions and intersections of half-spaces.
Algebraic decision trees are a generalization of linear decision trees that allow the test functions to be polynomials of degree
d. Geometrically, the space is divided into semi-algebraic sets (a generalization of the arrangement of hyperplanes in the linear case).
These decision tree models, defined by Rabin and Reingold, are often used for proving lower bounds in computational geometry. For example, Ben-Or showed that element uniqueness (the task of computing
f : R^n → {0,1}, where f(x) is 0 if and only if there exist distinct coordinates i, j such that xi = xj) requires an algebraic decision tree of depth Ω(n log n). This was first shown for linear decision models by Dobkin and Lipton. They also show an n² lower bound for linear decision trees on the knapsack problem, generalized to algebraic decision trees by Steele and Yao.
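Ben-Or's Ω(n log n) bound for element uniqueness is matched by an elementary algorithm: sort, then scan for equal neighbors. A short illustrative sketch (not from the source):

```python
def has_duplicate(xs):
    """Element uniqueness by sorting: O(n log n) comparisons, matching
    Ben-Or's Omega(n log n) algebraic-decision-tree lower bound."""
    ys = sorted(xs)
    # After sorting, any two equal elements must be adjacent.
    return any(a == b for a, b in zip(ys, ys[1:]))

print(has_duplicate([3.5, 1.0, 3.5]))   # a repeated value is detected
print(has_duplicate([1.0, 2.0, 3.0]))   # all values distinct
```
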
== Boolean decision tree complexities ==
For Boolean decision trees, the task is to compute the value of an n-bit Boolean function
f : {0,1}^n → {0,1} for an input x ∈ {0,1}^n. The queries correspond to reading a bit of the input, xi, and the output is f(x). Each query may depend on previous queries. There are many types of computational models using decision trees that could be considered, admitting multiple complexity notions, called complexity measures.
=== Deterministic decision tree ===
If the output of a decision tree is
f(x), for all x ∈ {0,1}^n, the decision tree is said to "compute" f. The depth of a tree is the maximum number of queries that can happen before a leaf is reached and a result obtained. D(f), the deterministic decision tree complexity of f, is the smallest depth among all deterministic decision trees that compute f.
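For small n, D(f) can be computed exactly by recursion: on each subcube, try every next query and let an adversary choose the worse answer, stopping when f is constant. A brute-force Python sketch (illustrative; exponential in n, so only for tiny functions):

```python
from functools import lru_cache
from itertools import product

def decision_tree_depth(f, n):
    """Exact deterministic decision tree complexity D(f) for small n."""
    def completions(assignment, free):
        # All full inputs consistent with a partial assignment.
        for bits in product((0, 1), repeat=len(free)):
            x = list(assignment)
            for i, b in zip(free, bits):
                x[i] = b
            yield tuple(x)

    @lru_cache(maxsize=None)
    def depth(assignment):
        free = [i for i, b in enumerate(assignment) if b is None]
        if len({f(x) for x in completions(assignment, free)}) == 1:
            return 0  # f is constant on this subcube: a leaf.
        def after(i, b):
            a = list(assignment)
            a[i] = b
            return depth(tuple(a))
        # Choose the best next query; the adversary picks the worse answer.
        return 1 + min(max(after(i, 0), after(i, 1)) for i in free)

    return depth(tuple([None] * n))

# D(OR_3) = 3: on the all-zeros input every bit must be read,
# while D(x -> x_0) = 1: a single query suffices.
assert decision_tree_depth(lambda x: int(any(x)), 3) == 3
assert decision_tree_depth(lambda x: x[0], 3) == 1
```
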
=== Randomized decision tree ===
One way to define a randomized decision tree is to add additional nodes to the tree, each controlled by a probability
pi. Another equivalent definition is as a distribution over deterministic decision trees. Based on this second definition, the complexity of the randomized tree is defined as the largest depth among all the trees in the support of the underlying distribution. R2(f) is defined as the complexity of the lowest-depth randomized decision tree whose result is f(x) with probability at least 2/3 for all x ∈ {0,1}^n (i.e., with bounded two-sided error). R2(f) is known as the Monte Carlo randomized decision-tree complexity, because the result is allowed to be incorrect with bounded two-sided error. The Las Vegas decision-tree complexity R0(f) measures the expected depth of a decision tree that must be correct (i.e., has zero error). There is also a one-sided bounded-error version, which is denoted by R1(f).
=== Nondeterministic decision tree ===
The nondeterministic decision tree complexity of a function is known more commonly as the certificate complexity of that function. It measures the number of input bits that a nondeterministic algorithm would need to look at in order to evaluate the function with certainty.
Formally, the certificate complexity of
f at x is the size of the smallest subset of indices S ⊆ [n] such that, for all y ∈ {0,1}^n, if yi = xi for all i ∈ S, then f(y) = f(x). The certificate complexity of f is the maximum certificate complexity over all x. The analogous notion, where one only requires the verifier to be correct with probability 2/3, is denoted RC(f).
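The definition translates directly into a brute-force computation for small n. A Python sketch (illustrative only; the search is exponential in n):

```python
from itertools import combinations, product

def certificate_complexity(f, n):
    """Brute-force certificate complexity C(f) for small n: for each
    input x, find the smallest index set S that forces f's value."""
    inputs = list(product((0, 1), repeat=n))

    def cert_at(x):
        for size in range(n + 1):
            for S in combinations(range(n), size):
                # Does fixing x on S force f on every consistent input?
                if all(f(y) == f(x) for y in inputs
                       if all(y[i] == x[i] for i in S)):
                    return size
        return n

    return max(cert_at(x) for x in inputs)

# OR_3: a 1-input is certified by a single 1-bit, but certifying the
# all-zeros input requires reading all three bits, so C(OR_3) = 3.
assert certificate_complexity(lambda x: int(any(x)), 3) == 3
# AND_2: the 1-input needs both bits, any 0-input needs one bit.
assert certificate_complexity(lambda x: x[0] and x[1], 2) == 2
```
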
=== Quantum decision tree ===
The quantum decision tree complexity
Q2(f) is the depth of the lowest-depth quantum decision tree that gives the result f(x) with probability at least 2/3 for all x ∈ {0,1}^n. Another quantity, QE(f), is defined as the depth of the lowest-depth quantum decision tree that gives the result f(x) with probability 1 in all cases (i.e., computes f exactly). Q2(f) and QE(f) are more commonly known as quantum query complexities, because the direct definition of a quantum decision tree is more complicated than in the classical case. Similar to the randomized case, we define Q0(f) and Q1(f).
These notions are typically bounded by the notions of degree and approximate degree. The degree of
f, denoted deg(f), is the smallest degree of any polynomial p satisfying f(x) = p(x) for all x ∈ {0,1}^n. The approximate degree of f, denoted deg̃(f), is the smallest degree of any polynomial p satisfying p(x) ∈ [0, 1/3] whenever f(x) = 0 and p(x) ∈ [2/3, 1] whenever f(x) = 1.
Beals et al. established that
Q0(f) ≥ deg(f)/2 and Q2(f) ≥ deg̃(f)/2.
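Since every Boolean function has a unique multilinear representing polynomial, deg(f) can be computed for small n by Möbius inversion over subsets. A Python sketch (illustrative, not from the source):

```python
from itertools import combinations

def degree(f, n):
    """Degree of the unique multilinear polynomial agreeing with f on
    {0,1}^n, computed by Mobius inversion over subsets (small n only)."""
    def f_on(T):
        # Evaluate f at the indicator vector of the index set T.
        return f(tuple(1 if i in T else 0 for i in range(n)))

    deg = 0
    for size in range(n + 1):
        for S in combinations(range(n), size):
            # Coefficient of the monomial prod_{i in S} x_i.
            coeff = sum((-1) ** (size - r) * f_on(set(T))
                        for r in range(size + 1)
                        for T in combinations(S, r))
            if coeff != 0:
                deg = size
    return deg

# deg(OR_3) = 3 (inclusion-exclusion gives a nonzero top coefficient),
# deg(x_0) = 1, and deg(XOR_2) = 2 (its polynomial is x + y - 2xy).
assert degree(lambda x: int(any(x)), 3) == 3
assert degree(lambda x: x[0], 3) == 1
assert degree(lambda x: x[0] ^ x[1], 2) == 2
```
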
== Relationships between Boolean function complexity measures ==
It follows immediately from the definitions that for all
n-bit Boolean functions f, Q2(f) ≤ R2(f) ≤ R1(f) ≤ R0(f) ≤ D(f) ≤ n, and Q2(f) ≤ Q0(f) ≤ D(f) ≤ n. Finding the best upper bounds in the converse direction is a major goal in the field of query complexity.
All of these types of query complexity are polynomially related. Blum and Impagliazzo, Hartmanis and Hemachandra, and Tardos independently discovered that
D(f) ≤ R0(f)². Noam Nisan found that the Monte Carlo randomized decision tree complexity is also polynomially related to deterministic decision tree complexity: D(f) = O(R2(f)³). (Nisan also showed that D(f) = O(R1(f)²).) A tighter relationship is known between the Monte Carlo and Las Vegas models: R0(f) = O(R2(f)² log R2(f)). This relationship is optimal up to polylogarithmic factors. As for quantum decision tree complexities, D(f) = O(Q2(f)⁴), and this bound is tight. Midrijanis showed that D(f) = O(Q0(f)³), improving a quartic bound due to Beals et al.
These polynomial relationships are valid only for total Boolean functions. For partial Boolean functions, whose domain is a subset of {0,1}^n, an exponential separation between Q0(f) and D(f) is possible; the first example of such a problem was discovered by Deutsch and Jozsa.
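The Deutsch–Jozsa separation can be simulated in a few lines: given the promise that f is constant or balanced, an exact deterministic algorithm needs 2^(n−1) + 1 queries in the worst case, while the quantum circuit (H^⊗n, one phase-oracle query, H^⊗n) decides with one. A pure-Python sketch (illustrative; the example functions are assumed inputs):

```python
def hadamard(state):
    """Fast Walsh-Hadamard transform, normalized by 1/sqrt(N)."""
    n = len(state)
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                x, y = state[j], state[j + h]
                state[j], state[j + h] = x + y, x - y
        h *= 2
    return [v / n ** 0.5 for v in state]

def deutsch_jozsa_prob_zero(f, n):
    """Probability of measuring |0...0> in the Deutsch-Jozsa circuit.
    The final amplitude at |0...0> is the average of (-1)^f(x), so the
    probability is 1 when f is constant and 0 when f is balanced."""
    N = 2 ** n
    state = [0.0] * N
    state[0] = 1.0
    state = hadamard(state)                                    # H^n |0..0>
    state = [(-1) ** f(x) * s for x, s in enumerate(state)]    # one query
    state = hadamard(state)                                    # final H^n
    return state[0] ** 2

n = 3
p_const = deutsch_jozsa_prob_zero(lambda x: 0, n)
p_balanced = deutsch_jozsa_prob_zero(lambda x: bin(x).count("1") % 2, n)
assert abs(p_const - 1.0) < 1e-9 and abs(p_balanced) < 1e-9
```
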
=== Sensitivity conjecture ===
For a Boolean function
f : {0,1}^n → {0,1}, the sensitivity of f is defined to be the maximum sensitivity of f over all x, where the sensitivity of f at x is the number of single-bit changes in x that change the value of f(x). Sensitivity is related to the notion of total influence from the analysis of Boolean functions, which is equal to the average sensitivity over all x.
The sensitivity conjecture is the conjecture that sensitivity is polynomially related to query complexity; that is, there exist exponents c, c′ such that, for all f, D(f) = O(s(f)^c) and s(f) = O(D(f)^c′). One can show through a simple argument that s(f) ≤ D(f), so the conjecture is specifically concerned with finding a lower bound for sensitivity. Since all of the previously discussed complexity measures are polynomially related, the precise choice of complexity measure is not relevant. However, the conjecture is typically phrased as the question of relating sensitivity with block sensitivity.
The block sensitivity of
f, denoted bs(f), is defined to be the maximum block sensitivity of f over all x. The block sensitivity of f at x is the maximum number t of disjoint subsets S1, …, St ⊆ [n] such that, for each of the subsets Si, flipping the bits of x corresponding to Si changes the value of f(x).
In 2019, Hao Huang proved the sensitivity conjecture, showing that
bs(f) = O(s(f)⁴).
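Both measures can be computed by brute force for small n, directly from their definitions. A Python sketch (illustrative only; the enumeration is exponential in n):

```python
from itertools import combinations, product

def flip(x, block):
    """Flip the bits of x in the given index set."""
    return tuple(1 - b if i in block else b for i, b in enumerate(x))

def sensitivity(f, n):
    """s(f): max over x of the number of sensitive single bits."""
    return max(sum(f(flip(x, {i})) != f(x) for i in range(n))
               for x in product((0, 1), repeat=n))

def block_sensitivity(f, n):
    """bs(f): max over x of the largest family of pairwise-disjoint
    sensitive blocks (brute force)."""
    def bs_at(x):
        sensitive = [frozenset(B)
                     for r in range(1, n + 1)
                     for B in combinations(range(n), r)
                     if f(flip(x, set(B))) != f(x)]
        best = 0
        def grow(k, used, count):
            nonlocal best
            best = max(best, count)
            for j in range(k, len(sensitive)):
                if not (sensitive[j] & used):
                    grow(j + 1, used | sensitive[j], count + 1)
        grow(0, frozenset(), 0)
        return best
    return max(bs_at(x) for x in product((0, 1), repeat=n))

# For OR_3, both measures equal 3 (witnessed at the all-zeros input),
# consistent with s(f) <= bs(f) = O(s(f)^4).
assert sensitivity(lambda x: int(any(x)), 3) == 3
assert block_sensitivity(lambda x: int(any(x)), 3) == 3
```
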
== See also ==
Comparison sort
Decision tree
Aanderaa–Karp–Rosenberg conjecture
Minimum spanning tree § Decision trees
== References ==
=== Surveys ===
Buhrman, Harry; de Wolf, Ronald (2002), "Complexity Measures and Decision Tree Complexity: A Survey" (PDF), Theoretical Computer Science, 288 (1): 21–43, doi:10.1016/S0304-3975(01)00144-X
In computer science, the Robertson–Webb (RW) query model is a model of computation used by algorithms for the problem of fair cake-cutting. In this problem, there is a resource called a "cake", and several agents with different value measures on the cake. The goal is to divide the cake among the agents such that each agent will consider his/her piece as "fair" by his/her personal value measure. Since the agents' valuations can be very complex, they cannot - in general - be given as inputs to a fair division algorithm. The RW model specifies two kinds of queries that a fair division algorithm may ask the agents: Eval and Cut. Informally, an Eval query asks an agent to specify his/her value to a given piece of the cake, and a Cut query (also called a Mark query) asks an agent to specify a piece of cake with a given value.
Despite the simplicity of the model, many classic cake-cutting algorithms can be described only by these two queries. On the other hand, there are fair cake-cutting problems that provably cannot be solved in the RW model using finitely many queries.
The Eval and Cut queries were first described in the book of Jack M. Robertson and William A. Webb. The name "Robertson–Webb model" was coined and formalized by Woeginger and Sgall.
== Definitions ==
The standard RW model assumes that the cake is an interval, usually the interval [0,1]. There are n agents, and each agent i has a value measure vi on the cake. The algorithm does not know vi, but can access it using two kinds of queries:
An eval query: given two real numbers x and y, Evali(x,y) asks agent i to report the value of the interval [x,y], i.e., vi ([x,y]).
A mark query (also called a cut query): given two real numbers x and r, Marki(x,r) asks agent i to report some value y such that vi([x,y]) = r.
== Example ==
The classic Divide and choose algorithm, for cutting a cake between two children, can be done using four queries.
Ask Alice an Eval(0,1) query; let V1 be the answer (this is Alice's value of the entire cake).
Ask Alice a Mark(0, V1 / 2) query; let x1 be the answer (this is Alice's mark which yields two pieces equal in her eyes).
Ask George an Eval(0, x1) and an Eval(x1, 1) queries.
If the former value is larger, give (0,x1) to George and (x1,1) to Alice; else, give (0,x1) to Alice and (x1,1) to George.
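The four-query protocol above can be written directly against the two query oracles. A minimal Python sketch (the concrete valuation functions are assumed example inputs, not part of the model):

```python
def divide_and_choose(eval_alice, mark_alice, eval_george):
    """Divide-and-choose on the cake [0, 1] using four RW queries."""
    v1 = eval_alice(0, 1)          # 1. Alice's value of the whole cake
    x1 = mark_alice(0, v1 / 2)     # 2. Alice's halfway mark
    left = eval_george(0, x1)      # 3. George's value of the left piece
    right = eval_george(x1, 1)     # 4. George's value of the right piece
    if left > right:
        return {"George": (0, x1), "Alice": (x1, 1)}
    return {"Alice": (0, x1), "George": (x1, 1)}

# Example valuations (assumed for illustration): Alice's measure is
# uniform; George only values the right half of the cake.
alice_eval = lambda x, y: y - x
alice_mark = lambda x, r: x + r
george_eval = lambda x, y: 2 * max(0.0, min(y, 1.0) - max(x, 0.5))

pieces = divide_and_choose(alice_eval, alice_mark, george_eval)
# Alice marks at 0.5; George prefers the right piece, worth 1 to him.
assert pieces == {"Alice": (0, 0.5), "George": (0.5, 1)}
```
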
== Results ==
Besides divide-and-choose, many cake-cutting algorithms can be performed using a number of RW queries polynomial in n (the number of agents). For example, Last diminisher can be done with O(n²) RW queries, and the Even–Paz protocol with O(n log n) RW queries. In parallel, there are many hardness results, proving that certain fair division problems require many RW queries to complete. Some such hardness results are shown below.
Proportional cake-cutting requires Ω(n log n) RW queries when either
the pieces must be connected, or
the protocol is deterministic, or
the precision of cutting the cake is finite.
The only protocol which uses O(n) RW queries is a randomized protocol, which can return disconnected pieces, and the allocation might be only fractionally-proportional.
Proportional cake-cutting with different entitlements requires at least Ω(n log(D)) RW queries, where D is the common denominator of the entitlements (in particular, it cannot be found using a bounded number of queries if the entitlements are irrational). There is an algorithm that uses O(n log(D)) RW queries for rational entitlements, and a finite algorithm for irrational entitlements.
Envy-free cake-cutting requires
Ω(n²) RW queries when the pieces may be disconnected,
Infinitely many queries when the pieces must be connected and there are at least 3 agents. In other words, there is no algorithm that always finds an envy-free allocation among 3 or more agents using finitely many RW queries.
For any ε > 0, an ε-envy-free connected cake-cutting requires at least Ω(log ε⁻¹) queries. For 3 agents, an O(log ε⁻¹) protocol exists. For 4 agents, an O(poly(log ε⁻¹)) protocol exists. For 5 or more agents, the best known protocol requires O(n ε⁻¹) queries, which shows an exponential gap in the query complexity.
Equitable cake-cutting cannot be done using finitely-many RW queries even for 2 agents. Moreover, for any ε > 0:
A connected ε-equitable cake-cutting requires at least Ω(log ε⁻¹) queries. For 2 agents, an O(log ε⁻¹) protocol exists. For 3 or more agents, the best known protocol requires O(n (log n + log ε⁻¹)) queries.
Even without connectivity, ε-equitable cake-cutting requires at least Ω(log ε⁻¹ / log log ε⁻¹) RW queries.
Exact cake-cutting (also known as perfect cake-cutting) cannot be done using finitely-many RW queries even for 2 agents. Moreover, for any ε > 0:
An ε-perfect cake-cutting with the minimum possible number of cuts requires at least Ω(log ε⁻¹) queries. For 2 agents, an O(log ε⁻¹) protocol exists. For 3 or more agents, the best known protocol requires O(n³ ε⁻¹) queries.
Maximin share cake-cutting, when the pieces must be separated by a positive distance, cannot be done using finitely-many RW queries. Moreover, even for a single agent, there is no algorithm that computes the agent's maximin-share using finitely-many RW queries. However:
For any ε > 0, it is possible to compute a value between the MMS and the MMS − ε using O(n log ε⁻¹) RW queries.
When the cake is circular (i.e., in fair pie-cutting), it is possible to compute a value between the MMS and the MMS − ε using O(n ε⁻¹) RW queries. It is open whether O(n log ε⁻¹) RW queries suffice.
Average-proportional cake-cutting (i.e., an allocation between n families, such that for each family, the average value is at least 1/n of the total) cannot be computed using finitely-many RW queries, even when there are 2 families with 2 members in each family. The proof is by reduction from equitable cake-cutting.
== Variants ==
=== Left-mark and right-mark ===
When the value measure of an agent is not strictly positive (i.e., there are parts that the agent values at 0), a mark query can, in principle, return infinitely many values. For example, if an agent values [0,0.9] at 1 and [0.9,1] at 0, then the query Mark(0,1) can return any value between 0.9 and 1. Some algorithms require a more specific value:
The left-mark query, LeftMark(x,r), returns the leftmost (smallest) y such that vi ([x,y]) = r;
The right-mark query, RightMark(x,r), returns the rightmost (largest) y such that vi ([x,y]) = r;
If only one of these two variants is given (in addition to the Eval query), the other variant cannot be computed in finite time.
=== Two-dimensional cakes ===
The RW query model has been generalized to two-dimensional cakes and multi-dimensional cakes.
== Alternative models ==
There are many cake-cutting algorithms that do not use the RW model. They usually use one of the following models.
=== Direct revelation model ===
Algorithms for restricted classes of valuations, such as piecewise-linear, piecewise-constant or piecewise-uniform, which can be given explicitly as input to the algorithm. Some such algorithms were developed for truthful cake-cutting.
=== Moving-knife model ===
In this model, there are knives moving continuously along the cake (see moving-knife procedures). This model is related to the RW model as follows: any moving-knife procedure with a fixed number of agents and a fixed number of knives can be simulated, up to any precision ε > 0, using O(log ε⁻¹) RW queries.
=== Simultaneous queries model ===
In this model, agents simultaneously send discretizations of their preferences. A discretization is a sequence of cut-points, and the values of pieces between these cut-points (for example: a protocol for two agents might require each agent to report a sequence of three cut-points (0,x,1) where the values of (0,x) and (x,1) are 1/2). These reports are used to compute a fair allocation. The complexity of an algorithm in this model is defined as the maximum number of intervals in a required discretization (so the complexity of the above protocol is 2).
One advantage of this model over the RW model is that it enables agents' preferences to be elicited in parallel. This allows a proportional cake-cutting to be computed in time O(n), by simultaneously asking each agent for a discretization with n intervals of equal value. In contrast, in the RW model there is an Ω(n log n) lower bound. On the other hand, in the simultaneous model it is impossible to compute an envy-free cake-cutting using a finite discretization for 3 or more agents; but for every ε > 0, there exists a simultaneous protocol with complexity O(n/ε²) that attains an ε-approximate envy-free division.
== See also ==
Demand oracle (and value oracle) - a similar query model in a setting with indivisible objects.
== References ==
The étale or algebraic fundamental group is an analogue in algebraic geometry, for schemes, of the usual fundamental group of topological spaces.
== Topological analogue/informal discussion ==
In algebraic topology, the fundamental group
π1(X, x) of a pointed topological space (X, x) is defined as the group of homotopy classes of loops based at x. This definition works well for spaces such as real and complex manifolds, but gives undesirable results for an algebraic variety with the Zariski topology.
In the classification of covering spaces, it is shown that the fundamental group is exactly the group of deck transformations of the universal covering space. This is more promising: finite étale morphisms of algebraic varieties are the appropriate analogue of covering spaces of topological spaces. Unfortunately, an algebraic variety
X often fails to have a "universal cover" that is finite over X, so one must consider the entire category of finite étale coverings of X. One can then define the étale fundamental group as an inverse limit of finite automorphism groups.
== Formal definition ==
Let X be a connected and locally noetherian scheme, let x be a geometric point of X, and let C be the category of pairs (Y, f) such that f : Y → X is a finite étale morphism from a scheme Y. Morphisms (Y, f) → (Y′, f′) in this category are morphisms Y → Y′ as schemes over X.
This category has a natural functor to the category of sets, namely the functor F(Y) = Hom_X(x, Y); geometrically this is the fiber of Y → X over x, and abstractly it is the Yoneda functor represented by x in the category of schemes over X. The functor F is typically not representable in C; however, it is pro-representable in C, in fact by Galois covers of X. This means that we have a projective system {X_j → X_i | i < j in I} in C, indexed by a directed set I, where the X_i are Galois covers of X, i.e., finite étale schemes over X such that #Aut_X(X_i) = deg(X_i/X). It also means that we have an isomorphism of functors F(Y) = lim_{→ i ∈ I} Hom_C(X_i, Y), a direct limit over I.
In particular, we have a marked point P ∈ lim_{← i ∈ I} F(X_i) of the projective system.
For two such covers X_i, X_j, the map X_j → X_i induces a group homomorphism Aut_X(X_j) → Aut_X(X_i), which produces a projective system of automorphism groups from the projective system {X_i}. We then make the following definition: the étale fundamental group π1(X, x) of X at x is the inverse limit
π1(X, x) = lim_{← i ∈ I} Aut_X(X_i),
with the inverse limit topology.
The functor F is now a functor from C to the category of finite continuous π1(X, x)-sets, and establishes an equivalence of categories between C and the category of finite continuous π1(X, x)-sets.
== Examples and theorems ==
The most basic example is π1(Spec k), the étale fundamental group of a field k. Essentially by definition, the fundamental group of k can be shown to be isomorphic to the absolute Galois group Gal(k^sep / k). More precisely, the choice of a geometric point of Spec(k) is equivalent to giving a separably closed extension field K, and the étale fundamental group with respect to that base point identifies with the Galois group Gal(K/k). This interpretation of the Galois group is known as Grothendieck's Galois theory.
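As a concrete illustration of this identification (a standard fact, recorded here as an example rather than taken from the surrounding text): for a finite field F_q, the étale fundamental group of its spectrum is the absolute Galois group, which is the profinite completion of the integers:

```latex
\pi_1^{\mathrm{\acute{e}t}}(\operatorname{Spec}\mathbf{F}_q)
  \;\cong\; \operatorname{Gal}(\overline{\mathbf{F}}_q/\mathbf{F}_q)
  \;\cong\; \varprojlim_{n}\,\mathbf{Z}/n\mathbf{Z}
  \;=\; \widehat{\mathbf{Z}},
```

topologically generated by the Frobenius automorphism x ↦ x^q.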
More generally, for any geometrically connected variety
X over a field k (i.e., X is such that X^sep := X ×_k k^sep is connected) there is an exact sequence of profinite groups:
1 → π1(X^sep, x̄) → π1(X, x̄) → Gal(k^sep/k) → 1.
=== Schemes over a field of characteristic zero ===
For a scheme
X that is of finite type over C, the complex numbers, there is a close relation between the étale fundamental group of X and the usual, topological, fundamental group of X(C), the complex analytic space attached to X. The algebraic fundamental group, as it is typically called in this case, is the profinite completion of π1(X). This is a consequence of the Riemann existence theorem, which says that all finite étale coverings of X(C) stem from ones of X. In particular, since the fundamental group of smooth curves over C (i.e., open Riemann surfaces) is well understood, this determines the algebraic fundamental group. More generally, the fundamental group of a proper scheme over any algebraically closed field of characteristic zero is known, because an extension of algebraically closed fields induces isomorphic fundamental groups.
=== Schemes over a field of positive characteristic and the tame fundamental group ===
For an algebraically closed field {\displaystyle k} of positive characteristic, the results are different, since Artin–Schreier coverings exist in this situation. For example, the fundamental group of the affine line {\displaystyle \mathbf {A} _{k}^{1}} is not topologically finitely generated. The tame fundamental group of a scheme {\displaystyle U} is a quotient of the usual fundamental group of {\displaystyle U} which takes into account only covers that are tamely ramified along {\displaystyle D}, where {\displaystyle X} is some compactification and {\displaystyle D} is the complement of {\displaystyle U} in {\displaystyle X}. For example, the tame fundamental group of the affine line is zero.
=== Affine schemes over a field of characteristic p ===
It turns out that every affine scheme {\displaystyle X\subset \mathbf {A} _{k}^{n}} is a {\displaystyle K(\pi ,1)}-space, in the sense that the étale homotopy type of {\displaystyle X} is entirely determined by its étale homotopy group. Here {\displaystyle \pi =\pi _{1}^{et}(X,{\overline {x}})}, where {\displaystyle {\overline {x}}} is a geometric point.
=== Further topics ===
From a category-theoretic point of view, the fundamental group is a functor:
{Pointed algebraic varieties} → {Profinite groups}.
The inverse Galois problem asks what groups can arise as fundamental groups (or Galois groups of field extensions). Anabelian geometry, for example Grothendieck's section conjecture, seeks to identify classes of varieties which are determined by their fundamental groups.
Friedlander (1982) studies higher étale homotopy groups by means of the étale homotopy type of a scheme.
== The pro-étale fundamental group ==
Bhatt & Scholze (2015, §7) have introduced a variant of the étale fundamental group called the pro-étale fundamental group. It is constructed by considering, instead of finite étale covers, maps which are both étale and satisfy the valuative criterion of properness. For geometrically unibranch schemes (e.g., normal schemes), the two approaches agree, but in general the pro-étale fundamental group is a finer invariant: its profinite completion is the étale fundamental group.
== See also ==
étale morphism
Fundamental group
Fundamental group scheme
== Notes ==
== References ==
Bhatt, Bhargav; Scholze, Peter (2015), "The pro-étale topology for schemes", Astérisque: 99–201, arXiv:1309.1198, Bibcode:2013arXiv1309.1198B, MR 3379634
Friedlander, Eric M. (1982), Étale homotopy of simplicial schemes, Annals of Mathematics Studies, vol. 104, Princeton University Press, ISBN 978-0-691-08288-2
Murre, J. P. (1967), Lectures on an introduction to Grothendieck's theory of the fundamental group, Bombay: Tata Institute of Fundamental Research, MR 0302650
Tamagawa, Akio (1997), "The Grothendieck conjecture for affine curves", Compositio Mathematica, 109 (2): 135–194, doi:10.1023/A:1000114400142, MR 1478817
This article incorporates material from étale fundamental group on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
In mathematics, the Birch and Swinnerton-Dyer conjecture (often called the Birch–Swinnerton-Dyer conjecture) describes the set of rational solutions to equations defining an elliptic curve. It is an open problem in the field of number theory and is widely recognized as one of the most challenging mathematical problems. It is named after mathematicians Bryan John Birch and Peter Swinnerton-Dyer, who developed the conjecture during the first half of the 1960s with the help of machine computation. Only special cases of the conjecture have been proven.
The modern formulation of the conjecture relates arithmetic data associated with an elliptic curve E over a number field K to the behaviour of the Hasse–Weil L-function L(E, s) of E at s = 1. More specifically, it is conjectured that the rank of the abelian group E(K) of points of E is the order of the zero of L(E, s) at s = 1. The first non-zero coefficient in the Taylor expansion of L(E, s) at s = 1 is given by more refined arithmetic data attached to E over K (Wiles 2006).
The conjecture was chosen as one of the seven Millennium Prize Problems listed by the Clay Mathematics Institute, which has offered a $1,000,000 prize for the first correct proof.
== Background ==
Mordell (1922) proved Mordell's theorem: the group of rational points on an elliptic curve has a finite basis. This means that for any elliptic curve there is a finite subset of the rational points on the curve, from which all further rational points may be generated.
If the number of rational points on a curve is infinite then some point in a finite basis must have infinite order. The number of independent basis points with infinite order is called the rank of the curve, and is an important invariant property of an elliptic curve.
If the rank of an elliptic curve is 0, then the curve has only a finite number of rational points. On the other hand, if the rank of the curve is greater than 0, then the curve has an infinite number of rational points.
Although Mordell's theorem shows that the rank of an elliptic curve is always finite, it does not give an effective method for calculating the rank of every curve. The rank of certain elliptic curves can be calculated using numerical methods but (in the current state of knowledge) it is unknown if these methods handle all curves.
An L-function L(E, s) can be defined for an elliptic curve E by constructing an Euler product from the number of points on the curve modulo each prime p. This L-function is analogous to the Riemann zeta function and the Dirichlet L-series that is defined for a binary quadratic form. It is a special case of a Hasse–Weil L-function.
The natural definition of L(E, s) only converges for values of s in the complex plane with Re(s) > 3/2. Helmut Hasse conjectured that L(E, s) could be extended by analytic continuation to the whole complex plane. This conjecture was first proved by Deuring (1941) for elliptic curves with complex multiplication. It was subsequently shown to be true for all elliptic curves over Q, as a consequence of the modularity theorem in 2001.
Finding rational points on a general elliptic curve is a difficult problem. Finding the points on an elliptic curve modulo a given prime p is conceptually straightforward, as there are only a finite number of possibilities to check. However, for large primes it is computationally intensive.
== History ==
In the early 1960s Peter Swinnerton-Dyer used the EDSAC-2 computer at the University of Cambridge Computer Laboratory to calculate the number of points modulo p (denoted by Np) for a large number of primes p on elliptic curves whose rank was known. From these numerical results Birch & Swinnerton-Dyer (1965) conjectured that Np for a curve E with rank r obeys an asymptotic law
{\displaystyle \prod _{p\leq x}{\frac {N_{p}}{p}}\approx C\log(x)^{r}{\mbox{ as }}x\rightarrow \infty }
where C is a constant.
Initially, this was based on somewhat tenuous trends in graphical plots; this induced a measure of skepticism in J. W. S. Cassels (Birch's Ph.D. advisor). Over time the numerical evidence stacked up.
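The kind of point count behind this numerical evidence is easy to reproduce. The sketch below is an illustration only: the curve y2 = x3 − x (which has rank 0 over Q) and the prime bound 200 are chosen arbitrarily, and the bad prime p = 2 is skipped.

```python
# Count N_p, the number of points on y^2 = x^3 - x over F_p (including
# the point at infinity), and accumulate the Birch-Swinnerton-Dyer
# product prod(N_p / p); for a rank-0 curve it should stay roughly bounded.

def Np(p):
    """Number of points on y^2 = x^3 - x over F_p, including infinity."""
    count = 1  # the point at infinity
    squares = {}
    for y in range(p):
        s = (y * y) % p
        squares[s] = squares.get(s, 0) + 1  # multiplicity of each square
    for x in range(p):
        count += squares.get((x * x * x - x) % p, 0)
    return count

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            for j in range(i * i, n + 1, i):
                sieve[j] = False
    return [i for i, is_prime in enumerate(sieve) if is_prime]

product = 1.0
for p in primes_up_to(200):
    if p > 2:  # skip p = 2, where this curve has bad reduction
        product *= Np(p) / p
print(product)  # stays modest, consistent with rank r = 0
```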
This in turn led them to make a general conjecture about the behavior of a curve's L-function L(E, s) at s = 1, namely that it would have a zero of order r at this point. This was a far-sighted conjecture for the time, given that the analytic continuation of L(E, s) was only established for curves with complex multiplication, which were also the main source of numerical examples. (NB that the reciprocal of the L-function is from some points of view a more natural object of study; on occasion, this means that one should consider poles rather than zeroes.)
The conjecture was subsequently extended to include the prediction of the precise leading Taylor coefficient of the L-function at s = 1. It is conjecturally given by
{\displaystyle {\frac {L^{(r)}(E,1)}{r!}}={\frac {\#\mathrm {Sha} (E)\Omega _{E}R_{E}\prod _{p|N}c_{p}}{(\#E_{\mathrm {tor} })^{2}}}}
where the quantities on the right-hand side are invariants of the curve, studied by Cassels, Tate, Shafarevich and others (Wiles 2006):
{\displaystyle \#E_{\mathrm {tor} }} is the order of the torsion group,
{\displaystyle \#\mathrm {Sha} (E)} = #Ш(E) is the order of the Tate–Shafarevich group,
{\displaystyle \Omega _{E}} is the real period of E multiplied by the number of connected components of E,
{\displaystyle R_{E}} is the regulator of E which is defined via the canonical heights of a basis of rational points,
{\displaystyle c_{p}} is the Tamagawa number of E at a prime p dividing the conductor N of E. It can be found by Tate's algorithm.
At the time of the inception of the conjecture little was known, not even the well-definedness of the left side (referred to as analytic) or the right side (referred to as algebraic) of this equation. John Tate expressed this in 1974 in a famous quote:
This remarkable conjecture relates the behavior of a function {\displaystyle L} at a point where it is not at present known to be defined to the order of a group Ш which is not known to be finite!
By the modularity theorem proved in 2001 for elliptic curves over {\displaystyle \mathbb {Q} } the left side is now known to be well-defined, and the finiteness of Ш(E) is known when additionally the analytic rank is at most 1, i.e., if {\displaystyle L(E,s)} vanishes at most to order 1 at {\displaystyle s=1}. Both parts remain open.
== Current status ==
The Birch and Swinnerton-Dyer conjecture has been proved only in special cases:
Coates & Wiles (1977) proved that if E is a curve over a number field F with complex multiplication by an imaginary quadratic field K of class number 1, F = K or Q, and L(E, 1) is not 0 then E(F) is a finite group. This was extended to the case where F is any finite abelian extension of K by Arthaud (1978).
Gross & Zagier (1986) showed that if a modular elliptic curve has a first-order zero at s = 1 then it has a rational point of infinite order; see Gross–Zagier theorem.
Kolyvagin (1989) showed that a modular elliptic curve E for which L(E, 1) is not zero has rank 0, and a modular elliptic curve E for which L(E, 1) has a first-order zero at s = 1 has rank 1.
Rubin (1991) showed that for elliptic curves defined over an imaginary quadratic field K with complex multiplication by K, if the L-series of the elliptic curve was not zero at s = 1, then the p-part of the Tate–Shafarevich group had the order predicted by the Birch and Swinnerton-Dyer conjecture, for all primes p > 7.
Breuil et al. (2001), extending work of Wiles (1995), proved that all elliptic curves defined over the rational numbers are modular, which extends results #2 and #3 to all elliptic curves over the rationals, and shows that the L-functions of all elliptic curves over Q are defined at s = 1.
Bhargava & Shankar (2015) proved that the average rank of the Mordell–Weil group of an elliptic curve over Q is bounded above by 7/6. Combining this with the p-parity theorem of Nekovář (2009) and Dokchitser & Dokchitser (2010) and with the proof of the main conjecture of Iwasawa theory for GL(2) by Skinner & Urban (2014), they conclude that a positive proportion of elliptic curves over Q have analytic rank zero, and hence, by Kolyvagin (1989), satisfy the Birch and Swinnerton-Dyer conjecture.
There are currently no proofs involving curves with a rank greater than 1.
There is extensive numerical evidence for the truth of the conjecture.
== Consequences ==
Much like the Riemann hypothesis, this conjecture has multiple consequences, including the following two:
Let n be an odd square-free integer. Assuming the Birch and Swinnerton-Dyer conjecture, n is the area of a right triangle with rational side lengths (a congruent number) if and only if the number of triplets of integers (x, y, z) satisfying 2x2 + y2 + 8z2 = n is twice the number of triplets satisfying 2x2 + y2 + 32z2 = n. This statement, due to Tunnell's theorem (Tunnell 1983), is related to the fact that n is a congruent number if and only if the elliptic curve y2 = x3 − n2x has a rational point of infinite order (thus, under the Birch and Swinnerton-Dyer conjecture, its L-function has a zero at 1). The interest in this statement is that the condition is easily verified.
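The easy verifiability of the condition can be made concrete with a brute-force count of the two sets of triplets. This is a minimal sketch, not an optimized congruent-number tester; the search bound follows from x², y², z² ≤ n.

```python
# Tunnell's criterion for odd square-free n: n is a congruent number
# iff  #{(x,y,z) : 2x^2 + y^2 + 8z^2 = n} == 2 * #{(x,y,z) : 2x^2 + y^2 + 32z^2 = n}.
# (The "counts equal => congruent" direction assumes the BSD conjecture.)

def count_triplets(n, a, b, c):
    """Count integer triples (x, y, z) with a*x^2 + b*y^2 + c*z^2 == n."""
    bound = int(n ** 0.5) + 1
    total = 0
    for x in range(-bound, bound + 1):
        for y in range(-bound, bound + 1):
            for z in range(-bound, bound + 1):
                if a * x * x + b * y * y + c * z * z == n:
                    total += 1
    return total

def tunnell_congruent(n):
    """Tunnell's test for odd square-free n (True = congruent, under BSD)."""
    return count_triplets(n, 2, 1, 8) == 2 * count_triplets(n, 2, 1, 32)

print(tunnell_congruent(5), tunnell_congruent(1))  # True False
```

For n = 5 both counts are zero, so the criterion holds (5 is the area of the rational right triangle with legs 3/2 and 20/3); for n = 1 the counts are 2 and 2, so the criterion fails.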
In a different direction, certain analytic methods allow for an estimation of the order of the zero in the center of the critical strip for families of L-functions. Admitting the BSD conjecture, these estimations correspond to information about the rank of the families of elliptic curves in question. For example: assuming the generalized Riemann hypothesis and the BSD conjecture, the average rank of curves given by y2 = x3 + ax + b is smaller than 2.
Because of the existence of the functional equation of the L-function of an elliptic curve, BSD allows us to calculate the parity of the rank of an elliptic curve. This is a conjecture in its own right called the parity conjecture, and it relates the parity of the rank of an elliptic curve to its global root number. This leads to many explicit arithmetic phenomena which are yet to be proved unconditionally. For instance:
Every positive integer n ≡ 5, 6 or 7 (mod 8) is a congruent number.
The elliptic curve given by y2 = x3 + ax + b where a ≡ b (mod 2) has infinitely many solutions over {\displaystyle \mathbb {Q} (\zeta _{8})}.
Every positive rational number d can be written in the form d = s2(t3 – 91t – 182) for s and t in {\displaystyle \mathbb {Q} }.
For every rational number t, the elliptic curve given by y2 = x(x2 – 49(1 + t4)2) has rank at least 1.
There are many more examples for elliptic curves over number fields.
== Generalizations ==
There is a version of this conjecture for general abelian varieties over number fields. A version for abelian varieties over {\displaystyle \mathbb {Q} } is the following:
{\displaystyle \lim _{s\to 1}{\frac {L(A/\mathbb {Q} ,s)}{(s-1)^{r}}}={\frac {\#\mathrm {Sha} (A)\Omega _{A}R_{A}\prod _{p|N}c_{p}}{\#A(\mathbb {Q} )_{\text{tors}}\cdot \#{\hat {A}}(\mathbb {Q} )_{\text{tors}}}}.}
All of the terms have the same meaning as for elliptic curves, except that the square of the order of the torsion needs to be replaced by the product {\displaystyle \#A(\mathbb {Q} )_{\text{tors}}\cdot \#{\hat {A}}(\mathbb {Q} )_{\text{tors}}} involving the dual abelian variety {\displaystyle {\hat {A}}}. Elliptic curves as 1-dimensional abelian varieties are their own duals, i.e. {\displaystyle {\hat {E}}=E}, which simplifies the statement of the BSD conjecture. The regulator {\displaystyle R_{A}} needs to be understood for the pairing between a basis for the free parts of {\displaystyle A(\mathbb {Q} )} and {\displaystyle {\hat {A}}(\mathbb {Q} )} relative to the Poincaré bundle on the product {\displaystyle A\times {\hat {A}}}.
The rank-one Birch-Swinnerton-Dyer conjecture for modular elliptic curves and modular abelian varieties of GL(2)-type over totally real number fields was proved by Shou-Wu Zhang in 2001.
Another generalization is given by the Bloch-Kato conjecture.
== Notes ==
== References ==
== External links ==
Weisstein, Eric W. "Swinnerton-Dyer Conjecture". MathWorld.
"Birch and Swinnerton-Dyer Conjecture". PlanetMath.
The Birch and Swinnerton-Dyer Conjecture: An Interview with Professor Henri Darmon by Agnes F. Beaudry
What is the Birch and Swinnerton-Dyer Conjecture? lecture by Manjul Bhargava (September 2016) given during the Clay Research Conference held at the University of Oxford
In mathematics, non-abelian class field theory is a catchphrase, meaning the extension of the results of class field theory, the relatively complete and classical set of results on abelian extensions of any number field K, to the general Galois extension L/K. While class field theory was essentially known by 1930, the corresponding non-abelian theory has never been formulated in a definitive and accepted sense.
== History ==
A presentation of class field theory in terms of group cohomology was carried out by Claude Chevalley, Emil Artin and others, mainly in the 1940s. This resulted in a formulation of the central results by means of the group cohomology of the idele class group. The theorems of the cohomological approach are independent of whether or not the Galois group G of L/K is abelian. This theory has never been regarded as the sought-after non-abelian theory. The first reason that can be cited for that is that it did not provide fresh information on the splitting of prime ideals in a Galois extension; a common way to explain the objective of a non-abelian class field theory is that it should provide a more explicit way to express such patterns of splitting.
The cohomological approach therefore was of limited use in even formulating non-abelian class field theory. Behind the history was the wish of Chevalley to write proofs for class field theory without using Dirichlet series: in other words to eliminate L-functions. The first wave of proofs of the central theorems of class field theory was structured as consisting of two 'inequalities' (the same structure as in the proofs now given of the fundamental theorem of Galois theory, though much more complex). One of the two inequalities involved an argument with L-functions.
In a later reversal of this development, it was realised that to generalize Artin reciprocity to the non-abelian case, it was essential in fact to seek a new way of expressing Artin L-functions. The contemporary formulation of this ambition is by means of the Langlands program: in which grounds are given for believing Artin L-functions are also L-functions of automorphic representations. As of the early twenty-first century, this is the formulation of the notion of non-abelian class field theory that has widest expert acceptance.
== See also ==
Anabelian geometry
Frobenioid
Langlands correspondences
== Notes ==
Freeform surface modelling is a technique for engineering freeform surfaces with a CAD or CAID system.
The technology encompasses two main fields: creating aesthetic surfaces (class A surfaces) that also perform a function, for example car bodies and consumer product outer forms; and creating technical surfaces for components such as gas turbine blades and other fluid dynamic engineering components.
CAD software packages use two basic methods for the creation of surfaces. The first begins with construction curves (splines) from which the 3D surface is then swept (section along guide rail) or meshed (lofted) through.
The second method is direct creation of the surface with manipulation of the surface poles/control points.
From these initially created surfaces, other surfaces are constructed using either derived methods such as offset or angled extensions from surfaces; or via bridging and blending between groups of surfaces.
== Surfaces ==
Freeform surface, or freeform surfacing, is used in CAD and other computer graphics software to describe the skin of a 3D geometric element. Freeform surfaces do not have rigid radial dimensions, unlike regular surfaces such as planes, cylinders and conic surfaces. They are used to describe forms such as turbine blades, car bodies and boat hulls. Initially developed for the automotive and aerospace industries, freeform surfacing is now widely used in all engineering design disciplines from consumer goods products to ships. Most systems today use nonuniform rational B-spline (NURBS) mathematics to describe the surface forms; however, there are other methods such as Gordon surfaces or Coons surfaces.
The forms of freeform surfaces (and curves) are not stored or defined in CAD software in terms of polynomial equations, but by their poles, degree, and number of patches (segments with spline curves). The degree of a surface determines its mathematical properties, and can be seen as representing the shape by a polynomial with variables to the power of the degree value. For example, a surface with a degree of 1 would be a flat cross section surface. A surface with degree 2 would be curved in one direction, while a degree 3 surface could (but does not necessarily) change once from concave to convex curvature. Some CAD systems use the term order instead of degree. The order of a polynomial is one greater than the degree, and gives the number of coefficients rather than the greatest exponent.
The poles (sometimes known as control points) of a surface define its shape. The natural surface edges are defined by the positions of the first and last poles. (Note that a surface can have trimmed boundaries.) The intermediate poles act like magnets drawing the surface in their direction. The surface does not, however, go through these points. The second and third poles as well as defining shape, respectively determine the start and tangent angles and the curvature. In a single patch surface (Bézier surface), there is one more pole than the degree values of the surface. Surface patches can be merged into a single NURBS surface; at these points are knot lines. The number of knots will determine the influence of the poles on either side and how smooth the transition is. The smoothness between patches, known as continuity, is often referred to in terms of a C value:
C0: just touching, could have a nick
C1: tangent, but could have sudden change in curvature
C2: the patches are curvature continuous to one another
Two more important aspects are the U and V parameters. These are values on the surface ranging from 0 to 1, used in the mathematical definition of the surface and for defining paths on the surface: for example, a trimmed boundary edge. Note that they are not proportionally spaced along the surface. A curve of constant U or constant V is known as an isoparametric curve, or U (V) line. In CAD systems, surfaces are often displayed with their poles of constant U or constant V values connected together by lines; these are known as control polygons.
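The behaviour of poles described above can be demonstrated on a single-segment Bézier curve, the curve analogue of a one-patch surface. This is an illustrative sketch, not taken from any particular CAD system: the endpoints coincide with the first and last poles, while the intermediate pole only attracts the curve without lying on it.

```python
# De Casteljau evaluation of a single-segment Bezier curve.
# A degree-n segment has n + 1 poles; repeated linear interpolation
# between consecutive poles yields the point at parameter t.

def de_casteljau(poles, t):
    """Evaluate a Bezier curve with the given control poles at parameter t."""
    pts = [tuple(p) for p in poles]
    while len(pts) > 1:
        pts = [
            tuple((1 - t) * a + t * b for a, b in zip(p0, p1))
            for p0, p1 in zip(pts, pts[1:])
        ]
    return pts[0]

poles = [(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)]  # degree 2: poles = degree + 1
print(de_casteljau(poles, 0.0))  # (0.0, 0.0): first pole, on the curve
print(de_casteljau(poles, 1.0))  # (2.0, 0.0): last pole, on the curve
print(de_casteljau(poles, 0.5))  # (1.0, 1.0): pulled toward (1, 2), not through it
```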
== Modelling ==
When defining a form, an important factor is the continuity between surfaces - how smoothly they connect to one another.
One example of where surfacing excels is automotive body panels. Just blending two curved areas of the panel with different radii of curvature together, maintaining tangential continuity (meaning that the blended surface doesn't change direction suddenly, but smoothly) won't be enough. They need to have a continuous rate of curvature change between the two sections, or else their reflections will appear disconnected.
The continuity is defined using the terms:
G0 – position (touching)
G1 – tangent (angle)
G2 – curvature (radius)
G3 – acceleration (rate of change of curvature)
To achieve a high quality NURBS or Bézier surface, degrees of 5 or greater are generally used.
== Freeform surface modelling software ==
== See also ==
== References ==
Function Representation (FRep or F-Rep) is used in solid modeling, volume modeling and computer graphics. FRep was introduced in "Function representation in geometric modeling: concepts, implementation and applications" as a uniform representation of multidimensional geometric objects (shapes). An object as a point set in multidimensional space is defined by a single continuous real-valued function
{\displaystyle f(X)} of point coordinates {\displaystyle X[x_{1},x_{2},...,x_{n}]} which is evaluated at the given point by a procedure traversing a tree structure with primitives in the leaves and operations in the nodes of the tree. The points with {\displaystyle f(x_{1},x_{2},...,x_{n})\geq 0} belong to the object, and the points with {\displaystyle f(x_{1},x_{2},...,x_{n})<0} are outside of the object. The point set with {\displaystyle f(x_{1},x_{2},...,x_{n})=0} is called an isosurface.
== Geometric domain ==
The geometric domain of FRep in 3D space includes solids with non-manifold models and lower-dimensional entities (surfaces, curves, points) defined by zero value of the function. A primitive can be defined by an equation or by a "black box" procedure converting point coordinates into the function value. Solids bounded by algebraic surfaces, skeleton-based implicit surfaces, and convolution surfaces, as well as procedural objects (such as solid noise), and voxel objects can be used as primitives (leaves of the construction tree). In the case of a voxel object (discrete field), it should be converted to a continuous real function, for example, by applying the trilinear or higher-order interpolation.
Many operations such as set-theoretic, blending, offsetting, projection, non-linear deformations, metamorphosis, sweeping, hypertexturing, and others, have been formulated for this representation in such a manner that they yield continuous real-valued functions as output, thus guaranteeing the closure property of the representation. R-functions originally introduced in V.L. Rvachev's "On the analytical description of some geometric objects", provide
{\displaystyle C^{k}} continuity for the functions exactly defining the set-theoretic operations (min/max functions are a particular case). Because of this property, the result of any supported operation can be treated as the input for a subsequent operation; thus very complex models can be created in this way from a single functional expression. FRep modeling is supported by the special-purpose language HyperFun.
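The closure property can be sketched with min/max as the simplest R-functions for intersection and union. This is a minimal illustration only; real FRep systems such as HyperFun use more general R-functions with higher-order continuity.

```python
# Minimal FRep sketch: a primitive is a real-valued function f(X) with
# f >= 0 inside and f < 0 outside; min/max implement set-theoretic
# intersection and union, so every node of the construction tree again
# yields a real-valued defining function (the closure property).

def sphere(cx, cy, cz, r):
    """Solid sphere primitive: positive inside, zero on the surface."""
    return lambda x, y, z: r * r - ((x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2)

def union(f, g):  # R-function friend of logical OR
    return lambda x, y, z: max(f(x, y, z), g(x, y, z))

def intersection(f, g):  # R-function friend of logical AND
    return lambda x, y, z: min(f(x, y, z), g(x, y, z))

# Construction tree: intersection of two overlapping unit spheres.
solid = intersection(sphere(0, 0, 0, 1), sphere(1, 0, 0, 1))

print(solid(0.5, 0, 0) >= 0)   # True: the midpoint lies in both spheres
print(solid(-0.5, 0, 0) >= 0)  # False: inside the first sphere only
```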
== Shape Models ==
FRep combines and generalizes different shape models like
algebraic surfaces
skeleton based "implicit" surfaces
set-theoretic solids or CSG (Constructive Solid Geometry)
sweeps
volumetric objects
parametric models
procedural models
A more general "constructive hypervolume" allows for modeling multidimensional point sets with attributes (volume models in 3D case). Point set geometry and attributes have independent representations but are treated uniformly. A point set in a geometric space of an arbitrary dimension is an FRep based geometric model of a real object. An attribute that is also represented by a real-valued function (not necessarily continuous) is a mathematical model of an object property of an arbitrary nature (material, photometric, physical, medicine, etc.). The concept of "implicit complex" proposed in "Cellular-functional modeling of heterogeneous objects" provides a framework for including geometric elements of different dimensionality by combining polygonal, parametric, and FRep components into a single cellular-functional model of a heterogeneous object.
== See also ==
Boundary representation
Constructive Solid Geometry
Solid modeling
Isosurface
Signed distance function
HyperFun
Digital materialization
== References ==
== External links ==
http://hyperfun.org/FRep/
https://github.com/cbiffle/ruckus
http://libfive.com/
http://www.implicitcad.org/
In 3D computer graphics, a wire-frame model (also spelled wireframe model) is a visual representation of a three-dimensional (3D) physical object. It is based on a polygon mesh or a volumetric mesh, created by specifying each edge of the physical object where two mathematically continuous smooth surfaces meet, or by connecting an object's constituent vertices using (straight) lines or curves.
The object is projected into screen space and rendered by drawing lines at the location of each edge. The term "wire frame" comes from designers using metal wire to represent the three-dimensional shape of solid objects. 3D wireframe computer models allow for the construction and manipulation of solids and solid surfaces. 3D solid modeling efficiently draws higher quality representations of solids than conventional line drawing.
Using a wire-frame model allows for the visualization of the underlying design structure of a 3D model. Traditional two-dimensional views and drawings/renderings can be created by the appropriate rotation of the object, and the selection of hidden-line removal via cutting planes.
Since wire-frame renderings are relatively simple and fast to calculate, they are often used in cases where a relatively high screen frame rate is needed (for instance, when working with a particularly complex 3D model, or in real-time systems that model exterior phenomena).
When greater graphical detail is desired, surface textures can be added automatically after the completion of the initial rendering of the wire frame. This allows a designer to quickly review solids, or rotate objects to different views without the long delays associated with more realistic rendering, or even the processing of faces and simple flat shading.
The wire frame format is also well-suited and widely used in programming tool paths for direct numerical control (DNC) machine tools.
Hand-drawn wire-frame-like illustrations date back as far as the Italian Renaissance. Wire-frame models were also used extensively in video games to represent 3D objects during the 1980s and early 1990s, when "properly" filled 3D objects would have been too complex to calculate and draw with the computers of the time. Wire-frame models are also used as the input for computer-aided manufacturing (CAM).
There are three main types of 3D computer-aided design (CAD) models; wire frame is the most abstract and least realistic. The other types are surface and solid. The wire-frame method of modelling consists of only lines and curves that connect the points or vertices and thereby define the edges of an object.
== Simple example of wireframe model ==
An object is specified by two tables: (1) Vertex Table, and, (2) Edge Table.
The vertex table consists of three-dimensional coordinate values for each vertex with reference to the origin.
Edge table specifies the start and end vertices for each edge.
A naive interpretation could create a wire-frame representation by simply drawing straight lines between the screen coordinates of the appropriate vertices using the edge list.
Unlike representations designed for more detailed rendering, face information is not specified (it must be calculated if required for solid rendering).
Appropriate calculations have to be performed to transform the 3D coordinates of the vertices into 2D screen coordinates.
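The two-table scheme described above can be written out directly. A minimal sketch follows, with a unit cube chosen as the object and a trivial orthographic projection standing in for the full 3D-to-screen transform:

```python
# Wire-frame model of a unit cube: a vertex table of 3D coordinates and
# an edge table of (start, end) vertex indices, rendered by projecting
# each vertex to 2D and listing the resulting line segments.

# Vertex table: 3D coordinates of each vertex relative to the origin.
vertices = [
    (0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),  # bottom face
    (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1),  # top face
]

# Edge table: start and end vertex indices for each edge.
edges = [
    (0, 1), (1, 2), (2, 3), (3, 0),  # bottom face
    (4, 5), (5, 6), (6, 7), (7, 4),  # top face
    (0, 4), (1, 5), (2, 6), (3, 7),  # vertical edges
]

def project(v):
    """Trivial orthographic projection onto the xy 'screen' plane."""
    x, y, z = v
    return (x, y)

# Draw each edge as a straight line between two 2D screen coordinates.
segments = [(project(vertices[a]), project(vertices[b])) for a, b in edges]
print(len(vertices), len(edges))  # 8 12
```

Note that, as in the text, no face information appears anywhere: the model is fully specified by the two tables.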
== See also ==
Animation
3D computer graphics
Computer animation
Computer-generated imagery (CGI)
Mockup
Polygon mesh
Vector graphics
Virtual cinematography
== References ==
Principles of Engineering Graphics by Maxwell Macmillan International Editions
ASME Engineer's Data Book by Clifford Matthews
Engineering Drawing by N.D. Bhatt
Texturing and Modeling by Davis S. Ebert
3D Computer Graphics by Alan Watt
In mathematics, an R-function, or Rvachev function, is a real-valued function whose sign does not change if none of the signs of its arguments change; that is, its sign is determined solely by the signs of its arguments.
Interpreting positive values as true and negative values as false, an R-function is transformed into a "companion" Boolean function (the two functions are called friends). For instance, the R-function ƒ(x, y) = min(x, y) is one possible friend of the logical conjunction (AND). R-functions are used in computer graphics and geometric modeling in the context of implicit surfaces and the function representation. They also appear in certain boundary-value problems, and are also popular in certain artificial intelligence applications, where they are used in pattern recognition.
R-functions were first proposed by Vladimir Logvinovich Rvachev (Russian: Влади́мир Логвинович Рвачёв) in 1963, though the name, "R-functions", was given later on by Ekaterina L. Rvacheva-Yushchenko, in memory of their father, Logvin Fedorovich Rvachev (Russian: Логвин Фёдорович Рвачёв).
== See also ==
Function representation
== Notes ==
== References ==
Meshfree Modeling and Analysis, R-Functions (University of Wisconsin)
Pattern Recognition Methods Based on Rvachev Functions (Purdue University)
Shape Modeling and Computer Graphics with Real Functions
In mathematics, specifically in topology, the interior of a subset S of a topological space X is the union of all subsets of S that are open in X.
A point that is in the interior of S is an interior point of S.
The interior of S is the complement of the closure of the complement of S.
In this sense interior and closure are dual notions.
The exterior of a set S is the complement of the closure of S; it consists of the points that are in neither the set nor its boundary.
The interior, boundary, and exterior of a subset together partition the whole space into three blocks (or fewer when one or more of these is empty).
The interior and exterior of a closed curve are a slightly different concept; see the Jordan curve theorem.
== Definitions ==
=== Interior point ===
If S is a subset of a Euclidean space, then x is an interior point of S if there exists an open ball centered at x which is completely contained in S. (This is illustrated in the introductory section to this article.)
This definition generalizes to any subset S of a metric space X with metric d: x is an interior point of S if there exists a real number r > 0 such that y is in S whenever the distance d(x, y) < r.
This definition generalizes to topological spaces by replacing "open ball" with "open set".
If S is a subset of a topological space X, then x is an interior point of S in X if x is contained in an open subset of X that is completely contained in S. (Equivalently, x is an interior point of S if S is a neighbourhood of x.)
=== Interior of a set ===
The interior of a subset S of a topological space X, denoted by int_X S or int S or S∘, can be defined in any of the following equivalent ways:
int S is the largest open subset of X contained in S.
int S is the union of all open sets of X contained in S.
int S is the set of all interior points of S.
If the space X is understood from context then the shorter notation int S is usually preferred to int_X S.
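For a finite topological space these equivalent definitions can be computed directly. The following sketch is illustrative (the particular space and topology are assumptions chosen for the example); it implements the interior as the union of all open sets contained in S:

```python
def interior(S, opens):
    """Interior of S: the union of all open sets contained in S."""
    S = set(S)
    result = set()
    for U in opens:
        if set(U) <= S:
            result |= set(U)
    return result

# A small topological space X = {1, 2, 3, 4} with a hand-picked topology
# (it contains the empty set and X, and is closed under unions and
# finite intersections -- easy to verify by inspection).
X = {1, 2, 3, 4}
opens = [set(), {1}, {1, 2}, {3, 4}, {1, 3, 4}, {1, 2, 3, 4}]

assert interior({1, 2, 3}, opens) == {1, 2}  # largest open subset of {1, 2, 3}
assert interior({2, 3}, opens) == set()      # no nonempty open set fits inside
assert interior(X, opens) == X               # X is open, so int X = X
```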
== Examples ==
In any space, the interior of the empty set is the empty set.
In any space X, if S ⊆ X, then int S ⊆ S.
If X is the real line ℝ (with the standard topology), then int([0, 1]) = (0, 1), whereas the interior of the set ℚ of rational numbers is empty: int ℚ = ∅.
If X is the complex plane ℂ, then int({z ∈ ℂ : |z| ≤ 1}) = {z ∈ ℂ : |z| < 1}.
In any Euclidean space, the interior of any finite set is the empty set.
On the set of real numbers, one can put other topologies rather than the standard one:
If X is the real numbers ℝ with the lower limit topology, then int([0, 1]) = [0, 1).
If one considers on ℝ the topology in which every set is open, then int([0, 1]) = [0, 1].
If one considers on ℝ the topology in which the only open sets are the empty set and ℝ itself, then int([0, 1]) is the empty set.
These examples show that the interior of a set depends upon the topology of the underlying space.
The last two examples are special cases of the following.
In any discrete space, since every set is open, every set is equal to its interior.
In any indiscrete space X, since the only open sets are the empty set and X itself, int X = X, and for every proper subset S of X, int S is the empty set.
== Properties ==
Let X be a topological space and let S and T be subsets of X.
int S is open in X.
If T is open in X then T ⊆ S if and only if T ⊆ int S.
int S is an open subset of S when S is given the subspace topology.
S is an open subset of X if and only if int S = S.
Intensive: int S ⊆ S.
Idempotence: int(int S) = int S.
Preserves/distributes over binary intersection: int(S ∩ T) = (int S) ∩ (int T).
However, the interior operator does not distribute over unions, since only int(S ∪ T) ⊇ (int S) ∪ (int T) is guaranteed in general, and equality might not hold. For example, if X = ℝ, S = (−∞, 0], and T = (0, ∞), then (int S) ∪ (int T) = (−∞, 0) ∪ (0, ∞) = ℝ ∖ {0} is a proper subset of int(S ∪ T) = int ℝ = ℝ.
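A strict inclusion of the same kind can be exhibited on a finite space. This self-contained sketch uses the Sierpinski space as an illustrative example (the choice of space and of S and T are assumptions, not the counterexample from the text):

```python
def interior(S, opens):
    """Interior of S: the union of all open sets contained in S."""
    S = set(S)
    out = set()
    for U in opens:
        if set(U) <= S:
            out |= set(U)
    return out

# X = {1, 2} with topology {empty set, {1}, X}: the Sierpinski space.
opens = [set(), {1}, {1, 2}]
S, T = {1}, {2}

lhs = interior(S | T, opens)                   # int(S u T) = {1, 2}
rhs = interior(S, opens) | interior(T, opens)  # (int S) u (int T) = {1}
assert rhs < lhs  # proper subset: interior does not distribute over unions
```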
Monotone/nondecreasing with respect to ⊆: If S ⊆ T then int S ⊆ int T.
Other properties include:
If S is closed in X and int T = ∅ then int(S ∪ T) = int S.
=== Relationship with closure ===
The above statements will remain true if all instances of the symbols/words
"interior", "int", "open", "subset", and "largest"
are respectively replaced by
"closure", "cl", "closed", "superset", and "smallest"
and the following symbols are swapped:
"⊆" swapped with "⊇"
"∪" swapped with "∩"
For more details on this matter, see interior operator below or the article Kuratowski closure axioms.
== Interior operator ==
The interior operator int_X is dual to the closure operator, denoted by cl_X or by an overline, in the sense that int_X S = X ∖ cl_X(X ∖ S) and also cl_X S = X ∖ int_X(X ∖ S), where X is the topological space containing S, and the backslash ∖ denotes set-theoretic difference. Therefore, the abstract theory of closure operators and the Kuratowski closure axioms can be readily translated into the language of interior operators, by replacing sets with their complements in X.
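The duality can be verified mechanically on a finite space. In the following self-contained sketch (the space and topology are illustrative assumptions), the closure is computed independently, as the intersection of all closed sets containing S, and then checked against the complement-of-interior formula:

```python
def interior(S, opens):
    """Union of all open sets contained in S."""
    return set().union(*[set(U) for U in opens if set(U) <= set(S)])

def closure(S, X, opens):
    """Intersection of all closed sets (complements of open sets)
    that contain S."""
    X, S = set(X), set(S)
    out = set(X)
    for U in opens:
        C = X - set(U)   # C is closed
        if S <= C:
            out &= C
    return out

X = {1, 2, 3}
opens = [set(), {1}, {2, 3}, {1, 2, 3}]   # a valid topology on X

for S in [set(), {1}, {2}, {1, 2}, {1, 2, 3}]:
    # int S = X \ cl(X \ S)  and  cl S = X \ int(X \ S)
    assert interior(S, opens) == X - closure(X - S, X, opens)
    assert closure(S, X, opens) == X - interior(X - S, opens)
```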
In general, the interior operator does not commute with unions. However, in a complete metric space the following result does hold:
The result above implies that every complete metric space is a Baire space.
== Exterior of a set ==
The exterior of a subset S of a topological space X, denoted by ext_X S or simply ext S, is the largest open set disjoint from S; namely, it is the union of all open sets in X that are disjoint from S. The exterior is the interior of the complement, which is the same as the complement of the closure; in formulas, ext S = int(X ∖ S) = X ∖ cl S. Similarly, the interior is the exterior of the complement: int S = ext(X ∖ S).
The interior, boundary, and exterior of a set S together partition the whole space into three blocks (or fewer when one or more of these is empty): X = int S ∪ ∂S ∪ ext S, where ∂S denotes the boundary of S.
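The three-block partition can be checked on a small finite space. This self-contained sketch uses an illustrative topology (an assumption chosen for the example); the boundary is obtained as whatever remains once the interior and the exterior are removed:

```python
def interior(S, opens):
    """Union of all open sets contained in S."""
    return set().union(*[set(U) for U in opens if set(U) <= set(S)])

X = {1, 2, 3}
opens = [set(), {1}, {2, 3}, {1, 2, 3}]   # a valid topology on X

S = {1, 2}
int_S = interior(S, opens)       # interior of S: {1}
ext_S = interior(X - S, opens)   # exterior = interior of the complement
bdy_S = X - int_S - ext_S        # what is left is the boundary

# The three pieces are pairwise disjoint and cover X.
assert int_S | bdy_S | ext_S == X
assert not (int_S & ext_S) and not (int_S & bdy_S) and not (bdy_S & ext_S)
```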
The interior and exterior are always open, while the boundary is closed.
Some of the properties of the exterior operator are unlike those of the interior operator:
The exterior operator reverses inclusions; if S ⊆ T, then ext T ⊆ ext S.
The exterior operator is not idempotent, but it does have the property that int S ⊆ ext(ext S).
== Interior-disjoint shapes ==
Two shapes a and b are called interior-disjoint if the intersection of their interiors is empty.
Interior-disjoint shapes may or may not intersect in their boundary.
== See also ==
Algebraic interior – Generalization of topological interior
DE-9IM – Topological model
Interior algebra – Algebraic structure
Jordan curve theorem – A closed curve divides the plane into two regions
Quasi-relative interior – Generalization of algebraic interior
Relative interior – Generalization of topological interior
== References ==
== Bibliography ==
Bourbaki, Nicolas (1989) [1966]. General Topology: Chapters 1–4 [Topologie Générale]. Éléments de mathématique. Berlin New York: Springer Science & Business Media. ISBN 978-3-540-64241-1. OCLC 18588129.
Dixmier, Jacques (1984). General Topology. Undergraduate Texts in Mathematics. Translated by Berberian, S. K. New York: Springer-Verlag. ISBN 978-0-387-90972-1. OCLC 10277303.
Császár, Ákos (1978). General topology. Translated by Császár, Klára. Bristol England: Adam Hilger Ltd. ISBN 0-85274-275-4. OCLC 4146011.
Dugundji, James (1966). Topology. Boston: Allyn and Bacon. ISBN 978-0-697-06889-7. OCLC 395340485.
Joshi, K. D. (1983). Introduction to General Topology. New York: John Wiley and Sons Ltd. ISBN 978-0-85226-444-7. OCLC 9218750.
Kelley, John L. (1975) [1955]. General Topology. Graduate Texts in Mathematics. Vol. 27 (2nd ed.). New York: Springer-Verlag. ISBN 978-0-387-90125-1. OCLC 1365153.
Munkres, James R. (2000). Topology (2nd ed.). Upper Saddle River, NJ: Prentice Hall, Inc. ISBN 978-0-13-181629-9. OCLC 42683260. (accessible to patrons with print disabilities)
Schubert, Horst (1968). Topology. London: Macdonald & Co. ISBN 978-0-356-02077-8. OCLC 463753.
Wilansky, Albert (17 October 2008) [1970]. Topology for Analysis. Mineola, New York: Dover Publications, Inc. ISBN 978-0-486-46903-4. OCLC 227923899.
Willard, Stephen (2004) [1970]. General Topology. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-43479-7. OCLC 115240.
== External links ==
Interior at PlanetMath.
Romulus is boundary representation (b-rep) solid modeling software, first released in 1978 by Ian Braid, Charles Lang, Alan Grayer, and the Shape Data team in Cambridge, England. It was the first commercial solid modeling kernel designed for straightforward integration into computer-aided design (CAD) software. Romulus incorporated the CAM-I AIS (Computer Aided Manufacturers International's Application Interface Specification) and was the only solid modeler (other than its successors Parasolid and ACIS) ever to offer a third-party standard application programming interface (API) to facilitate high-level integration into a host CAD software program. Romulus was quickly licensed by Siemens, Hewlett-Packard (HP), and several other CAD software vendors.
== See also ==
Comparison of computer-aided design software
Shape Data
== References ==
A node is a basic unit of a data structure, such as a linked list or tree data structure. Nodes contain data and also may link to other nodes. Links between nodes are often implemented by pointers.
== Nodes and trees ==
Nodes are often arranged into tree structures. A node represents the information contained in a single data structure. These nodes may contain a value or condition, or possibly serve as another independent data structure. Apart from the root, each node has a single parent node. The highest point on a tree structure is called a root node, which does not have a parent node, but serves as the parent or 'grandparent' of all of the nodes below it in the tree. The height of a node is determined by the total number of edges on the path from that node to the furthest leaf node, and the height of the tree is equal to the height of the root node. Node depth is determined by the distance between that particular node and the root node. The root node is said to have a depth of zero. Data can be discovered along these network paths.
An IP address uses this kind of system of nodes to define its location in a network.
=== Definitions ===
Child: A child node is a node extending from another node. For example, a computer with internet access could be considered a child node of a node representing the internet. The inverse relationship is that of a parent node. If node C is a child of node A, then A is the parent node of C.
Degree: the degree of a node is the number of children of the node.
Depth: the depth of node A is the length of the path from A to the root node. The root node is said to have depth 0.
Edge: the connection between nodes.
Forest: a set of trees.
Height: the height of node A is the length of the longest path through children to a leaf node.
Internal node: a node with at least one child.
Leaf node: a node with no children.
Root node: a node distinguished from the rest of the tree nodes. Usually, it is depicted as the highest node of the tree.
Sibling nodes: these are nodes connected to the same parent node.
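The definitions above can be made concrete with a minimal tree class. This is an illustrative sketch (the class and method names are assumptions, not a standard API):

```python
class TreeNode:
    """A basic tree node: holds a value, a parent link, and child links."""
    def __init__(self, value, parent=None):
        self.value = value
        self.parent = parent
        self.children = []
        if parent is not None:
            parent.children.append(self)

    def depth(self):
        """Number of edges from this node up to the root (root has depth 0)."""
        return 0 if self.parent is None else 1 + self.parent.depth()

    def height(self):
        """Number of edges on the longest downward path to a leaf
        (a leaf node has height 0)."""
        if not self.children:   # leaf node: no children
            return 0
        return 1 + max(c.height() for c in self.children)

# Build:     root
#           /    \
#          a      b        a and b are siblings; b is a leaf
#          |
#          c
root = TreeNode("root")
a = TreeNode("a", parent=root)
b = TreeNode("b", parent=root)
c = TreeNode("c", parent=a)

assert root.depth() == 0 and c.depth() == 2
assert root.height() == 2          # height of the tree = height of the root
assert b.height() == 0             # b is a leaf node
assert len(root.children) == 2     # the degree of the root is 2
```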
== Markup languages ==
Another common use of node trees is in web development. In programming, XML is used to communicate information between computer programmers and computers alike. For this reason XML is used to create common communication protocols used in office productivity software, and serves as the base for the development of modern web markup languages like XHTML. Though similar in how they are approached by a programmer, HTML and CSS are typically the languages used to develop website text and design. While XML, HTML and XHTML provide the language and expression, the DOM serves as a translator.
=== Node type ===
Different types of nodes in a tree are represented by specific interfaces. In other words, the node type is defined by how it communicates with other nodes. Each node has a node type property, which specifies the type of node, such as sibling or leaf.
For example, the node type property takes one of a set of named constants. If a node's type property is the constant ELEMENT_NODE, one can know that this node object is an Element object. This object uses the Element interface to define all the methods and properties of that particular node.
Different W3C (World Wide Web Consortium) node types and descriptions:
Document represents the entire document (the root-node of the DOM tree)
DocumentFragment represents a "lightweight" Document object, which can hold a portion of a document
DocumentType provides an interface to the entities defined for the document
ProcessingInstruction represents a processing instruction
EntityReference represents an entity reference
Element represents an element
Attr represents an attribute
Text represents textual content in an element or attribute
CDATASection represents a CDATA section in a document (text that will NOT be parsed by a parser)
Comment represents a comment
Entity represents an entity
Notation represents a notation declared in the DTD
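These node types can be inspected with Python's standard `xml.dom.minidom` module, whose constants follow the W3C DOM numbering. The sample XML document below is an illustrative assumption:

```python
from xml.dom.minidom import parseString, Node

doc = parseString("<note><to>Tove</to><!-- a comment --></note>")

root = doc.documentElement                     # the <note> element
assert doc.nodeType == Node.DOCUMENT_NODE      # Document: root of the DOM tree
assert root.nodeType == Node.ELEMENT_NODE      # Element

to = root.childNodes[0]                        # the <to> element
assert to.nodeType == Node.ELEMENT_NODE
assert to.childNodes[0].nodeType == Node.TEXT_NODE       # the text "Tove"
assert root.childNodes[1].nodeType == Node.COMMENT_NODE  # the comment
```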
=== Node object ===
A node object is represented by a single node in a tree. It can be an element node, attribute node, text node, or any type that is described in section "node type". All objects can inherit properties and methods for dealing with parent and child nodes, but not all of the objects have parent or child nodes. For example, with text nodes that cannot have child nodes, trying to add child nodes results in a DOM error.
Objects in the DOM tree may be addressed and manipulated by using methods on the objects. The public interface of a DOM is specified in its application programming interface (API). The history of the Document Object Model is intertwined with the history of the "browser wars" of the late 1990s between Netscape Navigator and Microsoft Internet Explorer, as well as with that of JavaScript and JScript, the first scripting languages to be widely implemented in the layout engines of web browsers.
== See also ==
Vertex (graph theory)
== References ==
== External links ==
Data Trees as a Means of Presenting Complex Data Analysis by Sally Knipe
STL-like C++ tree class Archived 2020-11-26 at the Wayback Machine
Description of tree data structures from ideainfo.8m.com
WormWeb.org: Interactive Visualization of the C. elegans Cell Tree - Visualize the entire cell lineage tree of the nematode C. elegans (javascript)
The Common Algebraic Specification Language (CASL) is a general-purpose specification language based on first-order logic with induction. Partial functions and subsorting are also supported.
== Overview ==
CASL has been designed by CoFI, the Common Framework Initiative, with the aim of subsuming many existing specification languages.
CASL comprises four levels:
basic specifications, for the specification of single software modules,
structured specifications, for the modular specification of modules,
architectural specifications, for the prescription of the structure of implementations,
specification libraries, for storing specifications distributed over the Internet.
The four levels are orthogonal to each other. In particular, it is possible to use CASL structured and architectural specifications and libraries with logics other than CASL. For this purpose, the logic has to be formalized as an institution. This feature is also used by the CASL extensions.
== Extensions ==
Several extensions of CASL have been designed:
HasCASL, a higher-order extension
CoCASL, a coalgebraic extension
CspCASL, a concurrent extension based on CSP
ModalCASL, a modal logic extension
CASL-LTL, a temporal logic extension
HetCASL, an extension for heterogeneous specification
== References ==
== External links ==
Official CoFI website
CASL
The heterogeneous tool set Hets, the main analysis tool for CASL
Attempto Controlled English (ACE) is a controlled natural language, i.e. a subset of standard English with a restricted syntax and restricted semantics described by a small set of construction and interpretation rules. It has been under development at the University of Zurich since 1995. In 2013, ACE version 6.7 was announced.
ACE can serve as knowledge representation, specification, and query language, and is intended for professionals who want to use formal notations and formal methods, but may not be familiar with them. Though ACE appears perfectly natural—it can be read and understood by any speaker of English—it is in fact a formal language.
ACE and its related tools have been used in the fields of software specifications, theorem proving, proof assistants, text summaries, ontologies, rules, querying, medical documentation and planning.
Here are some simple examples:
(1) Every woman is a human.
(2) A woman is a human.
(3) A man tries-on a new tie. If the tie pleases his wife then the man buys it.
ACE construction rules require that each noun be introduced by a determiner (a, every, no, some, at least 5, ...). Regarding the list of examples above, ACE interpretation rules decide that (1) is interpreted as universally quantified, while (2) is interpreted as existentially quantified. Sentences like "Women are human" do not follow ACE syntax and are consequently not valid.
Interpretation rules resolve the anaphoric references in (3): the tie and it of the second sentence refer to a new tie of the first sentence, while his and the man of the second sentence refer to a man of the first sentence. Thus an ACE text is a coherent entity of anaphorically linked sentences.
The Attempto Parsing Engine (APE) translates ACE texts unambiguously into discourse representation structures (DRS) that use a variant of the language of first-order logic. A DRS can be further translated into other formal languages, for instance AceRules with various semantics, OWL, and SWRL. Translating an ACE text into (a fragment of) first-order logic allows users to reason about the text, for instance to verify, to validate, and to query it.
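The flavour of such a translation can be sketched for the two determiner patterns of examples (1) and (2). This is a toy illustration only: it is not the actual APE grammar, and the output strings are an invented first-order-logic notation, not APE's DRS format.

```python
import re

def toy_translate(sentence):
    """Translate 'Every X is a Y.' / 'A X is a Y.' into first-order
    logic strings, mimicking ACE's universal vs existential readings."""
    m = re.fullmatch(r"Every (\w+) is a (\w+)\.", sentence)
    if m:
        x, y = m.groups()
        return f"forall A: {x}(A) -> {y}(A)"
    m = re.fullmatch(r"A (\w+) is a (\w+)\.", sentence)
    if m:
        x, y = m.groups()
        return f"exists A: {x}(A) & {y}(A)"
    raise ValueError("not covered by this toy grammar")

# (1) is interpreted as universally quantified,
# (2) as existentially quantified:
assert toy_translate("Every woman is a human.") == "forall A: woman(A) -> human(A)"
assert toy_translate("A woman is a human.") == "exists A: woman(A) & human(A)"
```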
== Overview ==
As an overview of the current version 6.6 of ACE, this section:
Briefly describes the vocabulary
Gives an account of the syntax
Summarises the handling of ambiguity
Explains the processing of anaphoric references.
=== Vocabulary ===
The vocabulary of ACE comprises:
Predefined function words (e.g. determiners, conjunctions)
Predefined phrases (e.g. "it is false that ...", "it is possible that ...")
Content words (e.g. nouns, verbs, adjectives, adverbs).
=== Grammar ===
The grammar of ACE defines and constrains the form and the meaning of ACE sentences and texts. ACE's grammar is expressed as a set of construction rules. The meaning of sentences is described as a small set of interpretation rules. A Troubleshooting Guide describes how to use ACE and how to avoid pitfalls.
==== ACE texts ====
An ACE text is a sequence of declarative sentences that can be anaphorically interrelated. Furthermore, ACE supports questions and commands.
==== Simple sentences ====
A simple sentence asserts that something is the case—a fact, an event, a state.
The temperature is −2 °C.
A customer inserts 2 cards.
A card and a code are valid.
Simple ACE sentences have the following general structure:
subject + verb + complements + adjuncts
Every sentence has a subject and a verb. Complements (direct and indirect objects) are necessary for transitive verbs (insert something) and ditransitive verbs (give something to somebody), whereas adjuncts (adverbs, prepositional phrases) are optional.
All elements of a simple sentence can be elaborated upon to describe the situation in more detail. To further specify the nouns customer and card, we could add adjectives:
A trusted customer inserts two valid cards.
possessive nouns and of-prepositional phrases:
John's customer inserts a card of Mary.
or variables as appositions:
John inserts a card A.
Other modifications of nouns are possible through relative sentences:
A customer who is trusted inserts a card that he owns.
which are described below since they make a sentence composite. We can also detail the insertion event, e.g. by adding an adverb:
A customer inserts some cards manually.
or, equivalently:
A customer manually inserts some cards.
or, by adding prepositional phrases:
A customer inserts some cards into a slot.
We can combine all of these elaborations to arrive at:
John's customer who is trusted inserts a valid card of Mary manually into a slot A.
==== Composite sentences ====
Composite sentences are recursively built from simpler sentences through coordination, subordination, quantification, and negation. Note that ACE composite sentences overlap with what linguists call compound sentences and complex sentences.
===== Coordination =====
Coordination by and is possible between sentences and between phrases of the same syntactic type.
A customer inserts a card and the machine checks the code.
There is a customer who inserts a card and who enters a code.
A customer inserts a card and enters a code.
An old and trusted customer enters a card and a code.
Note that the coordination of the noun phrases a card and a code represents a plural object.
Coordination by or is possible between sentences, verb phrases, and relative clauses.
A customer inserts a card or the machine checks the code.
A customer inserts a card or enters a code.
A customer owns a card that is invalid or that is damaged.
Coordination by and and or is governed by the standard binding order of logic, i.e. and binds stronger than or. Commas can be used to override the standard binding order. Thus the sentence:
A customer inserts a VisaCard or inserts a MasterCard, and inserts a code.
means that the customer inserts a VisaCard and a code, or alternatively a MasterCard and a code.
===== Subordination =====
There are four constructs of subordination: relative sentences, if-then sentences, modality, and sentence subordination.
Relative sentences starting with who, which, and that make it possible to add detail to nouns:
A customer who is trusted inserts a card that he owns.
With the help of if-then sentences we can specify conditional or hypothetical situations:
If a card is valid then a customer inserts it.
Note the anaphoric reference via the pronoun it in the then-part to the noun phrase a card in the if-part.
Modality allows us to express possibility and necessity:
A trusted customer can/must insert a card.
It is possible/necessary that a trusted customer inserts a card.
Sentence subordination comes in various forms:
It is true/false that a customer inserts a card.
It is not provable that a customer inserts a card.
A clerk believes that a customer inserts a card.
===== Quantification =====
Quantification allows us to speak about all objects of a certain class (universal quantification), or to denote explicitly the existence of at least one object of this class (existential quantification). The textual occurrence of a universal or existential quantifier opens its scope that extends to the end of the sentence, or in coordinations to the end of the respective coordinated sentence.
To express that all involved customers insert cards we can write
Every customer inserts a card.
This sentence means that each customer inserts a card that may, or may not, be the same as the one inserted by another customer. To specify that all customers insert the same card—however unrealistic that situation seems—we can write:
A card is inserted by every customer.
or, equivalently:
There is a card that every customer inserts.
To state that every card is inserted by a customer we write:
Every card is inserted by a customer.
or, somewhat indirectly:
For every card there is a customer who inserts it.
===== Negation =====
Negation allows us to express that something is not the case:
A customer does not insert a card.
A card is not valid.
To negate something for all objects of a certain class one uses no:
No customer inserts more than 2 cards.
or, there is no:
There is no customer who inserts a card.
To negate a complete statement one uses sentence negation:
It is false that a customer inserts a card.
These forms of negation are logical negations, i.e. they state that something is provably not the case. Negation as failure states that a state of affairs cannot be proved, i.e. there is no information whether the state of affairs is the case or not.
It is not provable that a customer inserts a card.
==== Queries ====
ACE supports two forms of queries: yes/no-queries and wh-queries.
Yes/no-queries ask for the existence or non-existence of a specified situation. If we specified:
A customer inserts a card.
then we can ask:
Does a customer insert a card?
to get a positive answer. Note that interrogative sentences always end with a question mark.
With the help of wh-queries, i.e. queries with query words, we can interrogate a text for details of the specified situation. If we specified:
A trusted customer inserts a valid card manually in the morning in a bank.
we can ask for each element of the sentence with the exception of the verb.
Who inserts a card?
Which customer inserts a card?
What does a customer insert?
How does a customer insert a card?
When does a customer enter a card?
Where does a customer enter a card?
Queries can also be constructed by a sequence of declarative sentences followed by one interrogative sentence, for example:
There is a customer and there is a card that the customer enters. Does a customer enter a card?
==== Commands ====
ACE also supports commands. Some examples:
John, go to the bank!
John and Mary, wait!
Every dog, bark!
A brother of John, give a book to Mary!
A command always consists of a noun phrase (the addressee), followed by a comma, followed by an uncoordinated verb phrase. Furthermore, a command has to end with an exclamation mark.
=== Constraining ambiguity ===
To constrain the ambiguity of full natural language ACE employs three simple means:
Some ambiguous constructs are not part of the language; unambiguous alternatives are available in their place.
All remaining ambiguous constructs are interpreted deterministically on the basis of a small number of interpretation rules.
Users can either accept the assigned interpretation, or they must rephrase the input to obtain another one.
==== Avoidance of ambiguity ====
In natural language, relative sentences combined with coordinations can introduce ambiguity:
A customer inserts a card that is valid and opens an account.
In ACE the sentence has the unequivocal meaning that the customer opens an account, as reflected by the paraphrase:
A card is valid. A customer inserts the card. The customer opens an account.
To express the alternative—though not very realistic—meaning that the card opens an account, the relative pronoun that must be repeated, thus yielding a coordination of relative sentences:
A customer inserts a card that is valid and that opens an account.
This sentence is unambiguously equivalent in meaning to the paraphrase:
A card is valid. The card opens an account. A customer inserts the card.
==== Interpretation rules ====
Not all ambiguities can be safely removed from ACE without rendering it artificial. To deterministically interpret otherwise syntactically correct ACE sentences we use a small set of interpretation rules. For example, if we write:
A customer inserts a card with a code.
then with a code attaches to the verb inserts, but not to a card. However, this is probably not what we meant to say. To express that the code is associated with the card we can employ the interpretation rule that a relative sentence always modifies the immediately preceding noun phrase, and rephrase the input as:
A customer inserts a card that carries a code.
yielding the paraphrase:
A card carries a code. A customer inserts the card.
or—to specify that the customer inserts a card and a code—as:
A customer inserts a card and a code.
=== Anaphoric references ===
Usually ACE texts consist of more than one sentence:
A customer enters a card and a code. If a code is valid then SimpleMat accepts a card.
To express that all occurrences of card and code should mean the same card and the same code, ACE provides anaphoric references via the definite article:
A customer enters a card and a code. If the code is valid then SimpleMat accepts the card.
During the processing of the ACE text, all anaphoric references are replaced by the most recent and most specific accessible noun phrase that agrees in gender and number. As an example of "most recent and most specific", suppose an ACE parser is given the sentence:
A customer enters a red card and a blue card.
Then:
The card is correct.
refers to the second card, while:
The red card is correct.
refers to the first card.
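The "most recent and most specific" rule is a small search procedure, and can be sketched as follows. The representation here is our own simplification (each introduced noun phrase as a noun plus a set of adjectives), not the parser's internal data structure:

```python
# Noun phrases introduced by "A customer enters a red card and a blue card.",
# listed in order of appearance.
introduced = [
    ("card", {"red"}),   # "a red card"
    ("card", {"blue"}),  # "a blue card"
]

def resolve(noun, adjectives=frozenset()):
    """Return the index of the antecedent for a definite description:
    the most recent noun phrase whose noun matches and whose adjectives
    include all those of the description."""
    for i in range(len(introduced) - 1, -1, -1):  # most recent first
        candidate_noun, candidate_adjs = introduced[i]
        if candidate_noun == noun and set(adjectives) <= candidate_adjs:
            return i
    return None  # no accessible antecedent

# "The card ..."     -> index 1: the blue card, the most recent match
# "The red card ..." -> index 0: specificity forces the earlier card
```

A fuller resolver would also check gender and number agreement and exclude the inaccessible noun phrases described below, but the recency-plus-specificity search is the core of the rule.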
Noun phrases within if-then sentences, universally quantified sentences, negations, modality, and subordinated sentences cannot be referred to anaphorically from subsequent sentences, i.e. such noun phrases are not "accessible" from the following text. Thus for each of the sentences:
If a customer owns a card then he enters it.
Every customer enters a card.
A customer does not enter a card.
A customer can enter a card.
A clerk believes that a customer enters a card.
we cannot refer to a card with:
The card is correct.
Anaphoric references are also possible via personal pronouns:
A customer enters a card and a code. If it is valid then SimpleMat accepts the card.
or via variables:
A customer enters a card X and a code Y. If Y is valid then SimpleMat accepts X.
Anaphoric references via definite articles and variables can be combined:
A customer enters a card X and a code Y. If the code Y is valid then SimpleMat accepts the card X.
Note that proper names like SimpleMat always refer to the same object.
== See also ==
Gellish
Natural language processing
Natural language programming
Structured English
ClearTalk, another machine-readable knowledge representation language
Inform 7, a programming language with English syntax
== References ==
== External links ==
Official website, Project Attempto | Wikipedia/Attempto_Controlled_English |
UPPAAL is an integrated tool environment for modeling, validation and verification of real-time systems modeled as networks of timed automata, extended with data types (bounded integers, arrays etc.).
It has been used in at least 17 case studies since its release in 1995, including on Lego Mindstorms, for the Philips audio protocol, and in gearbox controllers for Mecel.
The tool has been developed in collaboration between the Design and Analysis of Real-Time Systems group at Uppsala University, Sweden and Basic Research in Computer Science at Aalborg University, Denmark.
The following extensions are available:
Cora for Cost Optimal Reachability Analysis.
Tron for Testing Real-time systems ON-line (black-box conformance testing).
Cover for COVERage-optimal off-line test generation.
Tiga for TImed GAmes based controller synthesis.
Port for component based timed systems, exploiting Partial Order Reduction Techniques.
Pro for PRObabilistic reachability analysis. (Discontinued)
SMC for Statistical Model Checking.
== References ==
== External links ==
UPPAAL academic website
UPPAAL commercial website
Design and Analysis of Real-Time Systems group
DEIS unit, Dept. Computer Science at AAU | Wikipedia/Uppaal_Model_Checker |
In its most common sense, methodology is the study of research methods. However, the term can also refer to the methods themselves or to the philosophical discussion of associated background assumptions. A method is a structured procedure for bringing about a certain goal, like acquiring knowledge or verifying knowledge claims. This normally involves various steps, like choosing a sample, collecting data from this sample, and interpreting the data. The study of methods concerns a detailed description and analysis of these processes. It includes evaluative aspects by comparing different methods. In this way, their respective advantages and disadvantages are assessed, as well as the research goals for which they may be used. These descriptions and evaluations depend on philosophical background assumptions. Examples are how to conceptualize the studied phenomena and what constitutes evidence for or against them. When understood in the widest sense, methodology also includes the discussion of these more abstract issues.
Methodologies are traditionally divided into quantitative and qualitative research. Quantitative research is the main methodology of the natural sciences. It uses precise numerical measurements. Its goal is usually to find universal laws used to make predictions about future events. The dominant methodology in the natural sciences is called the scientific method. It includes steps like observation and the formulation of a hypothesis. Further steps are to test the hypothesis using an experiment, to compare the measurements to the expected results, and to publish the findings.
Qualitative research is more characteristic of the social sciences and gives less prominence to exact numerical measurements. It aims more at an in-depth understanding of the meaning of the studied phenomena and less at universal and predictive laws. Common methods found in the social sciences are surveys, interviews, focus groups, and the nominal group technique. They differ from each other concerning their sample size, the types of questions asked, and the general setting. In recent decades, many social scientists have started using mixed-methods research, which combines quantitative and qualitative methodologies.
Many discussions in methodology concern the question of whether the quantitative approach is superior, especially whether it is adequate when applied to the social domain. A few theorists reject methodology as a discipline in general. For example, some argue that it is useless since methods should be used rather than studied. Others hold that it is harmful because it restricts the freedom and creativity of researchers. Methodologists often respond to these objections by claiming that a good methodology helps researchers arrive at reliable theories in an efficient way. The choice of method often matters since the same factual material can lead to different conclusions depending on one's method. Interest in methodology has risen in the 20th century due to the increased importance of interdisciplinary work and the obstacles hindering efficient cooperation.
== Definitions ==
The term "methodology" is associated with a variety of meanings. In its most common usage, it refers either to a method, to the field of inquiry studying methods, or to philosophical discussions of background assumptions involved in these processes. Some researchers distinguish methods from methodologies by holding that methods are modes of data collection while methodologies are more general research strategies that determine how to conduct a research project. In this sense, methodologies include various theoretical commitments about the intended outcomes of the investigation.
=== As method ===
The term "methodology" is sometimes used as a synonym for the term "method". A method is a way of reaching some predefined goal. It is a planned and structured procedure for solving a theoretical or practical problem. In this regard, methods stand in contrast to free and unstructured approaches to problem-solving. For example, descriptive statistics is a method of data analysis, radiocarbon dating is a method of determining the age of organic objects, sautéing is a method of cooking, and project-based learning is an educational method. The term "technique" is often used as a synonym both in the academic and the everyday discourse. Methods usually involve a clearly defined series of decisions and actions to be used under certain circumstances, usually expressible as a sequence of repeatable instructions. The goal of following the steps of a method is to bring about the result promised by it. In the context of inquiry, methods may be defined as systems of rules and procedures to discover regularities of nature, society, and thought. In this sense, methodology can refer to procedures used to arrive at new knowledge or to techniques of verifying and falsifying pre-existing knowledge claims. This encompasses various issues pertaining both to the collection of data and their analysis. Concerning the collection, it involves the problem of sampling and of how to go about the data collection itself, like surveys, interviews, or observation. There are also numerous methods by which the collected data can be analyzed, using statistics or other ways of interpreting it to extract interesting conclusions.
=== As study of methods ===
However, many theorists emphasize the differences between the terms "method" and "methodology". In this regard, methodology may be defined as "the study or description of methods" or as "the analysis of the principles of methods, rules, and postulates employed by a discipline". This study or analysis involves uncovering assumptions and practices associated with the different methods and a detailed description of research designs and hypothesis testing. It also includes evaluative aspects: forms of data collection, measurement strategies, and ways to analyze data are compared and their advantages and disadvantages relative to different research goals and situations are assessed. In this regard, methodology provides the skills, knowledge, and practical guidance needed to conduct scientific research in an efficient manner. It acts as a guideline for various decisions researchers need to take in the scientific process.
Methodology can be understood as the middle ground between concrete particular methods and the abstract and general issues discussed by the philosophy of science. In this regard, methodology comes after formulating a research question and helps the researchers decide what methods to use in the process. For example, methodology should assist the researcher in deciding why one method of sampling is preferable to another in a particular case or which form of data analysis is likely to bring the best results. Methodology achieves this by explaining, evaluating and justifying methods. Just as there are different methods, there are also different methodologies. Different methodologies provide different approaches to how methods are evaluated and explained and may thus make different suggestions on what method to use in a particular case.
According to Aleksandr Georgievich Spirkin, "[a] methodology is a system of principles and general ways of organising and structuring theoretical and practical activity, and also the theory of this system". Helen Kara defines methodology as "a contextual framework for research, a coherent and logical scheme based on views, beliefs, and values, that guides the choices researchers make". Ginny E. Garcia and Dudley L. Poston understand methodology either as a complex body of rules and postulates guiding research or as the analysis of such rules and procedures. As a body of rules and postulates, a methodology defines the subject of analysis as well as the conceptual tools used by the analysis and the limits of the analysis. Research projects are usually governed by a structured procedure known as the research process. The goal of this process is given by a research question, which determines what kind of information one intends to acquire.
=== As discussion of background assumptions ===
Some theorists prefer an even wider understanding of methodology that involves not just the description, comparison, and evaluation of methods but additionally includes more general philosophical issues. One reason for this wider approach is that discussions of when to use which method often take various background assumptions for granted, for example, concerning the goal and nature of research. These assumptions can at times play an important role concerning which method to choose and how to follow it. For example, Thomas Kuhn argues in his The Structure of Scientific Revolutions that sciences operate within a framework or a paradigm that determines which questions are asked and what counts as good science. This concerns philosophical disagreements about how to conceptualize the phenomena studied, what constitutes evidence for and against them, and what the general goal of researching them is. So in this wider sense, methodology overlaps with philosophy by making these assumptions explicit and presenting arguments for and against them. According to C. S. Herrman, a good methodology clarifies the structure of the data to be analyzed and helps the researchers see the phenomena in a new light. In this regard, a methodology is similar to a paradigm. A similar view is defended by Spirkin, who holds that a central aspect of every methodology is the world view that comes with it.
The discussion of background assumptions can include metaphysical and ontological issues in cases where they have important implications for the proper research methodology. For example, a realist perspective considering the observed phenomena as an external and independent reality is often associated with an emphasis on empirical data collection and a more distanced and objective attitude. Idealists, on the other hand, hold that external reality is not fully independent of the mind and tend, therefore, to include more subjective tendencies in the research process as well.
For the quantitative approach, philosophical debates in methodology include the distinction between the inductive and the hypothetico-deductive interpretation of the scientific method. For qualitative research, many basic assumptions are tied to philosophical positions such as hermeneutics, pragmatism, Marxism, critical theory, and postmodernism. According to Kuhn, an important factor in such debates is that the different paradigms are incommensurable. This means that there is no overarching framework to assess the conflicting theoretical and methodological assumptions. This critique puts into question various presumptions of the quantitative approach associated with scientific progress based on the steady accumulation of data.
Other discussions of abstract theoretical issues in the philosophy of science are also sometimes included. This can involve questions like how and whether scientific research differs from fictional writing as well as whether research studies objective facts rather than constructing the phenomena it claims to study. In the latter sense, some methodologists have even claimed that the goal of science is less to represent a pre-existing reality and more to bring about some kind of social change in favor of repressed groups in society.
=== Related terms and issues ===
Viknesh Andiappan and Yoke Kin Wan use the field of process systems engineering to distinguish the term "methodology" from the closely related terms "approach", "method", "procedure", and "technique". On their view, "approach" is the most general term. It can be defined as "a way or direction used to address a problem based on a set of assumptions". An example is the difference between hierarchical approaches, which consider one task at a time in a hierarchical manner, and concurrent approaches, which consider them all simultaneously. Methodologies are a little more specific. They are general strategies needed to realize an approach and may be understood as guidelines for how to make choices. Often the term "framework" is used as a synonym. A method is a still more specific way of practically implementing the approach. Methodologies provide the guidelines that help researchers decide which method to follow. The method itself may be understood as a sequence of techniques. A technique is a step taken that can be observed and measured. Each technique has some immediate result. The whole sequence of steps is termed a "procedure". A similar but less complex characterization is sometimes found in the field of language teaching, where the teaching process may be described through a three-level conceptualization based on "approach", "method", and "technique".
One question concerning the definition of methodology is whether it should be understood as a descriptive or a normative discipline. The key difference in this regard is whether methodology just provides a value-neutral description of methods or what scientists actually do. Many methodologists practice their craft in a normative sense, meaning that they express clear opinions about the advantages and disadvantages of different methods. In this regard, methodology is not just about what researchers actually do but about what they ought to do or how to perform good research.
== Types ==
Theorists often distinguish various general types or approaches to methodology. The most influential classification contrasts quantitative and qualitative methodology.
=== Quantitative and qualitative ===
Quantitative research is closely associated with the natural sciences. It is based on precise numerical measurements, which are then used to arrive at exact general laws. This precision is also reflected in the goal of making predictions that can later be verified by other researchers. Examples of quantitative research include physicists at the Large Hadron Collider measuring the mass of newly created particles and positive psychologists conducting an online survey to determine the correlation between income and self-assessed well-being.
Qualitative research is characterized in various ways in the academic literature but there are very few precise definitions of the term. It is often used in contrast to quantitative research for forms of study that do not quantify their subject matter numerically. However, the distinction between these two types is not always obvious and various theorists have argued that it should be understood as a continuum and not as a dichotomy. A lot of qualitative research is concerned with some form of human experience or behavior, in which case it tends to focus on a few individuals and their in-depth understanding of the meaning of the studied phenomena. Examples of the qualitative method are a market researcher conducting a focus group in order to learn how people react to a new product or a medical researcher performing an unstructured in-depth interview with a participant in a new experimental therapy to assess its potential benefits and drawbacks. It is also used to improve quantitative research, such as informing data collection materials and questionnaire design. Qualitative research is frequently employed in fields where the pre-existing knowledge is inadequate. This way, it is possible to get a first impression of the field and potential theories, thus paving the way for investigating the issue in further studies.
Quantitative methods dominate in the natural sciences but both methodologies are used in the social sciences. Some social scientists focus mostly on one method while others try to investigate the same phenomenon using a variety of different methods. It is central to both approaches how the group of individuals used for the data collection is selected. This process is known as sampling. It involves the selection of a subset of individuals or phenomena to be measured. Important in this regard is that the selected samples are representative of the whole population, i.e. that no significant biases were involved when choosing. If this is not the case, the data collected does not reflect what the population as a whole is like. This affects generalizations and predictions drawn from the biased data. The number of individuals selected is called the sample size. For qualitative research, the sample size is usually rather small, while quantitative research tends to focus on big groups and collecting a lot of data. After the collection, the data needs to be analyzed and interpreted to arrive at interesting conclusions that pertain directly to the research question. This way, the wealth of information obtained is summarized and thus made more accessible to others. Especially in the case of quantitative research, this often involves the application of some form of statistics to make sense of the numerous individual measurements.
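The two steps described above, drawing a representative sample and then summarizing it statistically, can be illustrated with a minimal sketch. The population and sample size here are invented for illustration:

```python
# Minimal illustration of simple random sampling followed by
# descriptive statistics, the two steps described in the text.
import random
import statistics

random.seed(0)  # fixed seed so the sketch is reproducible

population = list(range(1, 1001))          # 1000 hypothetical individuals
sample = random.sample(population, k=50)   # simple random sample, n = 50

# Summarize the many individual measurements with a few statistics.
mean = statistics.mean(sample)
stdev = statistics.stdev(sample)
```

A biased selection procedure (for example, sampling only the first 50 individuals) would make the resulting mean unrepresentative of the population, which is the generalization problem the paragraph describes.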
Many discussions in the history of methodology center around the quantitative methods used by the natural sciences. A central question in this regard is to what extent they can be applied to other fields, like the social sciences and history. The success of the natural sciences was often seen as an indication of the superiority of the quantitative methodology and used as an argument to apply this approach to other fields as well. However, this outlook has been put into question in the more recent methodological discourse. In this regard, it is often argued that the paradigm of the natural sciences is a one-sided development of reason, which is not equally well suited to all areas of inquiry. The divide between quantitative and qualitative methods in the social sciences is one consequence of this criticism.
Which method is more appropriate often depends on the goal of the research. For example, quantitative methods usually excel for evaluating preconceived hypotheses that can be clearly formulated and measured. Qualitative methods, on the other hand, can be used to study complex individual issues, often with the goal of formulating new hypotheses. This is especially relevant when the existing knowledge of the subject is inadequate. Important advantages of quantitative methods include precision and reliability. However, they often have difficulty studying the very complex phenomena that are commonly of interest to the social sciences. Additional problems can arise when the data is misinterpreted to defend conclusions that are not directly supported by the measurements themselves. In recent decades, many researchers in the social sciences have started combining both methodologies. This is known as mixed-methods research. A central motivation for this is that the two approaches can complement each other in various ways: some issues are ignored or too difficult to study with one methodology and are better approached with the other. In other cases, both approaches are applied to the same issue to produce more comprehensive and well-rounded results.
Qualitative and quantitative research are often associated with different research paradigms and background assumptions. Qualitative researchers often use an interpretive or critical approach while quantitative researchers tend to prefer a positivistic approach. Important disagreements between these approaches concern the role of objectivity and hard empirical data as well as the research goal of predictive success rather than in-depth understanding or social change.
=== Others ===
Various other classifications have been proposed. One distinguishes between substantive and formal methodologies. Substantive methodologies tend to focus on one specific area of inquiry. The findings are initially restricted to this specific field but may be transferable to other areas of inquiry. Formal methodologies, on the other hand, are based on a variety of studies and try to arrive at more general principles applying to different fields. They may also give particular prominence to the analysis of the language of science and the formal structure of scientific explanation. A closely related classification distinguishes between philosophical, general scientific, and special scientific methods.
One type of methodological outlook is called "proceduralism". According to it, the goal of methodology is to boil down the research process to a simple set of rules or a recipe that automatically leads to good research if followed precisely. However, it has been argued that, while this ideal may be acceptable for some forms of quantitative research, it fails for qualitative research. One argument for this position is based on the claim that research is not a technique but a craft that cannot be achieved by blindly following a method. In this regard, research depends on forms of creativity and improvisation to amount to good science.
Other types include inductive, deductive, and transcendental methods. Inductive methods are common in the empirical sciences and proceed through inductive reasoning from many particular observations to arrive at general conclusions, often in the form of universal laws. Deductive methods, also referred to as axiomatic methods, are often found in formal sciences, such as geometry. They start from a set of self-evident axioms or first principles and use deduction to infer interesting conclusions from these axioms. Transcendental methods are common in Kantian and post-Kantian philosophy. They start with certain particular observations. It is then argued that the observed phenomena can only exist if their conditions of possibility are fulfilled. This way, the researcher may draw general psychological or metaphysical conclusions based on the claim that the phenomenon would not be observable otherwise.
== Importance ==
It has been argued that a proper understanding of methodology is important for various issues in the field of research. They include both the problem of conducting efficient and reliable research as well as being able to validate knowledge claims by others. Method is often seen as one of the main factors of scientific progress. This is especially true for the natural sciences where the developments of experimental methods in the 16th and 17th century are often seen as the driving force behind the success and prominence of the natural sciences. In some cases, the choice of methodology may have a severe impact on a research project. The reason is that very different and sometimes even opposite conclusions may follow from the same factual material based on the chosen methodology.
Aleksandr Georgievich Spirkin argues that methodology, when understood in a wide sense, is of great importance since the world presents us with innumerable entities and relations between them. Methods are needed to simplify this complexity and find a way of mastering it. On the theoretical side, this concerns ways of forming true beliefs and solving problems. On the practical side, this concerns skills of influencing nature and dealing with each other. These different methods are usually passed down from one generation to the next. Spirkin holds that the interest in methodology on a more abstract level arose in attempts to formalize these techniques to improve them as well as to make it easier to use them and pass them on. In the field of research, for example, the goal of this process is to find reliable means to acquire knowledge in contrast to mere opinions acquired by unreliable means. In this regard, "methodology is a way of obtaining and building up ... knowledge".
Various theorists have observed that the interest in methodology has risen significantly in the 20th century. This increased interest is reflected not just in academic publications on the subject but also in the institutionalized establishment of training programs focusing specifically on methodology. This phenomenon can be interpreted in different ways. Some see it as a positive indication of the topic's theoretical and practical importance. Others interpret this interest in methodology as an excessive preoccupation that draws time and energy away from doing research on concrete subjects by applying the methods instead of researching them. This ambiguous attitude towards methodology is sometimes even exemplified in the same person. Max Weber, for example, criticized the focus on methodology during his time while making significant contributions to it himself. Spirkin believes that one important reason for this development is that contemporary society faces many global problems. These problems cannot be solved by a single researcher or a single discipline but are in need of collaborative efforts from many fields. Such interdisciplinary undertakings profit a lot from methodological advances, both concerning the ability to understand the methods of the respective fields and in relation to developing more homogeneous methods equally used by all of them.
== Criticism ==
Most criticism of methodology is directed at one specific form or understanding of it. In such cases, one particular methodological theory is rejected but not methodology at large when understood as a field of research comprising many different theories. In this regard, many objections to methodology focus on the quantitative approach, specifically when it is treated as the only viable approach. Nonetheless, there are also more fundamental criticisms of methodology in general. They are often based on the idea that there is little value to abstract discussions of methods and the reasons cited for and against them. In this regard, it may be argued that what matters is the correct employment of methods and not their meticulous study. Sigmund Freud, for example, compared methodologists to "people who clean their glasses so thoroughly that they never have time to look through them". According to C. Wright Mills, the practice of methodology often degenerates into a "fetishism of method and technique".
Some even hold that methodological reflection is not just a waste of time but actually has negative side effects. Such an argument may be defended by analogy to other skills that work best when the agent focuses only on employing them. In this regard, reflection may interfere with the process and lead to avoidable mistakes. According to an example by Gilbert Ryle, "[w]e run, as a rule, worse, not better, if we think a lot about our feet". A less severe version of this criticism does not reject methodology per se but denies its importance and rejects an intense focus on it. In this regard, methodology still has a limited and subordinate utility but becomes a diversion or even counterproductive by hindering practice when given too much emphasis.
Another line of criticism concerns the general and abstract nature of methodology itself. It states that the discussion of methods is only useful in concrete and particular cases but not concerning abstract guidelines governing many or all cases. Some anti-methodologists reject methodology based on the claim that researchers need freedom to do their work effectively. But this freedom may be constrained and stifled by "inflexible and inappropriate guidelines". For example, according to Kerry Chamberlain, a good interpretation needs creativity to be provocative and insightful, which is prohibited by a strictly codified approach. Chamberlain uses the neologism "methodolatry" to refer to this alleged overemphasis on methodology. Similar arguments are given in Paul Feyerabend's book "Against Method".
However, these criticisms of methodology in general are not always accepted. Many methodologists defend their craft by pointing out how the efficiency and reliability of research can be improved through a proper understanding of methodology.
A criticism of more specific forms of methodology is found in the works of the sociologist Howard S. Becker. He is quite critical of methodologists based on the claim that they usually act as advocates of one particular method usually associated with quantitative research. An often-cited quotation in this regard is that "[m]ethodology is too important to be left to methodologists". Alan Bryman has rejected this negative outlook on methodology. He holds that Becker's criticism can be avoided by understanding methodology as an inclusive inquiry into all kinds of methods and not as a mere doctrine for converting non-believers to one's preferred method.
== In different fields ==
Part of the importance of methodology is reflected in the number of fields to which it is relevant. They include the natural sciences and the social sciences as well as philosophy and mathematics.
=== Natural sciences ===
The dominant methodology in the natural sciences (like astronomy, biology, chemistry, geoscience, and physics) is called the scientific method. Its main cognitive aim is usually seen as the creation of knowledge, but various closely related aims have also been proposed, like understanding, explanation, or predictive success. Strictly speaking, there is no one single scientific method. In this regard, the expression "scientific method" refers not to one specific procedure but to different general or abstract methodological aspects characteristic of all the aforementioned fields. Important features are that the problem is formulated in a clear manner and that the evidence presented for or against a theory is public, reliable, and replicable. The last point is important so that other researchers are able to repeat the experiments to confirm or disconfirm the initial study. For this reason, various factors and variables of the situation often have to be controlled to avoid distorting influences and to ensure that subsequent measurements by other researchers yield the same results. The scientific method is a quantitative approach that aims at obtaining numerical data. This data is often described using mathematical formulas. The goal is usually to arrive at some universal generalizations that apply not just to the artificial situation of the experiment but to the world at large. Some data can only be acquired using advanced measurement instruments. In cases where the data is very complex, it is often necessary to employ sophisticated statistical techniques to draw conclusions from it.
The scientific method is often broken down into several steps. In a typical case, the procedure starts with regular observation and the collection of information. These findings then lead the scientist to formulate a hypothesis describing and explaining the observed phenomena. The next step consists in conducting an experiment designed for this specific hypothesis. The actual results of the experiment are then compared to the expected results based on one's hypothesis. The findings may then be interpreted and published, either as a confirmation or disconfirmation of the initial hypothesis.
Two central aspects of the scientific method are observation and experimentation. This distinction is based on the idea that experimentation involves some form of manipulation or intervention. This way, the studied phenomena are actively created or shaped. For example, a biologist inserting viral DNA into a bacterium is engaged in a form of experimentation. Pure observation, on the other hand, involves studying independent entities in a passive manner. This is the case, for example, when astronomers observe the orbits of astronomical objects far away. Observation played the main role in ancient science. The scientific revolution in the 16th and 17th centuries effected a paradigm change that gave a much more central role to experimentation in the scientific methodology. This is sometimes expressed by stating that modern science actively "puts questions to nature". While the distinction is usually clear in the paradigmatic cases, there are also many intermediate cases where it is not obvious whether they should be characterized as observation or as experimentation.
A central discussion in this field concerns the distinction between the inductive and the hypothetico-deductive methodology. The core disagreement between these two approaches concerns their understanding of the confirmation of scientific theories. The inductive approach holds that a theory is confirmed or supported by all its positive instances, i.e. by all the observations that exemplify it. For example, the observations of many white swans confirm the universal hypothesis that "all swans are white". The hypothetico-deductive approach, on the other hand, focuses not on positive instances but on deductive consequences of the theory. This way, the researcher uses deduction before conducting an experiment to infer what observations they expect. These expectations are then compared to the observations they actually make. This approach often takes a negative form based on falsification. In this regard, positive instances do not confirm a hypothesis but negative instances disconfirm it. Positive indications that the hypothesis is true are only given indirectly if many attempts to find counterexamples have failed. A cornerstone of this approach is the null hypothesis, which assumes that there is no connection (see causality) between the phenomena being observed. It is up to the researcher to do all they can to disprove their own hypothesis through relevant methods or techniques, documented in a clear and replicable process. If they fail to do so, it can be concluded that the null hypothesis is false, which provides support for their own hypothesis about the relation between the observed phenomena.
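The falsificationist idea described above can be sketched in a few lines. This is a toy illustration with invented data (the `falsified` helper and the swan observations are not from the article, and no statistical machinery is involved): a universal hypothesis is checked against observations, and a single negative instance disconfirms it.

```python
# Toy sketch of hypothetico-deductive testing (invented example):
# deduce an expectation from a universal hypothesis ("every observed
# swan is white") and search the observations for a counterexample.

def falsified(hypothesis, observations):
    """Return the first counterexample found, or None if all checks pass."""
    for obs in observations:
        if not hypothesis(obs):
            return obs
    return None

def all_swans_white(swan):
    return swan["color"] == "white"

observations = [{"color": "white"}, {"color": "white"}, {"color": "black"}]
counterexample = falsified(all_swans_white, observations)
print(counterexample)  # {'color': 'black'} -- one black swan refutes the claim
```

Note that a run returning `None` does not prove the hypothesis; it only reports that this particular attempt at falsification failed, which matches the indirect, negative character of the approach.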
=== Social sciences ===
Significantly more methodological variety is found in the social sciences, where both quantitative and qualitative approaches are used. They employ various forms of data collection, such as surveys, interviews, focus groups, and the nominal group technique. Surveys belong to quantitative research and usually involve some form of questionnaire given to a large group of individuals. It is paramount that the questions are easily understandable by the participants since the answers might not have much value otherwise. Surveys normally restrict themselves to closed questions in order to avoid various problems that come with the interpretation of answers to open questions. They contrast in this regard to interviews, which put more emphasis on the individual participant and often involve open questions. Structured interviews are planned in advance and have a fixed set of questions given to each individual. They contrast with unstructured interviews, which are closer to a free-flow conversation and require more improvisation on the part of the interviewer for finding interesting and relevant questions. Semi-structured interviews constitute a middle ground: they include both predetermined questions and questions not planned in advance. Structured interviews make it easier to compare the responses of the different participants and to draw general conclusions. However, they also limit what may be discovered and thus constrain the investigation in many ways. Depending on the type and depth of the interview, this method belongs either to quantitative or to qualitative research. The terms research conversation and muddy interview have been used to describe interviews conducted in informal settings which may not occur purely for the purposes of data collection. Some researchers employ the go-along method by conducting interviews while they and the participants navigate through and engage with their environment.
Focus groups are a qualitative research method often used in market research. They constitute a form of group interview involving a small number of demographically similar people. Researchers can use this method to collect data based on the interactions and responses of the participants. The interview often starts by asking the participants about their opinions on the topic under investigation, which may, in turn, lead to a free exchange in which the group members express and discuss their personal views. An important advantage of focus groups is that they can provide insight into how ideas and understanding operate in a cultural context. However, it is usually difficult to use these insights to discern more general patterns true for a wider public. Focus groups can also help the researcher identify a wide range of distinct perspectives on the issue in a short time. The group interaction may also help clarify and expand interesting contributions. One disadvantage is due to the moderator's personality and group effects, which may influence the opinions stated by the participants. When applied to cross-cultural settings, cultural and linguistic adaptations and group composition considerations are important to encourage greater participation in the group discussion.
The nominal group technique is similar to focus groups with a few important differences. The group often consists of experts in the field in question. The group size is similar but the interaction between the participants is more structured. The goal is to determine how much agreement there is among the experts on the different issues. The initial responses are often given in written form by each participant without a prior conversation between them. In this manner, group effects potentially influencing the expressed opinions are minimized. In later steps, the different responses and comments may be discussed and compared to each other by the group as a whole.
Most of these forms of data collection involve some type of observation. Observation can take place either in a natural setting, i.e. the field, or in a controlled setting such as a laboratory. Controlled settings carry with them the risk of distorting the results due to their artificiality. Their advantage lies in precisely controlling the relevant factors, which can help make the observations more reliable and repeatable. Non-participatory observation involves a distanced or external approach. In this case, the researcher focuses on describing and recording the observed phenomena without causing or changing them, in contrast to participatory observation.
An important methodological debate in the field of social sciences concerns the question of whether they deal with hard, objective, and value-neutral facts, as the natural sciences do. Positivists agree with this characterization, in contrast to interpretive and critical perspectives on the social sciences. According to William Neumann, positivism can be defined as "an organized method for combining deductive logic with precise empirical observations of individual behavior in order to discover and confirm a set of probabilistic causal laws that can be used to predict general patterns of human activity". This view is rejected by interpretivists. Max Weber, for example, argues that the method of the natural sciences is inadequate for the social sciences. Instead, more importance is placed on meaning and how people create and maintain their social worlds. The critical methodology in social science is associated with Karl Marx and Sigmund Freud. It is based on the assumption that many of the phenomena studied using the other approaches are mere distortions or surface illusions. It seeks to uncover deeper structures of the material world hidden behind these distortions. This approach is often guided by the goal of helping people effect social changes and improvements.
=== Philosophy ===
Philosophical methodology is the metaphilosophical field of inquiry studying the methods used in philosophy. These methods structure how philosophers conduct their research, acquire knowledge, and select between competing theories. It concerns both descriptive issues of what methods have been used by philosophers in the past and normative issues of which methods should be used. Many philosophers emphasize that these methods differ significantly from the methods found in the natural sciences in that they usually do not rely on experimental data obtained through measuring equipment. Which method one follows can have wide implications for how philosophical theories are constructed, what theses are defended, and what arguments are cited in favor or against. In this regard, many philosophical disagreements have their source in methodological disagreements. Historically, the discovery of new methods, like methodological skepticism and the phenomenological method, has had important impacts on the philosophical discourse.
A great variety of methods has been employed throughout the history of philosophy:
Methodological skepticism gives special importance to the role of systematic doubt. This way, philosophers try to discover absolutely certain first principles that are indubitable.
The geometric method starts from such first principles and employs deductive reasoning to construct a comprehensive philosophical system based on them.
Phenomenology gives particular importance to how things appear to be. It consists in suspending one's judgments about whether these things actually exist in the external world. This technique is known as epoché and can be used to study appearances independent of assumptions about their causes.
The method of conceptual analysis came to particular prominence with the advent of analytic philosophy. It studies concepts by breaking them down into their most fundamental constituents to clarify their meaning.
Common sense philosophy uses common and widely accepted beliefs as a philosophical tool. They are used to draw interesting conclusions. This is often employed in a negative sense to discredit radical philosophical positions that go against common sense.
Ordinary language philosophy has a very similar method: it approaches philosophical questions by looking at how the corresponding terms are used in ordinary language.
Many methods in philosophy rely on some form of intuition. They are used, for example, to evaluate thought experiments, which involve imagining situations to assess their possible consequences in order to confirm or refute philosophical theories.
The method of reflective equilibrium tries to form a coherent perspective by examining and reevaluating all the relevant beliefs and intuitions.
Pragmatists focus on the practical consequences of philosophical theories to assess whether they are true or false.
Experimental philosophy is a recently developed approach that uses the methodology of social psychology and the cognitive sciences for gathering empirical evidence and justifying philosophical claims.
=== Mathematics ===
In the field of mathematics, various methods can be distinguished, such as synthetic, analytic, deductive, inductive, and heuristic methods. For example, the difference between synthetic and analytic methods is that the former start from the known and proceed to the unknown while the latter seek to find a path from the unknown to the known. Geometry textbooks often proceed using the synthetic method. They start by listing known definitions and axioms and proceed by taking inferential steps, one at a time, until the solution to the initial problem is found. An important advantage of the synthetic method is its clear and short logical exposition. One disadvantage is that it is usually not obvious in the beginning that the steps taken lead to the intended conclusion. This may then come as a surprise to the reader since it is not explained how the mathematician knew in the beginning which steps to take. The analytic method often reflects better how mathematicians actually make their discoveries. For this reason, it is often seen as the better method for teaching mathematics. It starts with the intended conclusion and tries to find another formula from which it can be deduced. It then goes on to apply the same process to this new formula until it has traced back all the way to already proven theorems. The difference between the two methods concerns primarily how mathematicians think and present their proofs. The two are equivalent in the sense that the same proof may be presented either way.
=== Statistics ===
Statistics investigates the analysis, interpretation, and presentation of data. It plays a central role in many forms of quantitative research that have to deal with the data of many observations and measurements. In such cases, data analysis is used to cleanse, transform, and model the data to arrive at practically useful conclusions. There are numerous methods of data analysis. They are usually divided into descriptive statistics and inferential statistics. Descriptive statistics restricts itself to the data at hand. It tries to summarize the most salient features and present them in insightful ways. This can happen, for example, by visualizing its distribution or by calculating indices such as the mean or the standard deviation. Inferential statistics, on the other hand, uses this data based on a sample to draw inferences about the population at large. That can take the form of making generalizations and predictions or by assessing the probability of a concrete hypothesis.
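The distinction between descriptive and inferential statistics can be sketched with Python's standard library. The sample data below are invented for illustration, and the confidence interval uses a simple normal approximation with the 1.96 critical value, which is only a rough sketch of inferential practice:

```python
import math
import statistics

# Invented sample of eight measurements.
sample = [12.1, 11.8, 12.4, 12.0, 11.9, 12.3, 12.2, 11.7]

# Descriptive statistics: summarize the data at hand.
mean = statistics.mean(sample)
stdev = statistics.stdev(sample)  # sample standard deviation

# Inferential statistics: generalize from the sample to the population,
# here via an approximate 95% confidence interval for the population mean.
n = len(sample)
stderr = stdev / math.sqrt(n)
ci_95 = (mean - 1.96 * stderr, mean + 1.96 * stderr)

print(round(mean, 3), round(stdev, 3))  # 12.05 0.245
```

The descriptive figures say something only about these eight observations; the interval `ci_95` is the inferential step, an estimate about the larger population from which the sample is assumed to be drawn.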
=== Pedagogy ===
Pedagogy can be defined as the study or science of teaching methods. In this regard, it is the methodology of education: it investigates the methods and practices that can be applied to fulfill the aims of education. These aims include the transmission of knowledge as well as fostering skills and character traits. Its main focus is on teaching methods in the context of regular schools. But in its widest sense, it encompasses all forms of education, both inside and outside schools. In this wide sense, pedagogy is concerned with "any conscious activity by one person designed to enhance learning in another". The teaching happening this way is a process taking place between two parties: teachers and learners. Pedagogy investigates how the teacher can help the learner undergo experiences that promote their understanding of the subject matter in question.
Various influential pedagogical theories have been proposed. Mental-discipline theories were already common in ancient Greece and state that the main goal of teaching is to train intellectual capacities. They are usually based on a certain ideal of the capacities, attitudes, and values possessed by educated people. According to naturalistic theories, there is an inborn natural tendency in children to develop in a certain way. For them, pedagogy is about how to help this process happen by ensuring that the required external conditions are set up. Herbartianism identifies five essential components of teaching: preparation, presentation, association, generalization, and application. They correspond to different phases of the educational process: getting ready for it, showing new ideas, bringing these ideas in relation to known ideas, understanding the general principle behind their instances, and putting what one has learned into practice. Learning theories focus primarily on how learning takes place and formulate the proper methods of teaching based on these insights. One of them is apperception or association theory, which understands the mind primarily in terms of associations between ideas and experiences. On this view, the mind is initially a blank slate. Learning is a form of developing the mind by helping it establish the right associations. Behaviorism is a more externally oriented learning theory. It identifies learning with classical conditioning, in which the learner's behavior is shaped by presenting them with a stimulus with the goal of evoking and solidifying the desired response pattern to this stimulus.
The choice of which specific method is best to use depends on various factors, such as the subject matter and the learner's age. Interest and curiosity on the side of the student are among the key factors of learning success. This means that one important aspect of the chosen teaching method is to ensure that these motivational forces are maintained, through intrinsic or extrinsic motivation. Many forms of education also include regular assessment of the learner's progress, for example, in the form of tests. This helps to ensure that the teaching process is successful and to make adjustments to the chosen method if necessary.
== Related concepts ==
Methodology has several related concepts, such as paradigm and algorithm. In the context of science, a paradigm is a conceptual worldview. It consists of a number of basic concepts and general theories that determine how the studied phenomena are to be conceptualized and which scientific methods are considered reliable for studying them. Various theorists emphasize similar aspects of methodologies, for example, that they shape the general outlook on the studied phenomena and help the researcher see them in a new light.
In computer science, an algorithm is a procedure or methodology to reach the solution of a problem with a finite number of steps. Each step has to be precisely defined so it can be carried out in an unambiguous manner for each application. For example, the Euclidean algorithm is an algorithm that solves the problem of finding the greatest common divisor of two integers. It is based on simple steps like comparing the two numbers and subtracting one from the other.
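The subtraction-based procedure described above can be sketched as follows. This is a minimal illustration of the classical form of the Euclidean algorithm (the function name is invented; production code would typically use the faster remainder-based variant or `math.gcd`):

```python
# Euclidean algorithm by repeated subtraction: compare the two positive
# integers and subtract the smaller from the larger until they are equal.
# The remaining value is the greatest common divisor.

def gcd_by_subtraction(a: int, b: int) -> int:
    if a <= 0 or b <= 0:
        raise ValueError("arguments must be positive integers")
    while a != b:
        if a > b:
            a -= b
        else:
            b -= a
    return a

print(gcd_by_subtraction(48, 18))  # 6
```

Each step is precisely defined (a comparison and a subtraction), the loop terminates because the larger value strictly decreases, and the procedure gives the same answer for every pair of positive inputs, which is exactly what makes it an algorithm in the stated sense.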
== See also ==
Paradigm – Set of distinct concepts or thought patterns
Philosophical methodology
Political methodology
Scientific method
Software development process
Survey methodology
== References ==
== Further reading ==
Berg, Bruce L., 2009, Qualitative Research Methods for the Social Sciences. Seventh Edition. Boston MA: Pearson Education Inc.
Creswell, J. (1998). Qualitative inquiry and research design: Choosing among five traditions. Thousand Oaks, California: Sage Publications.
Creswell, J. (2003). Research Design: Qualitative, Quantitative, and Mixed Methods Approaches. Thousand Oaks, California: Sage Publications.
Franklin, M.I. (2012). Understanding Research: Coping with the Quantitative-Qualitative Divide. London and New York: Routledge.
Guba, E. and Lincoln, Y. (1989). Fourth Generation Evaluation. Newbury Park, California: Sage Publications.
Herrman, C. S. (2009). "Fundamentals of Methodology", a series of papers On the Social Science Research Network (SSRN), online.
Howell, K. E. (2013) Introduction to the Philosophy of Methodology. London, UK: Sage Publications.
Ndira, E. Alana, Slater, T. and Bucknam, A. (2011). Action Research for Business, Nonprofit, and Public Administration - A Tool for Complex Times . Thousand Oaks, CA: Sage.
Joubish, Farooq (2009). Educational Research. Karachi, Pakistan: Department of Education, Federal Urdu University.
Patton, M. Q. (2002). Qualitative research & evaluation methods (3rd edition). Thousand Oaks, California: Sage Publications.
Silverman, David (Ed). (2011). Qualitative Research: Issues of Theory, Method and Practice, Third Edition. London, Thousand Oaks, New Delhi, Singapore: Sage Publications
Soeters, Joseph; Shields, Patricia and Rietjens, Sebastiaan. 2014. Handbook of Research Methods in Military Studies New York: Routledge.
Ioannidis, J. P. (2005). "Why Most Published Research Findings Are False". PLOS Medicine. 2 (8): e124. doi:10.1371/journal.pmed.0020124. PMC 1182327. PMID 16060722.
== External links ==
Freedictionary, usage note on the word Methodology
Researcherbook, research methodology forum and resources
The B method is a method of software development based on B, a tool-supported formal method based on an abstract machine notation, used in the development of computer software.
== Overview ==
B was originally developed in the 1980s by Jean-Raymond Abrial in France and the UK. B is related to the Z notation (also originated by Abrial) and supports development of programming language code from specifications. B has been used in major safety-critical system applications in Europe (such as the automatic Paris Métro lines 14 and 1 and the Ariane 5 rocket). It has robust, commercially available tool support for specification, design, proof and code generation.
Compared to Z, B is slightly more low-level and more focused on refinement to code rather than just formal specification — hence it is easier to correctly implement a specification written in B than one in Z. In particular, there is good tool support for this.
The same language is used in specification, design and programming.
Mechanisms include encapsulation and data locality.
== Event-B ==
Subsequently, another formal method called Event-B has been developed based on the B-Method, supported by the Rodin Platform. Event-B is a formal method aimed at system-level modelling and analysis. Features of Event-B are the use of set theory for modelling, the use of refinement to represent systems at different levels of abstraction, and the use of mathematical proof for verifying consistency between these refinement levels.
== The main components ==
The B notation relies on set theory and first-order logic to specify successive versions of software, covering the complete cycle of project development.
=== Abstract machine ===
In the first and the most abstract version, which is called the Abstract Machine, the designer should specify the goal of the design.
=== Refinement ===
Then, during a refinement step, the designer may enrich the specification in order to clarify the goal, or to make the abstract machine more concrete by adding details about data structures and algorithms that define how the goal is achieved.
The new version, which is called Refinement, should be proven to be coherent and include all the properties of the abstract machine.
The designer may make use of B libraries in order to model data structures or to include or import existing components.
=== Implementation ===
The refinement continues until a deterministic version is achieved: the Implementation.
During all of the development steps, the same notation is used, and the last version may be translated to a programming language for compilation.
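The abstract-machine/refinement/implementation chain described above can be loosely illustrated outside B's own notation. The following Python sketch is a hypothetical analogy only: the class names and the bounded-counter example are invented, and runtime assertions merely stand in for B's machine-checked proof obligations. It shows an abstract machine stating an invariant and an operation precondition, refined into a deterministic implementation that preserves them.

```python
# Hypothetical analogy to B (B itself uses its own notation and proofs):
# an abstract machine with an invariant and a precondition, refined into
# a deterministic implementation.

class AbstractCounter:
    """Abstract machine: a counter bounded by LIMIT (the invariant)."""
    LIMIT = 10

    def __init__(self):
        self.value = 0
        self._check_invariant()

    def _check_invariant(self):
        assert 0 <= self.value <= self.LIMIT, "invariant violated"

    def increment(self, amount):
        # Precondition (analogue of B's PRE clause): the operation is only
        # meaningful when the result still satisfies the invariant.
        assert self.value + amount <= self.LIMIT, "precondition violated"
        self.value += amount
        self._check_invariant()

class CounterImplementation(AbstractCounter):
    """Refinement to a deterministic version: always step by exactly 1."""
    def increment(self, amount=1):
        super().increment(1)

c = CounterImplementation()
for _ in range(3):
    c.increment()
print(c.value)  # 3
```

In B, the corresponding consistency and refinement conditions are discharged by proof before the final version is translated to a programming language, rather than being checked at run time as in this sketch.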
== Software ==
=== B-Toolkit ===
The B-Toolkit is a collection of programming tools designed to support the use of the B-Tool, a set-theory-based mathematical interpreter for the B-Method. Development was originally undertaken by Ib Holm Sørensen and others, at BP Research and then at B-Core (UK) Limited.
The toolkit uses a custom X Window Motif Interface for GUI management and runs primarily on the Linux, Mac OS X and Solaris operating systems.
The B-Toolkit source code is now available.
=== Atelier B ===
Developed by ClearSy, Atelier B is an industrial tool that allows for the operational use of the B Method to develop defect-free proven software (formal software). Two versions are available: 1) Community Edition, available to anyone without any restriction; 2) Maintenance Edition for maintenance contract holders only. Atelier B has been used to develop safety automatisms for the various subways installed throughout the world by Alstom and Siemens, and also for Common Criteria certification and the development of system models by ATMEL and STMicroelectronics.
=== Click'n'Prove ===
The Click'n'Prove tool provides an environment for the generation and discharge of proof obligations, for consistency and refinement checking.
=== Rodin ===
The Rodin Platform is a tool that supports Event-B. Rodin is based on an Eclipse software IDE (integrated development environment) and provides support for refinement and mathematical proof. The platform is open source and forms part of the Eclipse framework. It is extensible using software component plug-ins. The development of Rodin has been supported by the European Union projects DEPLOY (2008–2012), RODIN (2004–2007), and ADVANCE (2011–2014).
=== BHDL ===
BHDL provides a method for the correct design of digital circuits, combining the advantages of the hardware description language VHDL with the formality of B.
== APCB ==
APCB (French: Association de Pilotage des Conférences B, the International B Conference Steering Committee) has organized meetings associated with the B-Method. It has organized ZB conferences with the Z User Group and ABZ conferences, including Abstract State Machines (ASM) as well as the Z notation.
== Books ==
The B-Book: Assigning Programs to Meanings, Jean-Raymond Abrial, Cambridge University Press, 1996. ISBN 0-521-49619-5.
The B-Method: An Introduction, Steve Schneider, Palgrave Macmillan, Cornerstones of Computing series, 2001. ISBN 0-333-79284-X.
Software Engineering with B, John Wordsworth, Addison Wesley Longman, 1996. ISBN 0-201-40356-0.
The B Language and Method: A Guide to Practical Formal Development, Kevin Lano, Springer-Verlag, FACIT series, 1996. ISBN 3-540-76033-4.
Specification in B: An Introduction using the B Toolkit, Kevin Lano, World Scientific Publishing Company, Imperial College Press, 1996. ISBN 1-86094-008-0.
Modeling in Event-B: System and Software Engineering, Jean-Raymond Abrial, Cambridge University Press, 2010. ISBN 978-0-521-89556-9.
== Conferences ==
The following conferences have explicitly included the B-Method and/or Event-B:
Z2B Conference, Nantes, France, 10–12 October 1995
First B Conference, Nantes, France, 25–27 November 1996
Second B Conference, Montpellier, France, 22–24 April 1998
ZB 2000, York, United Kingdom, 28 August – 2 September 2000
ZB 2002, Grenoble, France, 23–25 January 2002
ZB 2003, Turku, Finland, 4–6 June 2003
ZB 2005, Guildford, United Kingdom, 2005
B 2007, Besançon, France, 2007
B, from research to teaching, Nantes, France, 16 June 2008
B, from research to teaching, Nantes, France, 8 June 2009
B, from research to teaching, Nantes, France, 7 June 2010
ABZ 2008, British Computer Society, London, United Kingdom, 16–18 September 2008
ABZ 2010, Orford, Québec, Canada, 23–25 February 2010
ABZ 2012, Pisa, Italy, 18–22 June 2012
ABZ 2014, Toulouse, France, 2–6 June 2014
ABZ 2016, Linz, Austria, 23–27 May 2016
ABZ 2018, Southampton, United Kingdom, 2018
ABZ 2020, Ulm, Germany, 2021 (delayed due to the COVID-19 pandemic)
ABZ 2021, Ulm, Germany, 2021
== See also ==
Formal methods
Z notation
== References ==
== External links ==
B Method.com – work and subjects concerning the B method, a formal method with proof
Atelier B.eu Archived 2008-02-21 at the Wayback Machine: Atelier B is a systems engineering workshop, which enables software to be developed that is guaranteed to be flawless
Site B Grenoble
The Vienna Development Method (VDM) is one of the longest-established formal methods for the development of computer-based systems. Originating in work done at the IBM Laboratory Vienna in the 1970s, it has grown to include a group of techniques and tools based on a formal specification language—the VDM Specification Language (VDM-SL). It has an extended form, VDM++, which supports the modeling of object-oriented and concurrent systems. Support for VDM includes commercial and academic tools for analyzing models, including support for testing and proving properties of models and generating program code from validated VDM models. There is a history of industrial usage of VDM and its tools and a growing body of research in the formalism has led to notable contributions to the engineering of critical systems, compilers, concurrent systems and in logic for computer science.
== Philosophy ==
Computing systems may be modeled in VDM-SL at a higher level of abstraction than is achievable using programming languages, allowing the analysis of designs and identification of key features, including defects, at an early stage of system development. Models that have been validated can be transformed into detailed system designs through a refinement process. The language has a formal semantics, enabling proof of the properties of models to a high level of assurance. It also has an executable subset, so that models may be analyzed by testing and can be executed through graphical user interfaces, so that models can be evaluated by experts who are not necessarily familiar with the modeling language itself.
== History ==
The origins of VDM-SL lie in the IBM Laboratory in Vienna, where the first version of the language was called the Vienna Definition Language (VDL). VDL was essentially used for giving operational semantics descriptions, in contrast to VDM's Meta-IV, which provided denotational semantics:
«Towards the end of 1972 the Vienna group again turned their attention to the problem of systematically developing a compiler from a language definition. The overall approach adopted has been termed the "Vienna Development Method"... The meta-language actually adopted ("Meta-IV") is used to define major portions of PL/1 (as given in ECMA 74 – interestingly a "formal standards document written as an abstract interpreter") in BEKIČ 74.»
There is no connection between Meta-IV and Schorre's META II language or its successor Tree Meta; these were compiler-compiler systems rather than languages suitable for formal problem descriptions.
So Meta-IV was "used to define major portions of" the PL/I programming language. Other programming languages retrospectively described, or partially described, using Meta-IV and VDM-SL include the BASIC programming language, FORTRAN, the APL programming language, ALGOL 60, the Ada programming language and the Pascal programming language. Meta-IV evolved into several variants, generally described as the Danish, English and Irish Schools.
The "English School" derived from work by Cliff Jones on the aspects of VDM not specifically related to language definition and compiler design (Jones 1980, 1990). It stresses modelling persistent state through the use of data types constructed from a rich collection of base types. Functionality is typically described through operations which may have side-effects on the state and which are mostly specified implicitly using a precondition and postcondition. The "Danish School" (Bjørner et al. 1982) has tended to stress a constructive approach with explicit operational specification used to a greater extent. Work in the Danish school led to the first European validated Ada compiler.
An ISO Standard for the language was released in 1996 (ISO, 1996).
== VDM features ==
The VDM-SL and VDM++ syntax and semantics are described at length in the VDMTools language manuals and in the available texts. The ISO Standard contains a formal definition of the language's semantics. In the remainder of this article, the ISO-defined interchange (ASCII) syntax is used. Some texts prefer a more concise mathematical syntax.
A VDM-SL model is a system description given in terms of the functionality performed on data. It consists of a series of definitions of data types and functions or operations performed upon them.
=== Basic types: numeric, character, token and quote types ===
VDM-SL includes basic types modelling numbers and characters as follows:
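In outline, the basic types are conventionally the following (a summary sketch; the ISO standard gives the definitive list and semantics):

```
bool   Booleans              false, true
nat    natural numbers       0, 1, 2, ...
nat1   positive naturals     1, 2, 3, ...
int    integers              ..., -1, 0, 1, ...
rat    rational numbers
real   real numbers
char   characters            'A', 'x', ...
token  structureless tokens
<A>    the quote type containing the single value <A>
```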
Data types are defined to represent the main data of the modelled system. Each type definition introduces a new type name and gives a representation in terms of the basic types or in terms of types already introduced. For example, a type modelling user identifiers for a log-in management system might be defined as follows:
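A minimal sketch of such a definition (the choice of natural numbers as the representation is an assumption made for illustration):

```
types
UserId = nat
```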
For manipulating values belonging to data types, operators are defined on the values. Thus, natural number addition, subtraction etc. are provided, as are Boolean operators such as equality and inequality. The language does not fix a maximum or minimum representable number or a precision for real numbers. Such constraints are defined where they are required in each model by means of data type invariants—Boolean expressions denoting conditions that must be respected by all elements of the defined type. For example, a requirement that user identifiers must be no greater than 9999 would be expressed as follows (where <= is the "less than or equal to" Boolean operator on natural numbers):
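One way of writing this as a sketch (the representation of UserId as a natural number is an assumption):

```
types
UserId = nat
inv uid == uid <= 9999
```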
Since invariants can be arbitrarily complex logical expressions, and membership of a defined type is limited to only those values satisfying the invariant, type correctness in VDM-SL is not automatically decidable in all situations.
The other basic types include char for characters. In some cases, the representation of a type is not relevant to the model's purpose and would only add complexity. In such cases, the members of the type may be represented as structureless tokens. Values of token types can only be compared for equality – no other operators are defined on them. Where specific named values are required, these are introduced as quote types. Each quote type consists of one named value of the same name as the type itself. Values of quote types (known as quote literals) may only be compared for equality.
For example, in modelling a traffic signal controller, it may be convenient to define values to represent the colours of the traffic signal as quote types:
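A sketch in the ASCII syntax (the type names are assumptions; each quote type contains exactly one value, written the same way as the type):

```
types
Red = <Red>;
Amber = <Amber>;
Green = <Green>
```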
=== Type constructors: union, product and composite types ===
The basic types alone are of limited value. New, more structured data types are built using type constructors.
The most basic type constructor forms the union of two predefined types. The type (A|B) contains all elements of the type A and all of the type B. In the traffic signal controller example, the type modelling the colour of a traffic signal could be defined as follows:
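A sketch of such a definition (the type name SignalColour is an assumption):

```
types
SignalColour = <Red> | <Amber> | <Green>
```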
Enumerated types in VDM-SL are defined, as shown above, as unions of quote types.
Cartesian product types may also be defined in VDM-SL. The type (A1*…*An) is the type composed of all tuples of values, the first element of which is from the type A1, the second from the type A2, and so on. The composite or record type is a Cartesian product with labels for the fields. The type

T :: f1 : A1
     f2 : A2
     ...
     fn : An

is the Cartesian product with fields labelled f1,…,fn. An element of type T can be composed from its constituent parts by a constructor, written mk_T. Conversely, given an element of type T, the field names can be used to select the named component. For example, the type

Date :: day   : nat1
        month : nat1
        year  : nat

models a simple date type. The value mk_Date(1,4,2001) corresponds to 1 April 2001. Given a date d, the expression d.month is a natural number representing the month. Restrictions on days per month and leap years could be incorporated into the invariant if desired. Combining these:
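A combined sketch (the simple bounds in the invariant are illustrative; a full treatment of days per month and leap years is omitted):

```
types
Date :: day   : nat1
        month : nat1
        year  : nat
inv mk_Date(d, m, y) == d <= 31 and m <= 12
```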
=== Collections ===
Collection types model groups of values. Sets are finite unordered collections in which duplication between values is suppressed. Sequences are finite ordered collections (lists) in which duplication may occur and mappings represent finite correspondences between two sets of values.
==== Sets ====
The set type constructor (written set of T, where T is a predefined type) constructs the type composed of all finite sets of values drawn from the type T. For example, the type definition

UGroup = set of UserId

defines a type UGroup composed of all finite sets of UserId values. Various operators are defined on sets for constructing their union and intersection, determining proper and non-strict subset relationships, etc.
==== Sequences ====
The finite sequence type constructor (written seq of T, where T is a predefined type) constructs the type composed of all finite lists of values drawn from the type T. For example, the type definition

String = seq of char

defines a type String composed of all finite strings of characters. Various operators are defined on sequences for constructing concatenation, selection of elements and subsequences, etc. Many of these operators are partial in the sense that they are not defined for certain applications. For example, selecting the fifth element of a sequence that contains only three elements is undefined.
The order and repetition of items in a sequence is significant, so [a, b] is not equal to [b, a], and [a] is not equal to [a, a].
==== Maps ====
A finite mapping is a correspondence between two sets, the domain and range, with the domain indexing elements of the range. It is therefore similar to a finite function. The mapping type constructor in VDM-SL (written map T1 to T2, where T1 and T2 are predefined types) constructs the type composed of all finite mappings from sets of T1 values to sets of T2 values. For example, the type definition

Birthdays = map String to Date

defines a type Birthdays which maps character strings to Date. Again, operators are defined on mappings for indexing into the mapping, merging mappings, and overwriting and extracting sub-mappings.
=== Structuring ===
The main difference between the VDM-SL and VDM++ notations is the way in which structuring is dealt with. In VDM-SL there is a conventional modular extension, whereas VDM++ has a traditional object-oriented structuring mechanism with classes and inheritance.
==== Structuring in VDM-SL ====
In the ISO standard for VDM-SL there is an informative annex that contains different structuring principles. These all follow traditional information hiding principles with modules and they can be explained as:
Module naming: Each module is syntactically started with the keyword module followed by the name of the module. At the end of a module the keyword end is written followed again by the name of the module.
Importing: It is possible to import definitions that have been exported from other modules. This is done in an imports section that begins with the keyword imports, followed by a sequence of imports from different modules. Each module import begins with the keyword from, followed by the name of the module and a module signature. The module signature can either be simply the keyword all, indicating the import of all definitions exported from that module, or a sequence of import signatures. The import signatures are specific to types, values, functions and operations, and each begins with the corresponding keyword. These import signatures name the constructs to which access is desired. Optional type information can also be present, and it is possible to rename each of the constructs upon import. For types, one must also use the keyword struct in order to gain access to the internal structure of a particular type.
Exporting: The definitions from a module that other modules are to have access to are exported using the keyword exports, followed by an exports module signature. The exports module signature can either consist simply of the keyword all or be a sequence of export signatures. Such export signatures are specific to types, values, functions and operations, and each begins with the corresponding keyword. To export the internal structure of a type, the keyword struct must be used.
More exotic features: Earlier versions of the VDM-SL tools also supported parameterized modules and instantiations of such modules. However, these features were removed from VDMTools around 2000 because they were hardly ever used in industrial applications and they posed a substantial number of tool challenges.
==== Structuring in VDM++ ====
In VDM++, structuring is done using classes and multiple inheritance. The key concepts are:
Class: Each class is syntactically started with the keyword class followed by the name of the class. At the end of a class the keyword end is written followed again by the name of the class.
Inheritance: In case a class inherits constructs from other classes the class name in the class heading can be followed by the keywords is subclass of followed by a comma-separated list of names of superclasses.
Access modifiers: Information hiding in VDM++ is done in the same way as in most object-oriented languages, using access modifiers. In VDM++, definitions are private by default, but each definition may be prefixed with one of the access modifier keywords: private, public and protected.
== Modelling functionality ==
=== Functional modelling ===
In VDM-SL, functions are defined over the data types defined in a model. Support for abstraction requires that it should be possible to characterize the result that a function should compute without having to say how it should be computed. The main mechanism for doing this is the implicit function definition in which, instead of a formula computing a result, a logical predicate over the input and result variables, termed a postcondition, gives the result's properties. For example, a function SQRT for calculating a square root of a natural number might be defined as follows:
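A sketch of such an implicit definition in the ASCII syntax:

```
SQRT(x:nat) r:real
post r*r = x
```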
Here the postcondition does not define a method for calculating the result r but states what properties can be assumed to hold of it. Note that this defines a function that returns a valid square root; there is no requirement that it should be the positive or negative root. The specification above would be satisfied, for example, by a function that returned the negative root of 4 but the positive root of all other valid inputs. Note that functions in VDM-SL are required to be deterministic so that a function satisfying the example specification above must always return the same result for the same input.
A more constrained function specification is arrived at by strengthening the postcondition. For example, the following definition constrains the function to return the positive root.
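One way of writing the strengthened definition (a sketch):

```
SQRT(x:nat) r:real
post r*r = x and r >= 0
```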
All function specifications may be restricted by preconditions which are logical predicates over the input variables only and which describe constraints that are assumed to be satisfied when the function is executed. For example, a square root calculating function that works only on positive real numbers might be specified as follows:
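A sketch of such a definition (the name SQRTP is an assumption):

```
SQRTP(x:real) r:real
pre x >= 0
post r*r = x and r >= 0
```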
The precondition and postcondition together form a contract to be satisfied by any program claiming to implement the function. The precondition records the assumptions under which the function guarantees to return a result satisfying the postcondition. If a function is called on inputs that do not satisfy its precondition, the outcome is undefined (indeed, termination is not even guaranteed).
VDM-SL also supports the definition of executable functions in the manner of a functional programming language. In an explicit function definition, the result is defined by means of an expression over the inputs. For example, a function that produces a list of the squares of a list of numbers might be defined as follows:
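A sketch of such an explicit definition (the function name SqList is an assumption):

```
SqList : seq of nat -> seq of nat
SqList(s) == if s = [] then []
             else [(hd s)**2] ^ SqList(tl s)
```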
This recursive definition consists of a function signature giving the types of the input and result and a function body. An implicit definition of the same function might take the following form:
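A sketch of the implicit form (the name SqListImp is an assumption):

```
SqListImp(s:seq of nat) r:seq of nat
post len r = len s and
     forall i in set inds s & r(i) = s(i)**2
```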
The explicit definition is in a simple sense an implementation of the implicitly specified function. The correctness of an explicit function definition with respect to an implicit specification may be defined as follows.
Given an implicit specification:

f (p:T_p) r:T_r
pre pre-f(p)
post post-f(p, r)

and an explicit function:

f : T_p -> T_r

we say that f satisfies the specification if and only if:

forall p : T_p & pre-f(p) => f(p) : T_r and post-f(p, f(p))

So, "f is a correct implementation" should be interpreted as "f satisfies the specification".
=== State-based modelling ===
In VDM-SL, functions do not have side-effects such as changing the state of a persistent global variable, a useful ability in many programming languages. VDM-SL therefore provides a similar concept: instead of functions, operations are used to change state variables (also known as globals).
For example, if we have a state consisting of a single variable someStateRegister : nat, we could define this in VDM-SL as:
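A sketch of the VDM-SL state definition (the state name Register is an assumption):

```
state Register of
  someStateRegister : nat
end
```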
In VDM++ this would instead be defined as:
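A corresponding VDM++ sketch, using an instance variable:

```
instance variables
  someStateRegister : nat;
```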
An operation to load a value into this variable might be specified as:
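A sketch of such an operation (the name LOAD is an assumption):

```
LOAD(i:nat)
ext wr someStateRegister : nat
post someStateRegister = i
```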
The externals clause (ext) specifies which parts of the state can be accessed by the operation; rd indicating read-only access and wr being read/write access.
Sometimes it is important to refer to the value of a state before it was modified; for example, an operation to add a value to the variable may be specified as:
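A sketch of such an operation (the name ADD is an assumption):

```
ADD(i:nat)
ext wr someStateRegister : nat
post someStateRegister = someStateRegister~ + i
```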
Here, the ~ symbol on the state variable in the postcondition indicates the value of the state variable before execution of the operation.
== Examples ==
=== The max function ===
This is an example of an implicit function definition. The function returns the largest element from a set of positive integers:
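A sketch of the implicit definition:

```
max(s : set of nat1) r : nat1
pre card s <> 0
post r in set s and
     forall e in set s & e <= r
```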
The postcondition characterizes the result rather than defining an algorithm for obtaining it. The precondition is needed because no function could return an r in set s when the set is empty.
=== Natural number multiplication ===
Applying the proof obligation forall p:T_p & pre-f(p) => f(p):T_r and post-f(p, f(p)) to an explicit definition of multp:
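A sketch of one such explicit definition (the auxiliary function is-even is assumed; div is integer division):

```
multp : nat * nat -> nat
multp(i, j) ==
  if i = 0
  then 0
  else if is-even(i)
       then 2 * multp(i div 2, j)
       else j + multp(i - 1, j)
```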
Then the proof obligation becomes:

forall i, j : nat & multp(i, j) : nat and multp(i, j) = i * j
This can be shown correct by:
Proving that the recursion ends (this in turn requires proving that the numbers become smaller at each step)
Mathematical induction
=== Queue abstract data type ===
This is a classical example illustrating the use of implicit operation specification in a state-based model of a well-known data structure. The queue is modelled as a sequence composed of elements of a type Qelt. The representation of Qelt is immaterial and so it is defined as a token type.
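A sketch of such a model (the state name and the operation names are assumptions made for illustration):

```
types
  Qelt = token;
  Queue = seq of Qelt

state TheQueue of
  q : Queue
end

operations
  ENQUEUE(e : Qelt)
  ext wr q : Queue
  post q = q~ ^ [e];

  DEQUEUE() e : Qelt
  ext wr q : Queue
  pre q <> []
  post q~ = [e] ^ q;

  IS-EMPTY() r : bool
  ext rd q : Queue
  post r <=> (len q = 0)
```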
=== Bank system example ===
As a very simple example of a VDM-SL model, consider a system for maintaining details of customer bank accounts. Customers are modelled by customer numbers (CustNum) and accounts by account numbers (AccNum). The representations of customer numbers are held to be immaterial and so are modelled by a token type. Balances and overdrafts are modelled by numeric types.
With operations:
NEWC allocates a new customer number:
NEWAC allocates a new account number and sets the balance to zero:
ACINF returns all the balances of all the accounts of a customer, as a map of account number to balance:
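A sketch of such a model covering the state and the three operations (the state, the auxiliary AccData type, and the invariant details are assumptions made for illustration):

```
types
  CustNum = token;
  AccNum = token;
  Balance = int;
  Overdraft = nat;
  AccData :: owner   : CustNum
             balance : Balance

state Bank of
  accountMap   : map AccNum to AccData
  overdraftMap : map CustNum to Overdraft
inv mk_Bank(accountMap, overdraftMap) ==
  forall a in set rng accountMap &
    a.owner in set dom overdraftMap and
    a.balance >= -overdraftMap(a.owner)
end

operations
NEWC(od : Overdraft) r : CustNum
ext wr overdraftMap : map CustNum to Overdraft
post exists cu : CustNum &
     cu not in set dom overdraftMap~ and
     overdraftMap = overdraftMap~ munion {cu |-> od} and r = cu;

NEWAC(cu : CustNum) r : AccNum
ext wr accountMap : map AccNum to AccData
    rd overdraftMap : map CustNum to Overdraft
pre cu in set dom overdraftMap
post exists ac : AccNum &
     ac not in set dom accountMap~ and
     accountMap = accountMap~ munion {ac |-> mk_AccData(cu, 0)} and
     r = ac;

ACINF(cu : CustNum) r : map AccNum to Balance
ext rd accountMap : map AccNum to AccData
post r = {an |-> accountMap(an).balance |
          an in set dom accountMap & accountMap(an).owner = cu}
```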
== Tool support ==
A number of different tools support VDM:
VDMTools was the leading commercial tool for VDM and VDM++, owned, marketed, maintained and developed by CSK Systems, building on earlier versions developed by the Danish company IFAD. The manuals and a practical tutorial are available. All licenses are available, free of charge, for the full version of the tool. The full version includes automatic code generation for Java and C++, dynamic link library and CORBA support.
Overture is a community-based open source initiative aimed at providing freely available tool support for all VDM dialects (VDM-SL, VDM++ and VDM-RT) originally on top of the Eclipse platform but subsequently on top of the Visual Studio Code platform. Its aim is to develop a framework for interoperable tools that will be useful for industrial application, research and education.
vdm-mode is a collection of Emacs packages for writing VDM specifications using VDM-SL, VDM++ and VDM-RT. It supports syntax highlighting and editing, on-the-fly syntax checking, template completion and interpreter support.
SpecBox, from Adelard, provides syntax checking, some simple semantic checking, and generation of a LaTeX file enabling specifications to be printed in mathematical notation. This tool is freely available but is no longer maintained.
LaTeX and LaTeX2e macros are available to support the presentation of VDM models in the ISO Standard Language's mathematical syntax. They have been developed and are maintained by the National Physical Laboratory in the UK. Documentation and the macros are available online.
== Industrial experience ==
VDM has been applied widely in a variety of application domains. The most well-known of these applications are:
Ada and CHILL compilers: The first European validated Ada compiler was developed by Dansk Datamatik Center using VDM. Likewise the semantics of CHILL and Modula-2 were described in their standards using VDM.
ConForm: An experiment at British Aerospace comparing the conventional development of a trusted gateway with a development using VDM.
Dust-Expert: A project carried out by Adelard in the UK for a safety related application determining that the safety is appropriate in the layout of industrial plants.
The development of VDMTools: Most components of the VDMTools tool suite are themselves developed using VDM. This development has been made at IFAD in Denmark and CSK in Japan.
TradeOne: Certain key components of the TradeOne back-office system developed by CSK systems for the Japanese stock exchange were developed using VDM. Comparative measurements exist for developer productivity and defect density of the VDM-developed components versus the conventionally developed code.
FeliCa Networks have reported the development of an operating system for an integrated circuit for cellular telephone applications.
== Refinement ==
Use of VDM starts with a very abstract model and develops this into an implementation. Each step involves data reification, then operation decomposition.
Data reification develops the abstract data types into more concrete data structures, while operation decomposition develops the (abstract) implicit specifications of operations and functions into algorithms that can be directly implemented in a computer language of choice.
Specification                                          Implementation
Abstract data type    --(Data reification)-->          Data structure
Operations            --(Operation decomposition)-->   Algorithms
=== Data reification ===
Data reification (stepwise refinement) involves finding a more concrete representation of the abstract data types used in a specification. There may be several steps before an implementation is reached. Each reification step for an abstract data representation ABS_REP involves proposing a new representation NEW_REP. In order to show that the new representation is accurate, a retrieve function is defined that relates NEW_REP to ABS_REP, i.e. retr : NEW_REP -> ABS_REP. The correctness of a data reification depends on proving adequacy, i.e. that every abstract value is represented by at least one concrete value:

forall a : ABS_REP & exists r : NEW_REP & a = retr(r)
Since the data representation has changed, it is necessary to update the operations and functions so that they operate on NEW_REP. The new operations and functions should be shown to preserve any data type invariants on the new representation. In order to prove that the new operations and functions model those found in the original specification, it is necessary to discharge two proof obligations:
Domain rule: forall r : NEW_REP & pre-OPA(retr(r)) => pre-OPR(r)
Modelling rule: forall r~, r : NEW_REP & pre-OPA(retr(r~)) and post-OPR(r~, r) => post-OPA(retr(r~), retr(r))

(Here OPA denotes an operation on the abstract representation and OPR the corresponding operation on the new representation.)
==== Example data reification ====
In a business security system, workers are given ID cards; these are fed into card readers on entry to and exit from the factory.
Operations required:
INIT() initialises the system, assuming that the factory is empty
ENTER(p : Person) records that a worker is entering the factory; the worker's details are read from the ID card
EXIT(p : Person) records that a worker is exiting the factory
IS-PRESENT(p : Person) r : bool checks to see if a specified worker is in the factory or not
Formally, this would be:
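A sketch of such a model (the state name AWCCS and the set-based representation of the workers present are assumptions made for illustration):

```
types
  Person = token;
  Workers = set of Person

state AWCCS of
  pres : Workers
end

operations
INIT()
ext wr pres : Workers
post pres = {};

ENTER(p : Person)
ext wr pres : Workers
pre p not in set pres
post pres = pres~ union {p};

EXIT(p : Person)
ext wr pres : Workers
pre p in set pres
post pres = pres~ \ {p};

IS-PRESENT(p : Person) r : bool
ext rd pres : Workers
post r <=> p in set pres
```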
As most programming languages have a concept comparable to a set (often in the form of an array), the first step from the specification is to represent the data in terms of a sequence. These sequences must not allow repetition, as we do not want the same worker to appear twice, so we must add an invariant to the new data type. In this case, ordering is not important, so [a,b] is the same as [b,a].
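A sketch of the reified representation together with its retrieve function, assuming the abstract representation is a set of Person named Workers (all names here are assumptions):

```
types
  Workers1 = seq of Person
  inv s == forall i, j in set inds s & i <> j => s(i) <> s(j)

functions
  retr : Workers1 -> Workers
  retr(s) == elems s
```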
The Vienna Development Method is valuable for model-based systems. It is not appropriate if the system is time-based. For such cases, the calculus of communicating systems (CCS) is more useful.
== See also ==
Formal methods
Formal specification
Pidgin code
Predicate logic
Propositional calculus
Z specification language, the main alternative to VDM-SL (compare)
== Further reading ==
Bjørner, Dines; Cliff B. Jones (1978). The Vienna Development Method: The Meta-Language, Lecture Notes in Computer Science 61. Berlin, Heidelberg, New York: Springer. ISBN 978-0-387-08766-5.
O'Regan, Gerard (2006). Mathematical Approaches to Software Quality. London: Springer. ISBN 978-1-84628-242-3.
Cliff B. Jones, ed. (1984). Programming Languages and Their Definition — H. Bekič (1936-1982). Lecture Notes in Computer Science. Vol. 177. Berlin, Heidelberg, New York, Tokyo: Springer-Verlag. doi:10.1007/BFb0048933. ISBN 978-3-540-13378-0. S2CID 7488558.
Fitzgerald, J.S. and Larsen, P.G., Modelling Systems: Practical Tools and Techniques in Software Engineering. Cambridge University Press, 1998 ISBN 0-521-62348-0 (Japanese Edition pub. Iwanami Shoten 2003 ISBN 4-00-005609-3).
Fitzgerald, J.S., Larsen, P.G., Mukherjee, P., Plat, N. and Verhoef, M., Validated Designs for Object-oriented Systems. Springer Verlag 2005. ISBN 1-85233-881-4. Supporting web site [1] includes examples and free tool support.
Jones, C.B., Systematic Software Development using VDM, Prentice Hall 1990. ISBN 0-13-880733-7. Also available on-line and free of charge: http://www.csr.ncl.ac.uk/vdm/ssdvdm.pdf.zip
Bjørner, D. and Jones, C.B., Formal Specification and Software Development Prentice Hall International, 1982. ISBN 0-13-880733-7
J. Dawes, The VDM-SL Reference Guide, Pitman 1991. ISBN 0-273-03151-1
International Organization for Standardization, Information technology – Programming languages, their environments and system software interfaces – Vienna Development Method – Specification Language – Part 1: Base language International Standard ISO/IEC 13817-1, December 1996.
Jones, C.B., Software Development: A Rigorous Approach, Prentice Hall International, 1980. ISBN 0-13-821884-6
Jones, C.B. and Shaw, R.C. (eds.), Case Studies in Systematic Software Development, Prentice Hall International, 1990. ISBN 0-13-880733-7
Bicarregui, J.C., Fitzgerald, J.S., Lindsay, P.A., Moore, R. and Ritchie, B., Proof in VDM: a Practitioner's Guide. Springer Verlag Formal Approaches to Computing and Information Technology (FACIT), 1994. ISBN 3-540-19813-X .
== References ==
== External links ==
Information on VDM and VDM++ (archived copy at archive.org)
Vienna Definition Language (VDL)
COMPASS Modelling Language Archived 19 February 2020 at the Wayback Machine (CML), a combination of VDM-SL with CSP, based on Unifying Theories of Programming, developed for modelling Systems of Systems (SoS)
SPIN is a general tool for verifying the correctness of concurrent software models in a rigorous and mostly automated fashion. It was written by Gerard J. Holzmann and others in the original Unix group of the Computing Sciences Research Center at Bell Labs, beginning in 1980. The software has been available freely since 1991, and continues to evolve to keep pace with new developments in the field.
== Tool ==
Systems to be verified are described in Promela (Process Meta Language), which supports modeling of asynchronous distributed algorithms as non-deterministic automata (SPIN stands for "Simple Promela Interpreter"). Properties to be verified are expressed as Linear Temporal Logic (LTL) formulas, which are negated and then converted into Büchi automata as part of the model-checking algorithm. In addition to model-checking, SPIN can also operate as a simulator, following one possible execution path through the system and presenting the resulting execution trace to the user.
Unlike many model-checkers, SPIN does not actually perform model-checking itself, but instead generates C sources for a problem-specific model checker. This technique saves memory and improves performance, while also allowing the direct insertion of chunks of C code into the model. SPIN also offers a large number of options to further speed up the model-checking process and save memory, such as:
partial order reduction;
state compression;
bitstate hashing (instead of storing whole states, only their hash code is remembered in a bitfield; this saves a lot of memory but voids completeness);
weak fairness enforcement.
Since 1995, (approximately) annual SPIN workshops have been held for SPIN users, researchers, and those generally interested in model checking.
In 2001, the Association for Computing Machinery awarded SPIN its System Software Award.
== See also ==
NuSMV
Uppaal Model Checker
== References ==
== Further reading ==
Holzmann, G. J., The SPIN Model Checker: Primer and Reference Manual. Addison-Wesley, 2004. ISBN 0-321-22862-6.
== External links ==
SPIN website
Methodism, also called the Methodist movement, is a Protestant Christian tradition whose origins, doctrine and practice derive from the life and teachings of John Wesley. George Whitefield and John's brother Charles Wesley were also significant early leaders in the movement. They were named Methodists for "the methodical way in which they carried out their Christian faith". Methodism originated as a revival movement within Anglicanism with roots in the Church of England in the 18th century and became a separate denomination after Wesley's death. The movement spread throughout the British Empire, the United States and beyond because of vigorous missionary work, and today has about 80 million adherents worldwide. Most Methodist denominations are members of the World Methodist Council.
Wesleyan theology, which is upheld by the Methodist denominations, focuses on sanctification and the transforming effect of faith on the character of a Christian. Distinguishing doctrines include the new birth, assurance, imparted righteousness, and obedience to God manifested in performing works of piety. John Wesley held that entire sanctification was "the grand depositum", or foundational doctrine, of the Methodist faith, and its propagation was the reason God brought Methodists into existence. Scripture is considered the primary authority, but Methodists also look to Christian tradition, including the historic creeds. Most Methodists teach that Jesus Christ, the Son of God, died for all of humanity and that salvation is achievable for all. This is the Arminian doctrine, as opposed to the Calvinist position that God has predestined the salvation of a select group of people. However, Whitefield and several other early leaders of the movement were considered Calvinistic Methodists and held to the Calvinist position.
The movement has a wide variety of forms of worship, ranging from high church to low church in liturgical usage, in addition to tent revivals and camp meetings held at certain times of the year. Denominations that descend from the British Methodist tradition are generally less ritualistic, while worship in American Methodism varies depending on the Methodist denomination and congregation. Methodist worship distinctiveness includes the observance of the quarterly lovefeast, the watchnight service on New Year's Eve, as well as altar calls in which people are invited to experience the new birth and entire sanctification. Its emphasis on growing in grace after the new birth (and after being entirely sanctified) led to the creation of class meetings for encouragement in the Christian life. Methodism is known for its rich musical tradition, and Charles Wesley was instrumental in writing much of the hymnody of Methodism.
In addition to evangelism, Methodism is known for its charity, as well as support for the sick, the poor, and the afflicted through works of mercy that "flow from the love of God and neighbor" evidenced in the entirely sanctified believer. These ideals, the Social Gospel, are put into practice by the establishment of hospitals, orphanages, soup kitchens, and schools to follow Christ's command to spread the gospel and serve all people. Methodists are historically known for their adherence to the doctrine of nonconformity to the world, reflected by their traditional standards of a commitment to sobriety, prohibition of gambling, regular attendance at class meetings, and weekly observance of the Friday fast.
Early Methodists were drawn from all levels of society, including the aristocracy, but the Methodist preachers took the message to social outcasts such as criminals. In Britain, the Methodist Church had a major effect in the early decades of the developing working class (1760–1820). In the United States, it became the religion of many slaves, who later formed black churches in the Methodist tradition.
== Origins ==
The Methodist revival began in England with a group of men, including John Wesley (1703–1791) and his younger brother Charles (1707–1788), as a movement within the Church of England in the 18th century. The Wesley brothers founded the "Holy Club" at the University of Oxford, where John was a fellow and later a lecturer at Lincoln College. The club met weekly and they systematically set about living a holy life. They were accustomed to receiving Communion every week, fasting regularly, abstaining from most forms of amusement and luxury, and frequently visiting the sick and the poor and prisoners. The fellowship were branded as "Methodist" by their fellow students because of the way they used "rule" and "method" in their religious affairs.
In 1735, at the invitation of the founder of the Georgia Colony, General James Oglethorpe, both John and Charles Wesley set out for America to be ministers to the colonists and missionaries to the Native Americans. Unsuccessful in their work, the brothers returned to England conscious of their lack of genuine Christian faith. They looked for help from Peter Boehler and other members of the Moravian Church. At a Moravian service in Aldersgate on 24 May 1738, John experienced what has come to be called his evangelical conversion, when he felt his "heart strangely warmed". He records in his journal: "I felt I did trust in Christ, Christ alone, for salvation; and an assurance was given me that He had taken away my sins, even mine, and saved me from the law of sin and death." Charles had reported a similar experience a few days previously. Considering this a pivotal moment, Daniel L. Burnett writes: "The significance of [John] Wesley's Aldersgate Experience is monumental ... Without it the names of Wesley and Methodism would likely be nothing more than obscure footnotes in the pages of church history."
The Wesley brothers immediately began to preach salvation by faith to individuals and groups, in houses, in religious societies, and in the few churches which had not closed their doors to evangelical preachers. John Wesley came under the influence of the Dutch theologian Jacobus Arminius (1560–1609). Arminius had rejected the Calvinist teaching that God had predestined an elect number of people to eternal bliss while others perished eternally. Conversely, George Whitefield (1714–1770), Howell Harris (1714–1773), and Selina Hastings, Countess of Huntingdon (1707–1791) were notable for being Calvinistic Methodists.
Returning from his mission in Georgia, George Whitefield joined the Wesley brothers in what was rapidly becoming a national crusade. Whitefield, who had been a fellow student of the Wesleys and prominent member of the Holy Club at Oxford, became well known for his unorthodox, itinerant ministry, in which he was dedicated to open-air preaching – reaching crowds of thousands. A key step in the development of John Wesley's ministry was, like Whitefield, to preach in fields, collieries, and churchyards to those who did not regularly attend parish church services. Accordingly, many Methodist converts were those disconnected from the Church of England; Wesley remained a cleric of the Established Church and insisted that Methodists attend their local parish church as well as Methodist meetings because only an ordained minister could perform the sacraments of Baptism and Holy Communion.
Faced with growing evangelistic and pastoral responsibilities, Wesley and Whitefield appointed lay preachers and leaders. Methodist preachers focused particularly on evangelising people who had been "neglected" by the established Church of England. Wesley and his assistant preachers organized the new converts into Methodist societies. These societies were divided into groups called classes – intimate meetings where individuals were encouraged to confess their sins to one another and to build up each other. They also took part in love feasts which allowed for the sharing of testimony, a key feature of early Methodism. Growth in numbers and increasing hostility impressed upon the revival converts a deep sense of their corporate identity. Three teachings that Methodists saw as the foundation of Christian faith were:
People are all, by nature, "dead in sin".
They are justified by faith alone.
Faith produces inward and outward holiness.
Wesley's organisational skills soon established him as the primary leader of the movement. Whitefield was a Calvinist, whereas Wesley was an outspoken opponent of the doctrine of predestination. Wesley argued (against Calvinist doctrine) that Christians could enjoy a second blessing – entire sanctification (Christian perfection) in this life: loving God and their neighbours, meekness and lowliness of heart and abstaining from all appearance of evil. These differences put strains on the alliance between Whitefield and Wesley, with Wesley becoming hostile toward Whitefield in what had been previously close relations. Whitefield consistently begged Wesley not to let theological differences sever their friendship, and, in time, their friendship was restored, though this was seen by many of Whitefield's followers to be a doctrinal compromise.
Many clergy in the established church feared that new doctrines promulgated by the Methodists, such as the necessity of a new birth for salvation – the first work of grace, of justification by faith and of the constant and sustained action of the Holy Spirit upon the believer's soul, would produce ill effects upon weak minds. Theophilus Evans, an early critic of the movement, even wrote that it was "the natural Tendency of their Behaviour, in Voice and Gesture and horrid Expressions, to make People mad". In one of his prints, William Hogarth likewise attacked Methodists as "enthusiasts" full of "Credulity, Superstition, and Fanaticism". Other attacks against the Methodists were physically violent – Wesley was nearly murdered by a mob at Wednesbury in 1743. The Methodists responded vigorously to their critics and thrived despite the attacks against them.
Initially, the Methodists merely sought reform within the Church of England (Anglicanism), but the movement gradually departed from that Church. George Whitefield's preference for extemporaneous prayer rather than the fixed forms of prayer in the Book of Common Prayer, in addition to his insistence on the necessity of the new birth, set him at odds with Anglican clergy.
As Methodist societies multiplied, and elements of an ecclesiastical system were, one after another, adopted, the breach between John Wesley and the Church of England gradually widened. In 1784, Wesley responded to the shortage of priests in the American colonies due to the American Revolutionary War by ordaining preachers for America with the power to administer the sacraments. Wesley's actions precipitated the split between American Methodists and the Church of England (which held that only bishops could ordain people to ministry).
With regard to the position of Methodism within Christendom, "John Wesley once noted that what God had achieved in the development of Methodism was no mere human endeavor but the work of God. As such it would be preserved by God so long as history remained." Calling it "the grand depositum" of the Methodist faith, Wesley specifically taught that the propagation of the doctrine of entire sanctification was the reason that God raised up the Methodists in the world. In light of this, Methodists traditionally promote the motto "Holiness unto the Lord".
The influence of Whitefield and Lady Huntingdon on the Church of England was a factor in the founding of the Free Church of England in 1844. At the time of Wesley's death, there were over 500 Methodist preachers in British colonies and the United States. Total membership of the Methodist societies in Britain was recorded as 56,000 in 1791, rising to 360,000 in 1836 and 1,463,000 by the national census of 1851.
Early Methodism experienced a radical and spiritual phase that allowed women authority in church leadership. The role of the woman preacher emerged from the sense that the home should be a place of community care and should foster personal growth. Methodist women formed a community that cared for the vulnerable, extending the role of mothering beyond physical care. Women were encouraged to testify to their faith. However, the centrality of women's role sharply diminished after 1790 as Methodist churches became more structured and more male-dominated.
The Wesleyan Education Committee, which existed from 1838 to 1902, documented the Methodist Church's involvement in the education of children. At first, most effort was placed in creating Sunday Schools, but in 1836 the British Methodist Conference gave its blessing to the creation of "Weekday schools".
Methodism spread throughout the British Empire and, mostly through Whitefield's preaching during what historians call the First Great Awakening, in colonial America. However, after Whitefield's death in 1770, American Methodism entered a more lasting Wesleyan and Arminian development phase. Revival services and camp meetings were used "for spreading the Methodist message", with Francis Asbury stating that they were "our harvest seasons". Henry Boehm reported that at a camp meeting in Dover in 1805, 1100 persons received the New Birth and 600 believers were entirely sanctified. Around the time of John Swanel Inskip's leadership of the National Camp Meeting Association for the Promotion of Christian Holiness in the mid to latter 1800s, 80 percent of the membership of the North Georgia Conference of the Methodist Episcopal Church, South professed being entirely sanctified.
== Theology ==
Many Methodist bodies, such as the African Methodist Episcopal Church and the United Methodist Church, base their doctrinal standards on the Articles of Religion, John Wesley's abridgment of the Thirty-nine Articles of the Church of England that excised its Calvinist features. Some Methodist denominations also publish catechisms, which concisely summarise Christian doctrine. Methodists generally accept the Apostles' Creed and the Nicene Creed as declarations of shared Christian faith. Methodism affirms the traditional Christian belief in the triune Godhead (Father, Son and Holy Spirit) as well as the orthodox understanding of the person of Jesus Christ as God incarnate who is both fully divine and fully human. Methodism also emphasizes doctrines that indicate the power of the Holy Spirit to strengthen the faith of believers and to transform their personal lives.
Methodism is broadly evangelical in doctrine and is characterized by Wesleyan theology; John Wesley is studied by Methodists for his interpretation of church practice and doctrine. At its heart, the theology of John Wesley stressed the life of Christian holiness: to love God with all one's heart, mind, soul and strength and to love one's neighbour as oneself. One popular expression of Methodist doctrine is in the hymns of Charles Wesley. Since enthusiastic congregational singing was a part of the early evangelical movement, Wesleyan theology took root and spread through this channel. Martin V. Clarke, who documented the history of Methodist hymnody, states:
Theologically and doctrinally, the content of the hymns has traditionally been a primary vehicle for expressing Methodism's emphasis on salvation for all, social holiness, and personal commitment, while particular hymns and the communal act of participating in hymn singing have been key elements in the spiritual lives of Methodists.
=== Salvation ===
Wesleyan Methodists identify with the Arminian conception of free will, as opposed to the theological determinism of absolute predestination. Methodism teaches that salvation is initiated when one chooses to respond to God, who draws the individual near to him (the Wesleyan doctrine of prevenient grace), thus teaching synergism. Methodists interpret Scripture as teaching that the saving work of Jesus Christ is for all people (unlimited atonement) but effective only to those who respond and believe, in accordance with the Reformation principles of sola gratia (grace alone) and sola fide (faith alone). John Wesley taught four key points fundamental to Methodism:
A person is free not only to reject salvation but also to accept it by an act of free will.
All people who are obedient to the gospel according to the measure of knowledge given them will be saved.
The Holy Spirit assures a Christian that they are justified by faith in Jesus (assurance of faith).
Christians in this life are capable of Christian perfection and are commanded by God to pursue it.
After the first work of grace (the new birth), Methodist soteriology emphasizes the importance of the pursuit of holiness in salvation, a concept best summarized in a quote by Methodist evangelist Phoebe Palmer who stated that "justification would have ended with me had I refused to be holy." Thus, for Methodists, "true faith ... cannot subsist without works." Methodism, inclusive of the holiness movement, thus teaches that "justification [is made] conditional on obedience and progress in sanctification", emphasizing "a deep reliance upon Christ not only in coming to faith, but in remaining in the faith." John Wesley taught that the keeping of the moral law contained in the Ten Commandments, as well as engaging in the works of piety and the works of mercy, were "indispensable for our sanctification". In its categorization of sin, Methodist doctrine distinguishes between (1) "sin, properly so called" and (2) "involuntary transgression of a divine law, known or unknown"; the former category includes voluntary transgression against God, while the second category includes infirmities (such as "immaturity, ignorance, physical handicaps, forgetfulness, lack of discernment, and poor communication skills").
Wesley explains that those born of God do not sin habitually since to do so means that sin still reigns, which is a mark of an unbeliever. Neither does the Christian sin willfully since the believer's will is now set on living for Christ. He further claims that believers do not sin by desire because the heart has been thoroughly transformed to desire only God's perfect will. Wesley then addresses "sin by infirmities". Since infirmities involve no "concurrence of (the) will", such deviations, whether in thought, word, or deed, are not "properly" sin. He therefore concludes that those born of God do not commit sin, having been saved from "all their sins" (II.2, 7).
This is reflected in the Articles of Religion of the Free Methodist Church (emphasis added in italics), which uses the wording of John Wesley:
Justified persons, while they do not outwardly commit sin, are nevertheless conscious of sin still remaining in the heart. They feel a natural tendency to evil, a proneness to depart from God, and cleave to the things of earth. Those that are sanctified wholly are saved from all inward sin – from evil thoughts and evil tempers. No wrong temper, none contrary to love remains in the soul. All their thoughts, words, and actions are governed by pure love. Entire sanctification takes place subsequently to justification, and is the work of God wrought instantaneously upon the consecrated, believing soul. After a soul is cleansed from all sin, it is then fully prepared to grow in grace (Discipline, "Articles of Religion", ch. i, § 1, p. 23).
Methodists also believe in the second work of grace – Christian perfection, also known as entire sanctification, which removes original sin, makes the believer holy and empowers them with power to wholly serve God. John Wesley explained, "entire sanctification, or Christian perfection, is neither more nor less than pure love; love expelling sin, and governing both the heart and life of a child of God. The Refiner's fire purges out all that is contrary to love."
Methodist churches teach that apostasy can occur through a loss of faith or through sinning. If a person backslides but later decides to return to God, he or she must repent for sins and be entirely sanctified again (the Arminian doctrine of conditional security).
=== Sacraments ===
Methodists hold that sacraments are sacred acts of divine institution. Methodism has inherited its liturgy from Anglicanism, although Wesleyan theology tends to have a stronger "sacramental emphasis" than that held by evangelical Anglicans.
In common with most Protestants, Methodists recognize two sacraments as being instituted by Christ: Baptism and Holy Communion (also called the Lord's Supper). Most Methodist churches practice infant baptism, in anticipation of a response to be made later (confirmation), as well as baptism of believing adults. The Catechism for the Use of the People Called Methodists states that, "[in Holy Communion] Jesus Christ is present with his worshipping people and gives himself to them as their Lord and Saviour." In the United Methodist Church, the explanation of how Christ's presence is made manifest in the elements (bread and wine) is described as a "Holy Mystery".
Methodist churches generally recognize sacraments to be a means of grace. John Wesley held that God also imparted grace by other established means such as public and private prayer, Scripture reading, study and preaching, public worship, and fasting; these constitute the works of piety. Wesley considered means of grace to be "outward signs, words, or actions ... to be the ordinary channels whereby [God] might convey to men, preventing [i.e., preparing], justifying or sanctifying grace." Specifically Methodist means, such as the class meetings, provided his chief examples for these prudential means of grace.
=== Sources of teaching ===
American Methodist theologian Albert Outler, in assessing John Wesley's own practices of theological reflection, proposes a methodology termed the "Wesleyan Quadrilateral". Wesley's Quadrilateral is referred to in Methodism as "our theological guidelines" and is taught to its ministers (clergy) in seminary as the primary approach to interpreting Scripture and gaining guidance for moral questions and dilemmas faced in daily living.
Traditionally, Methodists declare the Bible (Old and New Testaments) to be the only divinely inspired Scripture and the primary source of authority for Christians. The historic Methodist understanding of Scripture is based on the superstructure of Wesleyan covenant theology. Methodists also make use of tradition, drawing primarily from the teachings of the Church Fathers, as a secondary source of authority. Tradition may serve as a lens through which Scripture is interpreted. Theological discourse for Methodists almost always makes use of Scripture read inside the wider theological tradition of Christianity.
John Wesley contended that a part of the theological method would involve experiential faith. In other words, truth would be vivified in personal experience of Christians (overall, not individually), if it were really truth. And every doctrine must be able to be defended rationally. He did not divorce faith from reason. By reason, one asks questions of faith and seeks to understand God's action and will. Tradition, experience and reason, however, were subject always to Scripture, Wesley argued, because only there is the Word of God revealed "so far as it is necessary for our salvation."
== Prayer, worship, and liturgy ==
With respect to public worship, Methodism was endowed by the Wesley brothers with worship characterised by a twofold practice: the ritual liturgy of the Book of Common Prayer on the one hand and the non-ritualistic preaching service on the other. This twofold practice became distinctive of Methodism because worship in the Church of England was based, by law, solely on the Book of Common Prayer and worship in the Nonconformist churches was almost exclusively that of "services of the word", i.e. preaching services, with Holy Communion being observed infrequently. John Wesley's influence meant that, in Methodism, the two practices were combined, a situation which remains characteristic of the tradition. Methodism has heavily emphasized "offerings of extempore and spontaneous prayer". To this end, Methodist revival services and camp meetings have been characterized by groaning and shouting, as people sought the fullness of salvation that Methodists taught to be embodied by the experience of entire sanctification. Outsiders labeled Wesleyans "Shouting Methodists" because of their free expression during worship.
Historically, Methodist churches have devoutly observed the Lord's Day (Sunday) with a morning service of worship, along with an evening service of worship (with the evening service being aimed at seekers and focusing on "singing, prayer, and preaching"); the holding of a midweek prayer meeting on Wednesday evenings has been customary. 18th-century Methodist church services were characterized by the following pattern: "preliminaries (e.g., singing, prayers, testimonies), to a 'message,' followed by an invitation to commitment", the latter of which took the form of an altar call, a practice that remains "a vital part" of worship. A number of Methodist congregations devote a portion of their Sunday evening service and mid-week Wednesday evening prayer meeting to having congregants share their prayer requests, in addition to hearing personal testimonies about their faith and experiences in living the Christian life. After listening to various members of the congregation voice their prayer requests, congregants may kneel for intercessory prayer. The Lovefeast, traditionally practiced quarterly, was another practice that characterized early Methodism as John Wesley taught that it was an apostolic ordinance. Worship, hymnology, devotional and liturgical practices in Methodism were also influenced by Pietistic Lutheranism and, in turn, Methodist worship became influential in the Holiness movement.
Early Methodism was known for its "almost monastic rigors, its living by rule, [and] its canonical hours of prayer". It inherited from its Anglican patrimony the practice of reciting the Daily Office, which Methodist Christians were expected to pray. The first prayer book of Methodism, The Sunday Service of the Methodists with other occasional Services, thus included the canonical hours of both Morning Prayer and Evening Prayer; these services were observed every day in early Christianity, though on the Lord's Day, worship included the Eucharist. Later Methodist liturgical books, such as the Methodist Worship Book (1999), provide for Morning Prayer and Evening Prayer to be prayed daily; the United Methodist Church encourages its communicants to pray the canonical hours as "one of the essential practices" of being a disciple of Jesus. Some Methodist religious orders publish the Daily Office to be used for that community; for example, The Book of Offices and Services of The Order of Saint Luke contains the canonical hours to be prayed traditionally at seven fixed prayer times: Lauds (6 am), Terce (9 am), Sext (12 pm), None (3 pm), Vespers (6 pm), Compline (9 pm) and Vigil (12 am). Some Methodist congregations offer daily Morning Prayer.
In America, the United Methodist Church and Free Methodist Church, as well as the Primitive Methodist Church and Wesleyan Methodist Church, have a wide variety of forms of worship, ranging from high church to low church in liturgical usage. When the Methodists in America were separated from the Church of England because of the American Revolution, John Wesley provided a revised version of the Book of Common Prayer called The Sunday Service of the Methodists; With Other Occasional Services (1784). Today, the primary liturgical books of the United Methodist Church are The United Methodist Hymnal and The United Methodist Book of Worship (1992). Congregations employ its liturgy and rituals as optional resources, but their use is not mandatory. These books contain the liturgies of the church that are generally derived from Wesley's Sunday Service and from the 20th-century liturgical renewal movement.
The British Methodist Church is less ordered, or less liturgical, in worship. It makes use of the Methodist Worship Book (similar to the Church of England's Common Worship), containing set services and rubrics for the celebration of other rites, such as marriage. The Worship Book is also ultimately derived from Wesley's Sunday Service.
A unique feature of American Methodism has been the observance of the season of Kingdomtide, encompassing the last 13 weeks before Advent, thus dividing the long season after Pentecost into two segments. During Kingdomtide, Methodist liturgy has traditionally emphasized charitable work and alleviating the suffering of the poor.
A second distinctive liturgical feature of Methodism is the use of Covenant Services. Although practice varies between national churches, most Methodist churches annually follow the call of John Wesley for a renewal of their covenant with God. It is common for each congregation to use the Covenant Renewal liturgy during the watchnight service on the night of New Year's Eve, though in Britain, these are often held on the first Sunday of the year. Wesley's covenant prayer is still used, with minor modification, in the order of service:
Christ has many services to be done. Some are easy, others are difficult. Some bring honour, others bring reproach. Some are suitable to our natural inclinations and temporal interests, others are contrary to both ... Yet the power to do all these things is given to us in Christ, who strengthens us.
...I am no longer my own but yours. Put me to what you will, rank me with whom you will; put me to doing, put me to suffering; let me be employed for you or laid aside for you, exalted for you or brought low for you; let me be full, let me be empty, let me have all things, let me have nothing; I freely and wholeheartedly yield all things to your pleasure and disposal.
As John Wesley advocated outdoor evangelism, revival services are a traditional worship practice of Methodism that are often held in churches, as well as at camp meetings, brush arbor revivals, and tent revivals.
== Membership ==
Traditionally, Methodist connexions descending from the tradition of the Methodist Episcopal Church have a probationary period of six months before an individual is admitted into church membership as a full member of a congregation. Given the wide attendance at Methodist revival meetings, many people started to attend Methodist services of worship regularly, though they had not yet committed to membership. When they made that commitment, becoming a probationer was the first step and during this period, probationers "receive additional instruction and provide evidence of the seriousness of their faith and willingness to abide by church discipline before being accepted into full membership." In addition to this, to be a probationary member of a Methodist congregation, a person traditionally requires an "earnest desire to be saved from [one's] sins". In the historic Methodist system, probationers were eligible to become members of class meetings, where they could be further discipled in their faith.
Catechisms such as The Probationer's Handbook, authored by minister Stephen O. Garrison, have been used by probationers to learn the Methodist faith. After six months, probationers were examined before the Leaders and Stewards' Meeting (which consisted of Class Leaders and Stewards) where they were to provide "satisfactory assurance both of the correctness of his faith and of his willingness to observe and keep the rules of the church." If probationers were able to do this, they were admitted as full members of the congregation by the pastor.
Full members of a Methodist congregation "were obligated to attend worship services on a regular basis" and "were to abide by certain moral precepts, especially as they related to substance use, gambling, divorce, and immoral pastimes." This practice continues in certain Methodist connexions, such as the Lumber River Conference of the Holiness Methodist Church, in which probationers must be examined by the pastor, class leader, and board for full membership, in addition to being baptized. The same structure is found in the African Methodist Episcopal Zion Church, which teaches:
In order that we may not admit improper persons into our church, great care be taken in receiving persons on probation, and let not one be so received or enrolled who does not give satisfactory evidence of his/her desire to flee the wrath to come and to be saved from his/her sins. Such a person satisfying us in these particulars may be received into our church on six months probation; but shall not be admitted to full membership until he/she shall have given satisfactory evidence of saving faith in the Lord Jesus Christ.
The pastor and class leader are to ensure "that all persons on probation be instructed in the Rules and Doctrines of The African Methodist Episcopal Zion Church before they are admitted to Full Membership" and that "probationers are expected to conform to the rules and usages of the Church, and to show evidence of their desire for fellowship in the Church". After the six-month probation period, "A probationer may be admitted to full membership, provided he/she has served out his/her probation, has been baptized, recommended at the Leaders' Meeting, and, if none has been held according to law, recommended by the Leader, and, on examination by the Pastor before the Church as required in ¶600 has given satisfactory assurance both of the correctness of his/her faith, and of his/her willingness to observe and keep the rules of our Church."
The Allegheny Wesleyan Methodist Connection admits to associate membership, by vote of the congregation, those who give affirmation to two questions: "1) Does the Lord now forgive your sins? 2) Will you acquaint yourself with the discipline of our connection and earnestly endeavor to govern your life by its rules as God shall give you understanding?" Probationers who wish to become full members are examined by the advisory board before being received as such through four vows (on the new birth, entire sanctification, outward holiness, and assent to the Articles of Religion) and a covenant.
In the United Methodist Church, the process of becoming a professing member of a congregation is done through the taking of membership vows (normatively in the rite of confirmation) after a period of instruction and receiving the sacrament of baptism. It is the practice of certain Methodist connexions that when people become members of a congregation, they are offered the Right Hand of Fellowship.
Methodists traditionally celebrate the Covenant Renewal Service as the watchnight service annually on New Year's Eve, in which members renew their covenant with God and the Church.
== Lifestyle ==
Early Methodists wore plain dress, with Methodist clergy condemning "high headdresses, ruffles, laces, gold, and 'costly apparel' in general". John Wesley recommended that Methodists annually read his thoughts On Dress; in that sermon, Wesley expressed his desire for Methodists: "Let me see, before I die, a Methodist congregation, full as plain dressed as a Quaker congregation." The 1858 Discipline of the Wesleyan Methodist Connection thus stated that "we would ... enjoin on all who fear God plain dress." Peter Cartwright, a Methodist revivalist, stated that in addition to wearing plain dress, the early Methodists distinguished themselves from other members of society by fasting once a week, abstaining from alcohol (teetotalism), and devoutly observing the Sabbath. Methodist circuit riders were known for practicing the spiritual discipline of mortifying the flesh as they "arose well before dawn for solitary prayer; they remained on their knees without food or drink or physical comforts sometimes for hours on end." The early Methodists did not participate in, and condemned, "worldly habits" including "playing cards, racing horses, gambling, attending the theater, dancing (both in frolics and balls), and cockfighting."
In Methodism, fasting is considered one of the works of piety. The Directions Given to Band Societies (25 December 1744) by John Wesley mandate fasting and abstinence from meat on all Fridays of the year (in remembrance of the crucifixion of Jesus). Wesley himself also fasted before receiving Holy Communion "for the purpose of focusing his attention on God," and asked other Methodists to do the same.
Over time, many of these practices were relaxed in mainline Methodism, although practices such as teetotalism and fasting are still encouraged, in addition to the current prohibition of gambling. Denominations of the conservative holiness movement, such as the Allegheny Wesleyan Methodist Connection and Evangelical Methodist Church Conference, continue to reflect the spirit of the historic Methodist practice of wearing plain dress, with members abstaining from the "wearing of apparel which does not modestly and properly clothe the person" and "refraining from the wearing of jewelry" and "superfluous ornaments (including the wedding ring)". The Fellowship of Independent Methodist Churches, which continues to observe the ordinance of women's headcovering, stipulates "renouncing all vain pomp and glory" and "adorning oneself with modest attire." The General Rules of the Methodist Church in America, which are among the doctrinal standards of many Methodist Churches, promote first-day Sabbatarianism as they require "attending upon all the ordinances of God" including "the public worship of God" and prohibit "profaning the day of the Lord, either by doing ordinary work therein or by buying or selling."
== Contemporary Methodist denominations ==
Methodism is a worldwide movement and Methodist churches are present on all populated continents. Although Methodism is declining in Great Britain and North America, it is growing in other places – at a rapid pace in, for example, South Korea. There is no single Methodist Church with universal juridical authority; Methodists belong to multiple independent denominations or "connexions". The great majority of Methodists are members of denominations which are part of the World Methodist Council, an international association of 80 Methodist, Wesleyan, and related uniting denominations, representing about 80 million people.
I look on all the world as my parish; thus far I mean, that, in whatever part of it I am, I judge it meet, right, and my bounden duty, to declare unto all that are willing to hear, the glad tidings of salvation.
=== Europe ===
Methodism is prevalent in the English-speaking world but it is also organized in mainland Europe, largely due to missionary activity of British and American Methodists. British missionaries were primarily responsible for establishing Methodism across Ireland and Italy. Today the United Methodist Church (UMC) – a large denomination based in the United States – has a presence in Albania, Austria, Belarus, Belgium, Bulgaria, the Czech Republic, Croatia, Denmark, Estonia, Finland, France, Germany, Hungary, Latvia, Lithuania, Moldova, North Macedonia, Norway, Poland, Romania, Serbia, Slovakia, Sweden, Switzerland, and Ukraine. Collectively the European and Eurasian regions of the UMC constitute a little over 100,000 Methodists (as of 2017). Other smaller Methodist denominations exist in Europe.
==== Great Britain ====
The original body founded as a result of Wesley's work came to be known as the Wesleyan Methodist Church. Schisms within the original church, and independent revivals, led to the formation of a number of separate denominations calling themselves "Methodist". The largest of these were the Primitive Methodists, deriving from a revival at Mow Cop in Staffordshire, the Bible Christians, and the Methodist New Connexion. The original church adopted the name "Wesleyan Methodist" to distinguish it from these bodies. In 1907, a union of smaller groups with the Methodist New Connexion and Bible Christian Church brought about the United Methodist Church; then the three major streams of British Methodism united in 1932 to form the present Methodist Church of Great Britain. The fourth-largest denomination in the country, the Methodist Church of Great Britain has about 202,000 members in 4,650 congregations.
Early Methodism was particularly prominent in Devon and Cornwall, which were key centers of activity by the Bible Christian faction of Methodists. The Bible Christians produced many preachers, and sent many missionaries to Australia. Methodism also grew rapidly in the old mill towns of Yorkshire and Lancashire, where the preachers stressed that the working classes were equal to the upper classes in the eyes of God. In Wales, three elements separately welcomed Methodism: Welsh-speaking, English-speaking, and Calvinistic.
British Methodists, in particular the Primitive Methodists, took a leading role in the temperance movement of the 19th and early 20th centuries. Methodists saw alcoholic beverages, and alcoholism, as the root of many social ills and tried to persuade people to abstain from them. Temperance appealed strongly to the Methodist doctrines of sanctification and perfection. To this day, alcohol remains banned on Methodist premises; however, this restriction no longer applies to domestic occasions in private homes (i.e. the minister may have a drink at home in the manse). The choice to consume alcohol is now a personal decision for any member.
British Methodism does not have bishops; however, it has always been characterised by a strong central organisation, the Connexion, which holds an annual Conference (the church retains the 18th-century spelling connexion for many purposes). The Connexion is divided into Districts in the charge of the chairperson (who may be male or female). Methodist districts often correspond approximately, in geographical terms, to counties – as do Church of England dioceses. The districts are divided into circuits governed by the Circuit Meeting and led and administered principally by a superintendent minister. Ministers are appointed to Circuits rather than to individual churches, although some large inner-city churches, known as "central halls", are designated as circuits in themselves – of these Westminster Central Hall, opposite Westminster Abbey in central London, is the best known. Most circuits have fewer ministers than churches, and the majority of services are led by lay local preachers, or by supernumerary ministers (ministers who have retired, called supernumerary because they are not counted for official purposes in the numbers of ministers for the circuit in which they are listed). The superintendent and other ministers are assisted in the leadership and administration of the Circuit by circuit stewards – laypeople with particular skills who, with the ministers, collectively form what is normally known as the Circuit Leadership Team.
The Methodist Council also helps to run a number of schools, including two public schools in East Anglia: Culford School and the Leys School. The council promotes an all-round education with a strong Christian ethos.
Other Methodist denominations in Britain include: the Free Methodist Church, the Fellowship of Independent Methodist Churches, the Church of the Nazarene, and The Salvation Army, all of which are Methodist churches aligned with the holiness movement, as well as the Wesleyan Reform Union, an early secession from the Wesleyan Methodist Church, and the Independent Methodist Connexion.
==== Ireland ====
John Wesley visited Ireland on at least twenty-four occasions and established classes and societies. The Methodist Church in Ireland (Irish: Eaglais Mheitidisteach in Éirinn) today operates across both Northern Ireland and the Republic of Ireland on an all-Ireland basis. As of 2018, there were around 50,000 Methodists across Ireland. In 2013, the biggest concentration – 13,171 – was in Belfast, with 2,614 in Dublin. As of 2021, it is the fourth-largest denomination in Northern Ireland, with Methodists accounting for 2.3% of the population, compared to 3% in 2011.
Eric Gallagher was the President of the Church in the 1970s, becoming a well-known figure in Irish politics. He was one of the group of Protestant churchmen who met with Provisional IRA officers in Feakle, County Clare to try to broker peace. The meeting was unsuccessful due to a Garda raid on the hotel.
In 1973, the Fellowship of Independent Methodist Churches (FIMC) was established as a number of theologically conservative congregations departed both the Methodist Church in Ireland and Free Methodist Church due to what they perceived as the rise of Modernism in those denominations.
==== Italy ====
The Italian Methodist Church (Italian: Chiesa Metodista Italiana) is a small Protestant community in Italy, with around 7,000 members. Since 1975, it has been in a formal covenant of partnership with the Waldensian Church; the two churches have a combined total of 45,000 members. Waldensians are a Protestant movement which started in Lyon, France, in the late 1170s.
Italian Methodism has its origins in the Italian Free Church, British Wesleyan Methodist Missionary Society, and the American Methodist Episcopal Mission. These movements flowered in the second half of the 19th century in the new climate of political and religious freedom that was established with the end of the Papal States and unification of Italy in 1870.
Bertrand M. Tipple, minister of the American Methodist Church in Rome, founded a college there in 1914.
In April 2016, the World Methodist Council opened an Ecumenical Office in Rome. Methodist leaders and the leader of the Roman Catholic Church, Pope Francis, jointly dedicated the new office. It helps facilitate Methodist relationships with the wider Church, especially the Roman Catholic Church.
==== Nordic and Baltic countries ====
The "Nordic and Baltic Area" of the United Methodist Church covers the Nordic countries (Denmark, Sweden, Norway, and Finland) and the Baltic countries (Estonia, Latvia, and Lithuania). Methodism was introduced to the Nordic countries in the late 19th century. Today the United Methodist Church in Norway (Norwegian: Metodistkirken) is the largest annual meeting in the region with 10,684 members in total (as of 2013). The United Methodist Church in Sweden (Swedish: Metodistkyrkan) joined the Uniting Church in Sweden in 2011.
In Finland, Methodism arrived through Ostrobothnian sailors in the 1860s, and spread especially in Swedish-speaking Ostrobothnia. The first Methodist congregation was founded in Vaasa in 1881 and the first Finnish-speaking congregation in Pori in 1887. At the turn of the century, the congregation in Vaasa had become the largest and most active in Finnish Methodism.
==== France ====
The French Methodist movement was founded in the 1820s by Charles Cook in the village of Congénies in Languedoc, near Nîmes and Montpellier. The department's most important chapel was built there in 1869; the village had been home to a Quaker community since the 18th century. Sixteen Methodist congregations voted to join the Reformed Church of France in 1938. In the 1980s, missionary work of a Methodist church in Agen led to new initiatives in Fleurance and Mont de Marsan.
Methodism exists today in France under various names. The best-known is the Union of Evangelical Methodist Churches (French: l'Union de l'Eglise Evangélique Méthodiste) or UEEM. It is an autonomous regional conference of the United Methodist Church and is the fruit of a fusion in 2005 between the "Methodist Church of France" and the "Union of Methodist Churches". As of 2014, the UEEM has around 1,200 members and 30 ministers.
==== Germany ====
In Germany, Switzerland and Austria, Evangelisch-methodistische Kirche is the name of the United Methodist Church. The German part of the church had about 52,031 members in 2015. Members are organized into three annual conferences: north, east and south. All three annual conferences belong to the Germany Central Conference. Methodism is most prevalent in southern Saxony and around Stuttgart.
A Methodist missionary returning from Britain introduced (British) Methodism to Germany in 1830, initially in the region of Württemberg. Methodism was also spread in Germany through the missionary work of the Methodist Episcopal Church which began in 1849 in Bremen, soon spreading to Saxony and other parts of Germany. Other Methodist missionaries of the Evangelical Association went near Stuttgart (Württemberg) in 1850. Further Methodist missionaries of the Church of the United Brethren in Christ worked in Franconia and other parts of Germany from 1869 until 1905. Therefore, Methodism has four roots in Germany.
Early opposition towards Methodism was partly rooted in theological differences – northern and eastern regions of Germany were predominantly Lutheran and Reformed, and Methodists were dismissed as fanatics. Methodism was also hindered by its unfamiliar church structure (Connectionalism), which was more centralised than the hierarchical polity in the Lutheran and Reformed churches. After World War I, the 1919 Weimar Constitution allowed Methodists to worship freely and many new chapels were established. In 1936, German Methodists elected their first bishop.
==== Hungary ====
The first Methodist mission in Hungary was established in 1898 in Bácska, in the then mostly German-speaking town of Verbász (since 1918 part of the Serbian province of Vojvodina). In 1905 a Methodist mission was also established in Budapest. In 1974, a group later known as the Hungarian Evangelical Fellowship seceded from the Hungarian Methodist Church over the question of interference by the communist state.
As of 2017, the United Methodist Church in Hungary, known locally as the Hungarian Methodist Church (Hungarian: Magyarországi Metodista Egyház), had 453 professing members in 30 congregations. It runs two student homes, two homes for the elderly, the Forray Methodist High School, the Wesley Scouts and the Methodist Library and Archives. The church has a special ministry among the Roma.
The seceding Hungarian Evangelical Fellowship (Magyarországi Evangéliumi Testvérközösség) also remains Methodist in its organisation and theology. It has eight full congregations and several mission groups, and runs a range of charitable organisations: hostels and soup kitchens for the homeless, a non-denominational theological college, a dozen schools of various kinds, and four old people's homes.
Today there are a dozen Methodist/Wesleyan churches and mission organisations in Hungary, but all Methodist churches lost official church status under new legislation passed in 2011, when the number of officially recognized churches in the country fell to 14. However, the list of recognized churches was lengthened to 32 at the end of February 2012. This gave recognition to the Hungarian Methodist Church and the Salvation Army, which was banned in Hungary in 1949 but had returned in 1990, but not to the Hungarian Evangelical Fellowship. The legislation has been strongly criticised by the Venice Commission of the Council of Europe as discriminatory.
The Hungarian Methodist Church, the Salvation Army and the Church of the Nazarene and other Wesleyan groups formed the Wesley Theological Alliance for theological and publishing purposes in 1998. Today the Alliance has 10 Wesleyan member churches and organisations. The Hungarian Evangelical Fellowship does not belong to it and has its own publishing arm.
==== Russia ====
The Methodist Church established several strongholds in Russia – Saint Petersburg in the west and the Vladivostok region in the east, with large Methodist centres in Moscow and Ekaterinburg (formerly Sverdlovsk). Methodists began their work in the west among Swedish immigrants in 1881 and started their work in the east in 1910. On 26 June 2009, Methodists celebrated the 120th year since Methodism arrived in Czarist Russia by erecting a new Methodist centre in Saint Petersburg. A Methodist presence was continued in Russia for 14 years after the Russian Revolution of 1917 through the efforts of Deaconess Anna Eklund. In 1939, political antagonism stymied the work of the Church, and Deaconess Anna Eklund was compelled to return to her native Finland.
After 1989, the Soviet Union allowed greatly increased religious freedoms and this continued after the USSR's collapse in 1991. During the 1990s, Methodism experienced a powerful wave of revival in the nation. Three sites in particular carried the torch – Samara, Moscow and Ekaterinburg. As of 2011, the United Methodist Church in Eurasia comprised 116 congregations, each with a native pastor. There are currently 48 students enrolled in residential and extension degree programs at the United Methodist Seminary in Moscow.
=== Caribbean ===
Methodism came to the Caribbean in 1760 when the planter, lawyer and Speaker of the Antiguan House of Assembly, Nathaniel Gilbert (c. 1719–1774), returned to his sugar estate home in Antigua. A Methodist revival spread in the British West Indies due to the work of British missionaries. Missionaries established societies which would later become the Methodist Church in the Caribbean and the Americas (MCCA). The MCCA has about 62,000 members in over 700 congregations, ministered by 168 pastors. There are smaller Methodist denominations that have seceded from the parent church.
==== Antigua ====
The story is often told that in 1755, Nathaniel Gilbert, while convalescing, read a treatise by John Wesley, An Appeal to Men of Reason and Religion, sent to him by his brother Francis. As a result of reading this book, Gilbert journeyed to England two years later with three of his slaves, and in a drawing-room meeting arranged in Wandsworth on 15 January 1759, he met the preacher John Wesley. He returned to the Caribbean that same year and began to preach to his slaves in Antigua.
When Gilbert died in 1774, his work in Antigua, by then numbering approximately 200 Methodists, was continued by his brother Francis Gilbert. However, within a year Francis took ill and returned to Britain, and the work was carried on by Sophia Campbell ("a Negress") and Mary Alley ("a Mulatto"), two devoted women who kept the flock together with class and prayer meetings as well as they could.
On 2 April 1778, John Baxter, a local preacher and skilled shipwright from Chatham in Kent, England, landed at English Harbour in Antigua (now called Nelson's Dockyard) where he was offered a post at the naval dockyard. Baxter was a Methodist and had heard of the work of the Gilberts and their need for a new preacher. He began preaching and meeting with the Methodist leaders, and within a year the Methodist community had grown to 600 persons. By 1783, the first Methodist chapel was built in Antigua, with John Baxter as the local preacher, its wooden structure seating some 2,000 people.
==== St. Bart's ====
In 1785, William Turton (1761–1817), the Barbadian son of a planter, met John Baxter in Antigua; later, as a layman, he assisted in the Methodist work in the Swedish colony of St. Bartholomew from 1796.
In 1786, the missionary endeavour in the Caribbean was officially recognized by the Methodist Conference in England, and that same year Thomas Coke, having been made Superintendent of the church two years previously in America by Wesley, was travelling to Nova Scotia, but weather forced his ship to Antigua.
==== Jamaica ====
In 1818 Edward Fraser (1798 – aft. 1850), a privileged Barbadian slave, moved to Bermuda and subsequently met the new minister James Dunbar. The Nova Scotia Methodist Minister noted young Fraser's sincerity and commitment to his congregation and encouraged him by appointing him as assistant. By 1827 Fraser assisted in building a new chapel. He was later freed and admitted to the Methodist Ministry to serve in Antigua and Jamaica.
==== Barbados ====
Following William J. Shrewsbury's preaching in the 1820s, Sarah Ann Gill (1779–1866), a free-born black woman, used civil disobedience in an attempt to thwart magistrate rulings that prevented parishioners from holding prayer meetings. In hopes of building a new chapel, she paid an extraordinary £1,700 and ended up having a militia appointed by the Governor to protect her home from demolition.
In 1884 an attempt was made at autonomy with the formation of two West Indian Conferences; however, by 1903 the venture had failed. It was not until the 1960s that another attempt was made at autonomy. This second attempt resulted in the emergence of the Methodist Church in the Caribbean and the Americas in May 1967.
Francis Godson (1864–1953), a Methodist minister who had served briefly in several of the Caribbean islands, eventually immersed himself in helping those in hardship during the First World War in Barbados. He was later appointed to the Legislative Council of Barbados, and fought for the rights of pensioners. He was followed by the renowned Barbadian Augustus Rawle Parkinson (1864–1932), who was also the first principal of the Wesley Hall School, Bridgetown, in Barbados (which celebrated its 125th anniversary in September 2009).
In more recent times in Barbados, Victor Alphonso Cooke (born 1930) and Lawrence Vernon Harcourt Lewis (born 1932) have been strong influences on the Methodist Church on the island. Their contemporary, the late Francis Woodbine Blackman (1922–2010), a member of the Dalkeith Methodist Church, was a former secretary of the University of the West Indies, a consultant to the Canadian Training Aid Programme, and a man of letters. It was his research and published works that brought to light much of this information on Caribbean Methodism.
=== Africa ===
Most Methodist denominations in Africa follow the British Methodist tradition and see the Methodist Church of Great Britain as their mother church. Originally modelled on the British structure, since independence most of these churches have adopted an episcopal model of church governance.
==== Nigeria ====
The Nigerian Methodist Church is one of the largest Methodist denominations in the world and one of the largest Christian churches in Nigeria, with around two million members in 2000 congregations. It has seen exponential growth since the turn of the millennium.
Christianity was established in Nigeria with the arrival in 1842 of a Wesleyan Methodist missionary. He had come in response to the request for missionaries by the ex-slaves who returned to Nigeria from Sierra Leone. From the mission stations established in Badagry and Abeokuta, the Methodist church spread to various parts of the country west of the River Niger and part of the north. In 1893 missionaries of the Primitive Methodist Church arrived from Fernando Po, an island off the southern coast of Nigeria. From there the Methodist Church spread to other parts of the country, east of the River Niger and also to parts of the north. The church west of the River Niger and part of the north was known as the Western Nigeria District and east of the Niger and another part of the north as the Eastern Nigeria District. Both existed independently of each other until 1962 when they constituted the Conference of Methodist Church Nigeria. The conference is composed of seven districts. The church has continued to spread into new areas and has established a department for evangelism and appointed a director of evangelism. An episcopal system of church governance adopted in 1976 was not fully accepted by all sections of the church until the two sides came together and resolved to end the disagreement. A new constitution was ratified in 1990. The system is still episcopal but the points which caused discontent were amended to be acceptable to both sides. Today, the Nigerian Methodist Church has a prelate, eight archbishops and 44 bishops.
==== Ghana ====
Methodist Church Ghana is one of the largest Methodist denominations, with around 800,000 members in 2,905 congregations, ministered by 700 pastors. It has fraternal links with the British Methodist and United Methodist churches worldwide.
Methodism in Ghana came into existence as a result of the missionary activities of the Wesleyan Methodist Church, inaugurated with the arrival of Joseph Rhodes Dunwell to the Gold Coast in 1835. Like the mother church, the Methodist Church in Ghana was established by people of Protestant background. Roman Catholic and Anglican missionaries came to the Gold Coast from the 15th century. A school was established in Cape Coast by the Anglicans during the time of Philip Quaque, a Ghanaian priest. Those who came out of this school were supplied with Bibles and study materials by the Society for the Propagation of Christian Knowledge. A member of the resulting Bible study groups, William De-Graft, requested Bibles through Captain Potter of the ship Congo. Not only were Bibles sent, but also a Methodist missionary. In the first eight years of the Church's life, 11 of the 21 missionaries who worked in the Gold Coast died. Thomas Birch Freeman, who arrived at the Gold Coast in 1838, was a pioneer of missionary expansion. Between 1838 and 1857 he carried Methodism from the coastal areas to Kumasi in the Asante hinterland of the Gold Coast. He also established Methodist Societies in Badagry and Abeokuta in Nigeria with the assistance of William De-Graft.
By 1854, the church was organized into circuits constituting a district with T. B. Freeman as chairman. Freeman was replaced in 1856 by William West. The district was divided and extended to include areas in the then Gold Coast and Nigeria by the synod in 1878, a move confirmed at the British Conference. The districts were the Gold Coast District, with T. R. Picot as chairman, and the Yoruba and Popo District, with John Milum as chairman. Methodist evangelisation of the northern Gold Coast began in 1910. After a long period of conflict with the colonial government, missionary work was established there in 1955. Paul Adu was the first indigenous missionary to the northern Gold Coast.
In July 1961, the Methodist Church in Ghana became autonomous, and was called the Methodist Church Ghana, based on a deed of foundation, part of the church's Constitution and Standing Orders.
==== Southern Africa ====
The Methodist Church operates across South Africa, Namibia, Botswana, Lesotho and Swaziland, with a limited presence in Zimbabwe and Mozambique. It is a member church of the World Methodist Council.
Methodism in Southern Africa began as a result of lay Christian work by an Irish soldier of the English Regiment, John Irwin, who was stationed at the Cape and began to hold prayer meetings as early as 1795. The first Methodist lay preacher at the Cape, George Middlemiss, was a soldier of the 72nd Regiment of the British Army stationed at the Cape in 1805. This foundation paved the way for missionary work by Methodist missionary societies from Great Britain, many of which sent missionaries with the 1820 English settlers to the Western and Eastern Cape. Among the most notable of the early missionaries were Barnabas Shaw and William Shaw. The largest group was the Wesleyan Methodist Church, but there were a number of others that joined to form the Methodist Church of South Africa, later known as the Methodist Church of Southern Africa.
The Methodist Church of Southern Africa is the largest mainline Protestant denomination in South Africa – 7.3% of the South African population recorded their religious affiliation as 'Methodist' in the last national census.
=== Asia ===
==== China ====
Methodism was brought to China in the autumn of 1847 by the Methodist Episcopal Church. The first missionaries sent out were Judson Dwight Collins and Moses Clark White, who sailed from Boston on 15 April 1847 and reached Fuzhou on 6 September. They were followed by Henry Hickok and Robert Samuel Maclay, who arrived on 15 April 1848. In August 1856, a brick-built church named the "Church of the True God" (Chinese: 真神堂; pinyin: Zhēnshén táng) was dedicated, the first substantial church building erected in Fuzhou by Protestant missions. In the winter of the same year another brick-built church, located on the hill in the suburbs on the south bank of the Min, was finished and dedicated, called the "Church of Heavenly Peace". In 1857, the first convert was baptised in connection with the mission's labours. In 1862, the number of members was 87. The Fuzhou Conference was organized by Isaac W. Wiley on 6 December 1867, by which time the number of members and probationers had reached 2,011.
Hok Chau (周學; Zhōu Xué; also known as Lai-Tong Chau, 周勵堂; Zhōu Lìtáng) was the first ordained Chinese minister of the South China District of the Methodist Church (incumbent 1877–1916). Benjamin Hobson, a medical missionary sent by the London Missionary Society in 1839, set up Wai Ai Clinic (惠愛醫館; Huì ài yī guǎn). Liang Fa, Hok Chau and others worked there. Liang baptized Chau in 1852. The Methodist Church based in Britain sent missionary George Piercy to China. In 1851, Piercy went to Guangzhou (Canton), where he worked in a trading company. In 1853, he started a church in Guangzhou. In 1877, Chau was ordained by the Methodist Church, where he pastored for 39 years.
In 1867, the mission sent out the first missionaries to Central China, who began work at Jiujiang. In 1869, missionaries were also sent to the capital city Beijing, where they laid the foundations of the work of the North China Mission. In November 1880, the West China Mission was established in Sichuan Province. In 1896, the work in the Hinghua prefecture (modern-day Putian) and surrounding regions was also organized as a Mission Conference.
In 1947, the Methodist Church in the Republic of China celebrated its centenary. In 1949, however, the Methodist Church moved to Taiwan with the Kuomintang government.
==== Hong Kong ====
==== India ====
Methodism came to India twice, in 1817 and in 1856, according to P. Dayanandan who has extensively researched the subject. Thomas Coke and six other missionaries set sail for India on New Year's Day in 1814. Coke, then 66, died en route. Rev. James Lynch was the one who finally arrived in Madras in 1817 at a place called Black Town (Broadway), later known as George Town. Lynch conducted the first Methodist missionary service on 2 March 1817, in a stable.
The first Methodist church was dedicated in 1819 at Royapettah. A chapel at Broadway (Black Town) was later built and dedicated on 25 April 1822. This church was rebuilt in 1844 since the earlier structure was collapsing. At this time there were about 100 Methodist members in all of Madras, and they were either Europeans or Eurasians (European and Indian descent). Among names associated with the founding period of Methodism in India are Elijah Hoole and Thomas Cryer, who came as missionaries to Madras.
In 1857, the Methodist Episcopal Church started its work in India; through prominent evangelists such as William Taylor, the Emmanuel Methodist Church, Vepery, was founded in 1874. Taylor and the evangelist James Mills Thoburn established the Thoburn Memorial Church in Calcutta in 1873 and the Calcutta Boys' School in 1877.
In 1947, the Wesleyan Methodist Church in India merged with Presbyterians, Anglicans and other Protestant churches to form the Church of South India, while the American Methodist Church remained affiliated, as the Methodist Church in Southern Asia (MCSA), to its mother church in the USA, the United Methodist Church. This arrangement lasted until 1981, when, by an enabling act, the Methodist Church in India (MCI) became an autonomous church. Today, the Methodist Church in India is governed by the General Conference of the Methodist Church of India, headed by six bishops, with headquarters in Mumbai, India.
==== Malaysia and Singapore ====
Missionaries from Britain, North America, and Australia founded Methodist churches in many Commonwealth countries. These are now independent from their former "mother" churches. In addition to the churches, these missionaries often also founded schools to serve the local community. A good example of such schools are the Methodist Boys' School in Kuala Lumpur, Methodist Girls' School and Methodist Boys' School in George Town, and Anglo-Chinese School, Methodist Girls' School, Paya Lebar Methodist Girls School and Fairfield Methodist Schools in Singapore.
==== Philippines ====
Methodism in the Philippines began shortly after the United States acquired the Philippines in 1898 as a result of the Spanish–American War. On 21 June 1898, after the Battle of Manila Bay but before the Treaty of Paris, executives of the American Mission Society of the Methodist Episcopal Church expressed their desire to join other Protestant denominations in starting mission work in the islands and to enter into a Comity Agreement that would facilitate the establishment of such missions. The first Protestant worship service was conducted on 28 August 1898 by an American military chaplain named George C. Stull. Stull was an ordained Methodist minister from the Montana Annual Conference of The Methodist Episcopal Church (later part of the United Methodist Church after 1968).
Methodist and Wesleyan traditions in the Philippines are shared by three of the largest mainline Protestant churches in the country: The United Methodist Church in the Philippines, Iglesia Evangelica Metodista En Las Islas Filipinas ("Evangelical Methodist Church in the Philippine Islands", abbreviated IEMELIF), and The United Church of Christ in the Philippines. There are also evangelical Protestant churches in the country of the Methodist tradition like the Wesleyan Church of the Philippines, the Free Methodist Church of the Philippines, and the Church of the Nazarene. There are also the IEMELIF Reform Movement (IRM), The Wesleyan (Pilgrim Holiness) Church of the Philippines, the Philippine Bible Methodist Church, Incorporated, the Pentecostal Free Methodist Church, Incorporated, the Fundamental Christian Methodist Church, The Reformed Methodist Church, Incorporated, The Methodist Church of the Living Bread, Incorporated, and the Wesley Evangelical Methodist Church & Mission, Incorporated.
There are three episcopal areas of the United Methodist Church in the Philippines: the Baguio Episcopal Area, Davao Episcopal Area and Manila Episcopal Area.
A call for autonomy from groups within the United Methodist Church in the Philippines was discussed at several conferences led mostly by episcopal candidates. This led to the establishment of the Ang Iglesia Metodista sa Pilipinas ("The Methodist Church in the Philippines") in 2010, led by Bishop Lito C. Tangonan, George Buenaventura, Chita Milan and Joe Frank E. Zuñiga. The group finally declared full autonomy, and its legal incorporation with the Securities and Exchange Commission was approved on 7 December 2011, with papers held by present procurators. It now has 126 local churches in Metro Manila, Palawan, Bataan, Zambales, Pangasinan, Bulacan, Aurora, Nueva Ecija, as well as parts of Pampanga and Cavite. Tangonan was consecrated as the denomination's first Presiding Bishop on 17 March 2012.
==== South Korea ====
The Korean Methodist Church (KMC) is one of the largest churches in South Korea with around 1.5 million members and 8,306 ministers. Methodism in Korea grew out of British and American mission work which began in the late 19th century. The first missionary was Robert Samuel Maclay of the Methodist Episcopal Church, who sailed from Japan in 1884 and was granted permission by Emperor Gojong to undertake medical and educational work. The Korean church became fully autonomous in 1930, retaining affiliation with Methodist churches in America and later the United Methodist Church. The church experienced rapid growth in membership throughout most of the 20th century – in spite of the Korean War – before stabilizing in the 1990s. The KMC is a member of the World Methodist Council and hosted the first Asia Methodist Convention in 2001.
There are many Korean-language Methodist churches in North America catering to Korean-speaking immigrants.
==== Taiwan ====
In 1947, the Methodist Church in the Republic of China celebrated its centenary. In 1949, however, the Methodist Church moved to Taiwan with the Kuomintang government. On 21 June 1953, Taipei Methodist Church was erected, followed by local churches and chapels, with a baptized membership numbering over 2,500. Various types of educational, medical and social services are provided (including Tunghai University). In 1972, the Methodist Church in the Republic of China became autonomous, and the first bishop was installed in 1986.
=== Americas ===
==== Brazil ====
The Methodist Church in Brazil was founded by American missionaries in 1867 after an initial unsuccessful founding in 1835. It has grown steadily since, becoming autonomous in 1930. In the 1970s it ordained its first woman minister. In 1975 it also founded the first Methodist university in Latin America, the Methodist University of Piracicaba. As of 2011, the Brazilian Methodist Church is divided into eight annual conferences with 162,000 members.
==== Canada ====
The father of Methodism in Canada was Rev. Laurence Coughlan, who arrived in Newfoundland in 1763, where he opened a school and travelled widely.
The second was William Black (1760–1834), who began preaching in settlements along the Petitcodiac River of New Brunswick in 1781. A few years afterwards, Methodist Episcopal circuit riders from the U.S. state of New York began to arrive in Canada West at Niagara, and the north shore of Lake Erie in 1786, and at the Kingston region on the northeast shore of Lake Ontario in the early 1790s. At the time the region was part of British North America and became part of Upper Canada after the Constitutional Act of 1791. Upper and Lower Canada were both parts of the New York Episcopal Methodist Conference until 1810, when they were transferred to the newly formed Genesee Conference. Reverend Major George Neal began to preach in Niagara in October 1786 and was ordained in 1810 by Bishop Francis Asbury at the Lyons, New York Methodist Conference. He was Canada's first saddlebag preacher and travelled from Lake Ontario to Detroit for 50 years preaching the gospel.
The spread of Methodism in the Canadas was seriously disrupted by the War of 1812 but quickly regained lost ground after the Treaty of Ghent was signed in 1815. In 1817, the British Wesleyans arrived in the Canadas from the Maritimes but by 1820 had agreed, with the Episcopal Methodists, to confine their work to Lower Canada (present-day Quebec) while the latter would confine themselves to Upper Canada (present-day Ontario). In the summer of 1818, the first place of public worship was erected for the Wesleyan Methodists in York, later Toronto. The chapel for the First Methodist Church was built on the corner of King Street and Jordan Street; the entire cost of the building was $250, an amount that took the congregation three years to raise. In 1828, Upper Canadian Methodists were permitted by the General Conference in the United States to form an independent Canadian Conference and, in 1833, the Canadian Conference merged with the British Wesleyans to form the Wesleyan Methodist Church in Canada. In 1884, most Canadian Methodists were brought under the umbrella of the Methodist Church, Canada.
In the fall of 1873 and winter of 1874, General Superintendent B. T. Roberts of the Free Methodist Church visited Scarborough on the invitation of Robert Loveless, a Primitive Methodist layman. Later, in 1876 while presiding over the very young North Michigan Conference, he read conference appointments that assigned C.H. Sage his field of labour—Canada. This led to the expansion of the Free Methodist Church in Canada.
In 1925, the Methodist Church, Canada and most Presbyterian congregations (then by far the largest Protestant communion in Canada), most Congregational Union of Ontario and Quebec congregations, Union Churches in Western Canada, and the American Presbyterian Church in Montreal merged to form the United Church of Canada. In 1968, the Evangelical United Brethren Church's Canadian congregations joined the United Church of Canada.
The Free Methodist Church in Canada is the largest Methodist denomination in the country at present. A smaller denomination, the British Methodist Episcopal Church, remains active today as well.
==== Mexico ====
The Methodist Church came to Mexico in 1872, with the arrival of two Methodist commissioners from the United States to observe the possibilities of evangelistic work there. In December 1872, Bishop Gilbert Haven arrived in Mexico City, having been sent by M. D. William Butler. Bishop John C. Keener arrived from the Methodist Episcopal Church, South in January 1873.
In 1874, M. D. William Butler established the first Protestant Methodist school of Mexico, in Puebla. The school was founded under the name "Instituto Metodista Mexicano". Today the school is called "Instituto Mexicano Madero". It is still a Methodist school, and it is one of the most elite, selective, expensive and prestigious private schools in the country, with two campuses in Puebla State and one in Oaxaca. A few years later the principal of the school created a Methodist university.
On 18 January 1885, the first Annual Conference of the United Episcopal Church of Mexico was established.
==== United States ====
Wesley came to believe that the New Testament evidence did not leave the power of ordination to the priesthood in the hands of bishops but that other priests could ordain. In 1784, he ordained preachers for Scotland, England, and America, with power to administer the sacraments (this was a major reason for Methodism's final split from the Church of England after Wesley's death). At that time, Wesley sent Thomas Coke to America. Francis Asbury founded the Methodist Episcopal Church at the Baltimore Christmas Conference in 1784; Coke (already ordained in the Church of England) ordained Asbury deacon, elder, and bishop each on three successive days. Circuit riders, many of whom were laymen, travelled by horseback to preach the gospel and establish churches in many places. One of the most famous circuit riders was Robert Strawbridge who lived in the vicinity of Carroll County, Maryland, soon after arriving in the Colonies around 1760.
The First Great Awakening was a religious movement in the 1730s and 1740s, beginning in New Jersey, then spreading to New England, and eventually south into Virginia and North Carolina. George Whitefield played a major role, traveling across the colonies and preaching in a dramatic and emotional style, accepting everyone as his audience.
The new style of sermons and the way people practiced their faith breathed new life into religion in America. People became passionately and emotionally involved in their religion, rather than passively listening to intellectual discourse in a detached manner. People began to study the Bible at home. The effect was akin to the individualistic trends present in Europe during the Protestant Reformation.
The Second Great Awakening was a nationwide wave of revivals, from 1790 to 1840. In New England, the renewed interest in religion inspired a wave of social activism among Yankees; Methodism grew and established several colleges, notably Boston University. In the "burned over district" of western New York, the spirit of revival burned brightly. Methodism saw the emergence of a Holiness movement. In the west, especially at Cane Ridge, Kentucky, and in Tennessee, the revival strengthened the Methodists and the Baptists. Methodism grew rapidly in the Second Great Awakening, becoming the nation's largest denomination by 1820. From 58,000 members in 1790, it reached 258,000 in 1820 and 1,661,000 in 1860, growing by a factor of 28.6 in 70 years, while the total American population grew by a factor of eight. Other denominations also used revivals, but the Methodists grew fastest of all because "they combined popular appeal with efficient organization under the command of missionary bishops." Methodism attracted German immigrants, and the first German Methodist Church was erected in Cincinnati, Ohio.
Disputes over slavery placed the church in difficulty in the first half of the 19th century, with the northern church leaders fearful of a split with the South, and reluctant to take a stand. The Wesleyan Methodist Connexion (later renamed the Wesleyan Methodist Church) and the Free Methodist Church were formed by staunch abolitionists, and the Free Methodists were especially active in the Underground Railroad, which helped to free slaves. In 1962, the Evangelical Wesleyan Church separated from the Free Methodist Church. In 1968 the Wesleyan Methodist Church and Pilgrim Holiness Church merged to form the Wesleyan Church; a significant number dissented from this decision, resulting in the independence of the Allegheny Wesleyan Methodist Connection and the formation of the Bible Methodist Connection of Churches, both of which fall within the conservative holiness movement.
In a much larger split, in 1845 at Louisville, Kentucky, the churches of the slaveholding states left the Methodist Episcopal Church and formed the Methodist Episcopal Church, South. The northern and southern branches were reunited in 1939, when slavery was no longer an issue; the Methodist Protestant Church also joined this merger. Some southerners, more conservative in theology, opposed the merger and formed the Southern Methodist Church in 1940.
The Third Great Awakening from 1858 to 1908 saw enormous growth in Methodist membership, and a proliferation of institutions such as colleges (e.g., Morningside College). Methodists were often involved in the Missionary Awakening and the Social Gospel Movement. The awakening in so many cities in 1858 started the movement, but in the North it was interrupted by the Civil War. In the South, on the other hand, the Civil War stimulated revivals, especially in Lee's army.
In 1914–1917 many Methodist ministers made strong pleas for world peace. President Woodrow Wilson (a Presbyterian), promised "a war to end all wars", using language of a future peace that had been a watchword for the postmillennial movement. In the 1930s many Methodists favored isolationist policies. Thus in 1936, Methodist Bishop James Baker, of the San Francisco Conference, released a poll of ministers showing 56% opposed warfare. However, the Methodist Federation called for a boycott of Japan, which had invaded China and was disrupting missionary activity there. In Chicago, 62 local African Methodist Episcopal churches voted their support for the Roosevelt administration's policy, while opposing any plan to send American troops overseas to fight. When war came in 1941, the vast majority of Methodists supported the national war effort, but there were also a few (673) conscientious objectors.
The United Methodist Church (UMC) was formed in 1968 as a result of a merger between the Evangelical United Brethren Church (EUB) and the Methodist Church. The former church had resulted from mergers of several groups of German Methodist heritage; however, there was no longer any need or desire to worship in the German language. The latter church was a result of union between the Methodist Protestant Church and the northern and southern factions of the Methodist Episcopal Church. The merged church had approximately nine million members as of the late 1990s. While United Methodist Church in America membership has been declining, associated groups in developing countries are growing rapidly. Prior to the merger that led to the formation of the United Methodist Church, the Evangelical Methodist Church entered into a schism with the Methodist Church, citing modernism in its parent body as the reason for the departure in 1946.
American Methodist churches are generally organized on a connectional model, related, but not identical, to that used in Britain. Pastors are assigned to congregations by bishops, distinguishing it from presbyterian government. Methodist denominations typically give lay members representation at regional and national Conferences at which the business of the church is conducted, making it different from most episcopal government. This connectional organizational model differs further from the congregational model, for example that of Baptist and Congregationalist churches, among others.
In addition to the United Methodist Church, there are over 40 other denominations that descend from John Wesley's Methodist movement. Some, such as the African Methodist Episcopal Church, the Free Methodists and the Wesleyan Church (formerly Wesleyan Methodist), are explicitly Methodist. There are also independent Methodist churches, many of which are affiliated with the Association of Independent Methodists. The Salvation Army and the Church of the Nazarene adhere to Methodist theology.
The Holiness Revival was primarily among people of Methodist persuasion, who felt that the church had once again become apathetic, losing the Wesleyan zeal. Some important events of this revival were the writings of Phoebe Palmer during the mid-1800s, the establishment of the first of many holiness camp meetings at Vineland, New Jersey, in 1867, and the founding of Asbury College (1890), and other similar institutions in the U.S. around the turn of the 20th century.
In 2020, United Methodists announced a plan to split the denomination over the issue of same-sex marriage, which resulted in traditionalist clergy, laity and theologians forming the Global Methodist Church, an evangelical Methodist denomination that came into being on 1 May 2022.
=== Oceania ===
Methodism is particularly widespread in some Pacific Island nations, such as Fiji, Samoa and Tonga.
==== Australia ====
In the 19th century there were annual conferences in each Australasian colony (including New Zealand). Various branches of Methodism in Australia merged during the 20 years from 1881. The Methodist Church of Australasia was formed on 1 January 1902 when five Methodist denominations in Australia – the Wesleyan Methodist Church, the Primitive Methodists, the Bible Christian Church, the United Methodist Free Churches and the Methodist New Connexion – merged. In polity it largely followed the Wesleyan Methodist Church.
In 1945 Kingsley Ridgway offered himself as a Melbourne-based "field representative" for a possible Australian branch of the Wesleyan Methodist Church of America, after meeting an American serviceman who was a member of that denomination. The Wesleyan Methodist Church of Australia was founded on his work.
The Methodist Church of Australasia merged with the majority of the Presbyterian Church of Australia and the Congregational Union of Australia in 1977, becoming the Uniting Church. The Wesleyan Methodist Church of Australia and some independent congregations chose not to join the union.
Wesley Mission in Pitt Street, Sydney, the largest parish in the Uniting Church, remains strongly in the Wesleyan tradition. There are many local churches named after John Wesley.
From the mid-1980s a number of independent Methodist churches were founded by missionaries and other members from the Methodist Churches of Malaysia and Singapore. Some of these came together to form what is now known as the Chinese Methodist Church in Australia in 1993, and it held its first full Annual Conference in 2002. Since the 2000s many independent Methodist churches have also been established or grown by Tongan immigrants.
==== Fiji ====
As a result of the early efforts of missionaries, most of the natives of the Fiji Islands were converted to Methodism in the 1840s and 1850s. According to the 2007 census, 34.6% of the population (including almost two-thirds of ethnic Fijians) are adherents of Methodism, making Fiji one of the most Methodist nations. The Methodist Church of Fiji and Rotuma, the largest religious denomination, is an important social force along with the traditional chiefly system. The church once called for a theocracy and fueled anti-Hindu sentiment.
==== New Zealand ====
In June 1823 Wesleydale, the first Wesleyan Methodist mission in New Zealand, was established at Kaeo. The Methodist Church of New Zealand, which is directly descended from the 19th-century missionaries, was the fourth-most common Christian denomination recorded in the 2018 New Zealand census.
Since the early 1990s, missionaries and other Methodists from Malaysia and Singapore established Methodist churches around major urban areas in New Zealand. These congregations came together to form the Chinese Methodist Church in New Zealand (CMCNZ) in 2003.
==== Samoan Islands ====
The Methodist Church is the third largest denomination throughout the Samoan Islands, in both Samoa and American Samoa. In 1868, Piula Theological College was established in Lufilufi on the north coast of Upolu island in Samoa and serves as the main headquarters of the Methodist church in the country. The college includes the historic Piula Monastery as well as Piula Cave Pool, a natural spring situated beneath the church by the sea.
==== Tonga ====
Methodism had a particular resonance with the inhabitants of Tonga. In the 1830s Wesleyan missionaries converted paramount chief Taufa'ahau Tupou, who in turn converted fellow islanders. Today, Methodism is represented on the islands by the Free Church of Tonga and the Free Wesleyan Church, which is the largest church in Tonga. As of 2011, 48% of Tongans adhered to Methodist churches. The royal family of the country are prominent members of the Free Wesleyan Church, and the late king was a lay preacher. Tongan Methodist minister Sione 'Amanaki Havea developed coconut theology, which tailors theology to a Pacific Islands context.
== Ecumenical relations ==
Many Methodists have been involved in the ecumenical movement, which has sought to unite the fractured denominations of Christianity. Because Methodism grew out of the Church of England, a denomination from which neither of the Wesley brothers seceded, some Methodist scholars and historians, such as Rupert E. Davies, have regarded their 'movement' more as a preaching order within wider Christian life than as a church, comparing them with the Franciscans, who formed a religious order within the medieval European church and not a separate denomination. Certainly, Methodists have been deeply involved in early examples of church union, especially the United Church of Canada and the Church of South India.
A disproportionate number of Methodists take part in inter-faith dialogue. For example, Wesley Ariarajah, a long-serving director of the World Council of Churches' sub-unit on "Dialogue with People of Living Faiths and Ideologies" is a Methodist.
In October 1999, an executive committee of the World Methodist Council resolved to explore the possibility of its member churches becoming associated with the doctrinal agreement which had been reached by the Catholic Church and Lutheran World Federation (LWF). In May 2006, the International Methodist–Catholic Dialogue Commission completed its most recent report, entitled "The Grace Given You in Christ: Catholics and Methodists Reflect Further on the Church", and submitted the text to Methodist and Catholic authorities. In July of the same year, in Seoul, South Korea, the Member Churches of the World Methodist Council (WMC) voted to approve and sign a "Methodist Statement of Association" with the Joint Declaration on the Doctrine of Justification, the agreement which was reached and officially accepted in 1999 by the Catholic Church and the Lutheran World Federation and which proclaimed that:
"Together we confess: By grace alone, in faith in Christ's saving work and not because of any merit on our part, we are accepted by God and receive the Holy Spirit, who renews our hearts while equipping and calling us to good works... as sinners our new life is solely due to the forgiving and renewing mercy that God imparts as a gift and that we receive in faith, and never can merit in any way," affirming "fundamental doctrinal agreement" concerning justification between the Catholic Church, the LWF, and the World Methodist Council.
This is not to say there is perfect agreement between the three denominational traditions; while Catholics and Methodists believe that salvation involves cooperation between God and man, Lutherans believe that God brings about the salvation of individuals without any cooperation on their part.
Commenting on the ongoing dialogues with Catholic Church leaders, Ken Howcroft, Methodist minister and the Ecumenical Officer for the Methodist Church of Great Britain, noted that "these conversations have been immensely fruitful." Methodists are increasingly recognizing that the 15 centuries prior to the Reformation constitute a shared history with Catholics, and are gaining new appreciation for neglected aspects of the Catholic tradition. There are, however, important unresolved doctrinal differences separating Roman Catholicism and Methodism, which include "the nature and validity of the ministry of those who preside at the Eucharist [Holy Communion], the precise meaning of the Eucharist as the sacramental 'memorial' of Christ's saving death and resurrection, the particular way in which Christ is present in Holy Communion, and the link between eucharistic communion and ecclesial communion".
In the 1960s, the Methodist Church of Great Britain made ecumenical overtures to the Church of England, aimed at denominational union. Formally, these failed when they were rejected by the Church of England's General Synod in 1972; conversations and co-operation continued, however, leading in 2003 to the signing of a covenant between the two churches. From the 1970s onward, the Methodist Church also started several Local Ecumenical Projects (LEPs, later renamed Local Ecumenical Partnerships) with local neighbouring denominations, which involved sharing churches, schools and in some cases ministers. In many towns and villages Methodists are involved in LEPs, sometimes with Anglican or Baptist churches, but most commonly with the United Reformed Church. In terms of belief, practice and churchmanship, many Methodists see themselves as closer to the United Reformed Church (another Nonconformist church) than to the Church of England. In the 1990s and early 21st century, the British Methodist Church was involved in the Scottish Church Initiative for Union, seeking greater unity with the established and Presbyterian Church of Scotland, the Scottish Episcopal Church and the United Reformed Church in Scotland.
The Methodist Church of Great Britain is a member of several ecumenical organisations, including the World Council of Churches, the Conference of European Churches, the Community of Protestant Churches in Europe, Churches Together in Britain and Ireland, Churches Together in England, Action of Churches Together in Scotland and Cytûn (Wales).
Methodist denominations in the United States have also strengthened ties with other Christian traditions. In April 2005, bishops in the United Methodist Church approved A Proposal for Interim Eucharistic Sharing. This document was the first step toward full communion with the Evangelical Lutheran Church in America (ELCA). The ELCA approved this same document in August 2005. At the 2008 General Conference, the United Methodist Church approved full communion with the ELCA. The UMC is also in dialogue with the Episcopal Church for full communion. The UMC and the ELCA worked together on a document called "Confessing Our Faith Together".
== See also ==
List of Methodists
List of Methodist theologians
List of Methodist churches
List of Methodist denominations
Saints in Methodism
William Taylor (bishop) (1821–1902) — Missionary who introduced or promoted Methodism in a number of countries around the world.
== Further reading ==
Abraham, William J. and James E. Kirby (eds.) (2009) The Oxford Handbook of Methodist Studies. 780 pp.; historiography
=== World ===
Borgen, Ole E. (1985) John Wesley on the Sacraments: a Theological Study. Grand Rapids, Michigan: Francis Asbury Press, cop. 1972. 307 pp. ISBN 0-310-75191-8
Copplestone, J. Tremayne. (1973) History of Methodist Missions, vol. 4: Twentieth-Century Perspectives. 1288 pp.; comprehensive world coverage for US Methodist missions
Cracknell, Kenneth and White, Susan J. (2005) An Introduction to World Methodism, Cambridge University Press, ISBN 0-521-81849-4
Forster, D. A. and Bentley, W. (eds.) (2008) What are we thinking? Reflections on Church and Society from Southern African Methodists. Methodist Publishing House, Cape Town, South Africa. ISBN 978-1-919883-52-6
Forster, D. A. and Bentley, W. (eds.) (2008) Methodism in Southern Africa: A celebration of Wesleyan Mission, AcadSA Publishers, Kempton Park. ISBN 978-1-920212-29-2
Harmon, Nolan B. (ed.) (2 vol. 1974) The Encyclopedia of World Methodism, Nashville, Tennessee: Abingdon Press, ISBN 0-687-11784-4. 2640 pp.
Heitzenrater, Richard P. (1994) Wesley and the People Called Methodists, Nashville, Tennessee: Abingdon Press, ISBN 0-687-01682-7
Hempton, David (2005) Methodism: Empire of the Spirit, Yale University Press, ISBN 0-300-10614-9
Wilson, Kenneth. Methodist Theology. London: T & T Clark International, 2011 (Doing Theology)
Yrigoyen Jr, Charles, and Susan E. Warrick. Historical dictionary of Methodism (2nd ed. Scarecrow Press, 2013)
=== Great Britain ===
Binnie-Dawson, John (2025). A Genealogically Led History of Methodism in North Dorset
Brooks, Alan. (2010) West End Methodism: The Story of Hinde Street, London: Northway Publications, 400 pp.
Davies, Rupert & Rupp, Gordon. (1965) A History of the Methodist Church in Great Britain: Vol 1, Epworth Press
Davies, Rupert & George, A. Raymond & Rupp, Gordon. (1978) A History of the Methodist Church in Great Britain: Vol 2, Epworth Press
Davies, Rupert & George, A. Raymond & Rupp, Gordon. (1983) A History of the Methodist Church in Great Britain: Vol 3, Epworth Press
Davies, Rupert & George, A. Raymond & Rupp, Gordon. (1988) A History of the Methodist Church in Great Britain: Vol 4, Epworth Press
Dowson, Jean and Hutchinson, John. (2003) John Wesley: His Life, Times and Legacy [CD-ROM], Methodist Publishing House, TB214
CoDel (Controlled Delay; pronounced "coddle") is an active queue management (AQM) algorithm in network routing, developed by Van Jacobson and Kathleen Nichols and published as RFC 8289. It is designed to overcome bufferbloat in networking hardware, such as routers, by setting limits on the delay network packets experience as they pass through buffers in this equipment. CoDel aims to improve on the overall performance of the random early detection (RED) algorithm by addressing some of its fundamental misconceptions, as perceived by Jacobson, and by being easier to manage.
In 2012, an implementation of CoDel was written by Dave Täht and Eric Dumazet for the Linux kernel and dual licensed under the GNU General Public License and the 3-clause BSD license. Dumazet's improvement on CoDel is called FQ-CoDel, standing for "Fair/Flow Queue CoDel"; it was first adopted as the standard AQM and packet scheduling solution in 2014 in the OpenWrt 14.07 release called "Barrier Breaker". From there, CoDel and FQ-CoDel have migrated into various downstream projects such as Tomato, dd-wrt, OPNsense and Ubiquiti's "Smart Queues" feature.
== Theory ==
CoDel is based on observations of packet behavior in packet-switched networks under the influence of data buffers. Some of these observations are about the fundamental nature of queueing and the causes of bufferbloat, others relate to weaknesses of alternative queue management algorithms. CoDel was developed as an attempt to address the problem of bufferbloat.
=== Bufferbloat ===
The flow of packets slows down while traveling through a network link between a fast and a slow network, especially at the start of a TCP session, when there is a sudden burst of packets and the slower network may not be able to accept the burst quickly enough. Buffers exist to ease this problem by giving the fast network a place to store packets to be read by the slower network at its own pace. In other words, buffers act like shock absorbers to convert bursty arrivals into smooth, steady departures. However, a buffer has limited capacity. The ideal buffer is sized so it can handle a sudden burst of communication and match the speed of that burst to the speed of the slower network. Ideally, the shock-absorbing situation is characterized by a temporary delay for packets in the buffer during the transmission burst, after which the delay rapidly disappears and the network reaches a balance in offering and handling packets.
The TCP congestion control algorithm relies on packet drops to determine the available bandwidth between two communicating devices. It speeds up the data transfer until packets start to drop, and then slows down the transmission rate. Ideally, it keeps speeding up and slowing down as it finds equilibrium at the speed of the link. For this to work, the packet drops must occur in a timely manner so that the algorithm can responsively select a suitable transfer speed. With packets held in an overly large buffer, the packets arrive at their destination with higher latency, but since no packets are dropped, TCP does not slow down. Under these conditions, TCP may even decide that the path of the connection has changed and repeat the search for a new equilibrium.
Having a big and constantly full buffer that causes increased transmission delays and reduced interactivity, especially when looking at two or more simultaneous transmissions over the same channel, is called bufferbloat. Available channel bandwidth can also end up being unused, as some fast destinations may not be reached due to buffers being clogged with data awaiting delivery to slow destinations.
=== Good and bad queues ===
CoDel distinguishes between two types of queue: A good queue is one that exhibits no bufferbloat. Communication bursts cause no more than a temporary increase in queue delay. The network link utilization is maximized. A bad queue exhibits bufferbloat. Communication bursts cause the buffer to fill up and stay filled, resulting in low utilization and a constantly high buffer delay. In order to be effective against bufferbloat, a solution in the form of an active queue management (AQM) algorithm must be able to recognize an occurrence of bufferbloat and react by deploying effective countermeasures.
Van Jacobson asserted in 2006 that existing algorithms have been using incorrect means of recognizing bufferbloat. Algorithms like RED measure the average queue length and consider it a case of bufferbloat if the average grows too large. Jacobson demonstrated in 2006 that this measurement is not a good metric, as the average queue length rises sharply in the case of a communications burst. The queue can then dissipate quickly (good queue) or become a standing queue (bad queue). Other factors in network traffic can also cause false positives or negatives, causing countermeasures to be deployed unnecessarily. Jacobson suggested that average queue length actually contains no information at all about packet demand or network load. He suggested that a better metric might be the minimum queue length during a sliding time window.
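The sliding-window minimum Jacobson proposed can be tracked in constant amortized time per sample using a monotonic deque. The following sketch is our own illustration of the metric (the function name and sample-based framing are not from CoDel itself):

```python
from collections import deque


def sliding_window_min(samples, window):
    """Minimum over a sliding window of the last `window` samples,
    computed in O(1) amortized time per sample with a monotonic
    deque; an illustrative sketch of tracking a minimum-delay
    metric over a recent time window."""
    dq, out = deque(), []  # dq holds candidate (index, value) pairs
    for i, v in enumerate(samples):
        while dq and dq[-1][1] >= v:
            dq.pop()                    # drop dominated candidates
        dq.append((i, v))
        if dq[0][0] <= i - window:
            dq.popleft()                # expire out-of-window minimum
        out.append(dq[0][1])
    return out
```

A good queue shows this minimum falling back quickly after a burst, while a standing (bad) queue keeps the windowed minimum persistently high.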
== Algorithm ==
Based on Jacobson's notion from 2006, CoDel was developed to manage queues under control of the minimum delay experienced by packets in the running buffer window. The goal is to keep this minimum delay below 5 milliseconds. If the minimum delay rises to too high a value, packets are dropped from the queue until the delay drops below the maximum level. Nichols and Jacobson cite several advantages to using nothing more than this metric:
CoDel is parameterless. One of the weaknesses in the RED algorithm (according to Jacobson) is that it is too difficult to configure, especially in an environment with dynamic link rates.
CoDel treats good queue and bad queue differently. A good queue has low delays by nature, so the management algorithm can ignore it, while a bad queue is subject to management intervention in the form of dropping packets.
CoDel works from a parameter that is determined entirely locally; it is independent of round-trip delays, link rates, traffic loads and other factors that cannot be controlled or predicted by the local buffer.
The local minimum delay can only be determined when a packet leaves the buffer, so no extra delay is needed to run the queue to collect statistics to manage the queue.
CoDel adapts to dynamically changing link rates with no negative impact on utilization.
The CoDel implementation is relatively simple and therefore can span the spectrum from low-end home routers to high-end routing solutions.
CoDel does nothing to manage the buffer if the minimum delay for the buffer window is below the maximum allowed value. It also does nothing if the buffer is relatively empty (if there are fewer than one MTU's worth of bytes in the buffer). If these conditions do not hold, then CoDel drops packets probabilistically.
The algorithm is independently computed at each network hop. The algorithm operates over an interval, initially 100 milliseconds. Per-packet queuing delay is monitored through the hop. As each packet is dequeued for forwarding, the queuing delay (amount of time the packet spent waiting in the queue) is calculated. The lowest queuing delay for the interval is stored. When the last packet of the interval is dequeued, if the lowest queuing delay for the interval is greater than 5 milliseconds, this single packet is dropped and the interval used for the next group of packets is shortened. If the lowest queuing delay for the interval is less than 5 milliseconds, the packet is forwarded and the interval is reset to 100 milliseconds.
When the interval is shortened, it is done in accordance with the inverse square root of the number of successive intervals in which packets were dropped due to excessive queuing delay. The sequence of intervals is 100, 100/√2, 100/√3, 100/√4, 100/√5, … milliseconds.
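The decision rule and the inverse-square-root interval law can be sketched in a few lines of Python. This is an illustrative simplification with our own names for the constants, not the reference pseudocode:

```python
import math

TARGET = 0.005    # 5 ms: acceptable standing (minimum) queue delay
INTERVAL = 0.100  # 100 ms: initial measurement interval


def control_step(min_delay, drop_count):
    """One end-of-interval decision: if even the minimum queuing
    delay seen over the interval exceeded the target, drop a packet
    and shorten the next interval by the inverse-square-root law;
    otherwise forward the packet and reset to the full interval."""
    if min_delay > TARGET:
        drop_count += 1
        return True, INTERVAL / math.sqrt(drop_count), drop_count
    return False, INTERVAL, 0
```

Successive bad intervals yield exactly the 100, 100/√2, 100/√3, … millisecond sequence above; one good interval resets the law.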
== Simulation results ==
CoDel has been tested in simulation tests by Nichols and Jacobson, at different MTUs and link rates and other variations of conditions. In general, results indicate:
In comparison to RED, CoDel keeps the packet delay closer to the target value across the full range of bandwidths (from 3 to 100 Mbit/s). The measured link utilizations are consistently near 100% of link bandwidth.
At lower MTUs, packet delays are lower than at higher MTUs. Higher MTUs result in good link utilization across the tested bandwidths, while lower MTUs result in good link utilization at lower bandwidths, degrading to fair utilization at high bandwidths.
Simulation was also performed by Greg White and Joey Padden at CableLabs.
== Implementation ==
A full implementation of CoDel was realized in May 2012 and made available as open-source software. It was implemented within the Linux kernel (starting with the 3.5 mainline). Dave Täht back-ported CoDel to Linux kernel 3.3 for the CeroWrt project, which among other things addresses bufferbloat, and where it was exhaustively tested. CoDel began to appear as an option in some proprietary/turnkey bandwidth management platforms in 2013. FreeBSD had CoDel integrated into the 11.x and 10.x code branches in 2016. An implementation has been distributed with OpenBSD since version 6.2.
== Derived algorithms ==
Fair/Flow Queue CoDel (FQ-CoDel; fq_codel in Linux code) adds flow queuing to CoDel so that it differentiates between multiple simultaneous connections and works fairly. It gives the first packet in each stream priority, so that small streams can start and finish quickly for better use of network resources. CoDel co-author Van Jacobson recommends the use of fq_codel over codel where it is available. FQ-CoDel is published as RFC 8290, written by T. Hoeiland-Joergensen, P. McKenney, D. Täht, J. Gettys, and E. Dumazet, all members of the "bufferbloat project".
Common Applications Kept Enhanced (CAKE; sch_cake in Linux code) is a combined traffic shaper and AQM algorithm presented by the bufferbloat project in 2018. It builds on the experience of using fq_codel with the HTB (Hierarchy Token Bucket) traffic shaper. It improves over the Linux htb+fq_codel implementation by reducing hash collisions between flows, reducing CPU utilization in traffic shaping, and in a few other ways.
In 2022, Dave Täht reviewed the state of fq_codel and sch_cake implementations in the wild. He found that while many systems have switched to either as the default AQM, several implementations have dubious deviations from the standard. For example, Apple's implementation of fq_codel (default in iOS) has a very large number of users but no "codel" component. Täht also notes the general lack of hardware offloading, made more important by the increase in network traffic brought by the COVID-19 pandemic.
== See also ==
Sliding window protocol
TCP tuning
== External links ==
CoDel pseudocode
Fundamental Progress Solving Bufferbloat
In computing, scheduling is the action of assigning resources to perform tasks. The resources may be processors, network links or expansion cards. The tasks may be threads, processes or data flows.
The scheduling activity is carried out by a mechanism called a scheduler. Schedulers are often designed so as to keep all computer resources busy (as in load balancing), allow multiple users to share system resources effectively, or to achieve a target quality-of-service.
Scheduling is fundamental to computation itself, and an intrinsic part of the execution model of a computer system; the concept of scheduling makes it possible to have computer multitasking with a single central processing unit (CPU).
== Goals ==
A scheduler may aim at one or more goals, for example:
maximizing throughput (the total amount of work completed per time unit);
minimizing wait time (time from work becoming ready until the first point it begins execution);
minimizing latency or response time (time from work becoming ready until it is finished in case of batch activity, or until the system responds and hands the first output to the user in case of interactive activity);
maximizing fairness (equal CPU time to each process, or more generally appropriate times according to the priority and workload of each process).
In practice, these goals often conflict (e.g. throughput versus latency), thus a scheduler will implement a suitable compromise. Preference is measured by any one of the concerns mentioned above, depending upon the user's needs and objectives.
In real-time environments, such as embedded systems for automatic control in industry (for example robotics), the scheduler also must ensure that processes can meet deadlines; this is crucial for keeping the system stable. Scheduled tasks can also be distributed to remote devices across a network and managed through an administrative back end.
== Types of operating system schedulers ==
The scheduler is an operating system module that selects the next jobs to be admitted into the system and the next process to run. Operating systems may feature up to three distinct scheduler types: a long-term scheduler (also known as an admission scheduler or high-level scheduler), a mid-term or medium-term scheduler, and a short-term scheduler. The names suggest the relative frequency with which their functions are performed.
=== Process scheduler ===
The process scheduler is a part of the operating system that decides which process runs at a certain point in time. It usually has the ability to pause a running process, move it to the back of the running queue and start a new process; such a scheduler is known as a preemptive scheduler, otherwise it is a cooperative scheduler.
We distinguish between long-term scheduling, medium-term scheduling, and short-term scheduling based on how often decisions must be made.
==== Long-term scheduling ====
The long-term scheduler, or admission scheduler, decides which jobs or processes are to be admitted to the ready queue (in main memory); that is, when an attempt is made to execute a program, its admission to the set of currently executing processes is either authorized or delayed by the long-term scheduler. Thus, this scheduler dictates what processes are to run on a system, the degree of concurrency to be supported at any one time – whether many or few processes are to be executed concurrently, and how the split between I/O-intensive and CPU-intensive processes is to be handled. The long-term scheduler is responsible for controlling the degree of multiprogramming.
In general, most processes can be described as either I/O-bound or CPU-bound. An I/O-bound process is one that spends more of its time doing I/O than it spends doing computations. A CPU-bound process, in contrast, generates I/O requests infrequently, using more of its time doing computations. It is important that a long-term scheduler selects a good process mix of I/O-bound and CPU-bound processes. If all processes are I/O-bound, the ready queue will almost always be empty, and the short-term scheduler will have little to do. On the other hand, if all processes are CPU-bound, the I/O waiting queue will almost always be empty, devices will go unused, and again the system will be unbalanced. The system with the best performance will thus have a combination of CPU-bound and I/O-bound processes. In modern operating systems, this is used to make sure that real-time processes get enough CPU time to finish their tasks.
Long-term scheduling is also important in large-scale systems such as batch processing systems, computer clusters, supercomputers, and render farms. For example, in concurrent systems, coscheduling of interacting processes is often required to prevent them from blocking due to waiting on each other. In these cases, special-purpose job scheduler software is typically used to assist these functions, in addition to any underlying admission scheduling support in the operating system.
Some operating systems only allow new tasks to be added if it is sure all real-time deadlines can still be met.
The specific heuristic algorithm used by an operating system to accept or reject new tasks is the admission control mechanism.
==== Medium-term scheduling ====
The medium-term scheduler temporarily removes processes from main memory and places them in secondary memory (such as a hard disk drive) or vice versa, which is commonly referred to as swapping out or swapping in (also incorrectly as paging out or paging in). The medium-term scheduler may decide to swap out a process that has not been active for some time, a process that has a low priority, a process that is page faulting frequently, or a process that is taking up a large amount of memory in order to free up main memory for other processes, swapping the process back in later when more memory is available, or when the process has been unblocked and is no longer waiting for a resource.
In many systems today (those that support mapping virtual address space to secondary storage other than the swap file), the medium-term scheduler may actually perform the role of the long-term scheduler, by treating binaries as swapped-out processes upon their execution. In this way, when a segment of the binary is required it can be swapped in on demand, or lazy loaded, also called demand paging.
==== Short-term scheduling ====
The short-term scheduler (also known as the CPU scheduler) decides which of the ready, in-memory processes is to be executed (allocated a CPU) after a clock interrupt, an I/O interrupt, an operating system call or another form of signal. Thus the short-term scheduler makes scheduling decisions much more frequently than the long-term or mid-term schedulers – a scheduling decision will at a minimum have to be made after every time slice, and these are very short. This scheduler can be preemptive, implying that it is capable of forcibly removing processes from a CPU when it decides to allocate that CPU to another process, or non-preemptive (also known as voluntary or co-operative), in which case the scheduler is unable to force processes off the CPU.
A preemptive scheduler relies upon a programmable interval timer which invokes an interrupt handler that runs in kernel mode and implements the scheduling function.
==== Dispatcher ====
Another component that is involved in the CPU-scheduling function is the dispatcher, which is the module that gives control of the CPU to the process selected by the short-term scheduler. It receives control in kernel mode as the result of an interrupt or system call. The functions of a dispatcher involve the following:
Context switches, in which the dispatcher saves the state (also known as context) of the process or thread that was previously running; the dispatcher then loads the initial or previously saved state of the new process.
Switching to user mode.
Jumping to the proper location in the user program to restart that program indicated by its new state.
The dispatcher should be as fast as possible since it is invoked during every process switch. During context switches, the processor is virtually idle for a fraction of time, so unnecessary context switches should be avoided. The time it takes for the dispatcher to stop one process and start another is known as the dispatch latency.
== Scheduling disciplines ==
A scheduling discipline (also called scheduling policy or scheduling algorithm) is an algorithm used for distributing resources among parties which simultaneously and asynchronously request them. Scheduling disciplines are used in routers (to handle packet traffic) as well as in operating systems (to share CPU time among both threads and processes), disk drives (I/O scheduling), printers (print spooler), most embedded systems, etc.
The main purposes of scheduling algorithms are to minimize resource starvation and to ensure fairness amongst the parties utilizing the resources. Scheduling deals with the problem of deciding which of the outstanding requests is to be allocated resources. There are many different scheduling algorithms. In this section, we introduce several of them.
In packet-switched computer networks and other statistical multiplexing, the notion of a scheduling algorithm is used as an alternative to first-come first-served queuing of data packets.
The simplest best-effort scheduling algorithms are round-robin, fair queuing (a max-min fair scheduling algorithm), proportional-fair scheduling and maximum throughput. If differentiated or guaranteed quality of service is offered, as opposed to best-effort communication, weighted fair queuing may be utilized.
In advanced packet radio wireless networks such as HSDPA (High-Speed Downlink Packet Access) 3.5G cellular system, channel-dependent scheduling may be used to take advantage of channel state information. If the channel conditions are favourable, the throughput and system spectral efficiency may be increased. In even more advanced systems such as LTE, the scheduling is combined by channel-dependent packet-by-packet dynamic channel allocation, or by assigning OFDMA multi-carriers or other frequency-domain equalization components to the users that best can utilize them.
=== First come, first served ===
First in, first out (FIFO), also known as first come, first served (FCFS), is the simplest scheduling algorithm. FIFO simply queues processes in the order that they arrive in the ready queue. This is commonly used for a task queue, for example as illustrated in this section.
Since context switches only occur upon process termination, and no reorganization of the process queue is required, scheduling overhead is minimal.
Throughput can be low, because long processes can be holding the CPU, causing the short processes to wait for a long time (known as the convoy effect).
No starvation, because each process gets a chance to be executed after a definite time.
Turnaround time, waiting time and response time depend on the order of their arrival and can be high for the same reasons above.
No prioritization occurs, thus this system has trouble meeting process deadlines.
The lack of prioritization means that as long as every process eventually completes, there is no starvation. In an environment where some processes might not complete, there can be starvation.
It is based on a simple FIFO queue.
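The waiting-time behaviour and the convoy effect described above can be illustrated with a short calculation. This is a sketch under the simplifying assumption that all processes arrive at time 0 in list order:

```python
def fcfs_metrics(bursts):
    """Waiting and turnaround times under first-come first-served,
    assuming all processes arrive at time 0 in list order."""
    waiting, turnaround, clock = [], [], 0
    for burst in bursts:
        waiting.append(clock)        # time spent behind earlier jobs
        clock += burst
        turnaround.append(clock)     # completion time from arrival
    return waiting, turnaround


# A long job ahead of short ones illustrates the convoy effect:
w, t = fcfs_metrics([24, 3, 3])
# w == [0, 24, 27]: the two short jobs wait behind the long one
```

Reordering the same jobs shortest-first would cut the total waiting time sharply, which is exactly the motivation for SJF/SRTF below.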
=== Priority scheduling ===
Earliest deadline first (EDF) or least time to go is a dynamic scheduling algorithm used in real-time operating systems to place processes in a priority queue. Whenever a scheduling event occurs (a task finishes, new task is released, etc.), the queue will be searched for the process closest to its deadline, which will be the next to be scheduled for execution.
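The deadline-ordered priority queue described above maps naturally onto a min-heap. A minimal sketch follows; the task names and tuple layout are our own illustration:

```python
import heapq


def edf_schedule(tasks):
    """Earliest-deadline-first order for tasks given as
    (deadline, name) pairs, using a min-heap as the priority
    queue: the task closest to its deadline is popped first."""
    heap = list(tasks)
    heapq.heapify(heap)
    order = []
    while heap:
        _deadline, name = heapq.heappop(heap)
        order.append(name)
    return order


edf_schedule([(30, "log"), (5, "sensor"), (12, "control")])
# → ["sensor", "control", "log"]
```

In a real real-time system the heap would be re-examined at every scheduling event (task release or completion) rather than drained once.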
=== Shortest remaining time first ===
Similar to shortest job first (SJF). With this strategy the scheduler arranges processes with the least estimated processing time remaining to be next in the queue. This requires advanced knowledge or estimations about the time required for a process to complete.
If a shorter process arrives during another process' execution, the currently running process is interrupted (known as preemption), dividing that process into two separate computing blocks. This creates excess overhead through additional context switching. The scheduler must also place each incoming process into a specific place in the queue, creating additional overhead.
This algorithm is designed for maximum throughput in most scenarios.
Waiting time and response time increase as the process's computational requirements increase. Since turnaround time is based on waiting time plus processing time, longer processes are significantly affected by this. Overall waiting time is smaller than under FIFO, however, since no process has to wait for the termination of the longest process.
No particular attention is given to deadlines; the programmer can only attempt to make processes with deadlines as short as possible.
Starvation is possible, especially in a busy system with many small processes being run.
To use this policy, there should be at least two processes of different priority.
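A unit-step simulation makes the preemption behaviour concrete. This is an illustrative sketch assuming integer arrival and burst times; the data layout and names are ours:

```python
def srtf(arrivals):
    """Shortest-remaining-time-first over unit time steps.
    `arrivals` is a list of (arrival_time, burst, name); returns
    each process's completion time. Illustrative sketch only."""
    remaining = {name: burst for _, burst, name in arrivals}
    done, clock = {}, 0
    while remaining:
        ready = [(remaining[n], n) for a, _, n in arrivals
                 if a <= clock and n in remaining]
        if not ready:
            clock += 1              # idle: nothing has arrived yet
            continue
        _, name = min(ready)        # preempt: least time remaining
        remaining[name] -= 1
        clock += 1
        if remaining[name] == 0:
            del remaining[name]
            done[name] = clock
    return done
```

For example, with A arriving at t=0 needing 5 units and B arriving at t=1 needing 2, B preempts A at t=1 and finishes at t=3, after which A resumes and finishes at t=7.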
=== Fixed-priority pre-emptive scheduling ===
The operating system assigns a fixed-priority rank to every process, and the scheduler arranges the processes in the ready queue in order of their priority. Lower-priority processes get interrupted by incoming higher-priority processes.
Overhead is not minimal, nor is it significant.
FPPS has no particular advantage in terms of throughput over FIFO scheduling.
If the number of rankings is limited, it can be characterized as a collection of FIFO queues, one for each priority ranking. Processes in lower-priority queues are selected only when all of the higher-priority queues are empty.
Waiting time and response time depend on the priority of the process. Higher-priority processes have smaller waiting and response times.
Deadlines can be met by giving processes with deadlines a higher priority.
Starvation of lower-priority processes is possible with large numbers of high-priority processes queuing for CPU time.
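The "collection of FIFO queues, one per priority ranking" characterization above can be sketched directly. This is a toy illustration, not an OS scheduler; the class and method names are ours:

```python
from collections import deque


class FixedPriorityScheduler:
    """Fixed-priority scheduling as a collection of FIFO queues,
    one per priority rank (0 = highest priority)."""

    def __init__(self, levels):
        self.queues = [deque() for _ in range(levels)]

    def enqueue(self, process, priority):
        self.queues[priority].append(process)

    def pick_next(self):
        # Scan ranks from highest priority down; a lower-priority
        # queue is consulted only when every higher one is empty.
        for q in self.queues:
            if q:
                return q.popleft()
        return None  # nothing runnable
```

The starvation risk noted above is visible here: as long as rank 0 keeps receiving processes, ranks 1 and below are never consulted.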
=== Round-robin scheduling ===
The scheduler assigns a fixed time unit per process and cycles through them. If a process completes within its time slice, it terminates; otherwise, it is rescheduled after every other process has been given a chance.
RR scheduling involves extensive overhead, especially with a small time unit.
Balanced throughput between FCFS/FIFO and SJF/SRTF, shorter jobs are completed faster than in FIFO and longer processes are completed faster than in SJF.
Good average response time; waiting time depends on the number of processes, not the average process length.
Because of high waiting times, deadlines are rarely met in a pure RR system.
Starvation can never occur, since no priority is given. Order of time unit allocation is based upon process arrival time, similar to FIFO.
If the time slice is large, round-robin degenerates into FCFS/FIFO; if it is short, it approximates SJF/SRTF.
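The cycling behaviour above can be sketched as a small simulation, assuming all processes arrive at time 0; the names are ours:

```python
from collections import deque


def round_robin(bursts, quantum):
    """Round-robin over (name, burst) pairs with a fixed time
    slice; returns each process's completion time, assuming all
    processes arrive at time 0. Illustrative sketch only."""
    queue = deque(bursts)
    clock, finish = 0, {}
    while queue:
        name, left = queue.popleft()
        run = min(quantum, left)
        clock += run
        if left > run:
            queue.append((name, left - run))  # back of the queue
        else:
            finish[name] = clock
    return finish


round_robin([("A", 5), ("B", 3)], quantum=2)
# → {"B": 7, "A": 8}
```

Setting the quantum to 5 or more in this example reproduces FCFS order, illustrating the degenerate case noted above.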
=== Multilevel queue scheduling ===
This is used for situations in which processes are easily divided into different groups. For example, a common division is made between foreground (interactive) processes and background (batch) processes. These two types of processes have different response-time requirements and so may have different scheduling needs. It is very useful for shared memory problems.
=== Work-conserving schedulers ===
A work-conserving scheduler is a scheduler that always tries to keep the scheduled resources busy, if there are submitted jobs ready to be scheduled. In contrast, a non-work conserving scheduler is a scheduler that, in some cases, may leave the scheduled resources idle despite the presence of jobs ready to be scheduled.
=== Scheduling optimization problems ===
There are several scheduling problems in which the goal is to decide which job goes to which station at what time, such that the total makespan is minimized:
Job-shop scheduling – there are n jobs and m identical stations. Each job should be executed on a single machine. This is usually regarded as an online problem.
Open-shop scheduling – there are n jobs and m different stations. Each job should spend some time at each station, in a free order.
Flow-shop scheduling – there are n jobs and m different stations. Each job should spend some time at each station, in a pre-determined order.
=== Manual scheduling ===
A very common method in embedded systems is to schedule jobs manually. This can, for example, be done in a time-multiplexed fashion. Sometimes the kernel is divided into three or more parts: manual scheduling, preemptive, and interrupt level. Exact methods for scheduling jobs are often proprietary.
No resource starvation problems
Very high predictability; allows implementation of hard real-time systems
Almost no overhead
May not be optimal for all applications
Effectiveness is completely dependent on the implementation
=== Choosing a scheduling algorithm ===
When designing an operating system, a programmer must consider which scheduling algorithm will perform best for the use the system is going to see. There is no universal best scheduling algorithm, and many operating systems use extended versions or combinations of the scheduling algorithms above.
For example, Windows NT/XP/Vista uses a multilevel feedback queue, a combination of fixed-priority preemptive scheduling, round-robin, and first in, first out algorithms. In this system, threads can dynamically increase or decrease in priority depending on whether a thread has already been serviced or has been waiting extensively. Every priority level is represented by its own queue, with round-robin scheduling among the high-priority threads and FIFO among the lower-priority ones. In this sense, response time is short for most threads, and short but critical system threads get completed very quickly. Since threads can only use one time unit of the round-robin in the highest-priority queue, starvation can be a problem for longer high-priority threads.
== Operating system process scheduler implementations ==
The algorithm used may be as simple as round-robin in which each process is given equal time (for instance 1 ms, usually between 1 ms and 100 ms) in a cycling list. So, process A executes for 1 ms, then process B, then process C, then back to process A.
More advanced algorithms take into account process priority, or the importance of the process. This allows some processes to use more time than other processes. The kernel always uses whatever resources it needs to ensure proper functioning of the system, and so can be said to have infinite priority. In SMP systems, processor affinity is considered to increase overall system performance, even if it may cause a process itself to run more slowly. This generally improves performance by reducing cache thrashing.
=== OS/360 and successors ===
IBM OS/360 was available with three different schedulers. The differences were such that the variants were often considered three different operating systems:
The Single Sequential Scheduler option, also known as the Primary Control Program (PCP) provided sequential execution of a single stream of jobs.
The Multiple Sequential Scheduler option, known as Multiprogramming with a Fixed Number of Tasks (MFT) provided execution of multiple concurrent jobs. Execution was governed by a priority which had a default for each stream or could be requested separately for each job. MFT version II added subtasks (threads), which executed at a priority based on that of the parent job. Each job stream defined the maximum amount of memory which could be used by any job in that stream.
The Multiple Priority Schedulers option, or Multiprogramming with a Variable Number of Tasks (MVT), featured subtasks from the start; each job requested the priority and memory it required before execution.
Later virtual storage versions of MVS added a Workload Manager feature to the scheduler, which schedules processor resources according to an elaborate scheme defined by the installation.
=== Windows ===
Very early MS-DOS and Microsoft Windows systems were non-multitasking, and as such did not feature a scheduler. Windows 3.1x used a non-preemptive scheduler, meaning that it did not interrupt programs. It relied on the program to end or tell the OS that it didn't need the processor so that it could move on to another process. This is usually called cooperative multitasking. Windows 95 introduced a rudimentary preemptive scheduler; however, for legacy support opted to let 16-bit applications run without preemption.
Windows NT-based operating systems use a multilevel feedback queue. 32 priority levels are defined, 0 through 31, with priorities 0 through 15 being normal priorities and priorities 16 through 31 being soft real-time priorities, requiring privileges to assign. 0 is reserved for the operating system. User interfaces and APIs work with priority classes for the process and the threads in the process, which are then combined by the system into the absolute priority level.
The kernel may change the priority level of a thread depending on its I/O and CPU usage and whether it is interactive (i.e. accepts and responds to input from humans), raising the priority of interactive and I/O bound processes and lowering that of CPU bound processes, to increase the responsiveness of interactive applications. The scheduler was modified in Windows Vista to use the cycle counter register of modern processors to keep track of exactly how many CPU cycles a thread has executed, rather than just using an interval-timer interrupt routine. Vista also uses a priority scheduler for the I/O queue so that disk defragmenters and other such programs do not interfere with foreground operations.
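A multilevel feedback queue of the kind described above can be sketched as follows (a toy model, not the Windows implementation; the level count, thread names, and the one-level boost/demotion rule are illustrative assumptions):

```python
from collections import deque

class MLFQ:
    """Toy multilevel feedback queue: higher level = higher priority.
    A thread that burns its full quantum is demoted one level (CPU bound);
    one that blocks before the quantum expires is boosted one level (interactive)."""

    def __init__(self, levels=16):
        self.levels = levels
        self.queues = [deque() for _ in range(levels)]

    def add(self, thread, level):
        self.queues[level].append(thread)

    def pick(self):
        # Scan from the highest-priority queue downwards.
        for level in range(self.levels - 1, -1, -1):
            if self.queues[level]:
                return level, self.queues[level].popleft()
        return None

    def requeue(self, thread, level, used_full_quantum):
        new = max(0, level - 1) if used_full_quantum else min(self.levels - 1, level + 1)
        self.queues[new].append(thread)
        return new

mlfq = MLFQ()
mlfq.add("ui", 12)
mlfq.add("batch", 5)
print(mlfq.pick())                   # (12, 'ui') - highest priority runs first
print(mlfq.requeue("ui", 12, True))  # 11 - used its whole quantum, so it is demoted
```

The feedback is in `requeue`: CPU-bound threads drift down toward the batch levels while interactive threads drift up, which is the behaviour the prose above describes.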
=== Classic Mac OS and macOS ===
Mac OS 9 uses cooperative scheduling for threads, where one process controls multiple cooperative threads, and also provides preemptive scheduling for multiprocessing tasks. The kernel schedules multiprocessing tasks using a preemptive scheduling algorithm. All Process Manager processes run within a special multiprocessing task, called the blue task. Those processes are scheduled cooperatively, using a round-robin scheduling algorithm; a process yields control of the processor to another process by explicitly calling a blocking function such as WaitNextEvent. Each process has its own copy of the Thread Manager that schedules that process's threads cooperatively; a thread yields control of the processor to another thread by calling YieldToAnyThread or YieldToThread.
macOS uses a multilevel feedback queue, with four priority bands for threads – normal, system high priority, kernel mode only, and real-time. Threads are scheduled preemptively; macOS also supports cooperatively scheduled threads in its implementation of the Thread Manager in Carbon.
=== AIX ===
In AIX Version 4 there are three possible values for thread scheduling policy:
First In, First Out: Once a thread with this policy is scheduled, it runs to completion unless it is blocked, it voluntarily yields control of the CPU, or a higher-priority thread becomes dispatchable. Only fixed-priority threads can have a FIFO scheduling policy.
Round Robin: This is similar to the AIX Version 3 scheduler round-robin scheme based on 10 ms time slices. When a RR thread has control at the end of the time slice, it moves to the tail of the queue of dispatchable threads of its priority. Only fixed-priority threads can have a Round Robin scheduling policy.
OTHER: This policy is defined by POSIX 1003.4a as implementation-defined. In AIX Version 4, this policy is defined to be equivalent to RR, except that it applies to threads with non-fixed priority. The recalculation of the running thread's priority value at each clock interrupt means that a thread may lose control because its priority value has risen above that of another dispatchable thread. This is the AIX Version 3 behavior.
Threads are primarily of interest for applications that currently consist of several asynchronous processes. These applications might impose a lighter load on the system if converted to a multithreaded structure.
AIX 5 implements the following scheduling policies: FIFO, round robin, and a fair round robin. The FIFO policy has three different implementations: FIFO, FIFO2, and FIFO3. The round robin policy is named SCHED_RR in AIX, and the fair round robin is called SCHED_OTHER.
=== Linux ===
==== Linux 1.2 ====
Linux 1.2 used a round-robin scheduling policy.
==== Linux 2.2 ====
Linux 2.2 added scheduling classes and support for symmetric multiprocessing (SMP).
==== Linux 2.4 ====
In Linux 2.4, an O(n) scheduler with a multilevel feedback queue with priority levels ranging from 0 to 140 was used; 0–99 are reserved for real-time tasks and 100–140 are considered nice task levels. For real-time tasks, the time quantum for switching processes was approximately 200 ms, and for nice tasks approximately 10 ms. The scheduler ran through the run queue of all ready processes, letting the highest-priority processes go first and run through their time slices, after which they were placed in an expired queue. When the active queue was empty, the expired queue became the active queue and vice versa.
However, some enterprise Linux distributions such as SUSE Linux Enterprise Server replaced this scheduler with a backport of the O(1) scheduler (which was maintained by Alan Cox in his Linux 2.4-ac Kernel series) to the Linux 2.4 kernel used by the distribution.
==== Linux 2.6.0 to Linux 2.6.22 ====
In versions 2.6.0 to 2.6.22, the kernel used an O(1) scheduler developed by Ingo Molnár and many other kernel developers during the Linux 2.5 development. During much of this time frame, Con Kolivas developed patch sets which improved interactivity with this scheduler or even replaced it with his own schedulers.
==== Linux 2.6.23 to Linux 6.5 ====
Con Kolivas' work, most significantly his implementation of fair scheduling named Rotating Staircase Deadline (RSDL), inspired Ingo Molnár to develop the Completely Fair Scheduler (CFS) as a replacement for the earlier O(1) scheduler, crediting Kolivas in his announcement. CFS is the first implementation of a fair queuing process scheduler widely used in a general-purpose operating system.
The CFS uses a well-studied, classic scheduling algorithm called fair queuing originally invented for packet networks. Fair queuing had been previously applied to CPU scheduling under the name stride scheduling. The fair queuing CFS scheduler has a scheduling complexity of O(log N), where N is the number of tasks in the runqueue. Choosing a task can be done in constant time, but reinserting a task after it has run requires O(log N) operations, because the run queue is implemented as a red–black tree.
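The core idea, always running the task with the least virtual runtime and paying O(log N) to reinsert it, can be sketched with a binary heap standing in for the red–black tree (the weights and quantum below are illustrative assumptions, not CFS's actual nice-to-weight tables):

```python
import heapq

def fair_schedule(weights, quantum, steps):
    """Repeatedly run the task with the smallest virtual runtime.
    Each run charges quantum / weight, so a task with twice the weight
    receives roughly twice the CPU. Pop + push is O(log N) per decision."""
    heap = [(0.0, name) for name in sorted(weights)]
    heapq.heapify(heap)
    trace = []
    for _ in range(steps):
        vruntime, name = heapq.heappop(heap)
        trace.append(name)
        # Reinsertion is the O(log N) step, like rebalancing the red-black tree.
        heapq.heappush(heap, (vruntime + quantum / weights[name], name))
    return trace

trace = fair_schedule({"a": 2, "b": 1}, quantum=1.0, steps=6)
print(trace)  # task 'a' (weight 2) runs twice as often as 'b' (weight 1)
```

Over six quanta, task "a" accumulates four slots and "b" two, matching the 2:1 weight ratio, which is the fairness property the scheduler enforces.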
The Brain Fuck Scheduler, also created by Con Kolivas, is an alternative to the CFS.
==== Linux 6.6 ====
In 2023, Peter Zijlstra proposed replacing CFS with an earliest eligible virtual deadline first scheduling (EEVDF) process scheduler. The aim was to remove the need for CFS latency nice patches.
==== Linux 6.12 ====
Linux 6.12 added support for userspace scheduler extensions, also known as sched_ext. These schedulers can be installed to replace the default scheduler.
=== FreeBSD ===
FreeBSD uses a multilevel feedback queue with priorities ranging from 0–255. 0–63 are reserved for interrupts, 64–127 for the top half of the kernel, 128–159 for real-time user threads, 160–223 for time-shared user threads, and 224–255 for idle user threads. Also, like Linux, it uses the active queue setup, but it also has an idle queue.
=== NetBSD ===
NetBSD uses a multilevel feedback queue with priorities ranging from 0–223. 0–63 are reserved for time-shared threads (default, SCHED_OTHER policy), 64–95 for user threads which entered kernel space, 96–127 for kernel threads, 128–191 for user real-time threads (SCHED_FIFO and SCHED_RR policies), and 192–223 for software interrupts.
=== Solaris ===
Solaris uses a multilevel feedback queue with priorities ranging between 0 and 169. Priorities 0–59 are reserved for time-shared threads, 60–99 for system threads, 100–159 for real-time threads, and 160–169 for low priority interrupts. Unlike Linux, when a process is done using its time quantum, it is given a new priority and put back in the queue. Solaris 9 introduced two new scheduling classes, namely the fixed-priority class and the fair share class. The threads with fixed priority have the same priority range as that of the time-sharing class, but their priorities are not dynamically adjusted. The fair scheduling class uses CPU shares to prioritize threads for scheduling decisions. CPU shares indicate the entitlement to CPU resources. They are allocated to a set of processes, which are collectively known as a project.
=== Summary ===
== See also ==
== Notes ==
== References ==
== Further reading ==
Operating Systems: Three Easy Pieces by Remzi H. Arpaci-Dusseau and Andrea C. Arpaci-Dusseau. Arpaci-Dusseau Books, 2014. Relevant chapters: Scheduling: Introduction Multi-level Feedback Queue Proportional-share Scheduling Multiprocessor Scheduling
Brief discussion of Job Scheduling algorithms
Understanding the Linux Kernel: Chapter 10 Process Scheduling
Kerneltrap: Linux kernel scheduler articles
AIX CPU monitoring and tuning
Josh Aas' introduction to the Linux 2.6.8.1 CPU scheduler implementation
Peter Brucker, Sigrid Knust. Complexity results for scheduling problems [2]
TORSCHE Scheduling Toolbox for Matlab is a toolbox of scheduling and graph algorithms.
A survey on cellular networks packet scheduling
Large-scale cluster management at Google with Borg | Wikipedia/Scheduling_algorithm |
In computer networking, integrated services or IntServ is an architecture that specifies the elements to guarantee quality of service (QoS) on networks. IntServ can for example be used to allow video and sound to reach the receiver without interruption.
IntServ specifies a fine-grained QoS system, which is often contrasted with DiffServ's coarse-grained control system.
Under IntServ, every router in the system implements IntServ, and every application that requires some kind of QoS guarantee has to make an individual reservation. Flow specs describe what the reservation is for, while RSVP is the underlying mechanism to signal it across the network.
== Flow specs ==
There are two parts to a flow spec:
What does the traffic look like? Done in the Traffic SPECification part, also known as TSPEC.
What guarantees does it need? Done in the service Request SPECification part, also known as RSPEC.
TSPECs include token bucket algorithm parameters. The idea is that there is a token bucket which slowly fills up with tokens, arriving at a constant rate. Every packet which is sent requires a token, and if there are no tokens, then it cannot be sent. Thus, the rate at which tokens arrive dictates the average rate of traffic flow, while the depth of the bucket dictates how 'bursty' the traffic is allowed to be.
TSPECs typically just specify the token rate and the bucket depth. For example, a video with a refresh rate of 75 frames per second, with each frame taking 10 packets, might specify a token rate of 750 Hz, and a bucket depth of only 10. The bucket depth would be sufficient to accommodate the 'burst' associated with sending an entire frame all at once. On the other hand, a conversation would need a lower token rate, but a much higher bucket depth. This is because there are often pauses in conversations, so they can make do with fewer tokens by not sending the gaps between words and sentences. However, this means the bucket depth needs to be increased to compensate for the traffic being burstier.
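The TSPEC behaviour can be sketched directly (a toy conformance check, reusing the 750 Hz token rate and depth-10 bucket from the video example above; the bucket is assumed to start full):

```python
def conforming(arrival_times, token_rate, bucket_depth):
    """Token bucket check: tokens accrue at `token_rate` per second, capped at
    `bucket_depth`; each packet consumes one token or is marked nonconforming."""
    tokens, last = float(bucket_depth), 0.0  # bucket starts full
    results = []
    for t in arrival_times:
        # Refill for the time elapsed since the previous packet, up to the cap.
        tokens = min(bucket_depth, tokens + (t - last) * token_rate)
        last = t
        if tokens >= 1.0:
            tokens -= 1.0
            results.append(True)
        else:
            results.append(False)
    return results

# A 10-packet video frame sent as one burst fits the depth-10 bucket;
# an 11th packet in the same instant finds the bucket empty.
print(conforming([0.0] * 11, token_rate=750, bucket_depth=10))
```

The token rate bounds the average flow while the depth bounds the burst, exactly the two TSPEC parameters described above.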
RSPECs specify what requirements there are for the flow: it can be normal internet 'best effort', in which case no reservation is needed. This setting is likely to be used for webpages, FTP, and similar applications. The 'Controlled Load' setting mirrors the performance of a lightly loaded network: there may be occasional glitches when two people access the same resource by chance, but generally both delay and drop rate are fairly constant at the desired rate. This setting is likely to be used by soft QoS applications. The 'Guaranteed' setting gives an absolutely bounded service, where the delay is promised to never go above a desired amount, and packets never dropped, provided the traffic stays within spec.
== RSVP ==
The Resource Reservation Protocol (RSVP) is described in RFC 2205. All machines on the network capable of sending QoS data send a PATH message every 30 seconds, which spreads out through the networks. Those who want to listen to them send a corresponding RESV (short for "Reserve") message which then traces the path backwards to the sender. The RESV message contains the flow specs.
The routers between the sender and listener have to decide if they can support the reservation being requested, and, if they cannot, they send a reject message to let the listener know about it. Otherwise, once they accept the reservation they have to carry the traffic.
The routers then store the nature of the flow, and also police it. This is all done in soft state, so if nothing is heard for a certain length of time, then the reader will time out and the reservation will be cancelled. This solves the problem if either the sender or the receiver crash or are shut down incorrectly without first cancelling the reservation. The individual routers may, at their option, police the traffic to check that it conforms to the flow specs.
== Problems ==
In order for IntServ to work, all routers along the traffic path must support it. Furthermore, a large amount of per-flow state must be stored in each router. As a result, IntServ works on a small scale, but as the system scales up to larger networks or the Internet, it becomes resource intensive to keep track of all of the reservations.
One way to solve the scalability problem is by using a multi-level approach, where per-microflow resource reservation (such as resource reservation for individual users) is done in the edge network, while in the core network resources are reserved for aggregate flows only. The routers that lie between these different levels must adjust the amount of aggregate bandwidth reserved from the core network so that the reservation requests for individual flows from the edge network can be better satisfied.
== References ==
== External links ==
RFC 1633 - Integrated Services in the Internet Architecture: an Overview
RFC 2211 - Specification of the Controlled-Load Network Element Service
RFC 2212 - Specification of Guaranteed Quality of Service
RFC 2215 - General Characterization Parameters for Integrated Service Network Elements
RFC 2205 - Resource ReSerVation Protocol (RSVP)
Cisco.com, Cisco Whitepaper about IntServ and DiffServ | Wikipedia/Integrated_services |
In packet switching networks, traffic flow, packet flow or network flow is a sequence of packets from a source computer to a destination, which may be another host, a multicast group, or a broadcast domain. RFC 2722 defines traffic flow as "an artificial logical equivalent to a call or connection." RFC 3697 defines traffic flow as "a sequence of packets sent from a particular source to a particular unicast, anycast, or multicast destination that the source desires to label as a flow. A flow could consist of all packets in a specific transport connection or a media stream. However, a flow is not necessarily 1:1 mapped to a transport connection." Flow is also defined in RFC 3917 as "a set of IP packets passing an observation point in the network during a certain time interval."
Packet flow temporal efficiency can be affected by one-way delay (OWD) that is described as a combination of the following components:
Processing delay (the time taken to process a packet in a network node)
Queuing delay (the time a packet waits in a queue until it can be transmitted)
Transmission delay (the time necessary to push all of the packet's bits onto the wire)
Propagation delay (the time it takes the head of the signal to travel from the sender to the receiver)
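The four components simply add up. A minimal sketch (the 2×10^8 m/s propagation speed, roughly that of light in fiber, and the example link figures are assumptions for illustration):

```python
def one_way_delay(packet_bits, bandwidth_bps, distance_m,
                  processing_s=0.0, queuing_s=0.0, propagation_mps=2e8):
    """One-way delay as the sum of its four components."""
    transmission = packet_bits / bandwidth_bps  # pushing all bits onto the wire
    propagation = distance_m / propagation_mps  # signal travel time over the link
    return processing_s + queuing_s + transmission + propagation

# A 1500-byte (12000-bit) packet on a 100 Mbit/s link over 1000 km:
# 120 us transmission + 5 ms propagation = 5.12 ms (with no processing/queuing)
print(one_way_delay(12000, 100e6, 1_000_000))
```

Note how the dominant term shifts: on long links propagation dominates, while on slow links transmission does.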
== Utility for network administration ==
Packets from one flow need to be handled differently from others, by means of separate queues in switches, routers and network adapters, to achieve traffic shaping, policing, fair queueing or quality of service. It is also a concept used in Queueing Network Analyzers (QNAs) or in packet tracing.
Applied to Internet routers, a flow may be a host-to-host communication path, or a socket-to-socket communication identified by a unique combination of source and destination addresses and port numbers, together with transport protocol (for example, UDP or TCP). In the TCP case, a flow may be a virtual circuit, also known as a virtual connection or a byte stream.
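Grouping packets by that unique combination of addresses, ports, and transport protocol is straightforward (the packet records below are made-up illustrations):

```python
from collections import Counter

def group_flows(packets):
    """Bucket packets by the classic 5-tuple: source and destination address,
    source and destination port, and transport protocol."""
    flows = Counter()
    for p in packets:
        key = (p["src"], p["sport"], p["dst"], p["dport"], p["proto"])
        flows[key] += 1
    return flows

packets = [
    {"src": "10.0.0.1", "sport": 40000, "dst": "10.0.0.2", "dport": 80, "proto": "TCP"},
    {"src": "10.0.0.1", "sport": 40000, "dst": "10.0.0.2", "dport": 80, "proto": "TCP"},
    {"src": "10.0.0.3", "sport": 53124, "dst": "10.0.0.2", "dport": 53, "proto": "UDP"},
]
print(group_flows(packets))  # two flows: the TCP flow has 2 packets, the UDP flow 1
```

A switch or router classifying traffic for separate queues does the equivalent of this lookup on every arriving packet.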
In packet switches, the flow may be identified by IEEE 802.1Q Virtual LAN tagging in Ethernet networks, or by a label-switched path in MPLS tag switching.
Packet flow can be represented as a path in a network to model network performance. For example, a water flow network can be used to conceptualize packet flow. Communication channels can be thought of as pipes, with the pipe capacity corresponding to bandwidth and flows corresponding to data throughput. This visualization can help to understand bottlenecks, queuing, and the unique requirements of tailored systems.
== See also ==
Argus – Audit Record Generation and Utilization System
Cisco NetFlow
Dataflow (software engineering)
Data stream
Flow control
IP Flow Information Export
Stream (computing)
Telecommunication circuit
Traffic generation model
== References == | Wikipedia/Traffic_flow_(computer_networking) |
The generic cell rate algorithm (GCRA) is a leaky bucket-type scheduling algorithm for the network scheduler that is used in Asynchronous Transfer Mode (ATM) networks. It is used to measure the timing of cells on virtual channels (VCs) and/or virtual paths (VPs) against bandwidth and jitter limits contained in a traffic contract for the VC or VP to which the cells belong. Cells that do not conform to the limits given by the traffic contract may then be re-timed (delayed) in traffic shaping, or may be dropped (discarded) or reduced in priority (demoted) in traffic policing. Nonconforming cells that are reduced in priority may then be dropped, in preference to higher priority cells, by downstream components in the network that are experiencing congestion. Alternatively they may reach their destination (VC or VP termination) if there is enough capacity for them, despite them being excess cells as far as the contract is concerned: see priority control.
The GCRA is given as the reference for checking the traffic on connections in the network, i.e. usage/network parameter control (UPC/NPC) at user–network interfaces (UNI) or inter-network interfaces or network-network interfaces (INI/NNI). It is also given as the reference for the timing of cells transmitted (ATM PDU Data_Requests) onto an ATM network by a network interface card (NIC) in a host, i.e. on the user side of the UNI. This ensures that cells are not then discarded by UPC/NPC in the network, i.e. on the network side of the UNI. However, as the GCRA is only given as a reference, network providers and users may use any other algorithm that gives the same result.
== Description of the GCRA ==
The GCRA is described by the ATM Forum in its User-Network Interface (UNI) and by the ITU-T in recommendation I.371 Traffic control and congestion control in B-ISDN. Both sources describe the GCRA in two equivalent ways: as a virtual scheduling algorithm and as a continuous state leaky bucket algorithm (figure 1).
=== Leaky bucket description ===
The description in terms of the leaky bucket algorithm may be the easier of the two to understand from a conceptual perspective, as it is based on a simple analogy of a bucket with a leak: see figure 1 on the leaky bucket page. However, there has been confusion in the literature over the application of the leaky bucket analogy to produce an algorithm, which has crossed over to the GCRA. The GCRA should be considered as a version of the leaky bucket as a meter rather than the leaky bucket as a queue.
However, while there are possible advantages in understanding this leaky bucket description, it does not necessarily result in the best (fastest) code if implemented directly. This is evidenced by the relative number of actions to be performed in the flow diagrams for the two descriptions (figure 1).
The description in terms of the continuous state leaky bucket algorithm is given by the ITU-T as follows: "The continuous-state leaky bucket can be viewed as a finite capacity bucket whose real-valued content drains out at a continuous rate of 1 unit of content per time unit and whose content is increased by the increment T for each conforming cell... If at a cell arrival the content of the bucket is less than or equal to the limit value τ, then the cell is conforming; otherwise, the cell is non-conforming. The capacity of the bucket (the upper bound of the counter) is (T + τ)". It is worth noting that because the leak is one unit of content per unit time, the increment for each cell T and the limit value τ are in units of time.
Considering the flow diagram of the continuous state leaky bucket algorithm, in which T is the emission interval and τ is the limit value: What happens when a cell arrives is that the state of the bucket is calculated from its state when the last conforming cell arrived, X, and how much has leaked out in the interval, ta – LCT. This current bucket value is then stored in X' and compared with the limit value τ. If the value in X' is not greater than τ, the cell did not arrive too early and so conforms to the contract parameters; if the value in X' is greater than τ, then it does not conform. If it conforms because it was late, i.e. the bucket is empty (X' ≤ 0), X is set to T; if it was early but not too early (0 < X' ≤ τ), X is set to X' + T.
Thus the flow diagram mimics the leaky bucket analogy (used as a meter) directly, with X and X' acting as the analogue of the bucket.
=== Virtual scheduling description ===
The virtual scheduling algorithm, while not so obviously related to such an easily accessible analogy as the leaky bucket, gives a clearer understanding of what the GCRA does and how it may be best implemented. As a result, direct implementation of this version can result in more compact, and thus faster, code than a direct implementation of the leaky bucket description.
The description in terms of the virtual scheduling algorithm is given by the ITU-T as follows: "The virtual scheduling algorithm updates a Theoretical Arrival Time (TAT), which is the 'nominal' arrival time of the cell assuming cells are sent equally spaced at an emission interval of T corresponding to the cell rate Λ [= 1/T] when the source is active. If the actual arrival time of a cell is not 'too early' relative to the TAT and tolerance τ associated to the cell rate, i.e. if the actual arrival time is after its theoretical arrival time minus the limit value (ta > TAT – τ), then the cell is conforming; otherwise, the cell is nonconforming". If the cell is nonconforming then TAT is left unchanged. If the cell is conforming, and arrived before its TAT (equivalent to the bucket not being empty but being less than the limit value), then the next cell's TAT is simply TAT + T. However, if a cell arrives after its TAT, then the TAT for the next cell is calculated from this cell's arrival time, not its TAT. This prevents credit building up when there is a gap in the transmission (equivalent to the bucket becoming less than empty).
This version of the algorithm works because τ defines how much earlier a cell can arrive than it would if there were no jitter: see leaky bucket: delay variation tolerance. Another way to see it is that TAT represents when the bucket will next empty, so a time τ before that is when the bucket is exactly filled to the limit value. So, in either view, if it arrives more than τ before TAT, it is too early to conform.
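Both descriptions can be implemented in a few lines, and running them against the same arrival times shows they make identical conformance decisions (the T and τ values and arrival times below are arbitrary illustrations, in abstract time units):

```python
class VirtualScheduling:
    """GCRA via the theoretical arrival time (TAT)."""
    def __init__(self, T, tau):
        self.T, self.tau, self.tat = T, tau, 0.0
    def conforms(self, t):
        if t < self.tat - self.tau:
            return False                      # too early: nonconforming, TAT unchanged
        self.tat = max(t, self.tat) + self.T  # max() stops credit building after a gap
        return True

class LeakyBucket:
    """GCRA as a continuous-state leaky bucket draining 1 unit per time unit."""
    def __init__(self, T, tau):
        self.T, self.tau, self.x, self.lct = T, tau, 0.0, 0.0
    def conforms(self, t):
        xp = max(0.0, self.x - (t - self.lct))  # bucket level after leaking since LCT
        if xp > self.tau:
            return False                        # over the limit value: nonconforming
        self.x, self.lct = xp + self.T, t
        return True

arrivals = [0, 8, 16, 30, 31, 45]
vs, lb = VirtualScheduling(10, 2), LeakyBucket(10, 2)
print([vs.conforms(t) for t in arrivals])  # [True, True, False, True, False, True]
print([lb.conforms(t) for t in arrivals])  # identical decisions
```

The virtual scheduling version carries less state and needs fewer operations per cell, which is the compactness advantage noted above.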
=== Comparison with the token bucket ===
The GCRA, unlike implementations of the token bucket algorithm, does not simulate the process of updating the bucket (the leak or adding tokens regularly). Rather, each time a cell arrives it calculates the amount by which the bucket will have leaked since its level was last calculated or when the bucket will next empty (= TAT). This is essentially replacing the leak process with a (realtime) clock, which most hardware implementations are likely to already have.
This replacement of the process with an RTC is possible because ATM cells have a fixed length (53 bytes), thus T is always a constant, and the calculation of the new bucket level (or of TAT) does not involve any multiplication or division. As a result, the calculation can be done quickly in software, and while more actions are taken when a cell arrives than are taken by the token bucket, in terms of the load on a processor performing the task, the lack of a separate update process more than compensates for this. Moreover, because there is no simulation of the bucket update, there is no processor load at all when the connection is quiescent.
However, if the GCRA were to be used to limit to a bandwidth, rather than a packet/frame rate, in a protocol with variable length packets (Link Layer PDUs), it would involve multiplication: basically the value added to the bucket (or to TAT) for each conforming packet would have to be proportionate to the packet length: whereas, with the GCRA as described, the water in the bucket has units of time, for variable length packets it would have to have units that are the product of packet length and time. Hence, applying the GCRA to limit the bandwidth of variable length packets without access to a fast, hardware multiplier (as in an FPGA) may not be practical. However, it can always be used to limit the packet or cell rate, as long as their lengths are ignored.
== Dual Leaky Bucket Controller ==
Multiple implementations of the GCRA can be applied concurrently to a VC or a VP, in a dual leaky bucket traffic policing or traffic shaping function, e.g. applied to a Variable Bit Rate (VBR) VC. This can limit ATM cells on this VBR VC to a Sustained Cell Rate (SCR) and a Maximum Burst Size (MBS). At the same time, the dual leaky bucket traffic policing function can limit the rate of cells in the bursts to a Peak Cell Rate (PCR) and a maximum Cell Delay Variation tolerance (CDVt): see Traffic Contract#Traffic Parameters.
This may be best understood where the transmission on a VBR VC is in the form of fixed length messages (CPCS-PDUs), which are transmitted with some fixed interval or the Inter Message Time (IMT) and take a number of cells, MBS, to carry them; however, the description of VBR traffic and the use of the dual leaky bucket are not restricted to such situations. In this case, the average cell rate over the interval of IMT is the SCR (=MBS/IMT). The individual messages can be transmitted at a PCR, which can be any value between the bandwidth for the physical link (1/δ) and the SCR. This allows the message to be transmitted in a period that is smaller than the message interval IMT, with gaps between instances of the message.
In the dual leaky bucket, one bucket is applied to the traffic with an emission interval of 1/SCR and a limit value τSCR that gives an MBS that is the number of cells in the message: see leaky bucket#Maximum burst size. The second bucket has an emission interval of 1/PCR and a limit value τPCR that allows for the CDV up to that point in the path of the connection: see leaky bucket#Delay Variation Tolerance. Cells are then allowed through at the PCR, with jitter of τPCR, up to a maximum number of MBS cells. The next burst of MBS cells will then be allowed through starting MBS x 1/SCR after the first.
If the cells arrive in a burst at a rate higher than 1/PCR (MBS cells arrive in less than (MBS - 1)/PCR - τPCR), or more than MBS cells arrive at the PCR, or bursts of MBS cells arrive closer than IMT apart, the dual leaky bucket will detect this and delay (shaping) or drop or de-prioritize (policing) enough cells to make the connection conform.
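A dual-bucket check is then two GCRAs in series, with bucket state updated only when a cell conforms to both (the parameter values below are illustrative: T_PCR = 1, τ_PCR = 0, T_SCR = 4 and τ_SCR = 6 give an MBS of 3 cells, times in cell-slot units):

```python
def dual_gcra(arrivals, t_pcr, tau_pcr, t_scr, tau_scr):
    """A cell conforms only if it passes both the PCR and the SCR bucket;
    the theoretical arrival times are updated only for conforming cells."""
    tats = [0.0, 0.0]                       # TATs for the PCR and SCR buckets
    params = [(t_pcr, tau_pcr), (t_scr, tau_scr)]
    decisions = []
    for t in arrivals:
        ok = all(t >= tat - tau for tat, (_, tau) in zip(tats, params))
        if ok:
            tats = [max(t, tat) + T for tat, (T, _) in zip(tats, params)]
        decisions.append(ok)
    return decisions

# MBS = 1 + tau_scr / (t_scr - t_pcr) = 3: a back-to-back burst of three cells
# at the PCR conforms, while the fourth is caught by the SCR bucket.
print(dual_gcra([0, 1, 2, 3], t_pcr=1, tau_pcr=0, t_scr=4, tau_scr=6))
```

The PCR bucket polices the spacing within a burst while the SCR bucket limits the burst length and spacing between bursts, matching the behaviour described above.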
Figure 3 shows the reference algorithm for SCR and PCR control for both Cell Loss Priority (CLP) values 1 (low) and 0 (high) cell flows, i.e. where the cells with both priority values are treated the same. Similar reference algorithms where the high and low priority cells are treated differently are also given in Annex A to I.371.
== See also ==
Asynchronous Transfer Mode
Leaky bucket
UPC and NPC
NNI
Traffic contract
Connection admission control
Traffic shaping
Traffic policing (communications)
Token bucket
== References == | Wikipedia/Generic_cell_rate_algorithm |
In theoretical computer science, the time complexity is the computational complexity that describes the amount of computer time it takes to run an algorithm. Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of time to perform. Thus, the amount of time taken and the number of elementary operations performed by the algorithm are taken to be related by a constant factor.
Since an algorithm's running time may vary among different inputs of the same size, one commonly considers the worst-case time complexity, which is the maximum amount of time required for inputs of a given size. Less common, and usually specified explicitly, is the average-case complexity, which is the average of the time taken on inputs of a given size (this makes sense because there are only a finite number of possible inputs of a given size). In both cases, the time complexity is generally expressed as a function of the size of the input. Since this function is generally difficult to compute exactly, and the running time for small inputs is usually not consequential, one commonly focuses on the behavior of the complexity when the input size increases—that is, the asymptotic behavior of the complexity. Therefore, the time complexity is commonly expressed using big O notation, typically
O(n), O(n log n), O(n^α), O(2^n), etc., where n is the size in units of bits needed to represent the input.
Algorithmic complexities are classified according to the type of function appearing in the big O notation. For example, an algorithm with time complexity
O(n) is a linear time algorithm and an algorithm with time complexity O(n^α) for some constant α > 0 is a polynomial time algorithm.
== Table of common time complexities ==
The following table summarizes some classes of commonly encountered time complexities. In the table, poly(x) = x^(O(1)), i.e., polynomial in x.
== Constant time ==
An algorithm is said to be constant time (also written as O(1) time) if the value of T(n) (the complexity of the algorithm) is bounded by a value that does not depend on the size of the input. For example, accessing any single element in an array takes constant time as only one operation has to be performed to locate it. In a similar manner, finding the minimal value in an array sorted in ascending order takes constant time; it is the first element. However, finding the minimal value in an unordered array is not a constant time operation as scanning over each element in the array is needed in order to determine the minimal value. Hence it is a linear time operation, taking O(n) time. If the number of elements is known in advance and does not change, however, such an algorithm can still be said to run in constant time.
Despite the name "constant time", the running time does not have to be independent of the problem size, but an upper bound for the running time has to be independent of the problem size. For example, the task "exchange the values of a and b if necessary so that a ≤ b" is called constant time even though the time may depend on whether or not it is already true that a ≤ b. However, there is some constant t such that the time required is always at most t.
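The swap task just described can be sketched in a few lines; a minimal illustration (the function name is ours, chosen for this example):

```python
def ensure_ordered(a, b):
    """Exchange the values of a and b if necessary so that a <= b.

    Whether the swap actually happens depends on the input values, but
    the work is bounded by a constant t independent of them, so this
    runs in O(1) time.
    """
    if a > b:
        a, b = b, a
    return a, b
```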
== Logarithmic time ==
An algorithm is said to take logarithmic time when T(n) = O(log n). Since log_a n and log_b n are related by a constant multiplier, and such a multiplier is irrelevant to big O classification, the standard usage for logarithmic-time algorithms is O(log n) regardless of the base of the logarithm appearing in the expression of T.
Algorithms taking logarithmic time are commonly found in operations on binary trees or when using binary search.
An O(log n) algorithm is considered highly efficient, as the ratio of the number of operations to the size of the input decreases and tends to zero when n increases. An algorithm that must access all elements of its input cannot take logarithmic time, as the time taken for reading an input of size n is of the order of n.
An example of logarithmic time is given by dictionary search. Consider a dictionary D which contains n entries, sorted in alphabetical order. We suppose that, for 1 ≤ k ≤ n, one may access the kth entry of the dictionary in constant time. Let D(k) denote this kth entry. Under these hypotheses, the test to see if a word w is in the dictionary may be done in logarithmic time: consider D(⌊n/2⌋), where ⌊ ⌋ denotes the floor function. If w = D(⌊n/2⌋)--that is to say, the word w is exactly in the middle of the dictionary--then we are done. Else, if w < D(⌊n/2⌋)--i.e., if the word w comes earlier in alphabetical order than the middle word of the whole dictionary--we continue the search in the same way in the left (i.e. earlier) half of the dictionary, and then again repeatedly until the correct word is found. Otherwise, if it comes after the middle word, continue similarly with the right half of the dictionary. This algorithm is similar to the method often used to find an entry in a paper dictionary. As a result, the search space within the dictionary decreases as the algorithm gets closer to the target word.
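The dictionary search described above is ordinary binary search; a minimal sketch over a sorted list of words (the names are illustrative, not from any library):

```python
def dictionary_search(dictionary, w):
    """Return True if word w occurs in the sorted list `dictionary`.

    Each iteration halves the remaining search space, so at most
    O(log n) entries are examined for a dictionary of n entries.
    """
    lo, hi = 0, len(dictionary) - 1
    while lo <= hi:
        mid = (lo + hi) // 2          # index of the "middle word"
        if dictionary[mid] == w:
            return True
        elif w < dictionary[mid]:     # w comes earlier alphabetically
            hi = mid - 1              # continue in the left half
        else:
            lo = mid + 1              # continue in the right half
    return False

words = ["ant", "bee", "cat", "dog", "eel", "fox"]
```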
== Polylogarithmic time ==
An algorithm is said to run in polylogarithmic time if its time T(n) is O((log n)^k) for some constant k. Another way to write this is O(log^k n).
For example, matrix chain ordering can be solved in polylogarithmic time on a parallel random-access machine, and a graph can be determined to be planar in a fully dynamic way in O(log^3 n) time per insert/delete operation.
== Sub-linear time ==
An algorithm is said to run in sub-linear time (often spelled sublinear time) if T(n) = o(n). In particular this includes algorithms with the time complexities defined above.
The specific term sublinear time algorithm commonly refers to randomized algorithms that sample a small fraction of their inputs and process them efficiently to approximately infer properties of the entire instance. This type of sublinear time algorithm is closely related to property testing and statistics.
Other settings where algorithms can run in sublinear time include:
Parallel algorithms that have linear or greater total work (allowing them to read the entire input), but sub-linear depth.
Algorithms that have guaranteed assumptions on the input structure. An important example are operations on data structures, e.g. binary search in a sorted array.
Algorithms that search for local structure in the input, for example finding a local minimum in a 1-D array (can be solved in O(log n) time using a variant of binary search). A closely related notion is that of Local Computation Algorithms (LCA) where the algorithm receives a large input and queries to local information about some valid large output.
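The local-minimum search mentioned in the last bullet can be sketched as a binary-search variant; a minimal version, assuming a non-empty array of distinct elements:

```python
def local_minimum(a):
    """Return an index i such that a[i] is no larger than its neighbors,
    examining only O(log n) elements of a (assumed non-empty, distinct)."""
    lo, hi = 0, len(a) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if a[mid] > a[mid + 1]:
            lo = mid + 1      # descent to the right: a local minimum lies there
        else:
            hi = mid          # a[mid] < a[mid+1]: a local minimum lies at mid or left
    return lo
```

For example, on [9, 7, 2, 4, 8, 3, 5] it returns index 2, whose value 2 is below both neighbors.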
== Linear time ==
An algorithm is said to take linear time, or O(n) time, if its time complexity is O(n). Informally, this means that the running time increases at most linearly with the size of the input. More precisely, this means that there is a constant c such that the running time is at most cn for every input of size n. For example, a procedure that adds up all elements of a list requires time proportional to the length of the list, if the adding time is constant, or, at least, bounded by a constant.
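The list-summing procedure just described, written out with one constant-time addition per element:

```python
def total(lst):
    """Add up all elements of lst.

    One addition per element, each bounded by a constant, so the running
    time is at most c*n for a list of length n -- linear time.
    """
    s = 0
    for x in lst:
        s += x
    return s
```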
Linear time is the best possible time complexity in situations where the algorithm has to sequentially read its entire input. Therefore, much research has been invested into discovering algorithms exhibiting linear time or, at least, nearly linear time. This research includes both software and hardware methods. There are several hardware technologies which exploit parallelism to provide this. An example is content-addressable memory. This concept of linear time is used in string matching algorithms such as the Boyer–Moore string-search algorithm and Ukkonen's algorithm.
== Quasilinear time ==
An algorithm is said to run in quasilinear time (also referred to as log-linear time) if T(n) = O(n log^k n) for some positive constant k; linearithmic time is the case k = 1. Using soft O notation these algorithms are Õ(n). Quasilinear time algorithms are also O(n^(1+ε)) for every constant ε > 0 and thus run faster than any polynomial time algorithm whose time bound includes a term n^c for any c > 1.
Algorithms which run in quasilinear time include:
In-place merge sort, O(n log^2 n)
Quicksort, O(n log n), in its randomized version, has a running time that is O(n log n) in expectation on the worst-case input. Its non-randomized version has an O(n log n) running time only when considering average case complexity.
Heapsort, O(n log n), merge sort, introsort, binary tree sort, smoothsort, patience sorting, etc. in the worst case
Fast Fourier transforms, O(n log n)
Monge array calculation, O(n log n)
In many cases, the O(n log n) running time is simply the result of performing a Θ(log n) operation n times (for the notation, see Big O notation § Family of Bachmann–Landau notations). For example, binary tree sort creates a binary tree by inserting each element of the n-sized array one by one. Since the insert operation on a self-balancing binary search tree takes O(log n) time, the entire algorithm takes O(n log n) time.
Comparison sorts require at least Ω(n log n) comparisons in the worst case because log(n!) = Θ(n log n), by Stirling's approximation. They also frequently arise from the recurrence relation T(n) = 2T(n/2) + O(n).
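The bound log(n!) = Θ(n log n) can be checked numerically; a small sketch using the log-gamma function (since n! = Γ(n + 1)):

```python
import math

def log2_factorial(n):
    """log2(n!), computed stably via the log-gamma function."""
    return math.lgamma(n + 1) / math.log(2)

# log2(n!) is roughly n*log2(n) - n/ln(2), so its ratio to n*log2(n)
# stays below 1 and creeps toward 1 as n grows -- the comparison-sort
# lower bound of about n*log2(n) comparisons.
ratios = [log2_factorial(n) / (n * math.log2(n)) for n in (10**2, 10**4, 10**6)]
```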
== Sub-quadratic time ==
An algorithm is said to be subquadratic time if T(n) = o(n^2).
For example, simple, comparison-based sorting algorithms are quadratic (e.g. insertion sort), but more advanced algorithms can be found that are subquadratic (e.g. shell sort). No general-purpose sorts run in linear time, but the change from quadratic to sub-quadratic is of great practical importance.
== Polynomial time ==
An algorithm is said to be of polynomial time if its running time is upper bounded by a polynomial expression in the size of the input for the algorithm, that is, T(n) = O(n^k) for some positive constant k. Problems for which a deterministic polynomial-time algorithm exists belong to the complexity class P, which is central in the field of computational complexity theory. Cobham's thesis states that polynomial time is a synonym for "tractable", "feasible", "efficient", or "fast".
Some examples of polynomial-time algorithms:
The selection sort sorting algorithm on n integers performs An^2 operations for some constant A. Thus it runs in time O(n^2) and is a polynomial-time algorithm.
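The operation count of selection sort can be made explicit; a small instrumented sketch of our own:

```python
def selection_sort(items):
    """Return (sorted copy of items, number of comparisons made).

    The nested loops perform exactly n(n-1)/2 comparisons, i.e. about
    A*n^2 operations with A = 1/2, so the algorithm runs in O(n^2) time.
    """
    a = list(items)
    n = len(a)
    comparisons = 0
    for i in range(n):
        m = i                          # position of the minimum of a[i:]
        for j in range(i + 1, n):
            comparisons += 1
            if a[j] < a[m]:
                m = j
        a[i], a[m] = a[m], a[i]        # move that minimum into place
    return a, comparisons
```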
All the basic arithmetic operations (addition, subtraction, multiplication, division, and comparison) can be done in polynomial time.
Maximum matchings in graphs can be found in polynomial time. In some contexts, especially in optimization, one differentiates between strongly polynomial time and weakly polynomial time algorithms.
These two concepts are only relevant if the inputs to the algorithms consist of integers.
=== Complexity classes ===
The concept of polynomial time leads to several complexity classes in computational complexity theory. Some important classes defined using polynomial time are the following.
P: The complexity class of decision problems that can be solved on a deterministic Turing machine in polynomial time
NP: The complexity class of decision problems that can be solved on a non-deterministic Turing machine in polynomial time
ZPP: The complexity class of decision problems that can be solved with zero error on a probabilistic Turing machine in polynomial time
RP: The complexity class of decision problems that can be solved with 1-sided error on a probabilistic Turing machine in polynomial time.
BPP: The complexity class of decision problems that can be solved with 2-sided error on a probabilistic Turing machine in polynomial time
BQP: The complexity class of decision problems that can be solved with 2-sided error on a quantum Turing machine in polynomial time
P is the smallest time-complexity class on a deterministic machine which is robust in terms of machine model changes. (For example, a change from a single-tape Turing machine to a multi-tape machine can lead to a quadratic speedup, but any algorithm that runs in polynomial time under one model also does so on the other.) Any given abstract machine will have a complexity class corresponding to the problems which can be solved in polynomial time on that machine.
== Superpolynomial time ==
An algorithm is defined to take superpolynomial time if T(n) is not bounded above by any polynomial; that is, if T(n) ∉ O(n^c) for every positive integer c.
For example, an algorithm that runs for 2^n steps on an input of size n requires superpolynomial time (more specifically, exponential time).
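A concrete source of such running times is exhaustive search over all 2^n subsets of an n-element input; a minimal sketch for the subset-sum problem:

```python
from itertools import combinations

def subset_sum(nums, target):
    """Decide whether some subset of nums sums to target.

    Trying every one of the 2^n subsets takes time exponential in n,
    hence superpolynomial.
    """
    return any(sum(combo) == target
               for r in range(len(nums) + 1)       # subset sizes 0..n
               for combo in combinations(nums, r)) # all subsets of that size
```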
An algorithm that uses exponential resources is clearly superpolynomial, but some algorithms are only very weakly superpolynomial. For example, the Adleman–Pomerance–Rumely primality test runs for n^(O(log log n)) time on n-bit inputs; this grows faster than any polynomial for large enough n, but the input size must become impractically large before it cannot be dominated by a polynomial with small degree.
An algorithm that requires superpolynomial time lies outside the complexity class P. Cobham's thesis posits that these algorithms are impractical, and in many cases they are. Since the P versus NP problem is unresolved, it is unknown whether NP-complete problems require superpolynomial time.
== Quasi-polynomial time ==
Quasi-polynomial time algorithms are algorithms whose running time exhibits quasi-polynomial growth, a type of behavior that may be slower than polynomial time but yet is significantly faster than exponential time. The worst case running time of a quasi-polynomial time algorithm is 2^(O(log^c n)) for some fixed c > 0. When c = 1 this gives polynomial time, and for c < 1 it gives sub-linear time.
There are some problems for which we know quasi-polynomial time algorithms, but no polynomial time algorithm is known. Such problems arise in approximation algorithms; a famous example is the directed Steiner tree problem, for which there is a quasi-polynomial time approximation algorithm achieving an approximation factor of O(log^3 n) (n being the number of vertices), but showing the existence of such a polynomial time algorithm is an open problem.
Other computational problems with quasi-polynomial time solutions but no known polynomial time solution include the planted clique problem in which the goal is to find a large clique in the union of a clique and a random graph. Although quasi-polynomially solvable, it has been conjectured that the planted clique problem has no polynomial time solution; this planted clique conjecture has been used as a computational hardness assumption to prove the difficulty of several other problems in computational game theory, property testing, and machine learning.
The complexity class QP consists of all problems that have quasi-polynomial time algorithms. It can be defined in terms of DTIME as follows.
{\displaystyle {\mbox{QP}}=\bigcup _{c\in \mathbb {N} }{\mbox{DTIME}}\left(2^{\log ^{c}n}\right)}
=== Relation to NP-complete problems ===
In complexity theory, the unsolved P versus NP problem asks if all problems in NP have polynomial-time algorithms. All the best-known algorithms for NP-complete problems like 3SAT etc. take exponential time. Indeed, it is conjectured for many natural NP-complete problems that they do not have sub-exponential time algorithms. Here "sub-exponential time" is taken to mean the second definition presented below. (On the other hand, many graph problems represented in the natural way by adjacency matrices are solvable in subexponential time simply because the size of the input is the square of the number of vertices.) This conjecture (for the k-SAT problem) is known as the exponential time hypothesis. Since it is conjectured that NP-complete problems do not have quasi-polynomial time algorithms, some inapproximability results in the field of approximation algorithms make the assumption that NP-complete problems do not have quasi-polynomial time algorithms. For example, see the known inapproximability results for the set cover problem.
== Sub-exponential time ==
The term sub-exponential time is used to express that the running time of some algorithm may grow faster than any polynomial but is still significantly smaller than an exponential. In this sense, problems that have sub-exponential time algorithms are somewhat more tractable than those that only have exponential algorithms. The precise definition of "sub-exponential" is not generally agreed upon, however the two most widely used are below.
=== First definition ===
A problem is said to be sub-exponential time solvable if it can be solved in running times whose logarithms grow smaller than any given polynomial. More precisely, a problem is in sub-exponential time if for every ε > 0 there exists an algorithm which solves the problem in time O(2^(n^ε)). The set of all such problems is the complexity class SUBEXP which can be defined in terms of DTIME as follows.
{\displaystyle {\textsf {SUBEXP}}=\bigcap _{\varepsilon >0}{\textsf {DTIME}}\left(2^{n^{\varepsilon }}\right)}
This notion of sub-exponential is non-uniform in terms of ε in the sense that ε is not part of the input and each ε may have its own algorithm for the problem.
=== Second definition ===
Some authors define sub-exponential time as running times in 2^(o(n)). This definition allows larger running times than the first definition of sub-exponential time. An example of such a sub-exponential time algorithm is the best-known classical algorithm for integer factorization, the general number field sieve, which runs in time about 2^(Õ(n^(1/3))), where the length of the input is n. Another example was the graph isomorphism problem, which the best known algorithm from 1982 to 2016 solved in 2^(O(√(n log n))). However, at STOC 2016 a quasi-polynomial time algorithm was presented.
It makes a difference whether the algorithm is allowed to be sub-exponential in the size of the instance, the number of vertices, or the number of edges. In parameterized complexity, this difference is made explicit by considering pairs (L, k) of decision problems and parameters k. SUBEPT is the class of all parameterized problems that run in time sub-exponential in k and polynomial in the input size n:
{\displaystyle {\textsf {SUBEPT}}={\textsf {DTIME}}\left(2^{o(k)}\cdot {\textsf {poly}}(n)\right).}
More precisely, SUBEPT is the class of all parameterized problems (L, k) for which there is a computable function f : ℕ → ℕ with f ∈ o(k) and an algorithm that decides L in time 2^(f(k)) · poly(n).
==== Exponential time hypothesis ====
The exponential time hypothesis (ETH) is that 3SAT, the satisfiability problem of Boolean formulas in conjunctive normal form with at most three literals per clause and with n variables, cannot be solved in time 2^(o(n)). More precisely, the hypothesis is that there is some absolute constant c > 0 such that 3SAT cannot be decided in time 2^(cn) by any deterministic Turing machine. With m denoting the number of clauses, ETH is equivalent to the hypothesis that kSAT cannot be solved in time 2^(o(m)) for any integer k ≥ 3. The exponential time hypothesis implies P ≠ NP.
== Exponential time ==
An algorithm is said to be exponential time, if T(n) is upper bounded by 2^(poly(n)), where poly(n) is some polynomial in n. More formally, an algorithm is exponential time if T(n) is bounded by O(2^(n^k)) for some constant k. Problems which admit exponential time algorithms on a deterministic Turing machine form the complexity class known as EXP.
{\displaystyle {\textsf {EXP}}=\bigcup _{c\in \mathbb {R_{+}} }{\textsf {DTIME}}\left(2^{n^{c}}\right)}
Sometimes, exponential time is used to refer to algorithms that have T(n) = 2^(O(n)), where the exponent is at most a linear function of n. This gives rise to the complexity class E.
{\displaystyle {\textsf {E}}=\bigcup _{c\in \mathbb {N} }{\textsf {DTIME}}\left(2^{cn}\right)}
== Factorial time ==
An algorithm is said to be factorial time if T(n) is upper bounded by the factorial function n!. Factorial time is a subset of exponential time (EXP) because n! ≤ n^n = 2^(n log n) = O(2^(n^(1+ε))) for all ε > 0. However, it is not a subset of E.
An example of an algorithm that runs in factorial time is bogosort, a notoriously inefficient sorting algorithm based on trial and error. Bogosort sorts a list of n items by repeatedly shuffling the list until it is found to be sorted. In the average case, each pass through the bogosort algorithm will examine one of the n! orderings of the n items. If the items are distinct, only one such ordering is sorted. Bogosort shares patrimony with the infinite monkey theorem.
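A runnable sketch of bogosort (safe only for very short lists, since the expected number of shuffles grows like n!):

```python
import random

def bogosort(items):
    """Shuffle until sorted.

    For n distinct items each shuffle hits the sorted ordering with
    probability 1/n!, so the expected number of passes is n! --
    factorial time in the average case.
    """
    a = list(items)
    while any(a[i] > a[i + 1] for i in range(len(a) - 1)):
        random.shuffle(a)
    return a
```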
== Double exponential time ==
An algorithm is said to be double exponential time if T(n) is upper bounded by 2^(2^(poly(n))), where poly(n) is some polynomial in n. Such algorithms belong to the complexity class 2-EXPTIME.
{\displaystyle {\textsf {2-EXPTIME}}=\bigcup _{c\in \mathbb {N} }{\textsf {DTIME}}\left(2^{2^{n^{c}}}\right)}
Well-known double exponential time algorithms include:
Decision procedures for Presburger arithmetic
Computing a Gröbner basis (in the worst case)
Quantifier elimination on real closed fields takes at least double exponential time, and can be done in this time.
== See also ==
L-notation
Space complexity
== References == | Wikipedia/Polynomial-time_algorithm |
The Ramanujan tau function, studied by Ramanujan (1916), is the function τ : ℕ → ℤ defined by the following identity:
{\displaystyle \sum _{n\geq 1}\tau (n)q^{n}=q\prod _{n\geq 1}\left(1-q^{n}\right)^{24}=q\phi (q)^{24}=\eta (z)^{24}=\Delta (z),}
where q = exp(2πiz) with Im(z) > 0, φ is the Euler function, η is the Dedekind eta function, and the function Δ(z) is a holomorphic cusp form of weight 12 and level 1, known as the discriminant modular form (some authors, notably Apostol, write Δ/(2π)^12 instead of Δ). It appears in connection to an "error term" involved in counting the number of ways of expressing an integer as a sum of 24 squares. A formula due to Ian G. Macdonald was given in Dyson (1972).
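The defining product can be expanded directly with integer polynomial arithmetic to read off τ(n); a minimal sketch (the function name is ours):

```python
def tau_values(N):
    """Return {n: tau(n)} for 1 <= n <= N, by expanding
    q * prod_{m>=1} (1 - q^m)^24 as a power series truncated at order N."""
    coeffs = [0] * (N + 1)      # coeffs[k] = coefficient of q^k; start from 1
    coeffs[0] = 1
    for m in range(1, N + 1):   # multiply in each factor (1 - q^m), 24 times
        for _ in range(24):
            # In-place multiply by (1 - q^m): iterate downward so the
            # lower coefficient read on the right is still the old one.
            for k in range(N, m - 1, -1):
                coeffs[k] -= coeffs[k - m]
    # The leading factor q shifts everything up by one power of q.
    return {n: coeffs[n - 1] for n in range(1, N + 1)}
```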
== Values ==
The first few values of the tau function are given in the following table (sequence A000594 in the OEIS):
Calculating this function on an odd square number (i.e. a centered octagonal number) yields an odd number, whereas for any other number the function yields an even number.
== Ramanujan's conjectures ==
Ramanujan (1916) observed, but did not prove, the following three properties of τ(n):
τ(mn) = τ(m)τ(n) if gcd(m, n) = 1 (meaning that τ(n) is a multiplicative function)
τ(p^(r+1)) = τ(p)τ(p^r) − p^11 τ(p^(r−1)) for p prime and r > 0.
|τ(p)| ≤ 2p^(11/2) for all primes p.
The first two properties were proved by Mordell (1917) and the third one, called the Ramanujan conjecture, was proved by Deligne in 1974 as a consequence of his proof of the Weil conjectures (specifically, he deduced it by applying them to a Kuga-Sato variety).
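These three properties can be spot-checked against the first values of τ from the table above (OEIS A000594):

```python
# First values of tau(n), taken from the table (OEIS A000594).
tau = {1: 1, 2: -24, 3: 252, 4: -1472, 5: 4830,
       6: -6048, 7: -16744, 8: 84480, 9: -113643, 10: -115920}

# Multiplicativity on coprime arguments: tau(mn) = tau(m)tau(n).
assert tau[6] == tau[2] * tau[3]
assert tau[10] == tau[2] * tau[5]

# Prime-power recurrence: tau(p^(r+1)) = tau(p)tau(p^r) - p^11 tau(p^(r-1)).
assert tau[4] == tau[2] * tau[2] - 2**11 * tau[1]
assert tau[8] == tau[2] * tau[4] - 2**11 * tau[2]
assert tau[9] == tau[3] * tau[3] - 3**11 * tau[1]

# Deligne's bound |tau(p)| <= 2 p^(11/2), at the small primes.
assert all(abs(tau[p]) <= 2 * p**5.5 for p in (2, 3, 5, 7))
```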
== Congruences for the tau function ==
For k ∈ ℤ and n ∈ ℕ, the divisor function σ_k(n) is the sum of the kth powers of the divisors of n. The tau function satisfies several congruence relations; many of them can be expressed in terms of σ_k(n). Here are some:
{\displaystyle \tau (n)\equiv \sigma _{11}(n)\ {\bmod {\ }}2^{11}{\text{ for }}n\equiv 1\ {\bmod {\ }}8}
{\displaystyle \tau (n)\equiv 1217\sigma _{11}(n)\ {\bmod {\ }}2^{13}{\text{ for }}n\equiv 3\ {\bmod {\ }}8}
{\displaystyle \tau (n)\equiv 1537\sigma _{11}(n)\ {\bmod {\ }}2^{12}{\text{ for }}n\equiv 5\ {\bmod {\ }}8}
{\displaystyle \tau (n)\equiv 705\sigma _{11}(n)\ {\bmod {\ }}2^{14}{\text{ for }}n\equiv 7\ {\bmod {\ }}8}
{\displaystyle \tau (n)\equiv n^{-610}\sigma _{1231}(n)\ {\bmod {\ }}3^{6}{\text{ for }}n\equiv 1\ {\bmod {\ }}3}
{\displaystyle \tau (n)\equiv n^{-610}\sigma _{1231}(n)\ {\bmod {\ }}3^{7}{\text{ for }}n\equiv 2\ {\bmod {\ }}3}
{\displaystyle \tau (n)\equiv n^{-30}\sigma _{71}(n)\ {\bmod {\ }}5^{3}{\text{ for }}n\not \equiv 0\ {\bmod {\ }}5}
{\displaystyle \tau (n)\equiv n\sigma _{9}(n)\ {\bmod {\ }}7}
{\displaystyle \tau (n)\equiv n\sigma _{9}(n)\ {\bmod {\ }}7^{2}{\text{ for }}n\equiv 3,5,6\ {\bmod {\ }}7}
{\displaystyle \tau (n)\equiv \sigma _{11}(n)\ {\bmod {\ }}691.}
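The last congruence, τ(n) ≡ σ11(n) mod 691, is easy to verify for small n with a naive divisor sum (τ values taken from the table of first values):

```python
def sigma(k, n):
    """Divisor function sigma_k(n): sum of k-th powers of the divisors of n."""
    return sum(d**k for d in range(1, n + 1) if n % d == 0)

# First values of tau(n) from the table (OEIS A000594).
tau = {1: 1, 2: -24, 3: 252, 4: -1472, 5: 4830, 6: -6048}

for n, t in tau.items():
    assert (t - sigma(11, n)) % 691 == 0   # tau(n) == sigma_11(n) (mod 691)
```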
For p ≠ 23 prime, we have
{\displaystyle \tau (p)\equiv 0\ {\bmod {\ }}23{\text{ if }}\left({\frac {p}{23}}\right)=-1}
{\displaystyle \tau (p)\equiv \sigma _{11}(p)\ {\bmod {\ }}23^{2}{\text{ if }}p{\text{ is of the form }}a^{2}+23b^{2}}
{\displaystyle \tau (p)\equiv -1\ {\bmod {\ }}23{\text{ otherwise}}.}
== Explicit formula ==
In 1975 Douglas Niebur proved an explicit formula for the Ramanujan tau function:
{\displaystyle \tau (n)=n^{4}\sigma (n)-24\sum _{i=1}^{n-1}i^{2}(35i^{2}-52in+18n^{2})\sigma (i)\sigma (n-i).}
Here σ(n) is the sum of the positive divisors of n.
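Niebur's formula is directly computable with naive divisor sums; a short sketch:

```python
def sigma(n):
    """Sum of the positive divisors of n."""
    return sum(d for d in range(1, n + 1) if n % d == 0)

def tau(n):
    """Ramanujan tau function via Niebur's explicit formula."""
    return (n**4 * sigma(n)
            - 24 * sum(i**2 * (35*i**2 - 52*i*n + 18*n**2) * sigma(i) * sigma(n - i)
                       for i in range(1, n)))
```

For example, tau(1), tau(2), tau(3), tau(4) give 1, -24, 252, -1472, matching the table of values above.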
== Conjectures on the tau function ==
Suppose that f is a weight-k integer newform and the Fourier coefficients a(n) are integers. Consider the problem:
Given that f does not have complex multiplication, do almost all primes p have the property that a(p) ≢ 0 (mod p)?
Indeed, most primes should have this property, and hence they are called ordinary. Despite the big advances by Deligne and Serre on Galois representations, which determine a(n) (mod p) for n coprime to p, it is unclear how to compute a(p) (mod p). The only theorem in this regard is Elkies' famous result for modular elliptic curves, which guarantees that there are infinitely many primes p such that a(p) = 0, which thus are congruent to 0 modulo p. There are no known examples of non-CM f with weight greater than 2 for which a(p) ≢ 0 (mod p) for infinitely many primes p (although it should be true for almost all p). There are also no known examples with a(p) ≡ 0 (mod p) for infinitely many p. Some researchers had begun to doubt whether a(p) ≡ 0 (mod p) for infinitely many p. As evidence, many provided Ramanujan's τ(p) (case of weight 12). The only solutions up to 10^10 to the equation τ(p) ≡ 0 (mod p) are 2, 3, 5, 7, 2411, and 7758337633 (sequence A007659 in the OEIS).
Lehmer (1947) conjectured that τ(n) ≠ 0 for all n, an assertion sometimes known as Lehmer's conjecture. Lehmer verified the conjecture for n up to 214928639999 (Apostol 1997, p. 22). The following table summarizes progress on finding successively larger values of N for which this condition holds for all n ≤ N.
== Ramanujan's L-function ==
Ramanujan's L-function is defined by
{\displaystyle L(s)=\sum _{n\geq 1}{\frac {\tau (n)}{n^{s}}}}
if Re(s) > 6 and by analytic continuation otherwise. It satisfies the functional equation
{\displaystyle {\frac {L(s)\Gamma (s)}{(2\pi )^{s}}}={\frac {L(12-s)\Gamma (12-s)}{(2\pi )^{12-s}}},\quad s\notin \mathbb {Z} _{0}^{-},\,12-s\notin \mathbb {Z} _{0}^{-}}
and has the Euler product
{\displaystyle L(s)=\prod _{p\,{\text{prime}}}{\frac {1}{1-\tau (p)p^{-s}+p^{11-2s}}},\quad \mathrm {Re} (s)>7.}
Ramanujan conjectured that all nontrivial zeros of L have real part equal to 6.
== Notes ==
== References ==
Apostol, T. M. (1997), "Modular Functions and Dirichlet Series in Number Theory", New York: Springer-Verlag 2nd Ed.
Ashworth, M. H. (1968), Congruence and identical properties of modular forms (D. Phil. Thesis, Oxford)
Dyson, F. J. (1972), "Missed opportunities", Bull. Amer. Math. Soc., 78 (5): 635–652, doi:10.1090/S0002-9904-1972-12971-9, Zbl 0271.01005
Kolberg, O. (1962), "Congruences for Ramanujan's function τ(n)", Arbok Univ. Bergen Mat.-Natur. Ser. (11), MR 0158873, Zbl 0168.29502
Lehmer, D.H. (1947), "The vanishing of Ramanujan's function τ(n)", Duke Math. J., 14 (2): 429–433, doi:10.1215/s0012-7094-47-01436-1, Zbl 0029.34502
Lygeros, N. (2010), "A New Solution to the Equation τ(p) ≡ 0 (mod p)" (PDF), Journal of Integer Sequences, 13: Article 10.7.4
Mordell, Louis J. (1917), "On Mr. Ramanujan's empirical expansions of modular functions.", Proceedings of the Cambridge Philosophical Society, 19: 117–124, JFM 46.0605.01
Newman, M. (1972), A table of τ (p) modulo p, p prime, 3 ≤ p ≤ 16067, National Bureau of Standards
Rankin, Robert A. (1988), "Ramanujan's tau-function and its generalizations", in Andrews, George E. (ed.), Ramanujan revisited (Urbana-Champaign, Ill., 1987), Boston, MA: Academic Press, pp. 245–268, ISBN 978-0-12-058560-1, MR 0938968
Ramanujan, Srinivasa (1916), "On certain arithmetical functions", Trans. Camb. Philos. Soc., 22 (9): 159–184, MR 2280861
Serre, J-P. (1968), "Une interprétation des congruences relatives à la fonction τ de Ramanujan", Séminaire Delange-Pisot-Poitou, 14
Swinnerton-Dyer, H. P. F. (1973), "On l-adic representations and congruences for coefficients of modular forms", in Kuyk, Willem; Serre, Jean-Pierre (eds.), Modular Functions of One Variable III, Lecture Notes in Mathematics, vol. 350, pp. 1–55, doi:10.1007/978-3-540-37802-0, ISBN 978-3-540-06483-1, MR 0406931
Wilton, J. R. (1930), "Congruence properties of Ramanujan's function τ(n)", Proceedings of the London Mathematical Society, 31: 1–10, doi:10.1112/plms/s2-31.1.1
In number theory, an additive function is an arithmetic function f(n) of the positive integer variable n such that whenever a and b are coprime, the function applied to the product ab is the sum of the values of the function applied to a and b:
f(ab) = f(a) + f(b).
== Completely additive ==
An additive function f(n) is said to be completely additive if
f(ab) = f(a) + f(b)
holds for all positive integers a and b, even when they are not coprime. Totally additive is also used in this sense by analogy with totally multiplicative functions. If f is a completely additive function then f(1) = 0.
Every completely additive function is additive, but not vice versa.
== Examples ==
Examples of arithmetic functions which are completely additive are:
The restriction of the logarithmic function to ℕ.
The multiplicity of a prime factor p in n, that is the largest exponent m for which p^m divides n.
a0(n) – the sum of primes dividing n counting multiplicity, sometimes called sopfr(n), the potency of n or the integer logarithm of n (sequence A001414 in the OEIS). For example:
a0(4) = 2 + 2 = 4
a0(20) = a0(2^2 · 5) = 2 + 2 + 5 = 9
a0(27) = 3 + 3 + 3 = 9
a0(144) = a0(2^4 · 3^2) = a0(2^4) + a0(3^2) = 8 + 6 = 14
a0(2000) = a0(2^4 · 5^3) = a0(2^4) + a0(5^3) = 8 + 15 = 23
a0(2003) = 2003
a0(54,032,858,972,279) = 1240658
a0(54,032,858,972,302) = 1780417
a0(20,802,650,704,327,415) = 1240681
The function Ω(n), defined as the total number of prime factors of n, counting multiple factors multiple times, sometimes called the "Big Omega function" (sequence A001222 in the OEIS). For example:
Ω(1) = 0, since 1 has no prime factors
Ω(4) = 2
Ω(16) = Ω(2·2·2·2) = 4
Ω(20) = Ω(2·2·5) = 3
Ω(27) = Ω(3·3·3) = 3
Ω(144) = Ω(2^4 · 3^2) = Ω(2^4) + Ω(3^2) = 4 + 2 = 6
Ω(2000) = Ω(2^4 · 5^3) = Ω(2^4) + Ω(5^3) = 4 + 3 = 7
Ω(2001) = 3
Ω(2002) = 4
Ω(2003) = 1
Ω(54,032,858,972,279) = Ω(11 · 1993^2 · 1236661) = 4
Ω(54,032,858,972,302) = Ω(2 · 7^2 · 149 · 2081 · 1778171) = 6
Ω(20,802,650,704,327,415) = Ω(5 · 7 · 11^2 · 1993^2 · 1236661) = 7.
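A direct trial-division implementation of Ω makes the complete additivity easy to confirm numerically. This is an illustrative sketch, not an efficient factoring routine.

```python
def big_omega(n):
    """Omega(n): number of prime factors of n, counted with multiplicity."""
    count, p = 0, 2
    while p * p <= n:
        while n % p == 0:   # strip each copy of p separately
            n //= p
            count += 1
        p += 1
    if n > 1:               # whatever remains is a single prime factor
        count += 1
    return count
```

Complete additivity means Ω(ab) = Ω(a) + Ω(b) with no coprimality requirement, e.g. Ω(4 · 6) = Ω(4) + Ω(6) even though 4 and 6 share the factor 2.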
Examples of arithmetic functions which are additive but not completely additive are:
ω(n), defined as the total number of distinct prime factors of n (sequence A001221 in the OEIS). For example:
ω(4) = 1
ω(16) = ω(2^4) = 1
ω(20) = ω(2^2 · 5) = 2
ω(27) = ω(3^3) = 1
ω(144) = ω(2^4 · 3^2) = ω(2^4) + ω(3^2) = 1 + 1 = 2
ω(2000) = ω(2^4 · 5^3) = ω(2^4) + ω(5^3) = 1 + 1 = 2
ω(2001) = 3
ω(2002) = 4
ω(2003) = 1
ω(54,032,858,972,279) = 3
ω(54,032,858,972,302) = 5
ω(20,802,650,704,327,415) = 5
a1(n) – the sum of the distinct primes dividing n, sometimes called sopf(n) (sequence A008472 in the OEIS). For example:
a1(1) = 0
a1(4) = 2
a1(20) = 2 + 5 = 7
a1(27) = 3
a1(144) = a1(2^4 · 3^2) = a1(2^4) + a1(3^2) = 2 + 3 = 5
a1(2000) = a1(2^4 · 5^3) = a1(2^4) + a1(5^3) = 2 + 5 = 7
a1(2001) = 55
a1(2002) = 33
a1(2003) = 2003
a1(54,032,858,972,279) = 1238665
a1(54,032,858,972,302) = 1780410
a1(20,802,650,704,327,415) = 1238677
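Both of these examples factor through the set of distinct primes, so one helper suffices in code; the failure of complete additivity appears as soon as the two arguments share a prime. Function names below are our own.

```python
def distinct_primes(n):
    """Set of distinct primes dividing n (naive trial division)."""
    ps, p = set(), 2
    while p * p <= n:
        if n % p == 0:
            ps.add(p)
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        ps.add(n)
    return ps

def small_omega(n):          # omega(n), number of distinct prime factors
    return len(distinct_primes(n))

def a1(n):                   # sopf(n), sum of distinct prime factors
    return sum(distinct_primes(n))
```

For coprime arguments, ω(4 · 5) = ω(4) + ω(5); but ω(2 · 2) = 1 while ω(2) + ω(2) = 2, so ω is not completely additive.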
== Multiplicative functions ==
From any additive function f(n) it is possible to create a related multiplicative function g(n), which is a function with the property that whenever a and b are coprime then:

g(ab) = g(a) × g(b).
One such example is g(n) = 2^{f(n)}. Likewise, if f(n) is completely additive, then g(n) = 2^{f(n)} is completely multiplicative. More generally, we could consider the function g(n) = c^{f(n)}, where c is a nonzero real constant.
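To see the construction in action, take f = ω (additive) and c = 2, giving g(n) = 2^{ω(n)} — an illustrative choice of ours. The sketch checks multiplicativity on coprime arguments and its failure otherwise.

```python
from math import gcd

def omega(n):
    """Number of distinct prime factors of n."""
    count, p = 0, 2
    while p * p <= n:
        if n % p == 0:
            count += 1
            while n % p == 0:
                n //= p
        p += 1
    return count + (1 if n > 1 else 0)

def g(n):
    return 2 ** omega(n)

# g(ab) = g(a) * g(b) whenever gcd(a, b) = 1
coprime_ok = all(g(a * b) == g(a) * g(b)
                 for a in range(1, 40) for b in range(1, 40) if gcd(a, b) == 1)
```

Because ω itself is not completely additive, g(2 · 2) = 2 differs from g(2) · g(2) = 4: g is multiplicative but not completely multiplicative.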
== Summatory functions ==
Given an additive function f, let its summatory function be defined by M_f(x) := ∑_{n ≤ x} f(n). The average of f is given exactly as

M_f(x) = ∑_{p^α ≤ x} f(p^α) (⌊x/p^α⌋ − ⌊x/p^{α+1}⌋).
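The exact formula above simply regroups ∑_{n ≤ x} f(n) by the exact prime power p^α dividing each n, using the fact that ⌊x/p^α⌋ − ⌊x/p^{α+1}⌋ counts the n ≤ x exactly divisible by p^α. Here is a numerical confirmation for the illustrative choice f = Ω, for which f(p^α) = α.

```python
def big_omega(n):
    """Omega(n), prime factors counted with multiplicity."""
    count, p = 0, 2
    while p * p <= n:
        while n % p == 0:
            n //= p
            count += 1
        p += 1
    return count + (1 if n > 1 else 0)

def summatory_direct(x):
    return sum(big_omega(n) for n in range(1, x + 1))

def summatory_by_prime_powers(x):
    """Sum over p^alpha <= x of f(p^alpha)*(floor(x/p^alpha) - floor(x/p^(alpha+1)))."""
    total = 0
    for p in range(2, x + 1):
        if big_omega(p) != 1:      # p is prime exactly when Omega(p) == 1
            continue
        pa, alpha = p, 1
        while pa <= x:
            total += alpha * (x // pa - x // (pa * p))
            pa *= p
            alpha += 1
    return total
```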
The summatory functions over f can be expanded as

M_f(x) = xE(x) + O(√x · D(x))

where

E(x) = ∑_{p^α ≤ x} f(p^α) p^{−α} (1 − p^{−1})
D^2(x) = ∑_{p^α ≤ x} |f(p^α)|^2 p^{−α}.
The average of the function f^2 is also expressed by these functions as

M_{f^2}(x) = xE^2(x) + O(xD^2(x)).
There is always an absolute constant C_f > 0 such that for all natural numbers x ≥ 1,

∑_{n ≤ x} |f(n) − E(x)|^2 ≤ C_f · xD^2(x).
Let

ν(x; z) := (1/x) · #{n ≤ x : (f(n) − A(x))/B(x) ≤ z}.
Suppose that f is an additive function with −1 ≤ f(p^α) = f(p) ≤ 1 such that as x → ∞,

B(x) = ∑_{p ≤ x} f^2(p)/p → ∞.
Then ν(x; z) ~ G(z), where G(z) is the Gaussian distribution function

G(z) = (1/√(2π)) ∫_{−∞}^{z} e^{−t^2/2} dt.
Examples of this result related to the prime omega function and the numbers of prime divisors of shifted primes include the following for fixed z ∈ ℝ, where the relations hold for x ≫ 1:

#{n ≤ x : ω(n) − log log x ≤ z (log log x)^{1/2}} ~ xG(z),
#{p ≤ x : ω(p + 1) − log log x ≤ z (log log x)^{1/2}} ~ π(x)G(z).
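An empirical look at the first relation (an Erdős–Kac-type statement for ω): the sketch below sieves ω(n) for n ≤ 10^5, forms the normalized counts ν(x; z), and compares them with G(z) via `math.erf`. Convergence in log log x is notoriously slow, so only loose agreement is expected; the tolerance used in any comparison is an arbitrary illustrative choice.

```python
from math import erf, log, sqrt

def omega_sieve(N):
    """omega(n) for 0 <= n <= N: add 1 for each distinct prime divisor."""
    w = [0] * (N + 1)
    for p in range(2, N + 1):
        if w[p] == 0:                  # untouched so far => p is prime
            for m in range(p, N + 1, p):
                w[m] += 1
    return w

def G(z):                              # standard normal CDF
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

x = 10**5
w = omega_sieve(x)
ll = log(log(x))

def nu(z):
    """Empirical fraction of n <= x with (omega(n) - loglog x)/sqrt(loglog x) <= z."""
    return sum(1 for n in range(2, x + 1)
               if w[n] - ll <= z * sqrt(ll)) / x
```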
== See also ==
Sigma additivity
Prime omega function
Multiplicative function
Arithmetic function
== References ==
== Further reading ==
In number theory, the sum of squares function is an arithmetic function that gives the number of representations for a given positive integer n as the sum of k squares, where representations that differ only in the order of the summands or in the signs of the numbers being squared are counted as different. It is denoted by rk(n).
== Definition ==
The function is defined as
r_k(n) = |{(a_1, a_2, …, a_k) ∈ ℤ^k : n = a_1^2 + a_2^2 + ⋯ + a_k^2}|,

where |·| denotes the cardinality of a set. In other words, r_k(n) is the number of ways n can be written as a sum of k squares.
For example, r_2(1) = 4 since 1 = 0^2 + (±1)^2 = (±1)^2 + 0^2 where each sum has two sign combinations, and also r_2(2) = 4 since 2 = (±1)^2 + (±1)^2 with four sign combinations. On the other hand, r_2(3) = 0 because there is no way to represent 3 as a sum of two squares.
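A brute-force counter makes the sign-and-order convention concrete; it enumerates all ordered pairs directly, so it is only meant for small n.

```python
from math import isqrt

def r2(n):
    """Number of ordered pairs (a, b) in Z^2 with a*a + b*b == n."""
    count = 0
    for a in range(-isqrt(n), isqrt(n) + 1):
        b2 = n - a * a
        b = isqrt(b2)
        if b * b == b2:
            count += 2 if b > 0 else 1   # count both b and -b; 0 only once
    return count
```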
== Formulae ==
=== k = 2 ===
The number of ways to write a natural number as sum of two squares is given by r2(n). It is given explicitly by
r_2(n) = 4(d_1(n) − d_3(n))
where d1(n) is the number of divisors of n which are congruent to 1 modulo 4 and d3(n) is the number of divisors of n which are congruent to 3 modulo 4. Using sums, the expression can be written as:
r_2(n) = 4 ∑_{d ∣ n, d ≡ 1,3 (mod 4)} (−1)^{(d−1)/2}
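The divisor-count version of the formula can be checked directly against brute-force enumeration; both functions below use naive O(n) loops, purely for illustration.

```python
from math import isqrt

def r2_bruteforce(n):
    count = 0
    for a in range(-isqrt(n), isqrt(n) + 1):
        b2 = n - a * a
        b = isqrt(b2)
        if b * b == b2:
            count += 2 if b > 0 else 1
    return count

def r2_divisors(n):
    """4 * (d_1(n) - d_3(n)): divisors 1 mod 4 minus divisors 3 mod 4."""
    d1 = sum(1 for d in range(1, n + 1) if n % d == 0 and d % 4 == 1)
    d3 = sum(1 for d in range(1, n + 1) if n % d == 0 and d % 4 == 3)
    return 4 * (d1 - d3)
```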
The prime factorization n = 2^g · p_1^{f_1} p_2^{f_2} ⋯ q_1^{h_1} q_2^{h_2} ⋯, where the p_i are the prime factors of the form p_i ≡ 1 (mod 4) and the q_i are the prime factors of the form q_i ≡ 3 (mod 4), gives another formula:

r_2(n) = 4(f_1 + 1)(f_2 + 1)⋯, if all exponents h_1, h_2, ⋯ are even. If one or more h_i are odd, then r_2(n) = 0.
=== k = 3 ===
Gauss proved that for a squarefree number n > 4,
r_3(n) = 24h(−n) if n ≡ 3 (mod 8), 0 if n ≡ 7 (mod 8), and 12h(−4n) otherwise,
where h(m) denotes the class number of an integer m.
There exist extensions of Gauss' formula to arbitrary integer n.
=== k = 4 ===
The number of ways to represent n as the sum of four squares was due to Carl Gustav Jakob Jacobi and it is eight times the sum of all its divisors which are not divisible by 4, i.e.
r_4(n) = 8 ∑_{d ∣ n, 4 ∤ d} d.
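Jacobi's four-square formula is cheap to test against direct enumeration for small n; both implementations below are naive sketches.

```python
from math import isqrt

def r4_formula(n):
    """8 times the sum of the divisors of n not divisible by 4."""
    return 8 * sum(d for d in range(1, n + 1) if n % d == 0 and d % 4 != 0)

def r4_bruteforce(n):
    count, r = 0, isqrt(n)
    for a in range(-r, r + 1):
        for b in range(-r, r + 1):
            for c in range(-r, r + 1):
                d2 = n - a * a - b * b - c * c
                if d2 < 0:
                    continue
                d = isqrt(d2)
                if d * d == d2:
                    count += 2 if d > 0 else 1   # d and -d
    return count
```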
Representing n = 2^k m, where m is an odd integer, one can express r_4(n) in terms of the divisor function as follows:

r_4(n) = 8σ(2^{min{k,1}} m).
=== k = 6 ===
The number of ways to represent n as the sum of six squares is given by
r_6(n) = 4 ∑_{d ∣ n} d^2 (4(−4/(n/d)) − (−4/d)),

where (·/·) is the Kronecker symbol.
=== k = 8 ===
Jacobi also found an explicit formula for the case k = 8:
r_8(n) = 16 ∑_{d ∣ n} (−1)^{n+d} d^3.
== Generating function ==
The generating function of the sequence r_k(n) for fixed k can be expressed in terms of the Jacobi theta function:

ϑ(0; q)^k = ϑ_3^k(q) = ∑_{n=0}^{∞} r_k(n) q^n,

where

ϑ(0; q) = ∑_{n=−∞}^{∞} q^{n^2} = 1 + 2q + 2q^4 + 2q^9 + 2q^16 + ⋯.
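The theta-series identity can be verified coefficient by coefficient with plain polynomial arithmetic: truncate ϑ(0; q), raise it to the k-th power, and read off r_k(n). Helper names below are ours.

```python
def theta_coeffs(N):
    """Truncated theta series: coefficients of 1 + 2q + 2q^4 + 2q^9 + ..."""
    c = [0] * (N + 1)
    n = 0
    while n * n <= N:
        c[n * n] += 1 if n == 0 else 2   # n and -n give the same square
        n += 1
    return c

def poly_mul(a, b, N):
    """Product of two coefficient lists, truncated at degree N."""
    out = [0] * (N + 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j <= N:
                    out[i + j] += ai * bj
    return out

N = 30
theta = theta_coeffs(N)
r2_series = poly_mul(theta, theta, N)            # k = 2
r4_series = poly_mul(r2_series, r2_series, N)    # k = 4
```

The coefficients of `r2_series` reproduce the examples from the Definition section (r_2(1) = 4, r_2(2) = 4, r_2(3) = 0), and `r4_series` matches Jacobi's formula for small n.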
== Numerical values ==
The first 30 values for r_k(n), k = 1, …, 8
== See also ==
Integer partition
Jacobi's four-square theorem
Gauss circle problem
== References ==
== Further reading ==
Grosswald, Emil (1985). Representations of integers as sums of squares. Springer-Verlag. ISBN 0387961267.
== External links ==
Weisstein, Eric W. "Sum of Squares Function". MathWorld.
Sloane, N. J. A. (ed.). "Sequence A122141 (number of ways of writing n as a sum of d squares)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation.
Sloane, N. J. A. (ed.). "Sequence A004018 (Theta series of square lattice, r_2(n))". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation.
The Liouville lambda function, denoted by λ(n) and named after Joseph Liouville, is an important arithmetic function.
Its value is +1 if n is the product of an even number of prime numbers, and −1 if it is the product of an odd number of primes.
Explicitly, the fundamental theorem of arithmetic states that any positive integer n can be represented uniquely as a product of powers of primes: n = p_1^{a_1} ⋯ p_k^{a_k}, where p_1 < p_2 < ... < p_k are primes and the a_j are positive integers. (1 is given by the empty product.) The prime omega functions count the number of primes, with (Ω) or without (ω) multiplicity:
ω(n) = k,
Ω(n) = a_1 + a_2 + ⋯ + a_k.
λ(n) is defined by the formula

λ(n) = (−1)^{Ω(n)}

(sequence A008836 in the OEIS).
λ is completely multiplicative since Ω(n) is completely additive, i.e.: Ω(ab) = Ω(a) + Ω(b). Since 1 has no prime factors, Ω(1) = 0, so λ(1) = 1.
It is related to the Möbius function μ(n). Write n as n = a2b, where b is squarefree, i.e., ω(b) = Ω(b). Then
λ(n) = μ(b).
The sum of the Liouville function over the divisors of n is the characteristic function of the squares:
∑_{d ∣ n} λ(d) = 1 if n is a perfect square, and 0 otherwise.
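The characteristic-function identity is easy to confirm numerically with a trial-division λ; the sketch below is illustrative rather than efficient.

```python
from math import isqrt

def liouville(n):
    """lambda(n) = (-1)^Omega(n), via trial division."""
    sign, p = 1, 2
    while p * p <= n:
        while n % p == 0:   # each prime factor flips the sign
            n //= p
            sign = -sign
        p += 1
    if n > 1:
        sign = -sign
    return sign

def divisor_sum(n):
    """Sum of lambda(d) over the divisors d of n."""
    return sum(liouville(d) for d in range(1, n + 1) if n % d == 0)
```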
Möbius inversion of this formula yields

λ(n) = ∑_{d^2 ∣ n} μ(n/d^2).
The Dirichlet inverse of the Liouville function is the absolute value of the Möbius function, λ^{−1}(n) = |μ(n)| = μ^2(n), the characteristic function of the squarefree integers.
== Series ==
The Dirichlet series for the Liouville function is related to the Riemann zeta function by
ζ(2s)/ζ(s) = ∑_{n=1}^{∞} λ(n)/n^s.
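At s = 2 the identity reads ∑ λ(n)/n^2 = ζ(4)/ζ(2) = π^2/15, which a truncated sum reproduces to good accuracy. The truncation point 10^5 and the tolerance below are arbitrary illustrative choices; the tail of the series is bounded in absolute value by ∑_{n>N} 1/n^2 ≈ 1/N.

```python
from math import pi

def omega_big_sieve(N):
    """Omega(n) (with multiplicity) for n <= N, via a prime-power sieve."""
    w = [0] * (N + 1)
    for p in range(2, N + 1):
        if w[p] == 0:                      # p is prime
            pk = p
            while pk <= N:                 # add 1 for each power of p dividing n
                for m in range(pk, N + 1, pk):
                    w[m] += 1
                pk *= p
    return w

N = 10**5
w = omega_big_sieve(N)
partial = sum((-1) ** w[n] / n**2 for n in range(1, N + 1))
target = pi**2 / 15                        # zeta(4)/zeta(2)
```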
Also:
∑_{n=1}^{∞} λ(n) ln(n)/n = −ζ(2) = −π^2/6.
The Lambert series for the Liouville function is
∑_{n=1}^{∞} λ(n) q^n/(1 − q^n) = ∑_{n=1}^{∞} q^{n^2} = (1/2)(ϑ_3(q) − 1),

where ϑ_3(q) is the Jacobi theta function.
== Conjectures on weighted summatory functions ==
The Pólya problem is a question raised by George Pólya in 1919. Defining
L(n) = ∑_{k=1}^{n} λ(k)

(sequence A002819 in the OEIS),
the problem asks whether L(n) ≤ 0
for n > 1. The answer turns out to be no. The smallest counter-example is n = 906150257, found by Minoru Tanaka in 1980. It has since been shown that L(n) > 0.0618672√n for infinitely many positive integers n, while it can also be shown via the same methods that L(n) < −1.3892783√n for infinitely many positive integers n.
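A sieve makes it practical to watch L(n) stay non-positive over a modest range; actually reaching the counterexample near 9.06 × 10^8 is well beyond this illustrative sketch.

```python
def liouville_sieve(N):
    """lambda(n) for 1 <= n <= N, from a prime-power sieve of Omega(n)."""
    w = [0] * (N + 1)
    for p in range(2, N + 1):
        if w[p] == 0:                      # p is prime
            pk = p
            while pk <= N:
                for m in range(pk, N + 1, pk):
                    w[m] += 1
                pk *= p
    return [0] + [(-1) ** w[n] for n in range(1, N + 1)]

N = 10**4
lam = liouville_sieve(N)
L = [0] * (N + 1)                          # running sums L(n)
for n in range(1, N + 1):
    L[n] = L[n - 1] + lam[n]
```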
For any ε > 0, assuming the Riemann hypothesis, we have that the summatory function L(x) ≡ L_0(x) is bounded by

L(x) = O(√x exp(C · log^{1/2}(x) (log log x)^{5/2+ε})),

where C > 0 is some absolute limiting constant.
Define the related sum
T(n) = ∑_{k=1}^{n} λ(k)/k.
It was open for some time whether T(n) ≥ 0 for sufficiently big n ≥ n0 (this conjecture is occasionally–though incorrectly–attributed to Pál Turán). This was then disproved by Haselgrove (1958), who showed that T(n) takes negative values infinitely often. A confirmation of this positivity conjecture would have led to a proof of the Riemann hypothesis, as was shown by Pál Turán.
=== Generalizations ===
More generally, we can consider the weighted summatory functions over the Liouville function defined for any α ∈ ℝ as follows for positive integers x, where (as above) we have the special cases L(x) := L_0(x) and T(x) = L_1(x):

L_α(x) := ∑_{n ≤ x} λ(n)/n^α.
These α^{−1}-weighted summatory functions are related to the Mertens function, or weighted summatory functions of the Moebius function. In fact, we have that the so-termed non-weighted, or ordinary function L(x) precisely corresponds to the sum
L(x) = ∑_{d^2 ≤ x} M(x/d^2) = ∑_{d^2 ≤ x} ∑_{n ≤ x/d^2} μ(n).
Moreover, these functions satisfy similar bounding asymptotic relations. For example, whenever 0 ≤ α ≤ 1/2, we see that there exists an absolute constant C_α > 0 such that

L_α(x) = O(x^{1−α} exp(−C_α (log x)^{3/5} / (log log x)^{1/5})).
By an application of Perron's formula, or equivalently by a key (inverse) Mellin transform, we have that

ζ(2α + 2s)/ζ(α + s) = s · ∫_1^∞ L_α(x)/x^{s+1} dx,
which then can be inverted via the inverse transform to show that for x > 1, T ≥ 1 and 0 ≤ α < 1/2,

L_α(x) = (1/(2πi)) ∫_{σ_0 − iT}^{σ_0 + iT} ζ(2α + 2s)/ζ(α + s) · x^s/s ds + E_α(x) + R_α(x, T),
where we can take σ_0 := 1 − α + 1/log(x), and with the remainder terms defined such that E_α(x) = O(x^{−α}) and R_α(x, T) → 0 as T → ∞.
In particular, if we assume that the Riemann hypothesis (RH) is true and that all of the non-trivial zeros, denoted by ρ = 1/2 + iγ, of the Riemann zeta function are simple, then for any 0 ≤ α < 1/2 and x ≥ 1 there exists an infinite sequence of {T_v}_{v ≥ 1} which satisfies that v ≤ T_v ≤ v + 1 for all v such that

L_α(x) = x^{1/2−α} / ((1 − 2α)ζ(1/2)) + ∑_{|γ| < T_v} (ζ(2ρ)/ζ′(ρ)) · x^{ρ−α}/(ρ − α) + E_α(x) + R_α(x, T_v) + I_α(x),
where for any increasingly small 0 < ε < 1/2 − α we define

I_α(x) := (1/(2πi · x^α)) ∫_{ε+α−i∞}^{ε+α+i∞} ζ(2s)/ζ(s) · x^s/(s − α) ds,
and where the remainder term

R_α(x, T) ≪ x^{−α} + x^{1−α} log(x)/T + x^{1−α} / (T^{1−ε} log(x)),

which of course tends to 0 as T → ∞. These exact analytic formula expansions again share similar properties to those corresponding to the weighted Mertens function cases. Additionally, since ζ(1/2) < 0 we have another similarity in the form of L_α(x) to M(x) in so much as the dominant leading term in the previous formulas predicts a negative bias in the values of these functions over the positive natural numbers x.
== References ==
Pólya, G. (1919). "Verschiedene Bemerkungen zur Zahlentheorie". Jahresbericht der Deutschen Mathematiker-Vereinigung. 28: 31–40.
Haselgrove, C. Brian (1958). "A disproof of a conjecture of Pólya". Mathematika. 5 (2): 141–145. doi:10.1112/S0025579300001480. ISSN 0025-5793. MR 0104638. Zbl 0085.27102.
Lehman, R. (1960). "On Liouville's function". Mathematics of Computation. 14 (72): 311–320. doi:10.1090/S0025-5718-1960-0120198-5. MR 0120198.
Tanaka, Minoru (1980). "A Numerical Investigation on Cumulative Sum of the Liouville Function". Tokyo Journal of Mathematics. 3 (1): 187–189. doi:10.3836/tjm/1270216093. MR 0584557.
Weisstein, Eric W. "Liouville Function". MathWorld.
A.F. Lavrik (2001) [1994], "Liouville function", Encyclopedia of Mathematics, EMS Press
In arithmetic and algebra, the fifth power or sursolid of a number n is the result of multiplying five instances of n together:
n^5 = n × n × n × n × n.
Fifth powers are also formed by multiplying a number by its fourth power, or the square of a number by its cube.
The sequence of fifth powers of integers is:
0, 1, 32, 243, 1024, 3125, 7776, 16807, 32768, 59049, 100000, 161051, 248832, 371293, 537824, 759375, 1048576, 1419857, 1889568, 2476099, 3200000, 4084101, 5153632, 6436343, 7962624, 9765625, ... (sequence A000584 in the OEIS)
== Properties ==
For any integer n, the last decimal digit of n5 is the same as the last (decimal) digit of n, i.e.
n ≡ n^5 (mod 10)
By the Abel–Ruffini theorem, there is no general algebraic formula (formula expressed in terms of radical expressions) for the solution of polynomial equations containing a fifth power of the unknown as their highest power. This is the lowest power for which this is true. See quintic equation, sextic equation, and septic equation.
Along with the fourth power, the fifth power is one of two powers k that can be expressed as the sum of k − 1 other k-th powers, providing counterexamples to Euler's sum of powers conjecture. Specifically,
27^5 + 84^5 + 110^5 + 133^5 = 144^5 (Lander & Parkin, 1966)
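Both properties in this section are one-liners to confirm; the ranges below are arbitrary.

```python
def last_digit_property(limit):
    """n and n^5 end in the same decimal digit, for all n below limit."""
    return all(n % 10 == pow(n, 5, 10) for n in range(limit))

def lander_parkin():
    """The 1966 Lander-Parkin counterexample to Euler's sum of powers conjecture."""
    return 27**5 + 84**5 + 110**5 + 133**5 == 144**5
```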
== See also ==
Eighth power
Seventh power
Sixth power
Fourth power
Cube (algebra)
Square (algebra)
Perfect power
== Footnotes ==
== References ==
Råde, Lennart; Westergren, Bertil (2000). Springers mathematische Formeln: Taschenbuch für Ingenieure, Naturwissenschaftler, Informatiker, Wirtschaftswissenschaftler (in German) (3 ed.). Springer-Verlag. p. 44. ISBN 3-540-67505-1.
Vega, Georg (1783). Logarithmische, trigonometrische, und andere zum Gebrauche der Mathematik eingerichtete Tafeln und Formeln (in German). Vienna: Gedruckt bey Johann Thomas Edlen von Trattnern, kaiferl. königl. Hofbuchdruckern und Buchhändlern. p. 358. 1 32 243 1024.
Jahn, Gustav Adolph (1839). Tafeln der Quadrat- und Kubikwurzeln aller Zahlen von 1 bis 25500, der Quadratzahlen aller Zahlen von 1 bis 27000 und der Kubikzahlen aller Zahlen von 1 bis 24000 (in German). Leipzig: Verlag von Johann Ambrosius Barth. p. 241.
Deza, Elena; Deza, Michel (2012). Figurate Numbers. Singapore: World Scientific Publishing. p. 173. ISBN 978-981-4355-48-3.
Rosen, Kenneth H.; Michaels, John G. (2000). Handbook of Discrete and Combinatorial Mathematics. Boca Raton, Florida: CRC Press. p. 159. ISBN 0-8493-0149-1.
Prändel, Johann Georg (1815). Arithmetik in weiterer Bedeutung, oder Zahlen- und Buchstabenrechnung in einem Lehrkurse - mit Tabellen über verschiedene Münzsorten, Gewichte und Ellenmaaße und einer kleinen Erdglobuslehre (in German). Munich. p. 264.
Graphemics or graphematics is the linguistic study of writing systems and their basic components, i.e. graphemes.
At the beginning of the development of this area of linguistics, Ignace Gelb coined the term grammatology for this discipline; later some scholars suggested calling it graphology to match phonology, but that name is traditionally used for a pseudo-science. Others therefore suggested renaming the study of language-dependent pronunciation phonemics or phonematics instead, but this did not gain widespread acceptance either, so the terms graphemics and graphematics became more frequent.
Graphemics examines the specifics of written texts in a certain language and their correspondence to the spoken language. One major task is the descriptive analysis of implicit regularities in written words and texts (graphotactics) to formulate explicit rules (orthography) for the writing system that can be used in prescriptive education or in computer linguistics, e.g. for speech synthesis.
In analogy to phoneme and (allo)phone in phonology, the graphic units of language are graphemes, i.e. language-specific characters, and graphs, i.e. language-specific glyphs. Different schools of thought consider different entities to be graphemes; major points of divergence are the handling of punctuation, diacritic marks, digraphs or other multigraphs and non-alphabetic scripts.
Analogous to phonetics, the "etic" counterpart of graphemics is called graphetics and deals with the material side only (including paleography, typography and graphology).
== Grammatology ==
The term grammatology was first promoted in English by linguist Ignace Gelb in his 1952 book A Study of Writing. The equivalent word is recorded in German and French use long before then. Grammatology can examine the typology of scripts, the analysis of the structural properties of scripts, and the relationship between written and spoken language. In its broadest sense, some scholars also include the study of literacy in grammatology and, indeed, the impact of writing on philosophy, religion, science, administration and other aspects of the organization of society.
Historian Bruce Trigger associates grammatology with cultural evolution.
== Graphotactics ==
Graphotactics refers to rules which restrict the allowable sequences of letters in alphabetic languages. A common example is the partially correct "I before E except after C". However, there are exceptions; for example, Edward Carney, in his book A Survey of English Spelling, refers to the "I before E except after C" rule instead as an example of a "phonotactic rule".
Graphotactical rules are useful in error detection by optical character recognition systems.
In studies of Old English, "graphotactics" is also used to refer to the variable-length spacing between words.
== Toronto School of communication theory ==
The scholars most immediately associated with grammatology, understood as the history and theory of writing, include Eric Havelock (The Muse Learns to Write), Walter J. Ong (Orality and Literacy), Jack Goody (Domestication of the Savage Mind), and Marshall McLuhan (The Gutenberg Galaxy). Grammatology brings to any topic a consideration of the contribution of technology and the material and social apparatus of language. A more theoretical treatment of the approach may be seen in the works of Friedrich Kittler (Discourse Networks: 1800/1900) and Avital Ronell (The Telephone Book).
== Structuralism and Deconstruction ==
Swiss linguist Ferdinand de Saussure, who is considered to be a key figure in structural approaches to language, saw speech and writing as 'two distinct systems of signs', with the second having 'the sole purpose of representing the first', a view further explained in Peter Barry's Beginning Theory. In the 1960s, with the writings of Roland Barthes and Jacques Derrida, critiques have been put forth to this proposed relation.
In 1967, Jacques Derrida borrowed the term, but put it to different use, in his book Of Grammatology. Derrida aimed to show that writing is not simply a reproduction of speech, but that the way in which thoughts are recorded in writing strongly affects the nature of knowledge. Deconstruction from a grammatological perspective places the history of philosophy in general, and metaphysics in particular, in the context of writing as such. In this perspective metaphysics is understood as a category or classification system relative to the invention of alphabetic writing and its institutionalization in School. Plato's Academy, and Aristotle's Lyceum, are as much a part of the invention of literacy as is the introduction of the vowel to create the Classical Greek alphabet. Gregory Ulmer took up this trajectory, from historical to philosophical grammatology, to add applied grammatology (Applied Grammatology: Post(e)-Pedagogy from Jacques Derrida to Joseph Beuys, Johns Hopkins, 1985). Ulmer coined the term "electracy" to call attention to the fact that digital technologies and their elaboration in new media forms are part of an apparatus that is to these inventions what literacy is to alphabetic and print technologies.
== See also ==
Graphocentrism – Focus on written language as "best" language
Graphonomics – Study of handwriting and drawing
Deconstruction – Approach to understanding the relationship between text and meaning
List of writing systems
Of Grammatology – 1967 book by Jacques Derrida
Post-structuralism – Philosophical school and tradition
Structuralism – Intellectual current and methodological approach in the social science
Writing system – Convention of symbols representing language
Written language – Representation of a language through writing
== References == | Wikipedia/Graphemics |
A cyclic number is a natural number n such that n and φ(n) are coprime. Here φ is Euler's totient function. An equivalent definition is that a number n is cyclic if and only if any group of order n is cyclic.
Any prime number is clearly cyclic. All cyclic numbers are square-free.
Let n = p_1 p_2 … p_k where the p_i are distinct primes; then φ(n) = (p_1 − 1)(p_2 − 1)⋯(p_k − 1). If no p_i divides any (p_j − 1), then n and φ(n) have no common (prime) divisor, and n is cyclic.
The first cyclic numbers are 1, 2, 3, 5, 7, 11, 13, 15, 17, 19, 23, 29, 31, 33, 35, 37, 41, 43, 47, 51, 53, 59, 61, 65, 67, 69, 71, 73, 77, 79, 83, 85, 87, 89, 91, 95, 97, 101, 103, 107, 109, 113, 115, 119, 123, 127, 131, 133, 137, 139, 141, 143, 145, 149, ... (sequence A003277 in the OEIS).
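The coprimality definition translates directly into code; the totient below uses naive trial-division factoring, which is enough to reproduce the start of the sequence.

```python
from math import gcd

def phi(n):
    """Euler's totient via the product formula over prime factors."""
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:                      # one remaining prime factor
        result -= result // m
    return result

def is_cyclic(n):
    return gcd(n, phi(n)) == 1

cyclic = [n for n in range(1, 50) if is_cyclic(n)]
```

Note that every non-squarefree n (such as 4 or 9) fails the test, consistent with the remark that all cyclic numbers are squarefree.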
== References == | Wikipedia/Cyclic_number_(group_theory) |
Gilbreath's conjecture is a conjecture in number theory regarding the sequences generated by applying the forward difference operator to consecutive prime numbers and leaving the results unsigned, and then repeating this process on consecutive terms in the resulting sequence, and so forth. The statement is named after Norman L. Gilbreath who, in 1958, presented it to the mathematical community after observing the pattern by chance while doing arithmetic on a napkin. In 1878, eighty years before Gilbreath's discovery, François Proth had, however, published the same observations along with an attempted proof, which was later shown to be incorrect.
== Motivating arithmetic ==
Gilbreath observed a pattern while playing with the ordered sequence of prime numbers
2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, ...
Computing the absolute value of the difference between term n + 1 and term n in this sequence yields the sequence
1, 2, 2, 4, 2, 4, 2, 4, 6, 2, ...
If the same calculation is performed on this new sequence, and then again on each sequence that results, ad infinitum, the next five sequences in this list are
1, 0, 2, 2, 2, 2, 2, 2, 4, ...
1, 2, 0, 0, 0, 0, 0, 2, ...
1, 2, 0, 0, 0, 0, 2, ...
1, 2, 0, 0, 0, 2, ...
1, 2, 0, 0, 2, ...
What Gilbreath—and François Proth before him—noticed is that the first term in each series of differences appears to be 1.
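The rows above can be regenerated mechanically. A short Python sketch (the helper name is my own) applies the unsigned forward difference operator repeatedly to the primes below 32:

```python
def diff_rows(seq, count):
    # Repeatedly apply the absolute forward difference operator
    rows = [seq]
    for _ in range(count):
        seq = [abs(b - a) for a, b in zip(seq, seq[1:])]
        rows.append(seq)
    return rows

primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31]
rows = diff_rows(primes, 6)
for row in rows[1:]:
    print(row)
# Every difference row starts with 1, as Gilbreath observed
print(all(row[0] == 1 for row in rows[1:]))  # True
```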
== The conjecture ==
Stating Gilbreath's observation formally is significantly easier to do after devising a notation for the sequences in the previous section. Toward this end, let (p_n) denote the ordered sequence of prime numbers, and define each term in the sequence (d_n^1) by
{\displaystyle d_{n}^{1}=p_{n+1}-p_{n},}
where n is positive. Also, for each integer k greater than 1, let the terms in (d_n^k) be given by
{\displaystyle d_{n}^{k}=|d_{n+1}^{k-1}-d_{n}^{k-1}|.}
Gilbreath's conjecture states that every term in the sequence a_k = d_1^k for positive k is equal to 1.
== Verification and attempted proofs ==
François Proth released what he believed to be a proof of the statement that was later shown to be flawed. In 1993, Andrew Odlyzko verified that d_1^k is equal to 1 for k ≤ n = 3.4 × 10^11, but the conjecture remains an open problem. Instead of evaluating n rows, Odlyzko evaluated 635 rows and established that the 635th row started with a 1 and continued with only 0s and 2s for the next n numbers. This implies that the next n rows begin with a 1.
== Generalizations ==
In 1980, Martin Gardner published a conjecture by Hallard Croft that stated that the property of Gilbreath's conjecture (having a 1 in the first term of each difference sequence) should hold more generally for every sequence that begins with 2, subsequently contains only odd numbers, and has a sufficiently low bound on the gaps between consecutive elements in the sequence. This conjecture has also been repeated by later authors. However, it is false: for every initial subsequence of 2 and odd numbers, and every non-constant growth rate, there is a continuation of the subsequence by odd numbers whose gaps obey the growth rate but whose difference sequences fail to begin with 1 infinitely often. Odlyzko (1993) is more careful, writing of certain heuristic reasons for believing Gilbreath's conjecture that "the arguments above apply to many other sequences in which the first element is a 1, the others even, and where the gaps between consecutive elements are not too large and are sufficiently random." However, he does not give a formal definition of what "sufficiently random" means.
== See also ==
Difference operator
Prime gap
Rule 90, a cellular automaton that controls the behavior of the parts of the rows that contain only the values 0 and 2
== References == | Wikipedia/Gilbreath's_conjecture |
In mathematics, the Mersenne conjectures concern the characterization of a kind of prime numbers called Mersenne primes, meaning prime numbers that are a power of two minus one.
== Original Mersenne conjecture ==
The original, called Mersenne's conjecture, was a statement by Marin Mersenne in his Cogitata Physico-Mathematica (1644; see e.g. Dickson 1919) that the numbers 2^n − 1 were prime for n = 2, 3, 5, 7, 13, 17, 19, 31, 67, 127 and 257 (sequence A109461 in the OEIS), and were composite for all other positive integers n ≤ 257. The first seven entries of his list (2^n − 1 for n = 2, 3, 5, 7, 13, 17, 19) had already been proven to be primes by trial division before Mersenne's time; only the last four entries were new claims by Mersenne. Due to the size of those last numbers, Mersenne did not and could not test all of them, nor could his peers in the 17th century. It was eventually determined, after three centuries and the availability of new techniques such as the Lucas–Lehmer test, that Mersenne's conjecture contained five errors, namely two entries are composite (those corresponding to the primes n = 67, 257) and three primes are missing (those corresponding to the primes n = 61, 89, 107). The correct list for n ≤ 257 is: n = 2, 3, 5, 7, 13, 17, 19, 31, 61, 89, 107 and 127.
While Mersenne's original conjecture is false, it may have led to the New Mersenne conjecture.
== New Mersenne conjecture ==
The New Mersenne conjecture or Bateman, Selfridge and Wagstaff conjecture (Bateman et al. 1989) states that for any odd natural number p, if any two of the following conditions hold, then so does the third:
p = 2^k ± 1 or p = 4^k ± 3 for some natural number k. (OEIS: A122834)
2^p − 1 is prime (a Mersenne prime). (OEIS: A000043)
(2^p + 1)/3 is prime (a Wagstaff prime). (OEIS: A000978)
If p is an odd composite number, then 2^p − 1 and (2^p + 1)/3 are both composite. Therefore it is only necessary to test primes to verify the truth of the conjecture.
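For small odd primes the three conditions can be tested directly. A Python sketch (helper names are my own; primality by plain trial division, which is adequate at these sizes) confirms that no odd prime up to 31 satisfies exactly two of the conditions:

```python
def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def condition1(p):
    # p = 2^k + 1 or 2^k - 1 or 4^k + 3 or 4^k - 3 for some natural k
    k = 1
    while 2 ** k <= p + 3:
        if p in (2 ** k - 1, 2 ** k + 1, 4 ** k - 3, 4 ** k + 3):
            return True
        k += 1
    return False

for p in [3, 5, 7, 11, 13, 17, 19, 23, 29, 31]:
    held = condition1(p) + is_prime(2 ** p - 1) + is_prime((2 ** p + 1) // 3)
    # Any two conditions should imply the third, so "exactly two" never occurs
    assert held != 2, p
    print(p, held)
```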
Currently, there are nine known numbers for which all three conditions hold: 3, 5, 7, 13, 17, 19, 31, 61, 127 (sequence A107360 in the OEIS). Bateman et al. expected that no number greater than 127 satisfies all three conditions, and showed that heuristically no greater number would even satisfy two conditions, which would make the New Mersenne conjecture trivially true.
If at least one of the double Mersenne numbers MM61 and MM127 is prime, then the New Mersenne conjecture would be false, since both M61 and M127 satisfy the first condition (since they are Mersenne primes themselves), but (2^M61 + 1)/3 and (2^M127 + 1)/3 are both composite; they are divisible by 1328165573307087715777 and 886407410000361345663448535540258622490179142922169401, respectively.
As of 2025, all the Mersenne primes up to 2^57,885,161 − 1 are known, and for none of these does the first condition or the third condition hold except for the ones just mentioned.
Primes which satisfy at least one condition are
2, 3, 5, 7, 11, 13, 17, 19, 23, 31, 43, 61, 67, 79, 89, 101, 107, 127, 167, 191, 199, 257, 313, 347, 521, 607, 701, 1021, 1279, 1709, 2203, 2281, 2617, 3217, 3539, 4093, 4099, 4253, 4423, 5807, 8191, 9689, 9941, ... (sequence A120334 in the OEIS)
Note that the two primes for which the original Mersenne conjecture is false (67 and 257) satisfy the first condition of the new conjecture (67 = 4^3 + 3, 257 = 2^8 + 1), but not the other two. 89 and 107, which were missed by Mersenne, satisfy the second condition but not the other two. Mersenne may have thought that 2^p − 1 is prime only if p = 2^k ± 1 or p = 4^k ± 3 for some natural number k, but if he thought it was "if and only if" he would have included 61.
The New Mersenne conjecture can be thought of as an attempt to salvage the centuries-old Mersenne's conjecture, which is false. However, according to Robert D. Silverman, John Selfridge agreed that the New Mersenne conjecture is "obviously true" as it was chosen to fit the known data and counter-examples beyond those cases are exceedingly unlikely. It may be regarded more as a curious observation than as an open question in need of proving.
Prime Pages shows that the New Mersenne conjecture is true for all integers less than or equal to 10000000 by systematically listing all primes for which it is already known that one of the conditions holds. In fact, it is currently known that the New Mersenne conjecture is true for all integers up to the current search limit of the Mersenne primes, for all integers less than 1073741827 which satisfy the first condition, and for all known integers which satisfy the second or third condition.
== Lenstra–Pomerance–Wagstaff conjecture ==
Lenstra, Pomerance, and Wagstaff have conjectured that there are infinitely many Mersenne primes, and, more precisely, that the number of Mersenne primes less than x is asymptotically approximated by
{\displaystyle e^{\gamma }\cdot \log _{2}\log _{2}(x),}
where γ is the Euler–Mascheroni constant.
In other words, the number of Mersenne primes with exponent p less than y is asymptotically
{\displaystyle e^{\gamma }\cdot \log _{2}(y).}
This means that there should on average be about e^γ · log₂(10) ≈ 5.92 primes p of a given number of decimal digits such that M_p is prime. The conjecture is fairly accurate for the first 40 Mersenne primes, but between 2^20,000,000 and 2^85,000,000 there are at least 12, rather than the expected number, which is around 3.7.
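The exponent form of the conjecture is easy to compare against the known data. A Python sketch (the estimate function name is my own; γ hard-coded to double precision) checks e^γ · log₂(y) against the count of known Mersenne prime exponents p ≤ y:

```python
import math

# Exponents of the first twenty Mersenne primes (known data)
exponents = [2, 3, 5, 7, 13, 17, 19, 31, 61, 89, 107, 127,
             521, 607, 1279, 2203, 2281, 3217, 4253, 4423]

gamma = 0.5772156649015329  # Euler–Mascheroni constant

def lpw_estimate(y):
    # Conjectured asymptotic count of Mersenne prime exponents p <= y
    return math.exp(gamma) * math.log2(y)

for y in (100, 1000, 5000):
    actual = sum(1 for p in exponents if p <= y)
    print(y, actual, round(lpw_estimate(y), 2))
```

The estimate overshoots slightly at these small scales (e.g. 14 actual exponents below 1000 against an estimate near 17.7), consistent with the asymptotic nature of the conjecture.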
More generally, the number of primes p ≤ y such that (a^p − b^p)/(a − b) is prime (where a, b are coprime integers, a > 1, −a < b < a, a and b are not both perfect r-th powers for any natural number r > 1, and −4ab is not a perfect fourth power) is asymptotically
{\displaystyle (e^{\gamma }+m\cdot \log _{e}(2))\cdot \log _{a}(y),}
where m is the largest nonnegative integer such that a and −b are both perfect 2^m-th powers. The case of Mersenne primes is one case of (a, b) = (2, 1).
== See also ==
Gillies' conjecture on the distribution of numbers of prime factors of Mersenne numbers
Lucas–Lehmer primality test
Lucas primality test
Catalan's Mersenne conjecture
Mersenne's laws
== References ==
Bateman, P. T.; Selfridge, J. L.; Wagstaff Jr., Samuel S. (1989). "The new Mersenne conjecture". American Mathematical Monthly. 96 (2). Mathematical Association of America: 125–128. doi:10.2307/2323195. JSTOR 2323195. MR 0992073.
Dickson, L. E. (1919). History of the Theory of Numbers. Carnegie Institute of Washington. p. 31. OL 6616242M. Reprinted by Chelsea Publishing, New York, 1971, ISBN 0-8284-0086-5.
== External links ==
The Prime Glossary. New Mersenne prime conjecture. | Wikipedia/Mersenne_conjectures |
Comptes rendus de l'Académie des Sciences (French pronunciation: [kɔ̃t ʁɑ̃dy də lakademi de sjɑ̃s], Proceedings of the Academy of Sciences), or simply Comptes rendus, is a French scientific journal published since 1835. It is the proceedings of the French Academy of Sciences. It is currently split into seven sections, published on behalf of the Academy until 2020 by Elsevier: Mathématique, Mécanique, Physique, Géoscience, Palévol, Chimie, and Biologies. As of 2020, the Comptes Rendus journals are published by the Academy with a diamond open access model.
== Naming history ==
The journal has had several name changes and splits over the years.
=== 1835–1965 ===
Comptes rendus was initially established in 1835 as Comptes rendus hebdomadaires des séances de l'Académie des Sciences. It began as an alternative publication pathway for more prompt publication than the Mémoires de l'Académie des Sciences, which had been published since 1666. The Mémoires, which continued to be published alongside the Comptes rendus throughout the nineteenth century, had a publication cycle which resulted in memoirs being published years after they had been presented to the Academy. Some academicians continued to publish in the Mémoires because of the strict page limits in the Comptes rendus.
=== 1966–1980 ===
After 1965 this title was split into five sections:
Série A (Sciences mathématiques) – mathematics
Série B (Sciences physiques) – physics and geosciences
Série C (Sciences chimiques) – chemistry
Série D (Sciences naturelles) – life sciences
Vie académique – academy notices and miscellanea (between 1968 and 1970, and again between 1979 and 1983)
Series A and B were published together in one volume except in 1974.
=== 1981–1993 ===
The areas were rearranged as follows:
Série I (Sciences mathématiques) – mathematics
Série II (Mécanique-physique, Chimie, Sciences de l'univers, Sciences de la Terre) – physics, chemistry, astronomy and geosciences
Série III (Sciences de la vie) – life sciences
Vie académique – academy notices and miscellanea (the last 3 volumes of the second edition, between 1981 and 1983)
Vie des sciences – A renamed Vie académique (from 1984 to 1996)
=== 1994–2001 ===
These publications remained the same:
Série I (Sciences mathématiques) – mathematics
Série III (Sciences de la Vie) – life sciences
Vie des sciences – A renamed Vie académique (until 1996)
The areas published in Série II were slowly split into other publications in ways that caused some confusion.
In 1994, Série II, which covered physics, chemistry, astronomy and geosciences, was replaced by Série IIA and Série IIB. Série IIA was exclusive to geosciences, and Série IIB covered chemistry and astronomy and the now-distinct mechanics and physics.
In 1998, Série IIB covered mechanics, physics and astronomy; chemistry got its separate publication, Série IIC.
In 2000, Série IIB became dedicated exclusively to mechanics in May. Astronomy was redefined as astrophysics, which along with physics was covered by the new Série IV. Série IV began publishing in March; however, Série IIB published two more issues on physics and astrophysics in April and May before starting the new run.
=== 2002 onwards ===
The present naming and subject assignment was established in 2002:
Comptes Rendus Biologies – life sciences except paleontology and evolutionary biology. Continues in part Série IIC (biochemistry) and III.
Comptes Rendus Chimie – chemistry. Continues in part Série IIC.
Comptes Rendus Géoscience – geosciences. Continues in part Série IIA.
Comptes Rendus Mathématique – mathematics. Continues Série I.
Comptes Rendus Mécanique – mechanics. Continues Série IIB.
Comptes Rendus Palévol – paleontology and evolutionary biology. Continues in part Série IIA and III.
Comptes Rendus Physique – topical issues in physics (mainly optics, astrophysics and particle physics). Continues Série IV.
== Online open archives ==
The Comptes rendus de l'Académie des Sciences publications are available through the National Library of France as part of its free online library and archive of other historical documents and works of art, Gallica. The publications available online are:
Comptes rendus hebdomadaires des séances de l'Académie des sciences (1835–1965)
Séries A et B, Sciences Mathématiques et Sciences Physiques (1966–1973)
Série A, Sciences Mathématiques, (1974)
Série B, Sciences Physiques, (1974)
Séries A et B, Sciences Mathématiques et Sciences Physiques (1975–1980)
Besides the material for this timeframe, this collection also has a separate set of scans of all the material of Série I - Mathématique from 1981 to 1990
Série C, Sciences Chimiques
Série D, Sciences Naturelles
Vie Académique (1968–1970)
Vie Académique (1979–1983)
Série I - Mathématique
Séries A et B, Sciences Mathématiques et Sciences Physiques (1975–1980) has a different set of scans for all of this material.
Série II - Mécanique-physique, Chimie, Sciences de l'univers, Sciences de la Terre
The link to Série I - Mathématique (1984–1996) includes a different set of scans for the first 3 issues of 1981 of this series.
Série III - Sciences de la vie
Série I - Mathématique
Séries A et B, Sciences Mathématiques et Sciences Physiques (1975–1980) has a different set of scans for this series' material until 1990.
This collection contains a different set of scans of the 1981 material of Série II - Mécanique-physique, Chimie, Sciences de l'univers, Sciences de la Terre (1981–1983).
Série II - Mécanique-physique, Chimie, Sciences de l'univers, Sciences de la Terre (1984–1994)
The first year (1994) of material of Série IIb - Mécanique, physique, chimie, astronomie (1995–1996) is misfiled in this collection.
Série IIa - Sciences de la terre et des planètes (1994–1996)
Série IIb - Mécanique, physique, chimie, astronomie (1995–1996)
The first year of material (1994) is misfiled together with Série II - Mécanique-physique, Chimie, Sciences de l'univers, Sciences de la Terre (1994–1996).
Série III - Sciences de la vie
Vie des sciences
All publications from 1997 to 2019 were published commercially by Elsevier. From 2020 on, the Comptes Rendus Palevol have been published by the Muséum National d'Histoire Naturelle (Paris) for the Académie des Sciences. All other series of the Comptes Rendus of the Académie des Sciences have been published (from 2020 on) by Mersenne under a Diamond Open Access model.
== References ==
== External links ==
"Comptes Rendus official website". French Academy of Sciences. Retrieved 23 May 2024.
Comptes Rendus de l'Académie des sciences numérisés sur le site de la Bibliothèque nationale de France
Scholarly Societies project: French Academy of Sciences page; provides information on naming and publication history up to 1980, as well as on previous journals of the Academy. Retrieved 2006-DEC-10.
Bibliothèque nationale de France: Catalog record and full-text scans of Comptes rendus. Retrieved 2009-JUN-22.
Comptes rendus series: [1]
ScienceDirect list of titles (from 1997 onwards): https://www.sciencedirect.com/browse/journals-and-books?searchPhrase=comptes | Wikipedia/Comptes_rendus_de_l'Académie_des_Sciences |
The Bunyakovsky conjecture (or Bouniakowsky conjecture) gives a criterion for a polynomial f(x) in one variable with integer coefficients to give infinitely many prime values in the sequence f(1), f(2), f(3), ….
It was stated in 1857 by the Russian mathematician Viktor Bunyakovsky. The following three conditions are necessary for f(x) to have the desired prime-producing property:
The leading coefficient is positive,
The polynomial is irreducible over the rationals (and integers), and
There is no common factor for all the infinitely many values f(1), f(2), f(3), …. (In particular, the coefficients of f(x) should be relatively prime. It is not necessary for the values f(n) to be pairwise relatively prime.)
Bunyakovsky's conjecture is that these conditions are sufficient: if f(x) satisfies (1)–(3), then f(n) is prime for infinitely many positive integers n.
A seemingly weaker yet equivalent statement to Bunyakovsky's conjecture is that for every integer polynomial f(x) that satisfies (1)–(3), f(n) is prime for at least one positive integer n: but then, since the translated polynomial f(x + n) still satisfies (1)–(3), in view of the weaker statement f(m) is prime for at least one positive integer m > n, so that f(n) is indeed prime for infinitely many positive integers n. Bunyakovsky's conjecture is a special case of Schinzel's hypothesis H, one of the most famous open problems in number theory.
== Discussion of three conditions ==
The first condition is necessary because if the leading coefficient is negative then f(x) < 0 for all large x, and thus f(n) is not a (positive) prime number for large positive integers n. (This merely satisfies the sign convention that primes are positive.)
The second condition is necessary because if f(x) = g(x)h(x) where the polynomials g(x) and h(x) have integer coefficients, then we have f(n) = g(n)h(n) for all integers n; but g(x) and h(x) take the values 0 and ±1 only finitely many times, so f(n) is composite for all large n.
The second condition also fails for polynomials reducible over the rationals.
For example, the integer-valued polynomial P(x) = (1/12)·x^4 + (11/12)·x^2 + 2 doesn't satisfy condition (2) since P(x) = (1/12)·(x^4 + 11x^2 + 24) = (1/12)·(x^2 + 3)·(x^2 + 8), so at least one of the latter two factors must be a divisor of 12 in order to have P(x) prime, which holds only if |x| ≤ 3. The corresponding values are 2, 3, 7, 17, so these are the only such primes for integral x since all of these numbers are prime. This isn't a counterexample to the Bunyakovsky conjecture since condition (2) fails.
The third condition, that the numbers f(n) have gcd 1, is obviously necessary, but is somewhat subtle, and is best understood by a counterexample. Consider f(x) = x^2 + x + 2, which has positive leading coefficient and is irreducible, and the coefficients are relatively prime; however f(n) is even for all integers n, and so is prime only finitely many times (namely at n = 0, −1, when f(n) = 2).
In practice, the easiest way to verify the third condition is to find one pair of positive integers m and n such that f(m) and f(n) are relatively prime. In general, for any integer-valued polynomial f(x) = c_0 + c_1 x + ⋯ + c_d x^d we can use
{\displaystyle \gcd\{f(n)\}_{n\geq 1}=\gcd(f(m),f(m+1),\dots ,f(m+d))}
for any integer m, so the gcd is given by values of f(x) at any d + 1 consecutive integers. In the example above, we have f(−1) = 2, f(0) = 2, f(1) = 4 and so the gcd is 2, which implies that x^2 + x + 2 has even values on the integers.
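That recipe takes only a few lines to run. A Python sketch (standard library only) evaluates f(x) = x^2 + x + 2 at d + 1 = 3 consecutive integers and takes the gcd:

```python
from math import gcd
from functools import reduce

def f(x):
    return x * x + x + 2  # degree d = 2

# gcd of f at any d + 1 consecutive integers equals the gcd of all its values
values = [f(x) for x in (-1, 0, 1)]
common = reduce(gcd, values)
print(values, common)  # [2, 2, 4] 2 — every integer value of f is even
```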
Alternatively, when an integer polynomial f(x) is written in the basis of binomial coefficient polynomials:
{\displaystyle f(x)=a_{0}+a_{1}{\binom {x}{1}}+\cdots +a_{d}{\binom {x}{d}},}
each coefficient a_i is an integer and
{\displaystyle \gcd\{f(n)\}_{n\geq 1}=\gcd(a_{0},a_{1},\dots ,a_{d}).}
In the example above, this is:
{\displaystyle x^{2}+x+2=2{\binom {x}{2}}+2{\binom {x}{1}}+2,}
and the coefficients in the right side of the equation have gcd 2.
Using this gcd formula, it can be proved that gcd{f(n)}_{n ≥ 1} = 1 if and only if there are positive integers m and n such that f(m) and f(n) are relatively prime.
== Examples ==
=== A simple quadratic polynomial ===
Some prime values of the polynomial f(x) = x^2 + 1 are listed in the following table. (Values of x form OEIS sequence A005574; those of x^2 + 1 form A002496.)
That n^2 + 1 should be prime infinitely often is a problem first raised by Euler, and it is also the fifth Hardy–Littlewood conjecture and the fourth of Landau's problems. Despite the extensive numerical evidence, it is not known that this sequence extends indefinitely.
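The opening terms of both sequences can be regenerated with a few lines of Python (trial-division primality helper of my own):

```python
def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# x with x^2 + 1 prime (OEIS A005574) and the primes themselves (A002496)
xs = [x for x in range(1, 27) if is_prime(x * x + 1)]
print(xs)                       # [1, 2, 4, 6, 10, 14, 16, 20, 24, 26]
print([x * x + 1 for x in xs])  # [2, 5, 17, 37, 101, 197, 257, 401, 577, 677]
```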
=== Cyclotomic polynomials ===
The cyclotomic polynomials Φ_k(x) for k = 1, 2, 3, … satisfy the three conditions of Bunyakovsky's conjecture, so for all k, there should be infinitely many natural numbers n such that Φ_k(n) is prime. It can be shown that if for all k, there exists an integer n > 1 with Φ_k(n) prime, then for all k, there are infinitely many natural numbers n with Φ_k(n) prime.
The following sequence gives the smallest natural number n > 1 such that Φ_k(n) is prime, for k = 1, 2, 3, …:
3, 2, 2, 2, 2, 2, 2, 2, 2, 2, 5, 2, 2, 2, 2, 2, 2, 6, 2, 4, 3, 2, 10, 2, 22, 2, 2, 4, 6, 2, 2, 2, 2, 2, 14, 3, 61, 2, 10, 2, 14, 2, 15, 25, 11, 2, 5, 5, 2, 6, 30, 11, 24, 7, 7, 2, 5, 7, 19, 3, 2, 2, 3, 30, 2, 9, 46, 85, 2, 3, 3, 3, 11, 16, 59, 7, 2, 2, 22, 2, 21, 61, 41, 7, 2, 2, 8, 5, 2, 2, ... (sequence A085398 in the OEIS).
This sequence is known to contain some large terms: the 545th term is 2706, the 601st is 2061, and the 943rd is 2042. This case of Bunyakovsky's conjecture is widely believed, but again it is not known that the sequence extends indefinitely.
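The opening terms of this sequence can be reproduced by evaluating Φ_k(n) through the Möbius-product identity Φ_k(n) = Π_{d|k} (n^{k/d} − 1)^{μ(d)}, valid for n ≥ 2. A Python sketch (all helper names are my own; trial-division primality, fine at these sizes):

```python
def mobius(n):
    # Möbius function by trial factorization
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0  # squared prime factor
            result = -result
        p += 1
    return -result if n > 1 else result

def cyclotomic_at(k, n):
    # Phi_k(n) = prod over d | k of (n^(k/d) - 1)^mu(d), exact for n >= 2
    num = den = 1
    for d in range(1, k + 1):
        if k % d == 0:
            mu = mobius(d)
            if mu == 1:
                num *= n ** (k // d) - 1
            elif mu == -1:
                den *= n ** (k // d) - 1
    return num // den

def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def smallest_n(k):
    n = 2
    while not is_prime(cyclotomic_at(k, n)):
        n += 1
    return n

print([smallest_n(k) for k in range(1, 13)])
# [3, 2, 2, 2, 2, 2, 2, 2, 2, 2, 5, 2] — matching A085398
```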
Usually, there is an integer n between 2 and φ(k) (where φ is Euler's totient function, so φ(k) is the degree of Φ_k(x)) such that Φ_k(n) is prime, but there are exceptions; the first few are:
1, 2, 25, 37, 44, 68, 75, 82, 99, 115, 119, 125, 128, 159, 162, 179, 183, 188, 203, 213, 216, 229, 233, 243, 277, 289, 292, ....
== Partial results: only Dirichlet's theorem ==
To date, the only case of Bunyakovsky's conjecture that has been proved is that of polynomials of degree 1. This is Dirichlet's theorem, which states that when a and m are relatively prime integers there are infinitely many prime numbers p ≡ a (mod m). This is Bunyakovsky's conjecture for f(x) = a + mx (or a − mx if m < 0).
The third condition in Bunyakovsky's conjecture for a linear polynomial mx + a is equivalent to a and m being relatively prime.
No single case of Bunyakovsky's conjecture for degree greater than 1 is proved, although numerical evidence in higher degree is consistent with the conjecture.
== Generalized Bunyakovsky conjecture ==
Given k ≥ 1 polynomials with positive degrees and integer coefficients, each satisfying the three conditions, assume that for any prime p there is an n such that none of the values of the k polynomials at n are divisible by p. Given these assumptions, it is conjectured that there are infinitely many positive integers n such that all values of these k polynomials at x = n are prime. This conjecture is equivalent to the generalized Dickson conjecture and Schinzel's hypothesis H.
== See also ==
Integer-valued polynomial
Cohn's irreducibility criterion
Schinzel's hypothesis H
Bateman–Horn conjecture
Hardy and Littlewood's conjecture F
== References ==
=== Bibliography ===
Ed Pegg, Jr. "Bouniakowsky conjecture". MathWorld.
Rupert, Wolfgang M. (1998-08-05). "Reducibility of polynomials f(x, y) modulo p". arXiv:math/9808021.
Bouniakowsky, V. (1857). "Sur les diviseurs numériques invariables des fonctions rationnelles entières". Mém. Acad. Sc. St. Pétersbourg. 6: 305–329. | Wikipedia/Bunyakovsky_conjecture |
Legendre's conjecture, proposed by Adrien-Marie Legendre, states that there is a prime number between n^2 and (n + 1)^2 for every positive integer n.
The conjecture is one of Landau's problems (1912) on prime numbers, and is one of many open problems on the spacing of prime numbers.
== Prime gaps ==
If Legendre's conjecture is true, the gap between any prime p and the next largest prime would be O(√p), as expressed in big O notation. It is one of a family of results and conjectures related to prime gaps, that is, to the spacing between prime numbers. Others include Bertrand's postulate, on the existence of a prime between n and 2n; Oppermann's conjecture on the existence of primes between n^2, n(n + 1), and (n + 1)^2; Andrica's conjecture and Brocard's conjecture on the existence of primes between squares of consecutive primes; and Cramér's conjecture that the gaps are always much smaller, of the order (log p)^2. If Cramér's conjecture is true, Legendre's conjecture would follow for all sufficiently large n. Harald Cramér also proved that the Riemann hypothesis implies a weaker bound of O(√p log p) on the size of the largest prime gaps.
By the prime number theorem, the expected number of primes between n^2 and (n + 1)^2 is approximately n/ln n, and it is additionally known that for almost all intervals of this form the actual number of primes (OEIS: A014085) is asymptotic to this expected number. Since this number is large for large n, this lends credence to Legendre's conjecture. It is known that the prime number theorem gives an accurate count of the primes within short intervals, either unconditionally or based on the Riemann hypothesis, but the lengths of the intervals for which this has been proven are longer than the intervals between consecutive squares, too long to prove Legendre's conjecture.
== Partial results ==
It follows from a result by Ingham that for all sufficiently large n, there is a prime between the consecutive cubes n^3 and (n + 1)^3. Dudek proved that this holds for all n ≥ e^{e^{33.3}}.
Dudek also proved that for m = 4.971 × 10^9 and any positive integer n, there is a prime between n^m and (n + 1)^m. Mattner lowered this to m = 1438989, which was further reduced to m = 155 by Cully-Hugill.
Baker, Harman, and Pintz proved that there is a prime in the interval [x − x^{21/40}, x] for all large x.
A table of maximal prime gaps shows that the conjecture holds to at least n^2 = 4 · 10^18, meaning n = 2 · 10^9.
== Notes ==
== References ==
== External links ==
Weisstein, Eric W., "Legendre's conjecture", MathWorld | Wikipedia/Legendre's_conjecture |
In number theory, Brocard's conjecture is the conjecture that there are at least four prime numbers between (p_n)^2 and (p_{n+1})^2, where p_n is the nth prime number, for every n ≥ 2. The conjecture is named after Henri Brocard. It is widely believed that this conjecture is true; however, it remains unproven as of 2025.
The number of primes between prime squares is 2, 5, 6, 15, 9, 22, 11, 27, ... OEIS: A050216.
Legendre's conjecture that there is a prime between consecutive integer squares directly implies that there are at least two primes between prime squares for $p_{n} \geq 3$, since $p_{n+1} - p_{n} \geq 2$.
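The counts quoted above, and Brocard's lower bound of four from the second prime onward, can be checked directly (Python sketch; helper names are ours):

```python
def is_prime(m):
    """Trial division primality test for small m."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

primes = [p for p in range(2, 60) if is_prime(p)]
# primes strictly between consecutive prime squares p^2 and q^2
counts = [sum(1 for k in range(p * p + 1, q * q) if is_prime(k))
          for p, q in zip(primes, primes[1:])]
print(counts[:8])  # [2, 5, 6, 15, 9, 22, 11, 27]
```

The values match OEIS A050216, and every term after the first is at least 4, as Brocard's conjecture asserts.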
== See also ==
Prime-counting function
== Notes == | Wikipedia/Brocard's_conjecture |
In number theory, the Pólya conjecture (or Pólya's conjecture) stated that "most" (i.e., 50% or more) of the natural numbers less than any given number have an odd number of prime factors. The conjecture was set forth by the Hungarian mathematician George Pólya in 1919, and proved false in 1958 by C. Brian Haselgrove. Though mathematicians typically refer to this statement as the Pólya conjecture, Pólya never actually conjectured that the statement was true; rather, he showed that the truth of the statement would imply the Riemann hypothesis. For this reason, it is more accurately called "Pólya's problem".
The size of the smallest counterexample is often used to demonstrate the fact that a conjecture can be true for many cases and still fail to hold in general, providing an illustration of the strong law of small numbers.
== Statement ==
The Pólya conjecture states that for any n > 1, if the natural numbers less than or equal to n (excluding 0) are partitioned into those with an odd number of prime factors and those with an even number of prime factors, then the former set has at least as many members as the latter set. Repeated prime factors are counted repeatedly; for instance, we say that 18 = 2 × 3 × 3 has an odd number of prime factors, while 60 = 2 × 2 × 3 × 5 has an even number of prime factors.
Equivalently, it can be stated in terms of the summatory Liouville function, with the conjecture being that
$$L(n) = \sum_{k=1}^{n} \lambda(k) \leq 0$$
for all n > 1. Here, $\lambda(k) = (-1)^{\Omega(k)}$ is positive if the number of prime factors of the integer k is even, and is negative if it is odd. The big Omega function $\Omega$ counts the total number of prime factors of an integer, with multiplicity.
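The summatory Liouville function is easy to tabulate; this sketch (Python, names ours) checks $L(n) \leq 0$ for all $2 \leq n \leq 10^{4}$, far below the first counterexample near $9 \times 10^{8}$:

```python
def big_omega(k):
    """Omega(k): number of prime factors of k counted with multiplicity."""
    count, d = 0, 2
    while d * d <= k:
        while k % d == 0:
            k //= d
            count += 1
        d += 1
    return count + (1 if k > 1 else 0)

values = {}
L = 0
for k in range(1, 10001):
    L += (-1) ** big_omega(k)  # Liouville lambda(k)
    values[k] = L
print(all(values[n] <= 0 for n in range(2, 10001)))  # True
```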
== Disproof ==
The Pólya conjecture was disproved by C. Brian Haselgrove in 1958. He showed that the conjecture has a counterexample, which he estimated to be around $1.845 \times 10^{361}$.
A (much smaller) explicit counterexample, of n = 906,180,359 was given by R. Sherman Lehman in 1960; the smallest counterexample is n = 906,150,257, found by Minoru Tanaka in 1980.
The conjecture fails to hold for most values of n in the region of 906,150,257 ≤ n ≤ 906,488,079. In this region, the summatory Liouville function reaches a maximum value of 829 at n = 906,316,571.
== References ==
== External links ==
Weisstein, Eric W. "Pólya Conjecture". MathWorld. | Wikipedia/Pólya_conjecture |
In number theory, Artin's conjecture on primitive roots states that a given integer a that is neither a square number nor −1 is a primitive root modulo infinitely many primes p. The conjecture also ascribes an asymptotic density to these primes. This conjectural density equals Artin's constant or a rational multiple thereof.
The conjecture was made by Emil Artin to Helmut Hasse on September 27, 1927, according to the latter's diary. The conjecture is still unresolved as of 2025. In fact, there is no single value of a for which Artin's conjecture is proved.
== Formulation ==
Let a be an integer that is not a square number and not −1. Write $a = a_{0}b^{2}$ with $a_{0}$ square-free. Denote by S(a) the set of prime numbers p such that a is a primitive root modulo p. Then the conjecture states
S(a) has a positive asymptotic density inside the set of primes. In particular, S(a) is infinite.
Under the conditions that a is not a perfect power and a0 is not congruent to 1 modulo 4 (sequence A085397 in the OEIS), this density is independent of a and equals Artin's constant, which can be expressed as an infinite product
$$C_{\mathrm{Artin}} = \prod_{p\ \mathrm{prime}} \left(1 - \frac{1}{p(p-1)}\right) = 0.3739558136\ldots$$
(sequence A005596 in the OEIS).
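Because the factors are $1 - O(1/p^{2})$, the product converges quickly, so a truncated product already agrees with the quoted digits to several places (Python sketch, helper names ours):

```python
def primes_up_to(limit):
    """Sieve of Eratosthenes returning the list of primes up to limit."""
    flags = bytearray(limit + 1)
    for i in range(2, int(limit ** 0.5) + 1):
        if not flags[i]:
            flags[i * i :: i] = b"\x01" * len(flags[i * i :: i])
    return [p for p in range(2, limit + 1) if not flags[p]]

C = 1.0
for p in primes_up_to(10 ** 5):
    C *= 1 - 1 / (p * (p - 1))
print(C)  # close to Artin's constant 0.3739558136...
```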
The positive integers satisfying these conditions are:
2, 3, 6, 7, 10, 11, 12, 14, 15, 18, 19, 22, 23, 24, 26, 28, 30, 31, 34, 35, 38, 39, 40, 42, 43, 44, 46, 47, 48, 50, 51, 54, 55, 56, 58, 59, 60, 62, 63, … (sequence A085397 in the OEIS)
The negative integers satisfying these conditions are:
2, 4, 5, 6, 9, 10, 13, 14, 16, 17, 18, 20, 21, 22, 24, 25, 26, 29, 30, 33, 34, 36, 37, 38, 40, 41, 42, 45, 46, 49, 50, 52, 53, 54, 56, 57, 58, 61, 62, … (sequence A120629 in the OEIS)
Similar conjectural product formulas exist for the density when a does not satisfy the above conditions. In these cases, the conjectural density is always a rational multiple of $C_{\mathrm{Artin}}$. If a is a square number or a = −1, then the density is 0. More generally, if a is a perfect pth power for a prime p, then the density needs to be multiplied by
$$\frac{p(p-2)}{p^{2}-p-1};$$
if there is more than one such prime p, then the density is multiplied by
$$\frac{p(p-2)}{p^{2}-p-1}$$
for all such primes p. Similarly, if $a_{0}$ is congruent to 1 mod 4, then the density needs to be multiplied by
$$\frac{p(p-1)}{p^{2}-p-1}$$
for all prime factors p of $a_{0}$.
== Examples ==
For example, take a = 2. The conjecture is that the set of primes p for which 2 is a primitive root has the density $C_{\mathrm{Artin}}$. The set of such primes is (sequence A001122 in the OEIS)
S(2) = {3, 5, 11, 13, 19, 29, 37, 53, 59, 61, 67, 83, 101, 107, 131, 139, 149, 163, 173, 179, 181, 197, 211, 227, 269, 293, 317, 347, 349, 373, 379, 389, 419, 421, 443, 461, 467, 491, ...}.
It has 38 elements smaller than 500 and there are 95 primes smaller than 500. The ratio (which conjecturally tends to $C_{\mathrm{Artin}}$) is 38/95 = 2/5 = 0.4.
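The listed primes and the count of 38 can be reproduced by testing whether the multiplicative order of 2 modulo p equals p − 1 (Python sketch, helper names ours):

```python
def is_prime(m):
    """Trial division primality test for small m."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def mult_order(a, p):
    """Multiplicative order of a modulo a prime p not dividing a."""
    k, x = 1, a % p
    while x != 1:
        x = x * a % p
        k += 1
    return k

S2 = [p for p in range(3, 500)
      if is_prime(p) and mult_order(2, p) == p - 1]
print(len(S2), S2[:6])  # 38 [3, 5, 11, 13, 19, 29]
```

Note that 7 is correctly absent: 2 has order 3 modulo 7, not 6.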
For a = 8 = 2³, which is a power of 2, the conjectured density is $\frac{3}{5}C$, and for a = 5, which is congruent to 1 mod 4, the density is $\frac{20}{19}C$.
== Partial results ==
In 1967, Christopher Hooley published a conditional proof for the conjecture, assuming certain cases of the generalized Riemann hypothesis.
Without the generalized Riemann hypothesis, there is no single value of a for which Artin's conjecture is proved. However, D. R. Heath-Brown proved in 1986 (Corollary 1) that at least one of 2, 3, or 5 is a primitive root modulo infinitely many primes p. He also proved (Corollary 2) that there are at most two primes for which Artin's conjecture fails.
== Some variations of Artin's problem ==
=== Elliptic curve ===
For an elliptic curve $E$ given by $y^{2} = x^{3} + ax + b$, Lang and Trotter gave a conjecture for rational points on $E(\mathbb{Q})$ analogous to Artin's primitive root conjecture.
Specifically, they conjectured that there exists a constant $C_{E}$ for a given point $P$ of infinite order in the set of rational points $E(\mathbb{Q})$ such that the number $N(P)$ of primes $p \leq x$ for which the reduction $\bar{P}$ of the point $P \pmod{p}$ generates the whole group of points $\bar{E}(\mathbb{F}_{p})$ is given by
$$N(P) \sim C_{E}\left(\frac{x}{\log x}\right).$$
Here we exclude the primes which divide the denominators of the coordinates of $P$.
Gupta and Murty proved the Lang and Trotter conjecture for $E/\mathbb{Q}$ with complex multiplication under the Generalized Riemann Hypothesis, for primes splitting in the relevant imaginary quadratic field.
=== Even order ===
Krishnamurty posed the question of how often the period of the decimal expansion of $1/p$, for a prime $p$, is even.
The claim is that the period of the expansion of $1/p$ in base $g$ is even if and only if
$$g^{\frac{p-1}{2^{j}}} \not\equiv 1 \pmod{p},$$
where $j \geq 1$ is the unique integer such that $p \equiv 1 + 2^{j} \pmod{2^{j+1}}$ (that is, $2^{j}$ is the exact power of 2 dividing $p - 1$).
The result was proven by Hasse in 1966.
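The parity criterion is easy to validate numerically for base 10: the period of 1/p is the multiplicative order of 10 mod p, and $2^{j}$ is the exact power of 2 dividing p − 1 (Python sketch, helper names ours):

```python
def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def mult_order(a, p):
    """Multiplicative order of a modulo a prime p not dividing a."""
    k, x = 1, a % p
    while x != 1:
        x = x * a % p
        k += 1
    return k

ok = True
for p in range(3, 300):
    if not is_prime(p) or p == 5:
        continue  # skip 2 and 5, which divide the base 10
    period = mult_order(10, p)                 # decimal period of 1/p
    j = ((p - 1) & -(p - 1)).bit_length() - 1  # 2-adic valuation of p - 1
    ok = ok and (period % 2 == 0) == (pow(10, (p - 1) >> j, p) != 1)
print(ok)  # True
```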
== See also ==
Stephens' constant, a number that plays the same role in a generalization of Artin's conjecture as Artin's constant plays here
Brown–Zassenhaus conjecture
Full reptend prime
Cyclic number (group theory)
== References == | Wikipedia/Artin's_conjecture_on_primitive_roots |
In number theory, the Bateman–Horn conjecture is a statement concerning the frequency of prime numbers among the values of a system of polynomials, named after mathematicians Paul T. Bateman and Roger A. Horn who proposed it in 1962. It provides a vast generalization of such conjectures as the Hardy and Littlewood conjecture on the density of twin primes or their conjecture on primes of the form n2 + 1; it is also a strengthening of Schinzel's hypothesis H.
== Definition ==
The Bateman–Horn conjecture provides a conjectured density for the positive integers at which a given set of polynomials all have prime values. For a set of m distinct irreducible polynomials ƒ1, ..., ƒm with integer coefficients, an obvious necessary condition for the polynomials to simultaneously generate prime values infinitely often is that they satisfy Bunyakovsky's property: there is no prime number p that divides their product f(n) for every positive integer n. For if there were such a prime p, then having all values of the polynomials simultaneously prime for a given n would force at least one of them to equal p, which can happen for only finitely many values of n (otherwise a polynomial would have infinitely many roots), whereas the conjecture seeks conditions under which the values are simultaneously prime for infinitely many n.
An integer n is prime-generating for the given system of polynomials if every polynomial ƒi(n) produces a prime number when given n as its argument. If P(x) is the number of prime-generating integers among the positive integers less than x, then the Bateman–Horn conjecture states that
$$P(x) \sim \frac{C}{D} \int_{2}^{x} \frac{dt}{(\log t)^{m}},$$
where D is the product of the degrees of the polynomials and where C is the product over primes p
$$C = \prod_{p} \frac{1 - N(p)/p}{(1 - 1/p)^{m}},$$
with $N(p)$ the number of solutions to
$$f(n) \equiv 0 \pmod{p}.$$
Bunyakovsky's property implies $N(p) < p$ for all primes p, so each factor in the infinite product C is positive.
Intuitively one then naturally expects that the constant C is itself positive, and with some work this can be proved.
(Work is needed since some infinite products of positive numbers equal zero.)
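For the twin-prime system ƒ1(x) = x, ƒ2(x) = x + 2 (so m = 2 and f(n) = n(n + 2)), the constant C can be approximated by counting N(p) directly for each small prime (Python sketch, helper names ours); the truncated product is close to the Hardy–Littlewood twin prime constant 2C2 ≈ 1.320:

```python
def primes_up_to(limit):
    """Sieve of Eratosthenes returning the list of primes up to limit."""
    flags = bytearray(limit + 1)
    for i in range(2, int(limit ** 0.5) + 1):
        if not flags[i]:
            flags[i * i :: i] = b"\x01" * len(flags[i * i :: i])
    return [p for p in range(2, limit + 1) if not flags[p]]

m = 2
C = 1.0
for p in primes_up_to(3000):
    n_p = sum(1 for n in range(p) if n * (n + 2) % p == 0)  # N(p) by brute force
    C *= (1 - n_p / p) / (1 - 1 / p) ** m
print(C)  # roughly 1.320
```

Here N(2) = 1 and N(p) = 2 for odd p, so each factor is strictly positive, as the argument above requires.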
== Negative numbers ==
As stated above, the conjecture is not true: the single polynomial ƒ1(x) = −x produces only negative numbers when given a positive argument, so the fraction of prime numbers among its values is always zero. There are two equally valid ways of refining the conjecture to avoid this difficulty:
One may require all the polynomials to have positive leading coefficients, so that only a constant number of their values can be negative.
Alternatively, one may allow negative leading coefficients but count a negative number as being prime when its absolute value is prime.
It is reasonable to allow negative numbers to count as primes as a step towards formulating more general conjectures that apply to other systems of numbers than the integers, but at the same time it is easy to just negate the polynomials if necessary to reduce to the case where the leading coefficients are positive.
== Examples ==
If the system of polynomials consists of the single polynomial ƒ1(x) = x, then the values n for which ƒ1(n) is prime are themselves the prime numbers, and the conjecture becomes a restatement of the prime number theorem.
If the system of polynomials consists of the two polynomials ƒ1(x) = x and ƒ2(x) = x + 2, then the values of n for which both ƒ1(n) and ƒ2(n) are prime are just the smaller of the two primes in every pair of twin primes. In this case, the Bateman–Horn conjecture reduces to the Hardy–Littlewood conjecture on the density of twin primes, according to which the number of twin prime pairs less than x is
$$\pi_{2}(x) \sim 2 \prod_{p \geq 3} \frac{p(p-2)}{(p-1)^{2}} \, \frac{x}{(\log x)^{2}} \approx 1.32 \, \frac{x}{(\log x)^{2}}.$$
== Analogue for polynomials over a finite field ==
When the integers are replaced by the polynomial ring F[u] for a finite field F, one can ask how often a finite set of polynomials $f_{i}(x)$ in F[u][x] simultaneously takes irreducible values in F[u] when we substitute for x elements of F[u]. Well-known analogies between integers and F[u] suggest an analogue of the Bateman–Horn conjecture over F[u], but the analogue is wrong. For example, data suggest that the polynomial $x^{3} + u$ in $\mathbb{F}_{3}[u][x]$ takes (asymptotically) the expected number of irreducible values when x runs over polynomials in $\mathbb{F}_{3}[u]$ of odd degree, but it appears to take (asymptotically) twice as many irreducible values as expected when x runs over polynomials of degree that is 2 mod 4, while it (provably) takes no irreducible values at all when x runs over nonconstant polynomials with degree that is a multiple of 4. An analogue of the Bateman–Horn conjecture over F[u] which fits numerical data uses an additional factor in the asymptotics which depends on the value of d mod 4, where d is the degree of the polynomials in F[u] over which x is sampled.
== References ==
Bateman, Paul T.; Horn, Roger A. (1962), "A heuristic asymptotic formula concerning the distribution of prime numbers", Mathematics of Computation, 16 (79): 363–367, doi:10.2307/2004056, JSTOR 2004056, MR 0148632, Zbl 0105.03302
Guy, Richard K. (2004), Unsolved problems in number theory (3rd ed.), Springer-Verlag, ISBN 978-0-387-20860-2, Zbl 1058.11001
Friedlander, John; Granville, Andrew (1991), "Limitations to the equi-distribution of primes. IV.", Proceedings of the Royal Society A, 435 (1893): 197–204, Bibcode:1991RSPSA.435..197F, doi:10.1098/rspa.1991.0138.
Soren Laing Alethia-Zomlefer; Lenny Fukshansky; Stephan Ramon Garcia (25 July 2018), ONE CONJECTURE TO RULE THEM ALL: BATEMAN–HORN, pp. 1–45, arXiv:1807.08899 | Wikipedia/Bateman–Horn_conjecture |
In number theory, Lemoine's conjecture, named after Émile Lemoine, also known as Levy's conjecture, after Hyman Levy, states that all odd integers greater than 5 can be represented as the sum of an odd prime number and an even semiprime.
== History ==
The conjecture was posed by Émile Lemoine in 1895, but was erroneously attributed by MathWorld to Hyman Levy who pondered it in the 1960s.
A similar conjecture by Sun in 2008 states that all odd integers greater than 3 can be represented as the sum of a prime number and the product of two consecutive positive integers, that is, as p + x(x+1).
== Formal definition ==
To put it algebraically, 2n + 1 = p + 2q always has a solution in primes p and q (not necessarily distinct) for n > 2. The Lemoine conjecture is similar to but stronger than Goldbach's weak conjecture.
== Example ==
For example, the odd integer 47 can be expressed as the sum of a prime and a semiprime in four different ways:
47 = 13 + 2×17 = 37 + 2×5 = 41 + 2×3 = 43 + 2×2.
The number of ways this can be done is given by OEIS sequence A046927 (Number of ways to express 2n+1 as p+2q where p and q are primes). Lemoine's conjecture is that this sequence contains no zeros after the first three.
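Counting representations is a one-liner once a primality test is available (Python sketch; `lemoine_ways` is our own helper name):

```python
def is_prime(m):
    """Trial division primality test for small m."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def lemoine_ways(m):
    """Number of ways to write the odd number m as p + 2q with p, q prime."""
    return sum(1 for q in range(2, (m - 1) // 2 + 1)
               if is_prime(q) and is_prime(m - 2 * q))

print(lemoine_ways(47))                                     # 4
print(all(lemoine_ways(m) > 0 for m in range(7, 2000, 2)))  # True
```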
== Evidence ==
According to MathWorld, the conjecture has been verified by Corbitt up to $10^{9}$. A blog post from June 2019 additionally claimed to have verified the conjecture up to $10^{10}$.
A proof was claimed in 2017 by Agama and Gensel, but this was later found to be flawed.
== See also ==
Lemoine's conjecture and extensions
== Notes ==
== References ==
Emile Lemoine, L'intermédiare des mathématiciens, 1 (1894), 179; ibid 3 (1896), 151.
H. Levy, "On Goldbach's Conjecture", Math. Gaz. 47 (1963): 274
L. Hodges, "A lesser-known Goldbach conjecture", Math. Mag., 66 (1993): 45–47. doi:10.2307/2690477. JSTOR 2690477
John O. Kiltinen and Peter B. Young, "Goldbach, Lemoine, and a Know/Don't Know Problem", Mathematics Magazine, 58(4) (Sep., 1985), pp. 195–203. doi:10.2307/2689513. JSTOR 2689513
Richard K. Guy, Unsolved Problems in Number Theory New York: Springer-Verlag 2004: C1
== External links ==
Levy's Conjecture by Jay Warendorff, Wolfram Demonstrations Project. | Wikipedia/Lemoine's_conjecture |
In number theory, Polignac's conjecture was made by Alphonse de Polignac in 1849 and states:
For any positive even number n, there are infinitely many prime gaps of size n. In other words: There are infinitely many cases of two consecutive prime numbers with difference n.
Although the conjecture has not yet been proven or disproven for any given value of n, in 2013 an important breakthrough was made by Yitang Zhang who proved that there are infinitely many prime gaps of size n for some value of n < 70,000,000. Later that year, James Maynard announced a related breakthrough which proved that there are infinitely many prime gaps of some size less than or equal to 600. As of April 14, 2014, one year after Zhang's announcement, according to the Polymath project wiki, n has been reduced to 246. Further, assuming the Elliott–Halberstam conjecture and its generalized form, the Polymath project wiki states that n has been reduced to 12 and 6, respectively.
For n = 2, it is the twin prime conjecture. For n = 4, it says there are infinitely many cousin primes (p, p + 4). For n = 6, it says there are infinitely many sexy primes (p, p + 6) with no prime between p and p + 6.
Dickson's conjecture generalizes Polignac's conjecture to cover all prime constellations.
== Conjectured density ==
Let $\pi_{n}(x)$, for even n, be the number of prime gaps of size n below x.
The first Hardy–Littlewood conjecture says the asymptotic density is of the form
$$\pi_{n}(x) \sim 2C_{n} \frac{x}{(\ln x)^{2}} \sim 2C_{n} \int_{2}^{x} \frac{dt}{(\ln t)^{2}},$$
where $C_{n}$ is a function of n, and $\sim$ means that the quotient of the two expressions tends to 1 as x approaches infinity.
$C_{2}$ is the twin prime constant
$$C_{2} = \prod_{p \geq 3} \frac{p(p-2)}{(p-1)^{2}} \approx 0.660161815846869573927812110014\ldots$$
where the product extends over all prime numbers p ≥ 3.
$C_{n}$ is $C_{2}$ multiplied by a number which depends on the odd prime factors q of n:
$$C_{n} = C_{2} \prod_{q \mid n} \frac{q-1}{q-2}.$$
For example, $C_{4} = C_{2}$ and $C_{6} = 2C_{2}$. Twin primes have the same conjectured density as cousin primes, and half that of sexy primes.
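The predicted relative densities show up (slowly) in counts of consecutive-prime gaps; this sketch (Python, helper names ours) tallies gaps of size 2, 4, and 6 below $10^{6}$:

```python
def primes_up_to(limit):
    """Sieve of Eratosthenes returning the list of primes up to limit."""
    flags = bytearray(limit + 1)
    for i in range(2, int(limit ** 0.5) + 1):
        if not flags[i]:
            flags[i * i :: i] = b"\x01" * len(flags[i * i :: i])
    return [p for p in range(2, limit + 1) if not flags[p]]

ps = primes_up_to(10 ** 6)
gap_counts = {2: 0, 4: 0, 6: 0}
for a, b in zip(ps, ps[1:]):
    if b - a in gap_counts:
        gap_counts[b - a] += 1
print(gap_counts)  # gap 6 is the most common, as C6 = 2*C2 predicts
```

Gaps of size 2 and 4 occur about equally often, and gaps of size 6 clearly dominate, though convergence to the exact factor of 2 is slow.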
Note that each odd prime factor q of n increases the conjectured density compared to twin primes by a factor of $\tfrac{q-1}{q-2}$. A heuristic argument follows. It relies on some unproven assumptions, so the conclusion remains a conjecture. The chance of a random odd prime q dividing either a or a + 2 in a random "potential" twin prime pair is $\tfrac{2}{q}$, since q divides exactly one of the q numbers from a to a + q − 1. Now assume q divides n and consider a potential prime pair (a, a + n). q divides a + n if and only if q divides a, and the chance of that is $\tfrac{1}{q}$. The chance of (a, a + n) being free from the factor q, divided by the chance that (a, a + 2) is free from q, then becomes $\tfrac{q-1}{q}$ divided by $\tfrac{q-2}{q}$. This equals $\tfrac{q-1}{q-2}$, which transfers to the conjectured prime density. In the case of n = 6, the argument simplifies to: if a is a random number, then 3 has a probability of 2/3 of dividing a or a + 2, but only a probability of 1/3 of dividing a or a + 6 (these are the same event, since 3 divides 6), so the latter pair is conjectured to be twice as likely to both be prime.
== Notes ==
== References ==
Alphonse de Polignac, Recherches nouvelles sur les nombres premiers. Comptes Rendus des Séances de l'Académie des Sciences (1849)
Weisstein, Eric W. "de Polignac's Conjecture". MathWorld.
Weisstein, Eric W. "k-Tuple Conjecture". MathWorld. | Wikipedia/Polignac's_conjecture |
In number theory, Grimm's conjecture (named after Carl Albert Grimm, 1 April 1926 – 2 January 2018) states that to each element of a set of consecutive composite numbers one can assign a distinct prime that divides it. It was first published in American Mathematical Monthly, 76(1969) 1126-1128.
== Formal statement ==
If n + 1, n + 2, ..., n + k are all composite numbers, then there are k distinct primes $p_{i}$ such that $p_{i}$ divides n + i for 1 ≤ i ≤ k.
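Whether such an assignment exists for a given run of composites is a bipartite matching problem (numbers on one side, their prime divisors on the other). This sketch (Python, augmenting-path matching, helper names ours) confirms the conjecture for every maximal run of consecutive composites below $10^{4}$:

```python
def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def prime_divisors(m):
    """Distinct prime divisors of m by trial division."""
    ds, d = [], 2
    while d * d <= m:
        if m % d == 0:
            ds.append(d)
            while m % d == 0:
                m //= d
        d += 1
    if m > 1:
        ds.append(m)
    return ds

def grimm_ok(n, k):
    """Can n+1, ..., n+k each be assigned a distinct prime divisor?
    Uses Kuhn's augmenting-path algorithm for bipartite matching."""
    divs = [prime_divisors(n + i) for i in range(1, k + 1)]
    match = {}  # prime -> index of the number currently using it

    def augment(i, seen):
        for p in divs[i]:
            if p not in seen:
                seen.add(p)
                if p not in match or augment(match[p], seen):
                    match[p] = i
                    return True
        return False

    return all(augment(i, set()) for i in range(k))

ps = [p for p in range(2, 10 ** 4) if is_prime(p)]
ok = all(grimm_ok(a, b - a - 1) for a, b in zip(ps, ps[1:]) if b - a > 1)
print(ok)  # True
```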
== Weaker version ==
A weaker, though still unproven, version of this conjecture states: If there is no prime in the interval $[n+1, n+k]$, then $\prod_{1 \leq x \leq k} (n+x)$ has at least k distinct prime divisors.
== See also ==
Prime gap
== References ==
Erdös, P.; Selfridge, J. L. (1971). "Some problems on the prime factors of consecutive integers II" (PDF). Proceedings of the Washington State University Conference on Number Theory: 13–21.
Grimm, C. A. (1969). "A conjecture on consecutive composite numbers". The American Mathematical Monthly. 76 (10): 1126–1128. doi:10.2307/2317188. JSTOR 2317188.
Guy, R. K. "Grimm's Conjecture." §B32 in Unsolved Problems in Number Theory, 3rd ed., Springer Science+Business Media, pp. 133–134, 2004. ISBN 0-387-20860-7
Laishram, Shanta; Murty, M. Ram (2012). "Grimm's conjecture and smooth numbers". The Michigan Mathematical Journal. 61 (1): 151–160. arXiv:1306.0765. doi:10.1307/mmj/1331222852.
Laishram, Shanta; Shorey, T. N. (2006). "Grimm's conjecture on consecutive integers". International Journal of Number Theory. 2 (2): 207–211. doi:10.1142/S1793042106000498.
Ramachandra, K. T.; Shorey, T. N.; Tijdeman, R. (1975). "On Grimm's problem relating to factorisation of a block of consecutive integers". Journal für die reine und angewandte Mathematik. 273: 109–124. doi:10.1515/crll.1975.273.109.
Ramachandra, K. T.; Shorey, T. N.; Tijdeman, R. (1976). "On Grimm's problem relating to factorisation of a block of consecutive integers. II". Journal für die reine und angewandte Mathematik. 288: 192–201. doi:10.1515/crll.1976.288.192.
Sukthankar, Neela S. (1973). "On Grimm's conjecture in algebraic number fields". Indagationes Mathematicae (Proceedings). 76 (5): 475–484. doi:10.1016/1385-7258(73)90073-5.
Sukthankar, Neela S. (1975). "On Grimm's conjecture in algebraic number fields. II". Indagationes Mathematicae (Proceedings). 78 (1): 13–25. doi:10.1016/1385-7258(75)90009-8.
Sukthankar, Neela S. (1977). "On Grimm's conjecture in algebraic number fields-III". Indagationes Mathematicae (Proceedings). 80 (4): 342–348. doi:10.1016/1385-7258(77)90030-0.
Weisstein, Eric W. "Grimm's Conjecture". MathWorld.
== External links ==
Prime Puzzles #430 | Wikipedia/Grimm's_conjecture |
In number theory, the second Hardy–Littlewood conjecture concerns the number of primes in intervals. Along with the first Hardy–Littlewood conjecture, the second Hardy–Littlewood conjecture was proposed by G. H. Hardy and John Edensor Littlewood in 1923.
== Statement ==
The conjecture states that
$$\pi(x+y) \leq \pi(x) + \pi(y)$$
for integers x, y ≥ 2, where π(z) denotes the prime-counting function, giving the number of prime numbers up to and including z.
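For small arguments the inequality is easy to confirm exhaustively with a prefix table of π (Python sketch, helper names ours); note that, per the section below, the first violation (if any) is expected only at astronomically large x:

```python
LIMIT = 2000
flags = bytearray(LIMIT + 1)  # flags[m] == 0 iff m is prime
flags[0:2] = b"\x01\x01"
for i in range(2, int(LIMIT ** 0.5) + 1):
    if not flags[i]:
        flags[i * i :: i] = b"\x01" * len(flags[i * i :: i])

pi = [0] * (LIMIT + 1)  # pi[m] = number of primes <= m
for m in range(2, LIMIT + 1):
    pi[m] = pi[m - 1] + (0 if flags[m] else 1)

ok = all(pi[x + y] <= pi[x] + pi[y]
         for x in range(2, 1001) for y in range(2, 1001))
print(ok)  # True
```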
== Connection to the first Hardy–Littlewood conjecture ==
The statement of the second Hardy–Littlewood conjecture is equivalent to the statement that the number of primes from x + 1 to x + y is always less than or equal to the number of primes from 1 to y. This was proved to be inconsistent with the first Hardy–Littlewood conjecture on prime k-tuples, and the first violation is expected to occur for very large values of x. For example, an admissible k-tuple (or prime constellation) of 447 primes can be found in an interval of y = 3159 integers, while π(3159) = 446. If the first Hardy–Littlewood conjecture holds, then the first such k-tuple is expected for x greater than 1.5 × 10^174 but less than 2.2 × 10^1198.
== References ==
== External links ==
Engelsma, Thomas J. "k-tuple Permissible Patterns". Retrieved 2008-08-12.
Oliveira e Silva, Tomás. "Admissible prime constellations". Retrieved 2023-09-28. | Wikipedia/Second_Hardy–Littlewood_conjecture |
In number theory, Cramér's conjecture, formulated by the Swedish mathematician Harald Cramér in 1936, is an estimate for the size of gaps between consecutive prime numbers: intuitively, that gaps between consecutive primes are always small, and the conjecture quantifies asymptotically just how small they must be. It states that
$$p_{n+1} - p_{n} = O\!\left((\log p_{n})^{2}\right),$$
where pn denotes the nth prime number, O is big O notation, and "log" is the natural logarithm. While this is the statement explicitly conjectured by Cramér, his heuristic actually supports the stronger statement
$$\limsup_{n \rightarrow \infty} \frac{p_{n+1}-p_{n}}{(\log p_{n})^{2}} = 1,$$
and sometimes this formulation is called Cramér's conjecture. However, this stronger version is not supported by more accurate heuristic models, which nevertheless support the first version of Cramér's conjecture.
The strongest form of all, which was never claimed by Cramér but is the one used in experimental verification computations and the plot in this article, is simply
$$p_{n+1} - p_{n} < (\log p_{n})^{2}.$$
None of the three forms has yet been proven or disproven.
== Conditional proven results on prime gaps ==
Cramér gave a conditional proof of the much weaker statement that
$$p_{n+1} - p_{n} = O\!\left(\sqrt{p_{n}}\,\log p_{n}\right)$$
on the assumption of the Riemann hypothesis. The best known unconditional bound is
$$p_{n+1} - p_{n} = O\!\left(p_{n}^{0.525}\right)$$
due to Baker, Harman, and Pintz.
In the other direction, E. Westzynthius proved in 1931 that prime gaps grow more than logarithmically. That is,
$$\limsup_{n \to \infty} \frac{p_{n+1}-p_{n}}{\log p_{n}} = \infty.$$
His result was improved by R. A. Rankin, who proved that
$$\limsup_{n \to \infty} \frac{p_{n+1}-p_{n}}{\log p_{n}} \cdot \frac{\left(\log\log\log p_{n}\right)^{2}}{\log\log p_{n}\,\log\log\log\log p_{n}} > 0.$$
Paul Erdős conjectured that the left-hand side of the above formula is infinite, and this was proven in 2014 by Kevin Ford, Ben Green, Sergei Konyagin, and Terence Tao, and independently by James Maynard. The two sets of authors eliminated one of the factors of
$\log\log\log p_{n}$ later that year, showing that, infinitely often,
$$p_{n+1} - p_{n} > \frac{c \cdot \log p_{n} \cdot \log\log p_{n} \cdot \log\log\log\log p_{n}}{\log\log\log p_{n}},$$
where $c > 0$ is some constant.
== Heuristic justification ==
Cramér's conjecture is based on a probabilistic model—essentially a heuristic—in which the probability that a number of size x is prime is 1/log x. This is known as the Cramér random model or Cramér model of the primes.
In the Cramér random model,
$$\limsup_{n \rightarrow \infty} \frac{p_{n+1}-p_{n}}{\log^{2} p_{n}} = 1$$
with probability one. However, as pointed out by Andrew Granville, Maier's theorem shows that the Cramér random model does not adequately describe the distribution of primes on short intervals, and a refinement of Cramér's model taking into account divisibility by small primes suggests that the limit should not be 1, but a constant $c \geq 2e^{-\gamma} \approx 1.1229\ldots$ (OEIS: A125313), where $\gamma$ is the Euler–Mascheroni constant. János Pintz has suggested that the limit sup may be infinite, and similarly Leonard Adleman and Kevin McCurley write
As a result of the work of H. Maier on gaps between consecutive primes, the exact formulation of Cramér's conjecture has been called into question [...] It is still probably true that for every constant $c > 2$, there is a constant $d > 0$ such that there is a prime between $x$ and $x + d(\log x)^{c}$.
Similarly, Robin Visser writes
In fact, due to the work done by Granville, it is now widely believed that Cramér's conjecture is false. Indeed, there [are] some theorems concerning short intervals between primes, such as Maier's theorem, which contradict Cramér's model.
(internal references removed).
== Related conjectures and heuristics ==
Daniel Shanks conjectured the following asymptotic equality, stronger than Cramér's conjecture, for record gaps:
$$G(x) \sim \log^{2} x.$$
J. H. Cadwell has proposed the formula for the maximal gaps:
$$G(x) \sim \log^{2} x - \log x \log\log x,$$
which is formally identical to the Shanks conjecture but suggests a lower-order term.
Marek Wolf has proposed the formula for the maximal gaps $G(x)$ expressed in terms of the prime-counting function $\pi(x)$:
$$G(x) \sim \frac{x}{\pi(x)}\bigl(2\log \pi(x) - \log x + c\bigr),$$
where $c = \log(2C_{2}) = 0.2778769\ldots$ and $C_{2} = 0.6601618\ldots$ is the twin primes constant; see OEIS: A005597, A114907. This is again formally equivalent to the Shanks conjecture but suggests lower-order terms
$$G(x) \sim \log^{2} x - 2\log x \log\log x - (1-c)\log x.$$
Thomas Nicely has calculated many large prime gaps. He measures the quality of fit to Cramér's conjecture by measuring the ratio
$$R = \frac{\log p_{n}}{\sqrt{p_{n+1}-p_{n}}}.$$
He writes, "For the largest known maximal gaps, $R$ has remained near 1.13."
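The ratio is easy to tabulate for the record (maximal) gaps in a modest range; this sketch (Python, helper names ours) finds the record gaps below $10^{6}$ and the ratio R for the last of them:

```python
import math

LIMIT = 10 ** 6
flags = bytearray(LIMIT + 1)  # flags[m] == 0 iff m is prime
flags[0:2] = b"\x01\x01"
for i in range(2, int(LIMIT ** 0.5) + 1):
    if not flags[i]:
        flags[i * i :: i] = b"\x01" * len(flags[i * i :: i])
ps = [p for p in range(2, LIMIT + 1) if not flags[p]]

records, best = [], 0
for a, b in zip(ps, ps[1:]):
    if b - a > best:  # a new maximal (record) gap
        best = b - a
        records.append((a, best, math.log(a) / math.sqrt(best)))
print(records[-1])  # largest maximal gap below 10^6 and its ratio R
```

In this small range R is of order 1, consistent with Nicely's observation for far larger gaps.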
== See also ==
Prime number theorem
Legendre's conjecture and Andrica's conjecture, much weaker but still unproven upper bounds on prime gaps
Firoozbakht's conjecture
Maier's theorem on the numbers of primes in short intervals for which the model predicts an incorrect answer
== References ==
Guy, Richard K. (2004). Unsolved problems in number theory (3rd ed.). Springer-Verlag. A8. ISBN 978-0-387-20860-2. Zbl 1058.11001.
Pintz, János (2007). "Cramér vs. Cramér. On Cramér's probabilistic model for primes". Functiones et Approximatio Commentarii Mathematici. 37 (2): 361–376. doi:10.7169/facm/1229619660. ISSN 0208-6573. MR 2363833. Zbl 1226.11096.
Soundararajan, K. (2007). "The distribution of prime numbers". In Granville, Andrew; Rudnick, Zeév (eds.). Equidistribution in number theory, an introduction. Proceedings of the NATO Advanced Study Institute on equidistribution in number theory, Montréal, Canada, July 11--22, 2005. NATO Science Series II: Mathematics, Physics and Chemistry. Vol. 237. Dordrecht: Springer-Verlag. pp. 59–83. ISBN 978-1-4020-5403-7. Zbl 1141.11043.
== External links ==
Weisstein, Eric W. "Cramér Conjecture". MathWorld.
Weisstein, Eric W. "Cramér-Granville Conjecture". MathWorld. | Wikipedia/Cramér's_conjecture |
In number theory, a branch of mathematics, Dickson's conjecture is the conjecture stated by Dickson (1904) that for a finite set of linear forms a1 + b1n, a2 + b2n, ..., ak + bkn with bi ≥ 1, there are infinitely many positive integers n for which they are all prime, unless there is a congruence condition preventing this (Ribenboim 1996, 6.I). The case k = 1 is Dirichlet's theorem.
Two other special cases are well-known conjectures: there are infinitely many twin primes (n and 2 + n are primes), and there are infinitely many Sophie Germain primes (n and 1 + 2n are primes).
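Both special cases are easy to enumerate for small n; a sketch using trial-division primality (illustrative only):

```python
def is_prime(n):
    """Trial division, adequate for small n."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

# Dickson's conjecture with k = 2, forms (n, n + 2) and (n, 2n + 1):
twins = [n for n in range(2, 200) if is_prime(n) and is_prime(n + 2)]
sophie_germain = [n for n in range(2, 200) if is_prime(n) and is_prime(2 * n + 1)]
print(twins[:6])           # [3, 5, 11, 17, 29, 41]
print(sophie_germain[:6])  # [2, 3, 5, 11, 23, 29]
```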
== Generalized Dickson's conjecture ==
Given n polynomials with positive degrees and integer coefficients (n can be any natural number) that each satisfy all three conditions in the Bunyakovsky conjecture, and such that for any prime p there is an integer x at which the values of all n polynomials are not divisible by p, there are infinitely many positive integers x such that all values of these n polynomials at x are prime. For example, if the conjecture is true then there are infinitely many positive integers x such that
{\displaystyle x^{2}+1}, {\displaystyle 3x-1}, and {\displaystyle x^{2}+x+41}
are all prime. When all the polynomials have degree 1, this is the original Dickson's conjecture.
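For this example, qualifying x values can be searched directly; a sketch with trial-division primality (the search bound 500 is arbitrary):

```python
def is_prime(n):
    """Trial division, adequate for small n."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

# x values making x^2 + 1, 3x - 1, and x^2 + x + 41 simultaneously prime
hits = [x for x in range(1, 500)
        if is_prime(x * x + 1) and is_prime(3 * x - 1) and is_prime(x * x + x + 41)]
print(hits[:6])  # [1, 2, 4, 6, 10, 14]
```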
This generalization is equivalent to the generalized Bunyakovsky conjecture and Schinzel's hypothesis H.
== See also ==
Prime triplet
Green–Tao theorem
First Hardy–Littlewood conjecture
Prime constellation
Primes in arithmetic progression
== References ==
Dickson, L. E. (1904), "A new extension of Dirichlet's theorem on prime numbers", Messenger of Mathematics, 33: 155–161
Ribenboim, Paulo (1996), The new book of prime number records, Berlin, New York: Springer-Verlag, ISBN 978-0-387-94457-9, MR 1377060 | Wikipedia/Dickson's_conjecture |
Uncle Petros and Goldbach's Conjecture is a 1992 novel by Greek author Apostolos Doxiadis. It concerns a young man's interaction with his reclusive uncle, who sought to prove a famous unsolved mathematics problem, called Goldbach's conjecture, that every even number greater than two is the sum of two primes. The novel discusses mathematical problems and some recent history of mathematics.
== Plot ==
Petros Papachristos, a child prodigy, is brought by his father, a Greek businessman, to the University of Munich to have his genius verified by Constantin Caratheodory, a Greek-German mathematician. The boy immediately shows an excellent aptitude for mathematics and soon graduates from the University of Berlin. He later works as a postdoctoral researcher at the University of Cambridge, where he collaborates with the mathematicians Godfrey Harold Hardy, John Edensor Littlewood and Srinivasa Ramanujan. He is then offered a professorship in Munich, which he accepts because it is far from the great mathematical centres of the time, and therefore the ideal place to live in isolation while tackling the Goldbach conjecture.
After years of fruitless work, Petros arrives at an important intermediate result, which he prefers not to disclose so as not to reveal the object of his research and involuntarily help someone else working on the same problem. Later he comes to an even more important result and finally decides to publish it. He sends it to Hardy, whose answer, however, is disappointing: the same discovery has already been published by a young Austrian mathematician. Petros then falls into a deep depression, haunted by mental exhaustion and the fear that his genius might vanish. Mathematics also begins to enter his dreams, which often turn into nightmares. During a research visit at Trinity College, however, he learns from a young mathematician named Alan Turing of the existence of Kurt Gödel's incompleteness theorem.
Returning to Munich, he resumes his work half-heartedly, demoralised by the possibility that the conjecture is unprovable, and finds comfort in the game of chess. After a dream he convinces himself that the conjecture is indeed unprovable. Before World War II, he is repatriated to Greece for political reasons and settles in Ekali, a small town near Athens, where he abandons mathematics and devotes himself to chess.
After years of inactivity he establishes a relationship with his nephew, the narrator, who would like to become a mathematician. The nephew attends university in the United States of America and meets Sammy, with whom he discusses Petros' strange mathematical life. Sammy believes that, like the fox in Aesop's fable The Fox and the Grapes, Petros failed to prove the conjecture and then blamed its unprovability. The nephew switches his studies to economics and then returns home to devote himself to the family business, but often visits his uncle, sharing with him a passion for chess. One day, however, he tries to draw the truth out of his uncle and reawakens in him the spirit of the mathematician. In the middle of the night the nephew is awakened by a call from his elderly uncle, who claims to have solved the conjecture.
== Publication history ==
The novel was originally published in Greek in 1992 and then translated into English by Doxiadis himself. As a publicity stunt, the English publishers (Bloomsbury USA in the U.S. and Faber and Faber in the UK) announced a $1 million prize for a proof of Goldbach's Conjecture within two years of the book's publication in 2000. As no proof was found, the prize was not awarded.
The cover picture of the original edition is the painting I Saw the Figure 5 in Gold (1928) by Charles Demuth.
== Reception ==
Uncle Petros and Goldbach's Conjecture is one of the 1001 Books You Must Read Before You Die. It was the first recipient in 2000 of the Premio Peano, an international award for books inspired by mathematics, and was short-listed for the Prix Médicis Étranger.
== References == | Wikipedia/Uncle_Petros_and_Goldbach's_Conjecture |
In number theory the Agoh–Giuga conjecture on the Bernoulli numbers Bk postulates that p is a prime number if and only if
{\displaystyle pB_{p-1}\equiv -1{\pmod {p}}.}
It is named after Takashi Agoh and Giuseppe Giuga.
== Equivalent formulation ==
The conjecture as stated above is due to Takashi Agoh (1990); an equivalent formulation is due to Giuseppe Giuga, from 1950, to the effect that p is prime if and only if
{\displaystyle 1^{p-1}+2^{p-1}+\cdots +(p-1)^{p-1}\equiv -1{\pmod {p}}}
which may also be written as
{\displaystyle \sum _{i=1}^{p-1}i^{p-1}\equiv -1{\pmod {p}}.}
It is trivial to show that p being prime is sufficient for the second equivalence to hold, since if p is prime, Fermat's little theorem states that {\displaystyle a^{p-1}\equiv 1{\pmod {p}}} for {\displaystyle a=1,2,\dots ,p-1}, and the equivalence follows, since {\displaystyle p-1\equiv -1{\pmod {p}}.}
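Giuga's power-sum criterion is straightforward to test for small moduli. A sketch (plain Python, illustrative names; a real verification would need to go astronomically beyond this range):

```python
def giuga_sum_holds(n):
    """True iff 1^(n-1) + 2^(n-1) + ... + (n-1)^(n-1) ≡ -1 (mod n)."""
    return sum(pow(i, n - 1, n) for i in range(1, n)) % n == n - 1

def is_prime(n):
    """Trial division, adequate for small n."""
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# The congruence agrees with primality everywhere in this range,
# as the conjecture predicts.
assert all(giuga_sum_holds(n) == is_prime(n) for n in range(2, 500))
print("verified for 2 <= n < 500")
```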
== Status ==
The statement is still a conjecture since it has not yet been proven that if a number n is not prime (that is, n is composite), then the formula does not hold. It has been shown that a composite number n satisfies the formula if and only if it is both a Carmichael number and a Giuga number, and that if such a number exists, it has at least 13,800 digits (Borwein, Borwein, Borwein, Girgensohn 1996). Laerte Sorini, in a work of 2001, showed that a possible counterexample would have to be a number n greater than 10^36067, which represents the limit suggested by Bedocchi for the demonstration technique specified by Giuga for his own conjecture.
== Relation to Wilson's theorem ==
The Agoh–Giuga conjecture bears a similarity to Wilson's theorem, which has been proven to be true. Wilson's theorem states that a number p is prime if and only if
{\displaystyle (p-1)!\equiv -1{\pmod {p}},}
which may also be written as
{\displaystyle \prod _{i=1}^{p-1}i\equiv -1{\pmod {p}}.}
For an odd prime p we have
{\displaystyle \prod _{i=1}^{p-1}i^{p-1}\equiv (-1)^{p-1}\equiv 1{\pmod {p}},}
and for p=2 we have
{\displaystyle \prod _{i=1}^{p-1}i^{p-1}\equiv (-1)^{p-1}\equiv 1{\pmod {p}}.}
So, the truth of the Agoh–Giuga conjecture combined with Wilson's theorem would give: a number p is prime if and only if
{\displaystyle \sum _{i=1}^{p-1}i^{p-1}\equiv -1{\pmod {p}}}
and
{\displaystyle \prod _{i=1}^{p-1}i^{p-1}\equiv 1{\pmod {p}}.}
== See also ==
Bernoulli number § Arithmetical properties of the Bernoulli numbers
== References ==
Giuga, Giuseppe (1951). "Su una presumibile proprietà caratteristica dei numeri primi". Ist.Lombardo Sci. Lett., Rend., Cl. Sci. Mat. Natur. (in Italian). 83: 511–518. ISSN 0375-9164. Zbl 0045.01801.
Agoh, Takashi (1995). "On Giuga's conjecture". Manuscripta Mathematica. 87 (4): 501–510. doi:10.1007/bf02570490. Zbl 0845.11004.
Borwein, D.; Borwein, J. M.; Borwein, P. B.; Girgensohn, R. (1996). "Giuga's Conjecture on Primality" (PDF). American Mathematical Monthly. 103 (1): 40–50. CiteSeerX 10.1.1.586.1424. doi:10.2307/2975213. JSTOR 2975213. Zbl 0860.11003. Archived from the original (PDF) on 2005-05-31. Retrieved 2005-05-29.
Sorini, Laerte (2001). "Un Metodo Euristico per la Soluzione della Congettura di Giuga". Quaderni di Economia, Matematica e Statistica, DESP, Università di Urbino Carlo Bo (in Italian). 68. ISSN 1720-9668. | Wikipedia/Agoh–Giuga_conjecture |
In number theory, Firoozbakht's conjecture (or the Firoozbakht conjecture) is a conjecture about the distribution of prime numbers. It is named after the Iranian mathematician Farideh Firoozbakht who stated it in 1982.
The conjecture states that {\displaystyle p_{n}^{1/n}} (where {\displaystyle p_{n}} is the nth prime) is a strictly decreasing function of n, i.e.,
{\displaystyle {\sqrt[{n+1}]{p_{n+1}}}<{\sqrt[{n}]{p_{n}}}\qquad {\text{ for all }}n\geq 1.}
Equivalently:
{\displaystyle p_{n+1}<p_{n}^{1+{\frac {1}{n}}},}
{\displaystyle p_{n+1}^{n}<p_{n}^{n+1},{\text{ or }}}
{\displaystyle \left({\frac {p_{n+1}}{p_{n}}}\right)^{n}<p_{n}.}
see OEIS: A182134, OEIS: A246782.
By using a table of maximal gaps, Farideh Firoozbakht verified her conjecture up to 4.444×10^12. Now with more extensive tables of maximal gaps, the conjecture has been verified for all primes below 2^64 ≈ 1.84×10^19.
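A far more modest check is easy to reproduce. The sketch below compares log(p_n)/n for successive primes, which is equivalent to comparing p_n^{1/n} but numerically safer (names are illustrative):

```python
import math

def primes_up_to(n):
    """Sieve of Eratosthenes: all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return [i for i, flag in enumerate(sieve) if flag]

def firoozbakht_holds(limit):
    """Check that p_n^(1/n) strictly decreases, via log(p_n)/n."""
    ps = primes_up_to(limit)
    vals = [math.log(p) / n for n, p in enumerate(ps, start=1)]
    return all(a > b for a, b in zip(vals, vals[1:]))

print(firoozbakht_holds(10 ** 6))  # True
```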
If the conjecture were true, then the prime gap function {\displaystyle g_{n}=p_{n+1}-p_{n}} would satisfy:
{\displaystyle g_{n}<(\log p_{n})^{2}-\log p_{n}\qquad {\text{ for all }}n>4.}
Moreover:
{\displaystyle g_{n}<(\log p_{n})^{2}-\log p_{n}-1\qquad {\text{ for all }}n>9,}
see also OEIS: A111943. This is among the strongest upper bounds conjectured for prime gaps, even somewhat stronger than the Cramér and Shanks conjectures. It implies a strong form of Cramér's conjecture and is hence inconsistent with the heuristics of Granville and Pintz and of Maier which suggest that
{\displaystyle g_{n}>{\frac {2-\varepsilon }{e^{\gamma }}}(\log p_{n})^{2}\approx 1.1229(\log p_{n})^{2},}
occurs infinitely often for any {\displaystyle \varepsilon >0,} where {\displaystyle \gamma } denotes the Euler–Mascheroni constant.
Three related conjectures (see the comments of OEIS: A182514) are variants of Firoozbakht's. Forgues notes that Firoozbakht's can be written
{\displaystyle \left({\frac {\log p_{n+1}}{\log p_{n}}}\right)^{n}<\left(1+{\frac {1}{n}}\right)^{n},}
where the right hand side is the well-known expression which reaches Euler's number in the limit {\displaystyle n\to \infty }, suggesting the slightly weaker conjecture
{\displaystyle \left({\frac {\log p_{n+1}}{\log p_{n}}}\right)^{n}<e.}
Nicholson and Farhadian give two stronger versions of Firoozbakht's conjecture which can be summarized as:
{\displaystyle \left({\frac {p_{n+1}}{p_{n}}}\right)^{n}<{\frac {p_{n}\log n}{\log p_{n}}}<n\log n<p_{n}\qquad {\text{ for all }}n>5,}
where the right-hand inequality is Firoozbakht's, the middle is Nicholson's (since {\displaystyle n\log n<p_{n}}; see Prime number theorem § Non-asymptotic bounds on the prime-counting function), and the left-hand inequality is Farhadian's (since {\displaystyle {\frac {p_{n}}{\log p_{n}}}<n}; see Prime-counting function § Inequalities).
All have been verified to 2^64.
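The whole chain can be checked numerically by comparing logarithms. A sketch (illustrative names; it covers only primes below 10^6, nothing like the full 2^64 verification):

```python
import math

def primes_up_to(n):
    """Sieve of Eratosthenes: all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return [i for i, flag in enumerate(sieve) if flag]

def chain_holds(limit):
    """Check (p_{n+1}/p_n)^n < p_n*log(n)/log(p_n) < n*log(n) < p_n for n > 5."""
    ps = primes_up_to(limit)
    for n in range(6, len(ps)):
        pn, pn1 = ps[n - 1], ps[n]
        a = n * (math.log(pn1) - math.log(pn))          # log of Farhadian's side
        b = math.log(pn) + math.log(math.log(n) / math.log(pn))
        c = math.log(n) + math.log(math.log(n))
        d = math.log(pn)
        if not (a < b < c < d):
            return False
    return True

print(chain_holds(10 ** 6))  # True
```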
== See also ==
Prime number theorem
Andrica's conjecture
Legendre's conjecture
Oppermann's conjecture
Cramér's conjecture
== Notes ==
== References ==
Ribenboim, Paulo (2004). The Little Book of Bigger Primes (Second ed.). Springer-Verlag. ISBN 0-387-20169-6.
Riesel, Hans (1985). Prime Numbers and Computer Methods for Factorization (Second ed.). Birkhauser. ISBN 3-7643-3291-3. | Wikipedia/Firoozbakht's_conjecture |
In number theory, Goldbach's weak conjecture, also known as the odd Goldbach conjecture, the ternary Goldbach problem, or the 3-primes problem, states that
Every odd number greater than 5 can be expressed as the sum of three primes. (A prime may be used more than once in the same sum.)
This conjecture is called "weak" because if Goldbach's strong conjecture (concerning sums of two primes) is proven, then this would also be true. For if every even number greater than 4 is the sum of two odd primes, adding 3 to each even number greater than 4 will produce the odd numbers greater than 7 (and 7 itself is equal to 2+2+3).
In 2013, Harald Helfgott released a proof of Goldbach's weak conjecture. The proof was accepted for publication in the Annals of Mathematics Studies series in 2015, and has been undergoing further review and revision since; fully refereed chapters in close to final form are being made public in the process.
Some state the conjecture as
Every odd number greater than 7 can be expressed as the sum of three odd primes.
This version excludes 7 = 2+2+3 because this requires the even prime 2. On odd numbers larger than 7 it is slightly stronger as it also excludes sums like 17 = 2+2+13, which are allowed in the other formulation. Helfgott's proof covers both versions of the conjecture. Like the other formulation, this one also immediately follows from Goldbach's strong conjecture.
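Both formulations are easy to test empirically for small odd numbers. The sketch below (helper names are illustrative) finds one three-prime representation, with an option restricting to odd primes as in the second formulation:

```python
def primes_up_to(n):
    """Sieve of Eratosthenes: all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return [i for i, flag in enumerate(sieve) if flag]

def three_primes(n, odd_only=False):
    """One representation of odd n as p + q + r with primes p <= q <= r."""
    ps = [p for p in primes_up_to(n) if p != 2 or not odd_only]
    pset = set(ps)
    for i, p in enumerate(ps):
        for q in ps[i:]:
            r = n - p - q
            if r >= q and r in pset:
                return (p, q, r)
    return None

print(three_primes(7))                 # (2, 2, 3)
print(three_primes(27))
print(three_primes(9, odd_only=True))  # (3, 3, 3)
```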
== Origins ==
The conjecture originated in correspondence between Christian Goldbach and Leonhard Euler. One formulation of the strong Goldbach conjecture, equivalent to the more common one in terms of sums of two primes, is
Every integer greater than 5 can be written as the sum of three primes.
The weak conjecture is simply this statement restricted to the case where the integer is odd (and possibly with the added requirement that the three primes in the sum be odd).
== Timeline of results ==
In 1923, Hardy and Littlewood showed that, assuming the generalized Riemann hypothesis, the weak Goldbach conjecture is true for all sufficiently large odd numbers. In 1937, Ivan Matveevich Vinogradov eliminated the dependency on the generalised Riemann hypothesis and proved directly (see Vinogradov's theorem) that all sufficiently large odd numbers can be expressed as the sum of three primes. Vinogradov's original proof, as it used the ineffective Siegel–Walfisz theorem, did not give a bound for "sufficiently large"; his student K. Borozdkin (1956) derived that
{\displaystyle e^{e^{16.038}}\approx 3^{3^{15}}}
is large enough. The integer part of this number has 4,008,660 decimal digits, so checking every number under this figure would be completely infeasible.
In 1997, Deshouillers, Effinger, te Riele and Zinoviev published a result showing that the generalized Riemann hypothesis implies Goldbach's weak conjecture for all numbers. This result combines a general statement valid for numbers greater than 10^20 with an extensive computer search of the small cases. Saouter also conducted a computer search covering the same cases at approximately the same time.
Olivier Ramaré in 1995 showed that every even number n ≥ 4 is in fact the sum of at most six primes, from which it follows that every odd number n ≥ 5 is the sum of at most seven primes. Leszek Kaniecki showed every odd integer is a sum of at most five primes, under the Riemann Hypothesis. In 2012, Terence Tao proved this without the Riemann Hypothesis; this improves both results.
In 2002, Liu Ming-Chit (University of Hong Kong) and Wang Tian-Ze lowered Borozdkin's threshold to approximately
{\displaystyle n>e^{3100}\approx 2\times 10^{1346}}
. The exponent is still much too large to admit checking all smaller numbers by computer. (Computer searches have only reached as far as 10^18 for the strong Goldbach conjecture, and not much further than that for the weak Goldbach conjecture.)
In 2012 and 2013, Peruvian mathematician Harald Helfgott released a pair of papers improving major and minor arc estimates sufficiently to unconditionally prove the weak Goldbach conjecture. Here, the set of major arcs {\displaystyle {\mathfrak {M}}} is the union of intervals
{\displaystyle \left(a/q-cr_{0}/qx,a/q+cr_{0}/qx\right)}
around the rationals {\displaystyle a/q,q<r_{0}} where {\displaystyle c} is a constant. Minor arcs {\displaystyle {\mathfrak {m}}} are defined to be {\displaystyle {\mathfrak {m}}=(\mathbb {R} /\mathbb {Z} )\setminus {\mathfrak {M}}}.
== References == | Wikipedia/Goldbach's_weak_conjecture |
Andrica's conjecture (named after Romanian mathematician Dorin Andrica (es)) is a conjecture regarding the gaps between prime numbers.
The conjecture states that the inequality
{\displaystyle {\sqrt {p_{n+1}}}-{\sqrt {p_{n}}}<1}
holds for all {\displaystyle n}, where {\displaystyle p_{n}} is the nth prime number. If {\displaystyle g_{n}=p_{n+1}-p_{n}}
denotes the nth prime gap, then Andrica's conjecture can also be rewritten as
{\displaystyle g_{n}<2{\sqrt {p_{n}}}+1.}
== Empirical evidence ==
Imran Ghory has used data on the largest prime gaps to confirm the conjecture for {\displaystyle n} up to 1.3002×10^16. Using a more recent table of maximal gaps, the confirmation value can be extended exhaustively to 2×10^19 > 2^64.
The discrete function {\displaystyle A_{n}={\sqrt {p_{n+1}}}-{\sqrt {p_{n}}}} is plotted in the figures opposite. The high-water marks for {\displaystyle A_{n}} occur for n = 1, 2, and 4, with A_4 ≈ 0.670873..., with no larger value among the first 10^5 primes. Since the Andrica function decreases asymptotically as n increases, a prime gap of ever increasing size is needed to make the difference large as n becomes large. It therefore seems highly likely the conjecture is true, although this has not yet been proven.
== Generalizations ==
As a generalization of Andrica's conjecture, the following equation has been considered:
{\displaystyle p_{n+1}^{x}-p_{n}^{x}=1,}
where {\displaystyle p_{n}} is the nth prime and x can be any positive number.
The largest possible solution for x is easily seen to occur for n=1, when xmax = 1. The smallest solution for x is conjectured to be xmin ≈ 0.567148... (sequence A038458 in the OEIS) which occurs for n = 30.
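The value x_min can be recovered numerically: for n = 30 one solves p_31^x − p_30^x = 1 with p_30 = 113 and p_31 = 127. A bisection sketch (the left-hand side is increasing in x, so the root is unique):

```python
def f(x):
    # 127^x - 113^x - 1; increasing for x > 0
    return 127.0 ** x - 113.0 ** x - 1.0

lo, hi = 0.0, 1.0        # f(0) = -1 < 0 and f(1) = 13 > 0 bracket the root
for _ in range(60):
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid
print((lo + hi) / 2)     # ≈ 0.567148...
```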
This conjecture has also been stated as an inequality, the generalized Andrica conjecture:
{\displaystyle p_{n+1}^{x}-p_{n}^{x}<1} for {\displaystyle x<x_{\min }.}
== See also ==
Cramér's conjecture
Legendre's conjecture
Firoozbakht's conjecture
== References and notes ==
Guy, Richard K. (2004). "Section A8". Unsolved problems in number theory (3rd ed.). Springer-Verlag. ISBN 978-0-387-20860-2. Zbl 1058.11001.
== External links ==
Andrica's Conjecture at PlanetMath
Generalized Andrica conjecture at PlanetMath
In computer graphics, the midpoint circle algorithm is an algorithm used to determine the points needed for rasterizing a circle. It is a generalization of Bresenham's line algorithm. The algorithm can be further generalized to conic sections.
== Summary ==
This algorithm draws all eight octants simultaneously, starting from each cardinal direction (0°, 90°, 180°, 270°) and extends both ways to reach the nearest multiple of 45° (45°, 135°, 225°, 315°). It can determine where to stop because, when y = x, it has reached 45°. The reason for using these angles is shown in the above picture: As x increases, it neither skips nor repeats any x value until reaching 45°. So during the while loop, x increments by 1 with each iteration, and y decrements by 1 on occasion, never exceeding 1 in one iteration. This changes at 45°, because that is the point where the tangent slope is rise = run, whereas rise > run before it and rise < run after it.
The second part of the problem, the determinant, is far trickier. This determines when to decrement y. It usually comes after drawing the pixels in each iteration, because it never goes below the radius on the first pixel. Because in a continuous function, the function for a sphere is the function for a circle with the radius dependent on z (or whatever the third variable is), it stands to reason that the algorithm for a discrete (voxel) sphere would also rely on the midpoint circle algorithm. But when looking at a sphere, the integer radius of some adjacent circles is the same, but it is not expected to have the same exact circle adjacent to itself in the same hemisphere. Instead, a circle of the same radius needs a different determinant, to allow the curve to come in slightly closer to the center or extend out farther.
One hundred fifty concentric circles drawn with the midpoint circle algorithm.
== Algorithm ==
The objective of the algorithm is to approximate a circle, more formally put, to approximate the curve
{\displaystyle x^{2}+y^{2}=r^{2}}
using pixels; in layman's terms every pixel should be approximately the same distance from the center, as is the definition of a circle. At each step, the path is extended by choosing the adjacent pixel which satisfies
{\displaystyle x^{2}+y^{2}\leq r^{2}}
but maximizes
{\displaystyle x^{2}+y^{2}}
. Since the candidate pixels are adjacent, the arithmetic to calculate the latter expression is simplified, requiring only bit shifts and additions. A simplification helps in understanding the bitshift: a left bitshift of a binary number is the same as multiplying by 2, so a left bitshift of the radius simply produces the diameter, which is defined as the radius times two.
This algorithm starts with the circle equation. For simplicity, assume the center of the circle is at {\displaystyle (0,0)}. To start with, consider the first octant only, and draw a curve which starts at point {\displaystyle (r,0)} and proceeds counterclockwise, reaching the angle of 45°.
The fast direction here (the basis vector with the greater increase in value) is the {\displaystyle y} direction (see Differentiation of trigonometric functions). The algorithm always takes a step in the positive {\displaystyle y} direction (upwards), and occasionally takes a step in the slow direction (the negative {\displaystyle x} direction).
From the circle equation is obtained the transformed equation {\displaystyle x^{2}+y^{2}-r^{2}=0}, where {\displaystyle r^{2}} is computed only once during initialization.
Let the points on the circle be a sequence of coordinates of the vector to the point (in the usual basis). Points are numbered according to the order in which they are drawn, with {\displaystyle n=1} assigned to the first point {\displaystyle (r,0)}.
For each point, the following holds:
{\displaystyle {\begin{aligned}x_{n}^{2}+y_{n}^{2}=r^{2}\end{aligned}}}
This can be rearranged thus:
{\displaystyle {\begin{aligned}x_{n}^{2}=r^{2}-y_{n}^{2}\end{aligned}}}
And likewise for the next point:
{\displaystyle {\begin{aligned}x_{n+1}^{2}=r^{2}-y_{n+1}^{2}\end{aligned}}}
Since for the first octant the next point will always be at least 1 pixel higher than the last (but also at most 1 pixel higher to maintain continuity), it is true that:
{\displaystyle {\begin{aligned}y_{n+1}^{2}&=(y_{n}+1)^{2}\\&=y_{n}^{2}+2y_{n}+1\end{aligned}}}
{\displaystyle {\begin{aligned}x_{n+1}^{2}=r^{2}-y_{n}^{2}-2y_{n}-1\end{aligned}}}
So, rework the next-point-equation into a recursive one by substituting {\displaystyle x_{n}^{2}=r^{2}-y_{n}^{2}}:
{\displaystyle {\begin{aligned}x_{n+1}^{2}=x_{n}^{2}-2y_{n}-1\end{aligned}}}
Because of the continuity of a circle and because the maxima along both axes are the same, clearly it will not be skipping x points as it advances in the sequence. Usually it stays on the same x coordinate, and sometimes advances by one to the left.
The resulting coordinate is then translated by adding midpoint coordinates. These frequent integer additions do not limit the performance much, as those square (root) computations can be spared in the inner loop in turn. Again, the zero in the transformed circle equation is replaced by the error term.
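The exactness of this recursion can be demonstrated numerically: maintaining x² incrementally keeps it equal to r² − y² using only integer additions, with no squaring in the loop. A minimal sketch with r = 10 (illustrative):

```python
# Maintain x^2 via x_{n+1}^2 = x_n^2 - 2*y_n - 1 instead of re-squaring.
r = 10
y = 0
x_sq = r * r                      # x^2 at y = 0
while x_sq >= y * y:              # stay inside the first octant (x >= y)
    assert x_sq == r * r - y * y  # invariant: the recursion is exact
    x_sq -= 2 * y + 1             # step y -> y + 1 using only additions
    y += 1
print(y, x_sq)  # 8 36
```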
The initialization of the error term is derived from an offset of ½ pixel at the start. Until the intersection with the perpendicular line, this leads to an accumulated value of {\displaystyle r} in the error term, so that this value is used for initialization.
The frequent computations of squares in the circle equation, trigonometric expressions and square roots can again be avoided by dissolving everything into single steps and using recursive computation of the quadratic terms from the preceding iterations.
=== Variant with integer-based arithmetic ===
Just as with Bresenham's line algorithm, this algorithm can be optimized for integer-based math. Because of symmetry, if an algorithm can be found that only computes the pixels for one octant, the pixels can be reflected to get the whole circle.
We start by defining the radius error as the difference between the exact representation of the circle and the center point of each pixel (or any other arbitrary mathematical point on the pixel, so long as it is consistent across all pixels). For any pixel with a center at
(x_i, y_i), the radius error is defined as:
{\displaystyle RE(x_{i},y_{i})=\left\vert x_{i}^{2}+y_{i}^{2}-r^{2}\right\vert }
For clarity, this formula for a circle is derived at the origin, but the algorithm can be modified for any location. It is useful to start with the point
(r, 0) on the positive X-axis. Because the radius will be a whole number of pixels, clearly the radius error will be zero:
{\displaystyle RE(x_{i},y_{i})=\left\vert x_{i}^{2}+0^{2}-r^{2}\right\vert =0}
Because it starts in the first counter-clockwise positive octant, it will step in the direction with the greatest travel, the Y direction, so it is clear that
y_{i+1} = y_i + 1. Also, because it concerns this octant only, the X values have only two options: to stay the same as in the prior iteration, or to decrease by 1. A decision variable can be created that determines if the following is true:
{\displaystyle RE(x_{i}-1,y_{i}+1)<RE(x_{i},y_{i}+1)}
If this inequality holds, then plot
(x_i - 1, y_i + 1); if not, then plot (x_i, y_i + 1). So, how can it be determined whether this inequality holds? Start with the definition of radius error:
{\displaystyle {\begin{aligned}RE(x_{i}-1,y_{i}+1)&<RE(x_{i},y_{i}+1)\\\left\vert (x_{i}-1)^{2}+(y_{i}+1)^{2}-r^{2}\right\vert &<\left\vert x_{i}^{2}+(y_{i}+1)^{2}-r^{2}\right\vert \\\left\vert (x_{i}^{2}-2x_{i}+1)+(y_{i}^{2}+2y_{i}+1)-r^{2}\right\vert &<\left\vert x_{i}^{2}+(y_{i}^{2}+2y_{i}+1)-r^{2}\right\vert \\\end{aligned}}}
The absolute value function is awkward to work with, so square both sides; this is valid because |a| < |b| holds exactly when a^2 < b^2:
{\displaystyle {\begin{aligned}\left[(x_{i}^{2}-2x_{i}+1)+(y_{i}^{2}+2y_{i}+1)-r^{2}\right]^{2}&<\left[x_{i}^{2}+(y_{i}^{2}+2y_{i}+1)-r^{2}\right]^{2}\\\left[(x_{i}^{2}+y_{i}^{2}-r^{2}+2y_{i}+1)+(1-2x_{i})\right]^{2}&<\left[x_{i}^{2}+y_{i}^{2}-r^{2}+2y_{i}+1\right]^{2}\\\left(x_{i}^{2}+y_{i}^{2}-r^{2}+2y_{i}+1\right)^{2}+2(1-2x_{i})(x_{i}^{2}+y_{i}^{2}-r^{2}+2y_{i}+1)+(1-2x_{i})^{2}&<\left[x_{i}^{2}+y_{i}^{2}-r^{2}+2y_{i}+1\right]^{2}\\2(1-2x_{i})(x_{i}^{2}+y_{i}^{2}-r^{2}+2y_{i}+1)+(1-2x_{i})^{2}&<0\\\end{aligned}}}
Since x_i > 0, the factor (1 - 2x_i) is negative, so dividing both sides by it reverses the direction of the inequality, giving:
{\displaystyle {\begin{aligned}2\left[(x_{i}^{2}+y_{i}^{2}-r^{2})+(2y_{i}+1)\right]+(1-2x_{i})&>0\\2\left[RE(x_{i},y_{i})+Y_{\text{Change}}\right]+X_{\text{Change}}&>0\\\end{aligned}}}
Thus, the decision criterion changes from using floating-point operations to simple integer addition, subtraction, and bit shifting (for the multiply-by-2 operations). If 2(RE + Y_Change) + X_Change > 0, then decrement the x value. If 2(RE + Y_Change) + X_Change ≤ 0, then keep the same x value. Again, by reflecting these points in all the octants, a full circle results.
We may reduce computation by calculating only the delta between the values of this decision formula at successive steps. We start by assigning E = 3 - 2r, which is the initial value of the formula at (x_0, y_0) = (r, 0). Then, as above, at each step: if E > 0, update it as E = E + 2(5 - 2x + 2y) and decrement x; otherwise, update it as E = E + 2(3 + 2y). In either case, y is then incremented as usual.
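The incremental variant just described can be sketched in Python as follows. This is an illustrative sketch following the update rules above, not a reference implementation; the function name and the 8-way reflection helper are our own framing.

```python
def circle_points(r):
    """Rasterize a circle of radius r centred at the origin using the
    incremental decision variable E = 3 - 2r described above."""
    points = set()
    x, y = r, 0
    e = 3 - 2 * r
    while y <= x:
        # reflect the first-octant pixel into all eight octants
        for px, py in ((x, y), (y, x), (-x, y), (-y, x),
                       (x, -y), (y, -x), (-x, -y), (-y, -x)):
            points.add((px, py))
        if e > 0:
            # x decreases: E = E + 2(5 - 2x + 2y), using the current x, y
            e += 2 * (5 - 2 * x + 2 * y)
            x -= 1
        else:
            # x stays: E = E + 2(3 + 2y)
            e += 2 * (3 + 2 * y)
        y += 1  # y is incremented on every step
    return points
```

For r = 3 this yields the first-octant pixels (3, 0), (3, 1), (2, 2) and their reflections, each satisfying |x² + y² − r²| ≤ 1.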
=== Jesko's Method ===
The algorithm has already been explained to a large extent, but there are further optimizations.
This method gets by with only five arithmetic operations per step (for eight pixels) and is thus well suited for low-performance systems. In the "if" operation, only the sign is checked (is it positive?), and a variable assignment is made, which is not considered an arithmetic operation either.
The initialization in the first line (shifting by 4 bits to the right) is purely cosmetic and not strictly necessary.
This gives the countable operations within the main loop:
The comparison x >= y (counted as a subtraction: x - y >= 0)
y = y + 1 [y++]
t1 + y
t1 - x
The comparison t2 >= 0 is not counted, as no real arithmetic takes place: in two's-complement representation of the variables, only the sign bit has to be checked.
x = x - 1 [x--]
Operations: 5
t1 = r / 16
x = r
y = 0
Repeat Until x < y
    Pixel (x, y) and all symmetric pixels are colored (8 times)
    y = y + 1
    t1 = t1 + y
    t2 = t1 - x
    If t2 >= 0
        t1 = t2
        x = x - 1
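A direct Python transcription of this pseudocode might look as follows; the function name and the returned list are our own framing, and integer division stands in for the 4-bit right shift.

```python
def jesko_octant(r):
    """Jesko's variant: only additions, subtractions and a sign test in
    the inner loop.  Returns the first-octant pixels; the full circle
    follows from 8-way reflection as in the other variants."""
    pixels = []
    t1 = r // 16          # r >> 4; cosmetic initialisation
    x, y = r, 0
    while x >= y:
        pixels.append((x, y))
        y += 1
        t1 += y
        t2 = t1 - x
        if t2 >= 0:       # only the sign is checked
            t1 = t2
            x -= 1
    return pixels
```

For r = 3 this produces the same octant pixels as the decision-variable form, (3, 0), (3, 1), (2, 2).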
== Drawing incomplete octants ==
The implementations above always draw only complete octants or circles. To draw only a certain arc from an angle α to an angle β, the algorithm first needs to calculate the x and y coordinates of these end points, which requires trigonometric or square-root computations (see Methods of computing square roots). Then the Bresenham algorithm is run over the complete octant or circle and sets the pixels only if they fall into the wanted interval. After finishing this arc, the algorithm can be terminated early.
If the angles are given as slopes, then no trigonometry or square roots are necessary: simply check that
y/x is between the desired slopes.
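The slope test itself needs no division either. A hypothetical helper (our own illustration) can cross-multiply by x, which is positive in the first octant:

```python
def in_arc(x, y, slope_lo, slope_hi):
    """True when the pixel (x, y), with x > 0, lies between the two
    desired slopes, i.e. slope_lo <= y / x <= slope_hi.  Multiplying
    through by x avoids the division entirely."""
    return slope_lo * x <= y <= slope_hi * x
```

For instance, restricting the first octant to slopes between 0 and 1/2 keeps the pixel (10, 3) but rejects (7, 7).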
== Generalizations ==
It is also possible to use the same concept to rasterize a parabola, ellipse, or any other two-dimensional curve.
== References ==
== External links ==
Drawing circles – an article on drawing circles that progresses from a simple scheme to an efficient one
Midpoint Circle Algorithm in several programming languages
In mathematics and its applications, the signed distance function or signed distance field (SDF) is the orthogonal distance of a given point x to the boundary of a set Ω in a metric space (such as the surface of a geometric shape), with the sign determined by whether or not x is in the interior of Ω. The function has positive values at points x inside Ω, it decreases in value as x approaches the boundary of Ω where the signed distance function is zero, and it takes negative values outside of Ω. However, the alternative convention is also sometimes taken instead (i.e., negative inside Ω and positive outside). The concept also sometimes goes by the name oriented distance function/field.
== Definition ==
Let Ω be a subset of a metric space X with metric d, and let ∂Ω be its boundary. The distance between a point x of X and the subset ∂Ω of X is defined, as usual, as
{\displaystyle d(x,\partial \Omega )=\inf _{y\in \partial \Omega }d(x,y),}
where inf denotes the infimum.
The signed distance function from a point x of X to Ω is defined by
{\displaystyle f(x)={\begin{cases}d(x,\partial \Omega )&{\text{if }}x\in \Omega \\-d(x,\partial \Omega )&{\text{if }}\,x\notin \Omega .\end{cases}}}
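As a concrete illustration (our own minimal example, using the sign convention above: positive inside, negative outside), the signed distance to a closed disk of radius r centred at the origin is r minus the distance to the centre:

```python
import math

def sdf_disk(x, y, r=1.0):
    """Signed distance to the disk of radius r centred at the origin:
    positive inside, zero on the boundary, negative outside."""
    return r - math.hypot(x, y)
```

Evaluating it at a few points confirms the convention: sdf_disk(0, 0) == 1.0 deep inside, sdf_disk(1, 0) == 0.0 on the boundary, and sdf_disk(2, 0) == -1.0 outside. Away from the origin a finite-difference gradient of this function has norm 1, the eikonal property discussed in the next section.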
== Properties in Euclidean space ==
If Ω is a subset of the Euclidean space Rn with piecewise smooth boundary, then the signed distance function is differentiable almost everywhere, and its gradient satisfies the eikonal equation
{\displaystyle |\nabla f|=1.}
If the boundary of Ω is Ck for k ≥ 2 (see Differentiability classes) then d is Ck on points sufficiently close to the boundary of Ω. In particular, on the boundary f satisfies
{\displaystyle \nabla f(x)=N(x),}
where N is the inward normal vector field. The signed distance function is thus a differentiable extension of the normal vector field. In particular, the Hessian of the signed distance function on the boundary of Ω gives the Weingarten map.
If, further, Γ is a region sufficiently close to the boundary of Ω that f is twice continuously differentiable on it, then there is an explicit formula involving the Weingarten map Wx for the Jacobian of changing variables in terms of the signed distance function and nearest boundary point. Specifically, if T(∂Ω, μ) is the set of points within distance μ of the boundary of Ω (i.e. the tubular neighbourhood of radius μ), and g is an absolutely integrable function on Γ, then
{\displaystyle \int _{T(\partial \Omega ,\mu )}g(x)\,dx=\int _{\partial \Omega }\int _{-\mu }^{\mu }g(u+\lambda N(u))\,\det(I-\lambda W_{u})\,d\lambda \,dS_{u},}
where det denotes the determinant and dSu indicates that we are taking the surface integral.
== Algorithms ==
Algorithms for calculating the signed distance function include the efficient fast marching method, fast sweeping method and the more general level-set method.
For voxel rendering, a fast algorithm for calculating the SDF in taxicab geometry uses summed-area tables.
== Applications ==
Signed distance functions are applied, for example, in real-time rendering, for instance the method of SDF ray marching, and computer vision.
SDF has been used to describe object geometry in real-time rendering, usually in a raymarching context, starting in the mid 2000s. By 2007, Valve was using SDFs to render large pixel-size (or high-DPI) smooth fonts with GPU acceleration in its games. Valve's method is not perfect, as it runs in raster space to avoid the computational complexity of solving the problem in the (continuous) vector space; the rendered text often loses sharp corners. In 2014, an improved method was presented by Behdad Esfahbod. His GLyphy library approximates the font's Bézier curves with arc splines, accelerated by grid-based discretization techniques (which cull too-far-away points) to run in real time.
A modified version of SDF was introduced as a loss function to minimise the error in interpenetration of pixels while rendering multiple objects. In particular, for any pixel that does not belong to an object, if it lies outside the object in rendition, no penalty is imposed; if it does, a positive value proportional to its distance inside the object is imposed.
{\displaystyle f(x)={\begin{cases}0&{\text{if }}\,x\in \Omega ^{c}\\d(x,\partial \Omega )&{\text{if }}\,x\in \Omega \end{cases}}}
In 2020, the FOSS game engine Godot 4.0 received SDF-based real-time global illumination (SDFGI), that became a compromise between more realistic voxel-based GI and baked GI. Its core advantage is that it can be applied to infinite space, which allows developers to use it for open-world games.
In 2023, the authors of the Zed text editor announced a GPUI framework that draws all UI elements using the GPU at 120 fps. The work makes use of Inigo Quilez's list of geometric primitives in SDF, Figma co-founder Evan Wallace's Gaussian blur in SDF, and a new rounded rectangle SDF.
== See also ==
Distance function
Level-set method
Eikonal equation
Parallel curve (also known as offset curve)
Signed arc length
Signed area
Signed measure
Signed volume
== Notes ==
== References ==
Stanley J. Osher and Ronald P. Fedkiw (2003). Level Set Methods and Dynamic Implicit Surfaces. Springer. ISBN 9780387227467.
Gilbarg, D.; Trudinger, N. S. (1983). Elliptic Partial Differential Equations of Second Order. Grundlehren der mathematischen Wissenschaften. Vol. 224 (2nd ed.). Springer-Verlag. (or the Appendix of the 1977 1st ed.)
Monte Carlo methods, or Monte Carlo experiments, are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results. The underlying concept is to use randomness to solve problems that might be deterministic in principle. The name comes from the Monte Carlo Casino in Monaco, where the primary developer of the method, mathematician Stanisław Ulam, was inspired by his uncle's gambling habits.
Monte Carlo methods are mainly used in three distinct problem classes: optimization, numerical integration, and generating draws from a probability distribution. They can also be used to model phenomena with significant uncertainty in inputs, such as calculating the risk of a nuclear power plant failure. Monte Carlo methods are often implemented using computer simulations, and they can provide approximate solutions to problems that are otherwise intractable or too complex to analyze mathematically.
Monte Carlo methods are widely used in various fields of science, engineering, and mathematics, such as physics, chemistry, biology, statistics, artificial intelligence, finance, and cryptography. They have also been applied to social sciences, such as sociology, psychology, and political science. Monte Carlo methods have been recognized as one of the most important and influential ideas of the 20th century, and they have enabled many scientific and technological breakthroughs.
Monte Carlo methods also have some limitations and challenges, such as the trade-off between accuracy and computational cost, the curse of dimensionality, the reliability of random number generators, and the verification and validation of the results.
== Overview ==
Monte Carlo methods vary, but tend to follow a particular pattern:
Define a domain of possible inputs.
Generate inputs randomly from a probability distribution over the domain.
Perform a deterministic computation of the outputs.
Aggregate the results.
For example, consider a quadrant (circular sector) inscribed in a unit square. Given that the ratio of their areas is π/4, the value of π can be approximated using the Monte Carlo method:
Draw a square, then inscribe a quadrant within it.
Uniformly scatter a given number of points over the square.
Count the number of points inside the quadrant, i.e. having a distance from the origin of less than 1.
The ratio of the inside-count and the total-sample-count is an estimate of the ratio of the two areas, π/4. Multiply the result by 4 to estimate π.
In this procedure, the domain of inputs is the square that circumscribes the quadrant. One can generate random inputs by scattering grains over the square, then performing a computation on each input to test whether it falls within the quadrant. Aggregating the results yields our final result, the approximation of π.
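The four steps above can be sketched directly in Python (a minimal illustration of our own, with a fixed seed for reproducibility):

```python
import random

def estimate_pi(n, seed=0):
    """Scatter n uniform points over the unit square and count those
    falling inside the inscribed quadrant; the ratio estimates pi/4."""
    rng = random.Random(seed)
    inside = sum(
        1 for _ in range(n)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * inside / n
```

With n = 100000 the estimate is typically within a few hundredths of π; the error shrinks like 1/√n, so each extra decimal digit of accuracy costs roughly a hundredfold more samples.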
There are two important considerations:
If the points are not uniformly distributed, the approximation will be poor.
The approximation improves as more points are randomly placed in the whole square.
Uses of Monte Carlo methods require large amounts of random numbers, and their use benefitted greatly from pseudorandom number generators, which are far quicker to use than the tables of random numbers that had been previously employed.
== Application ==
Monte Carlo methods are often used in physical and mathematical problems and are most useful when it is difficult or impossible to use other approaches. Monte Carlo methods are mainly used in three problem classes: optimization, numerical integration, and generating draws from a probability distribution.
In physics-related problems, Monte Carlo methods are useful for simulating systems with many coupled degrees of freedom, such as fluids, disordered materials, strongly coupled solids, and cellular structures (see cellular Potts model, interacting particle systems, McKean–Vlasov processes, kinetic models of gases).
Other examples include modeling phenomena with significant uncertainty in inputs such as the calculation of risk in business and, in mathematics, evaluation of multidimensional definite integrals with complicated boundary conditions. In application to systems engineering problems (space, oil exploration, aircraft design, etc.), Monte Carlo–based predictions of failure, cost overruns and schedule overruns are routinely better than human intuition or alternative "soft" methods.
In principle, Monte Carlo methods can be used to solve any problem having a probabilistic interpretation. By the law of large numbers, integrals described by the expected value of some random variable can be approximated by taking the empirical mean (a.k.a. the 'sample mean') of independent samples of the variable. When the probability distribution of the variable is parameterized, mathematicians often use a Markov chain Monte Carlo (MCMC) sampler. The central idea is to design a judicious Markov chain model with a prescribed stationary probability distribution. That is, in the limit, the samples being generated by the MCMC method will be samples from the desired (target) distribution. By the ergodic theorem, the stationary distribution is approximated by the empirical measures of the random states of the MCMC sampler.
In other problems, the objective is generating draws from a sequence of probability distributions satisfying a nonlinear evolution equation. These flows of probability distributions can always be interpreted as the distributions of the random states of a Markov process whose transition probabilities depend on the distributions of the current random states (see McKean–Vlasov processes, nonlinear filtering equation). In other instances, a flow of probability distributions with an increasing level of sampling complexity arise (path spaces models with an increasing time horizon, Boltzmann–Gibbs measures associated with decreasing temperature parameters, and many others). These models can also be seen as the evolution of the law of the random states of a nonlinear Markov chain. A natural way to simulate these sophisticated nonlinear Markov processes is to sample multiple copies of the process, replacing in the evolution equation the unknown distributions of the random states by the sampled empirical measures. In contrast with traditional Monte Carlo and MCMC methodologies, these mean-field particle techniques rely on sequential interacting samples. The terminology mean field reflects the fact that each of the samples (a.k.a. particles, individuals, walkers, agents, creatures, or phenotypes) interacts with the empirical measures of the process. When the size of the system tends to infinity, these random empirical measures converge to the deterministic distribution of the random states of the nonlinear Markov chain, so that the statistical interaction between particles vanishes.
== Simple Monte Carlo ==
Suppose one wants to know the expected value μ of a population (and knows that μ exists), but does not have a formula available to compute it. The simple Monte Carlo method gives an estimate for μ by running n simulations and averaging the simulations' results. It has no restrictions on the probability distribution of the inputs to the simulations, requiring only that the inputs are randomly generated, are independent of each other, and that μ exists. A sufficiently large n will produce a value for m that is arbitrarily close to μ; more formally, it will be the case that, for any ε > 0, |μ − m| ≤ ε.
Typically, the algorithm to obtain m is:
s = 0;
for i = 1 to n do
    run the simulation for the ith time, giving result ri;
    s = s + ri;
repeat
m = s / n;
=== An example ===
Suppose we want to know how many times we should expect to throw three eight-sided dice for the total of the dice throws to be at least T. We know the expected value exists. The dice throws are randomly distributed and independent of each other. So simple Monte Carlo is applicable:
s = 0;
for i = 1 to n do
    throw the three dice until T is met or first exceeded; ri = the number of throws;
    s = s + ri;
repeat
m = s / n;
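In Python, the whole experiment might be sketched like this; the function names and framing are our own, with three eight-sided dice thrown per trial and the totals accumulated until T is reached:

```python
import random

def simulate_one(T, rng):
    """One simulation: count how many throws of three eight-sided dice
    are needed for the accumulated total to reach at least T."""
    total, throws = 0, 0
    while total < T:
        total += sum(rng.randint(1, 8) for _ in range(3))
        throws += 1
    return throws

def simple_monte_carlo(T, n, seed=0):
    """Average the results of n independent simulations."""
    rng = random.Random(seed)
    return sum(simulate_one(T, rng) for _ in range(n)) / n
```

Each throw of three dice averages 13.5, so for T = 100 the estimate settles a little above 100/13.5 ≈ 7.4 throws, since the final throw overshoots the target.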
If n is large enough, m will be within ε of μ for any ε > 0.
=== Determining a sufficiently large n ===
==== General formula ====
Let ε = |μ − m| > 0. Choose the desired confidence level – the percent chance that, when the Monte Carlo algorithm completes, m is indeed within ε of μ. Let z be the z-score corresponding to that confidence level.
Let s^2 be the estimated variance, sometimes called the "sample" variance; it is the variance of the results obtained from a relatively small number k of "sample" simulations. Choose a k; Driels and Shin observe that "even for sample sizes an order of magnitude lower than the number required, the calculation of that number is quite stable."
The following algorithm computes s^2 in one pass while minimizing the possibility that accumulated numerical error produces erroneous results:
s1 = 0;
run the simulation for the first time, producing result r1;
m1 = r1; // mi is the mean of the first i simulations
for i = 2 to k do
    run the simulation for the ith time, producing result ri;
    δi = ri - mi-1;
    mi = mi-1 + (1/i)δi;
    si = si-1 + ((i - 1)/i)(δi)^2;
repeat
s2 = sk/(k - 1);
Note that, when the algorithm completes, m_k is the mean of the k results.
The value n is sufficiently large when n ≥ s^2 z^2 / ε^2.
If n ≤ k, then m_k = m; sufficient sample simulations were done to ensure that m_k is within ε of μ. If n > k, then n simulations can be run "from scratch," or, since k simulations have already been done, one can just run n − k more simulations and add their results into those from the sample simulations:
s = mk * k;
for i = k + 1 to n do
    run the simulation for the ith time, giving result ri;
    s = s + ri;
m = s / n;
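The one-pass variance recurrence in the pseudocode above is Welford's online method. A compact Python rendering, combined with the n ≥ s²z²/ε² check, might look like this; the pilot-run framing and function name are our own:

```python
import math
import random

def pilot_sample_size(simulate, k, z, eps, seed=0):
    """Run k pilot simulations, estimate the sample variance s^2 in one
    pass (Welford's method, as in the pseudocode above), and return
    (s2, n) where n >= s^2 * z^2 / eps^2 simulations suffice."""
    rng = random.Random(seed)
    mean, ssq = 0.0, 0.0
    for i in range(1, k + 1):
        r = simulate(rng)
        delta = r - mean            # delta_i = r_i - m_{i-1}
        mean += delta / i           # m_i = m_{i-1} + delta_i / i
        ssq += (i - 1) / i * delta * delta
    s2 = ssq / (k - 1)
    n = math.ceil(s2 * z * z / (eps * eps))
    return s2, n
```

For uniform [0, 1] inputs (true variance 1/12 ≈ 0.083), a 95% confidence level (z ≈ 1.96) and ε = 0.01 yield an n of roughly three thousand.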
==== A formula when simulations' results are bounded ====
An alternative formula can be used in the special case where all simulation results are bounded above and below.
Choose a value for ε that is twice the maximum allowed difference between μ and m. Let 0 < δ < 100 be the desired confidence level, expressed as a percentage. Let every simulation result r_1, r_2, …, r_i, …, r_n be such that a ≤ r_i ≤ b for finite a and b. To have confidence of at least δ that |μ − m| < ε/2, use a value for n such that:
{\displaystyle n\geq 2(b-a)^{2}\ln(2/(1-(\delta /100)))/\epsilon ^{2}}
For example, if δ = 99%, then n ≥ 2(b − a)^2 ln(2/0.01)/ε^2 ≈ 10.6(b − a)^2/ε^2.
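This bound is easy to evaluate directly; the following small helper is our own illustration:

```python
import math

def bounded_sample_size(a, b, eps, delta_pct):
    """Number of simulations guaranteeing |mu - m| < eps/2 with
    confidence delta_pct %, for results bounded in [a, b]."""
    return math.ceil(
        2 * (b - a) ** 2 * math.log(2 / (1 - delta_pct / 100)) / eps ** 2
    )
```

For results bounded in [0, 1], δ = 99% and ε = 0.1 this gives 1060 simulations, matching the ≈ 10.6(b − a)²/ε² rule of thumb above.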
== Computational costs ==
Despite its conceptual and algorithmic simplicity, the computational cost associated with a Monte Carlo simulation can be staggeringly high. In general the method requires many samples to get a good approximation, which may incur an arbitrarily large total runtime if the processing time of a single sample is high. Although this is a severe limitation in very complex problems, the embarrassingly parallel nature of the algorithm allows this large cost to be reduced (perhaps to a feasible level) through parallel computing strategies in local processors, clusters, cloud computing, GPU, FPGA, etc.
== History ==
Before the Monte Carlo method was developed, simulations tested a previously understood deterministic problem, and statistical sampling was used to estimate uncertainties in the simulations. Monte Carlo simulations invert this approach, solving deterministic problems using probabilistic metaheuristics (see simulated annealing).
An early variant of the Monte Carlo method was devised to solve the Buffon's needle problem, in which π can be estimated by dropping needles on a floor made of parallel equidistant strips. In the 1930s, Enrico Fermi first experimented with the Monte Carlo method while studying neutron diffusion, but he did not publish this work.
In the late 1940s, Stanisław Ulam invented the modern version of the Markov Chain Monte Carlo method while he was working on nuclear weapons projects at the Los Alamos National Laboratory. In 1946, nuclear weapons physicists at Los Alamos were investigating neutron diffusion in the core of a nuclear weapon. Despite having most of the necessary data, such as the average distance a neutron would travel in a substance before it collided with an atomic nucleus and how much energy the neutron was likely to give off following a collision, the Los Alamos physicists were unable to solve the problem using conventional, deterministic mathematical methods. Ulam proposed using random experiments. He recounts his inspiration as follows:
The first thoughts and attempts I made to practice [the Monte Carlo Method] were suggested by a question which occurred to me in 1946 as I was convalescing from an illness and playing solitaires. The question was what are the chances that a Canfield solitaire laid out with 52 cards will come out successfully? After spending a lot of time trying to estimate them by pure combinatorial calculations, I wondered whether a more practical method than "abstract thinking" might not be to lay it out say one hundred times and simply observe and count the number of successful plays. This was already possible to envisage with the beginning of the new era of fast computers, and I immediately thought of problems of neutron diffusion and other questions of mathematical physics, and more generally how to change processes described by certain differential equations into an equivalent form interpretable as a succession of random operations. Later [in 1946], I described the idea to John von Neumann, and we began to plan actual calculations.
Being secret, the work of von Neumann and Ulam required a code name. A colleague of von Neumann and Ulam, Nicholas Metropolis, suggested using the name Monte Carlo, which refers to the Monte Carlo Casino in Monaco where Ulam's uncle would borrow money from relatives to gamble.
Monte Carlo methods were central to the simulations required for further postwar development of nuclear weapons, including the design of the H-bomb, though severely limited by the computational tools at the time. Von Neumann, Nicholas Metropolis and others programmed the ENIAC computer to perform the first fully automated Monte Carlo calculations, of a fission weapon core, in the spring of 1948. In the 1950s Monte Carlo methods were used at Los Alamos for the development of the hydrogen bomb, and became popularized in the fields of physics, physical chemistry, and operations research. The Rand Corporation and the U.S. Air Force were two of the major organizations responsible for funding and disseminating information on Monte Carlo methods during this time, and they began to find a wide application in many different fields.
The theory of more sophisticated mean-field type particle Monte Carlo methods had certainly started by the mid-1960s, with the work of Henry P. McKean Jr. on Markov interpretations of a class of nonlinear parabolic partial differential equations arising in fluid mechanics. An earlier pioneering article by Theodore E. Harris and Herman Kahn, published in 1951, used mean-field genetic-type Monte Carlo methods for estimating particle transmission energies. Mean-field genetic type Monte Carlo methodologies are also used as heuristic natural search algorithms (a.k.a. metaheuristic) in evolutionary computing. The origins of these mean-field computational techniques can be traced to 1950 and 1954 with the work of Alan Turing on genetic type mutation-selection learning machines and the articles by Nils Aall Barricelli at the Institute for Advanced Study in Princeton, New Jersey.
Quantum Monte Carlo, and more specifically diffusion Monte Carlo methods can also be interpreted as a mean-field particle Monte Carlo approximation of Feynman–Kac path integrals. The origins of Quantum Monte Carlo methods are often attributed to Enrico Fermi and Robert Richtmyer who developed in 1948 a mean-field particle interpretation of neutron-chain reactions, but the first heuristic-like and genetic type particle algorithm (a.k.a. Resampled or Reconfiguration Monte Carlo methods) for estimating ground state energies of quantum systems (in reduced matrix models) is due to Jack H. Hetherington in 1984. In molecular chemistry, the use of genetic heuristic-like particle methodologies (a.k.a. pruning and enrichment strategies) can be traced back to 1955 with the seminal work of Marshall N. Rosenbluth and Arianna W. Rosenbluth.
The use of Sequential Monte Carlo in advanced signal processing and Bayesian inference is more recent. It was in 1993 that Gordon et al. published in their seminal work the first application of a Monte Carlo resampling algorithm in Bayesian statistical inference. The authors named their algorithm 'the bootstrap filter', and demonstrated that, compared to other filtering methods, their bootstrap algorithm does not require any assumption about the state space or the noise of the system. Another pioneering article in this field was Genshiro Kitagawa's, on a related "Monte Carlo filter", and the ones by Pierre Del Moral and by Himilcon Carvalho, Pierre Del Moral, André Monin and Gérard Salut on particle filters published in the mid-1990s. Particle filters were also developed in signal processing in 1989–1992 by P. Del Moral, J. C. Noyer, G. Rigal, and G. Salut in the LAAS-CNRS in a series of restricted and classified research reports with STCAN (Service Technique des Constructions et Armes Navales), the IT company DIGILOG, and the LAAS-CNRS (the Laboratory for Analysis and Architecture of Systems) on radar/sonar and GPS signal processing problems. These Sequential Monte Carlo methodologies can be interpreted as an acceptance-rejection sampler equipped with an interacting recycling mechanism.
From 1950 to 1996, all the publications on Sequential Monte Carlo methodologies, including the pruning and resample Monte Carlo methods introduced in computational physics and molecular chemistry, present natural and heuristic-like algorithms applied to different situations without a single proof of their consistency, nor a discussion on the bias of the estimates and on genealogical and ancestral tree based algorithms. The mathematical foundations and the first rigorous analysis of these particle algorithms were written by Pierre Del Moral in 1996.
Branching type particle methodologies with varying population sizes were also developed at the end of the 1990s by Dan Crisan, Jessica Gaines and Terry Lyons, and by Dan Crisan, Pierre Del Moral and Terry Lyons. Further developments in this field were described from 1999 to 2001 by P. Del Moral, A. Guionnet and L. Miclo.
== Definitions ==
There is no consensus on how Monte Carlo should be defined. For example, Ripley defines most probabilistic modeling as stochastic simulation, with Monte Carlo being reserved for Monte Carlo integration and Monte Carlo statistical tests. Sawilowsky distinguishes between a simulation, a Monte Carlo method, and a Monte Carlo simulation: a simulation is a fictitious representation of reality, a Monte Carlo method is a technique that can be used to solve a mathematical or statistical problem, and a Monte Carlo simulation uses repeated sampling to obtain the statistical properties of some phenomenon (or behavior).
Here are some examples:
Simulation: Drawing one pseudo-random uniform variable from the interval [0,1] can be used to simulate the tossing of a coin: If the value is less than or equal to 0.50 designate the outcome as heads, but if the value is greater than 0.50 designate the outcome as tails. This is a simulation, but not a Monte Carlo simulation.
Monte Carlo method: Pouring out a box of coins on a table, and then computing the ratio of coins that land heads versus tails is a Monte Carlo method of determining the behavior of repeated coin tosses, but it is not a simulation.
Monte Carlo simulation: Drawing a large number of pseudo-random uniform variables from the interval [0,1] at one time, or once at many different times, and assigning values less than or equal to 0.50 as heads and greater than 0.50 as tails, is a Monte Carlo simulation of the behavior of repeatedly tossing a coin.
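The coin-toss Monte Carlo simulation described above can be sketched in a few lines of Python; this is an illustrative sketch (the function name is ours), not part of the original example:

```python
import random

def coin_toss_simulation(n_draws, seed=0):
    """Monte Carlo simulation of repeated coin tosses: draw
    pseudo-random uniforms on [0, 1) and map each value <= 0.50
    to heads and each value > 0.50 to tails."""
    rng = random.Random(seed)
    heads = sum(1 for _ in range(n_draws) if rng.random() <= 0.50)
    return heads / n_draws  # observed frequency of heads

# With many draws the frequency approaches the true probability 0.5.
freq = coin_toss_simulation(100_000)
```

Fixing the seed makes the pseudorandom sequence reproducible, which, as noted below, is one practical advantage of deterministic generators.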
Kalos and Whitlock point out that such distinctions are not always easy to maintain. For example, the emission of radiation from atoms is a natural stochastic process. It can be simulated directly, or its average behavior can be described by stochastic equations that can themselves be solved using Monte Carlo methods. "Indeed, the same computer code can be viewed simultaneously as a 'natural simulation' or as a solution of the equations by natural sampling."
Convergence of the Monte Carlo simulation can be checked with the Gelman-Rubin statistic.
=== Monte Carlo and random numbers ===
The main idea behind this method is that results are computed from repeated random sampling and statistical analysis. A Monte Carlo simulation is, in effect, a set of random experiments whose outcomes are not known in advance.
Monte Carlo simulations are typically characterized by many unknown parameters, many of which are difficult to obtain experimentally. Monte Carlo simulation methods do not always require truly random numbers to be useful (although, for some applications such as primality testing, unpredictability is vital). Many of the most useful techniques use deterministic, pseudorandom sequences, making it easy to test and re-run simulations. The only quality usually necessary to make good simulations is for the pseudo-random sequence to appear "random enough" in a certain sense.
What this means depends on the application, but typically they should pass a series of statistical tests. Testing that the numbers are uniformly distributed or follow another desired distribution when a large enough number of elements of the sequence are considered is one of the simplest and most common ones. Weak correlations between successive samples are also often desirable/necessary.
Sawilowsky lists the characteristics of a high-quality Monte Carlo simulation:
the (pseudo-random) number generator has certain characteristics (e.g. a long "period" before the sequence repeats)
the (pseudo-random) number generator produces values that pass tests for randomness
there are enough samples to ensure accurate results
the proper sampling technique is used
the algorithm used is valid for what is being modeled
it simulates the phenomenon in question.
Pseudo-random number sampling algorithms are used to transform uniformly distributed pseudo-random numbers into numbers that are distributed according to a given probability distribution.
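For example, inverse-transform sampling maps uniform variates through the inverse of a target cumulative distribution function; the exponential distribution below is a standard illustration (our example, not from the text):

```python
import math
import random

def sample_exponential(rate, n, seed=0):
    """Inverse-transform sampling: if U ~ Uniform(0, 1), then
    -ln(1 - U) / rate is distributed as Exponential(rate)."""
    rng = random.Random(seed)
    return [-math.log(1.0 - rng.random()) / rate for _ in range(n)]

samples = sample_exponential(rate=2.0, n=50_000)
mean = sum(samples) / len(samples)  # should be close to 1/rate = 0.5
```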
Low-discrepancy sequences are often used instead of random sampling from a space as they ensure even coverage and normally have a faster order of convergence than Monte Carlo simulations using random or pseudorandom sequences. Methods based on their use are called quasi-Monte Carlo methods.
In an effort to assess the impact of random number quality on Monte Carlo simulation outcomes, astrophysical researchers tested cryptographically secure pseudorandom numbers generated via Intel's RDRAND instruction set, as compared to those derived from algorithms, like the Mersenne Twister, in Monte Carlo simulations of radio flares from brown dwarfs. No statistically significant difference was found between models generated with typical pseudorandom number generators and RDRAND for trials consisting of the generation of 10^7 random numbers.
=== Monte Carlo simulation versus "what if" scenarios ===
There are ways of using probabilities that are definitely not Monte Carlo simulations – for example, deterministic modeling using single-point estimates. Each uncertain variable within a model is assigned a "best guess" estimate. Scenarios (such as best, worst, or most likely case) for each input variable are chosen and the results recorded.
By contrast, Monte Carlo simulations sample from a probability distribution for each variable to produce hundreds or thousands of possible outcomes. The results are analyzed to get probabilities of different outcomes occurring. For example, a comparison of a spreadsheet cost construction model run using traditional "what if" scenarios, and then running the comparison again with Monte Carlo simulation and triangular probability distributions shows that the Monte Carlo analysis has a narrower range than the "what if" analysis. This is because the "what if" analysis gives equal weight to all scenarios (see quantifying uncertainty in corporate finance), while the Monte Carlo method hardly samples in the very low probability regions. The samples in such regions are called "rare events".
== Applications ==
Monte Carlo methods are especially useful for simulating phenomena with significant uncertainty in inputs and systems with many coupled degrees of freedom. Areas of application include:
=== Physical sciences ===
Monte Carlo methods are very important in computational physics, physical chemistry, and related applied fields, and have diverse applications from complicated quantum chromodynamics calculations to designing heat shields and aerodynamic forms as well as in modeling radiation transport for radiation dosimetry calculations. In statistical physics, Monte Carlo molecular modeling is an alternative to computational molecular dynamics, and Monte Carlo methods are used to compute statistical field theories of simple particle and polymer systems. Quantum Monte Carlo methods solve the many-body problem for quantum systems. In radiation materials science, the binary collision approximation for simulating ion implantation is usually based on a Monte Carlo approach to select the next colliding atom. In experimental particle physics, Monte Carlo methods are used for designing detectors, understanding their behavior and comparing experimental data to theory. In astrophysics, they are used in such diverse manners as to model both galaxy evolution and microwave radiation transmission through a rough planetary surface. Monte Carlo methods are also used in the ensemble models that form the basis of modern weather forecasting.
=== Engineering ===
Monte Carlo methods are widely used in engineering for sensitivity analysis and quantitative probabilistic analysis in process design. The need arises from the interactive, co-linear and non-linear behavior of typical process simulations. For example,
In microelectronics engineering, Monte Carlo methods are applied to analyze correlated and uncorrelated variations in analog and digital integrated circuits.
In geostatistics and geometallurgy, Monte Carlo methods underpin the design of mineral processing flowsheets and contribute to quantitative risk analysis.
In fluid dynamics, in particular rarefied gas dynamics, where the Boltzmann equation is solved for finite Knudsen number fluid flows using the direct simulation Monte Carlo method in combination with highly efficient computational algorithms.
In autonomous robotics, Monte Carlo localization can determine the position of a robot. It is often applied to stochastic filters such as the Kalman filter or particle filter that forms the heart of the SLAM (simultaneous localization and mapping) algorithm.
In telecommunications, when planning a wireless network, the design must be proven to work for a wide variety of scenarios that depend mainly on the number of users, their locations and the services they want to use. Monte Carlo methods are typically used to generate these users and their states. The network performance is then evaluated and, if results are not satisfactory, the network design goes through an optimization process.
In reliability engineering, Monte Carlo simulation is used to compute system-level response given the component-level response.
In signal processing and Bayesian inference, particle filters and sequential Monte Carlo techniques are a class of mean-field particle methods for sampling and computing the posterior distribution of a signal process given some noisy and partial observations using interacting empirical measures.
=== Climate change and radiative forcing ===
The Intergovernmental Panel on Climate Change relies on Monte Carlo methods in probability density function analysis of radiative forcing.
=== Computational biology ===
Monte Carlo methods are used in various fields of computational biology, for example for Bayesian inference in phylogeny, or for studying biological systems such as genomes, proteins, or membranes.
The systems can be studied in the coarse-grained or ab initio frameworks depending on the desired accuracy.
Computer simulations allow monitoring of the local environment of a particular molecule to see if some chemical reaction is happening for instance. In cases where it is not feasible to conduct a physical experiment, thought experiments can be conducted (for instance: breaking bonds, introducing impurities at specific sites, changing the local/global structure, or introducing external fields).
=== Computer graphics ===
Path tracing, occasionally referred to as Monte Carlo ray tracing, renders a 3D scene by randomly tracing samples of possible light paths. Repeated sampling of any given pixel will eventually cause the average of the samples to converge on the correct solution of the rendering equation, making it one of the most physically accurate 3D graphics rendering methods in existence.
=== Applied statistics ===
The standards for Monte Carlo experiments in statistics were set by Sawilowsky. In applied statistics, Monte Carlo methods may be used for at least four purposes:
To compare competing statistics for small samples under realistic data conditions. Although type I error and power properties of statistics can be calculated for data drawn from classical theoretical distributions (e.g., normal curve, Cauchy distribution) for asymptotic conditions (i.e., infinite sample size and infinitesimally small treatment effect), real data often do not have such distributions.
To provide implementations of hypothesis tests that are more efficient than exact tests such as permutation tests (which are often impossible to compute) while being more accurate than critical values for asymptotic distributions.
To provide a random sample from the posterior distribution in Bayesian inference. This sample then approximates and summarizes all the essential features of the posterior.
To provide efficient random estimates of the Hessian matrix of the negative log-likelihood function that may be averaged to form an estimate of the Fisher information matrix.
Monte Carlo methods are also a compromise between approximate randomization and permutation tests. An approximate randomization test is based on a specified subset of all permutations (which entails potentially enormous housekeeping of which permutations have been considered). The Monte Carlo approach is based on a specified number of randomly drawn permutations (exchanging a minor loss in precision if a permutation is drawn twice—or more frequently—for the efficiency of not having to track which permutations have already been selected).
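A Monte Carlo permutation test of the kind just described can be sketched as follows (the data and function name are hypothetical):

```python
import random

def monte_carlo_permutation_test(x, y, n_perm=10_000, seed=0):
    """Two-sample test: estimate the p-value of the observed
    difference in means by randomly re-splitting the pooled data,
    rather than enumerating all permutations. A permutation may be
    drawn more than once; no bookkeeping of past draws is needed."""
    rng = random.Random(seed)
    observed = abs(sum(x) / len(x) - sum(y) / len(y))
    pooled = list(x) + list(y)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # one randomly drawn permutation
        px, py = pooled[:len(x)], pooled[len(x):]
        diff = abs(sum(px) / len(px) - sum(py) / len(py))
        if diff >= observed:
            count += 1
    return count / n_perm  # Monte Carlo estimate of the p-value

# Clearly separated samples should give a small p-value.
p = monte_carlo_permutation_test([5.1, 5.3, 5.0, 5.2], [6.8, 7.1, 6.9, 7.0])
```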
=== Artificial intelligence for games ===
Monte Carlo methods have been developed into a technique called Monte-Carlo tree search that is useful for searching for the best move in a game. Possible moves are organized in a search tree and many random simulations are used to estimate the long-term potential of each move. A black box simulator represents the opponent's moves.
The Monte Carlo tree search (MCTS) method has four steps:
Starting at the root node of the tree, select optimal child nodes until a leaf node is reached.
Expand the leaf node and choose one of its children.
Play a simulated game starting with that node.
Use the results of that simulated game to update the node and its ancestors.
The net effect, over the course of many simulated games, is that the value of a node representing a move will go up or down, hopefully corresponding to whether or not that node represents a good move.
Monte Carlo Tree Search has been used successfully to play games such as Go, Tantrix, Battleship, Havannah, and Arimaa.
=== Design and visuals ===
Monte Carlo methods are also efficient in solving coupled integral differential equations of radiation fields and energy transport, and thus these methods have been used in global illumination computations that produce photo-realistic images of virtual 3D models, with applications in video games, architecture, design, computer generated films, and cinematic special effects.
=== Search and rescue ===
The US Coast Guard utilizes Monte Carlo methods within its computer modeling software SAROPS in order to calculate the probable locations of vessels during search and rescue operations. Each simulation can generate as many as ten thousand data points that are randomly distributed based upon provided variables. Search patterns are then generated based upon extrapolations of these data in order to optimize the probability of containment (POC) and the probability of detection (POD), which together will equal an overall probability of success (POS). Ultimately this serves as a practical application of probability distribution in order to provide the swiftest and most expedient method of rescue, saving both lives and resources.
=== Finance and business ===
Monte Carlo simulation is commonly used to evaluate the risk and uncertainty that would affect the outcome of different decision options. Monte Carlo simulation allows the business risk analyst to incorporate the total effects of uncertainty in variables like sales volume, commodity and labor prices, interest and exchange rates, as well as the effect of distinct risk events like the cancellation of a contract or the change of a tax law.
Monte Carlo methods in finance are often used to evaluate investments in projects at a business unit or corporate level, or other financial valuations. They can be used to model project schedules, where simulations aggregate estimates for worst-case, best-case, and most likely durations for each task to determine outcomes for the overall project. Monte Carlo methods are also used in option pricing, default risk analysis. Additionally, they can be used to estimate the financial impact of medical interventions.
=== Law ===
A Monte Carlo approach was used for evaluating the potential value of a proposed program to help female petitioners in Wisconsin be successful in their applications for harassment and domestic abuse restraining orders. It was proposed to help women succeed in their petitions by providing them with greater advocacy thereby potentially reducing the risk of rape and physical assault. However, there were many variables in play that could not be estimated perfectly, including the effectiveness of restraining orders, the success rate of petitioners both with and without advocacy, and many others. The study ran trials that varied these variables to come up with an overall estimate of the success level of the proposed program as a whole.
=== Library science ===
A Monte Carlo approach has also been used to simulate the number of book publications by genre in Malaysia. The simulation used previously published national book publication data and book prices by genre in the local market. The Monte Carlo results were used to determine which book genres Malaysians favor and to compare book publications between Malaysia and Japan.
=== Other ===
Nassim Nicholas Taleb writes about Monte Carlo generators in his 2001 book Fooled by Randomness as a real instance of the reverse Turing test: a human can be declared unintelligent if their writing cannot be told apart from a generated one.
== Use in mathematics ==
In general, the Monte Carlo methods are used in mathematics to solve various problems by generating suitable random numbers (see also Random number generation) and observing that fraction of the numbers that obeys some property or properties. The method is useful for obtaining numerical solutions to problems too complicated to solve analytically. The most common application of the Monte Carlo method is Monte Carlo integration.
=== Integration ===
Deterministic numerical integration algorithms work well in a small number of dimensions, but encounter two problems when the functions have many variables. First, the number of function evaluations needed increases rapidly with the number of dimensions. For example, if 10 evaluations provide adequate accuracy in one dimension, then 10^100 points are needed for 100 dimensions—far too many to be computed. This is called the curse of dimensionality. Second, the boundary of a multidimensional region may be very complicated, so it may not be feasible to reduce the problem to an iterated integral. 100 dimensions is by no means unusual, since in many physical problems, a "dimension" is equivalent to a degree of freedom.
Monte Carlo methods provide a way out of this exponential increase in computation time. As long as the function in question is reasonably well-behaved, it can be estimated by randomly selecting points in 100-dimensional space, and taking some kind of average of the function values at these points. By the central limit theorem, this method displays {\displaystyle \scriptstyle 1/{\sqrt {N}}} convergence—i.e., quadrupling the number of sampled points halves the error, regardless of the number of dimensions.
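A minimal sketch of this estimator (the integrand is our own example): average f over uniformly random points in the unit hypercube; the standard error decays like 1/sqrt(N) independently of the dimension:

```python
import random

def mc_integrate(f, dim, n_samples, seed=0):
    """Estimate the integral of f over the unit hypercube [0,1]^dim
    as the average of f at n_samples uniformly random points."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        point = [rng.random() for _ in range(dim)]
        total += f(point)
    return total / n_samples

# Integral of sum(x_i) over [0,1]^10 is exactly 10 * 1/2 = 5.
estimate = mc_integrate(lambda x: sum(x), dim=10, n_samples=40_000)
```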
A refinement of this method, known as importance sampling in statistics, involves sampling the points randomly, but more frequently where the integrand is large. To do this precisely one would have to already know the integral, but one can approximate the integral by an integral of a similar function or use adaptive routines such as stratified sampling, recursive stratified sampling, adaptive umbrella sampling or the VEGAS algorithm.
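A minimal importance-sampling sketch (the integrand and proposal density are our own choices): sample x from a density g that is large where the integrand is large, and average f(x)/g(x):

```python
import math
import random

def importance_sampling(f, g_pdf, g_sample, n, seed=0):
    """Estimate the integral of f over [0, 1] by sampling x from the
    density g and averaging the weighted values f(x) / g(x)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = g_sample(rng)
        total += f(x) / g_pdf(x)
    return total / n

# Integrand f(x) = 3x^2 is concentrated near x = 1, so use g(x) = 2x,
# sampled via its inverse CDF x = sqrt(u). The true integral is 1.
est = importance_sampling(
    f=lambda x: 3 * x * x,
    g_pdf=lambda x: 2 * x,
    g_sample=lambda rng: math.sqrt(rng.random()),
    n=20_000,
)
```

Because f/g is nearly constant here, the weighted estimator has a much smaller variance than plain uniform sampling of f.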
A similar approach, the quasi-Monte Carlo method, uses low-discrepancy sequences. These sequences "fill" the area better and sample the most important points more frequently, so quasi-Monte Carlo methods can often converge on the integral more quickly.
Another class of methods for sampling points in a volume is to simulate random walks over it (Markov chain Monte Carlo). Such methods include the Metropolis–Hastings algorithm, Gibbs sampling, Wang and Landau algorithm, and interacting type MCMC methodologies such as the sequential Monte Carlo samplers.
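As a minimal illustration of one such method (the target density and step size are our own choices), the following sketch implements a random-walk Metropolis sampler for a density known only up to a normalizing constant:

```python
import math
import random

def metropolis(log_target, n_steps, step_size=1.0, seed=0):
    """Random-walk Metropolis: propose a Gaussian step and accept it
    with probability min(1, target(proposal) / target(current))."""
    rng = random.Random(seed)
    x = 0.0
    samples = []
    for _ in range(n_steps):
        proposal = x + rng.gauss(0.0, step_size)
        # Accept with probability exp(log_target(prop) - log_target(x)).
        if math.log(rng.random()) < log_target(proposal) - log_target(x):
            x = proposal
        samples.append(x)
    return samples

# Target: standard normal, unnormalized: log p(x) = -x^2 / 2.
chain = metropolis(lambda x: -x * x / 2.0, n_steps=50_000)
mean = sum(chain) / len(chain)
var = sum((s - mean) ** 2 for s in chain) / len(chain)
```

The chain's empirical mean and variance should approach 0 and 1, the moments of the target, despite the normalizing constant never being used.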
=== Simulation and optimization ===
Another powerful and very popular application for random numbers in numerical simulation is in numerical optimization. The problem is to minimize (or maximize) functions of some vector that often has many dimensions. Many problems can be phrased in this way: for example, a computer chess program could be seen as trying to find the set of, say, 10 moves that produces the best evaluation function at the end. In the traveling salesman problem the goal is to minimize distance traveled. There are also applications to engineering design, such as multidisciplinary design optimization. It has been applied with quasi-one-dimensional models to solve particle dynamics problems by efficiently exploring large configuration space. Comprehensive reviews of many issues related to simulation and optimization are available in the literature.
The traveling salesman problem is what is called a conventional optimization problem: all the facts (distances between each destination point) needed to determine the optimal path are known with certainty, and the goal is to run through the possible travel choices to find the one with the lowest total distance. Suppose, however, that instead of minimizing the total distance traveled to visit each desired destination, we wished to minimize the total time needed to reach each destination. This goes beyond conventional optimization, since travel time is inherently uncertain (traffic jams, time of day, etc.). As a result, determining the optimal path requires a different approach: first use simulation to understand the range of potential times it could take to go from one point to another (represented here by a probability distribution rather than a specific distance), then optimize the travel decisions to identify the best path taking that uncertainty into account.
=== Inverse problems ===
Probabilistic formulation of inverse problems leads to the definition of a probability distribution in the model space. This probability distribution combines prior information with new information obtained by measuring some observable parameters (data).
As, in the general case, the theory linking data with model parameters is nonlinear, the posterior probability in the model space may not be easy to describe (it may be multimodal, some moments may not be defined, etc.).
When analyzing an inverse problem, obtaining a maximum likelihood model is usually not sufficient, as normally information on the resolution power of the data is desired. In the general case many parameters are modeled, and an inspection of the marginal probability densities of interest may be impractical, or even useless. But it is possible to pseudorandomly generate a large collection of models according to the posterior probability distribution and to analyze and display the models in such a way that information on the relative likelihoods of model properties is conveyed to the spectator. This can be accomplished by means of an efficient Monte Carlo method, even in cases where no explicit formula for the a priori distribution is available.
The best-known importance sampling method, the Metropolis algorithm, can be generalized, and this gives a method that allows analysis of (possibly highly nonlinear) inverse problems with complex a priori information and data with an arbitrary noise distribution.
=== Philosophy ===
Popular exposition of the Monte Carlo Method was conducted by McCracken. The method's general philosophy was discussed by Elishakoff and Grüne-Yanoff and Weirich.
== See also ==
== References ==
=== Citations ===
=== Sources ===
== External links ==
In numerical linear algebra, the Gauss–Seidel method, also known as the Liebmann method or the method of successive displacement, is an iterative method used to solve a system of linear equations. It is named after the German mathematicians Carl Friedrich Gauss and Philipp Ludwig von Seidel. Though it can be applied to any matrix with non-zero elements on the diagonal, convergence is only guaranteed if the matrix is either strictly diagonally dominant, or symmetric and positive definite. The method was only mentioned in a private letter from Gauss to his student Gerling in 1823; a publication was not delivered by Seidel until 1874.
== Description ==
Let {\textstyle \mathbf {A} \mathbf {x} =\mathbf {b} } be a square system of n linear equations, where:
{\displaystyle \mathbf {A} ={\begin{bmatrix}a_{11}&a_{12}&\cdots &a_{1n}\\a_{21}&a_{22}&\cdots &a_{2n}\\\vdots &\vdots &\ddots &\vdots \\a_{n1}&a_{n2}&\cdots &a_{nn}\end{bmatrix}},\qquad \mathbf {x} ={\begin{bmatrix}x_{1}\\x_{2}\\\vdots \\x_{n}\end{bmatrix}},\qquad \mathbf {b} ={\begin{bmatrix}b_{1}\\b_{2}\\\vdots \\b_{n}\end{bmatrix}}.}
When A and b are known, and x is unknown, the Gauss–Seidel method can be used to iteratively approximate x. The vector x^(0) denotes the initial guess for x, often x_i^(0) = 0 for i = 1, 2, ..., n. Denote by x^(k) the k-th approximation or iteration of x, and by x^(k+1) the approximation of x at the next (or (k+1)-th) iteration.
=== Matrix-based formula ===
The solution is obtained iteratively via
{\displaystyle \mathbf {L} \mathbf {x} ^{(k+1)}=\mathbf {b} -\mathbf {U} \mathbf {x} ^{(k)},}
where the matrix A is decomposed into a lower triangular component L and a strictly upper triangular component U such that A = L + U. More specifically, the decomposition of A into L and U is given by:
{\displaystyle \mathbf {A} =\underbrace {\begin{bmatrix}a_{11}&0&\cdots &0\\a_{21}&a_{22}&\cdots &0\\\vdots &\vdots &\ddots &\vdots \\a_{n1}&a_{n2}&\cdots &a_{nn}\end{bmatrix}} _{\textstyle \mathbf {L} }+\underbrace {\begin{bmatrix}0&a_{12}&\cdots &a_{1n}\\0&0&\cdots &a_{2n}\\\vdots &\vdots &\ddots &\vdots \\0&0&\cdots &0\end{bmatrix}} _{\textstyle \mathbf {U} }.}
==== Why the matrix-based formula works ====
The system of linear equations may be rewritten as:
{\displaystyle {\begin{alignedat}{1}\mathbf {A} \mathbf {x} &=\mathbf {b} \\(\mathbf {L} +\mathbf {U} )\mathbf {x} &=\mathbf {b} \\\mathbf {L} \mathbf {x} +\mathbf {U} \mathbf {x} &=\mathbf {b} \\\mathbf {L} \mathbf {x} &=\mathbf {b} -\mathbf {U} \mathbf {x} \end{alignedat}}}
The Gauss–Seidel method now solves the left hand side of this expression for x, using the previous value for x on the right hand side. Analytically, this may be written as
{\displaystyle \mathbf {x} ^{(k+1)}=\mathbf {L} ^{-1}\left(\mathbf {b} -\mathbf {U} \mathbf {x} ^{(k)}\right).}
=== Element-based formula ===
However, by taking advantage of the triangular form of L, the elements of x^(k+1) can be computed sequentially for each row i using forward substitution:
{\displaystyle x_{i}^{(k+1)}={\frac {1}{a_{ii}}}\left(b_{i}-\sum _{j=1}^{i-1}a_{ij}x_{j}^{(k+1)}-\sum _{j=i+1}^{n}a_{ij}x_{j}^{(k)}\right),\quad i=1,2,\dots ,n.}
Notice that the formula uses two summations per iteration, which can be expressed as one summation {\displaystyle \sum _{j\neq i}a_{ij}x_{j}} that uses the most recently calculated iteration of x_j. The procedure is generally continued until the changes made by an iteration are below some tolerance, such as a sufficiently small residual.
=== Discussion ===
The element-wise formula for the Gauss–Seidel method is related to that of the (iterative) Jacobi method, with an important difference:
In Gauss–Seidel, the computation of x^(k+1) uses the elements of x^(k+1) that have already been computed, and only those elements of x^(k) that have not yet been computed in the (k+1)-th iteration. This means that, unlike in the Jacobi method, only one storage vector is required, as elements can be overwritten as they are computed, which can be advantageous for very large problems.
However, unlike the Jacobi method, the computations for each element are generally much harder to implement in parallel, since they can have a very long critical path, and are thus most feasible for sparse matrices. Furthermore, the values at each iteration are dependent on the order of the original equations.
Gauss–Seidel is the same as successive over-relaxation with ω = 1.
== Convergence ==
The convergence properties of the Gauss–Seidel method are dependent on the matrix A. Namely, the procedure is known to converge if either:
A is symmetric positive-definite, or
A is strictly or irreducibly diagonally dominant.
The Gauss–Seidel method may converge even if these conditions are not satisfied.
Golub and Van Loan give a theorem for an algorithm that splits A into two parts. Suppose A = M − N is nonsingular. Let r = ρ(M^(−1)N) be the spectral radius of M^(−1)N. Then the iterates x^(k) defined by
{\displaystyle \mathbf {M} \mathbf {x} ^{(k+1)}=\mathbf {N} \mathbf {x} ^{(k)}+\mathbf {b} }
converge to x = A^(−1)b for any starting vector x^(0) if M is nonsingular and r < 1.
== Algorithm ==
Since elements can be overwritten as they are computed in this algorithm, only one storage vector is needed, and vector indexing is omitted. The algorithm goes as follows:
algorithm Gauss–Seidel method is
    inputs: A, b
    output: φ
    Choose an initial guess φ to the solution
    repeat until convergence
        for i from 1 until n do
            σ ← 0
            for j from 1 until n do
                if j ≠ i then
                    σ ← σ + a_ij φ_j
                end if
            end (j-loop)
            φ_i ← (b_i − σ) / a_ii
        end (i-loop)
        check if convergence is reached
    end (repeat)
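The pseudocode above translates directly into Python; the following is a sketch (naming ours), with a fixed iteration count standing in for the convergence check:

```python
def gauss_seidel(A, b, iterations=100):
    """Solve A x = b by the Gauss-Seidel iteration. Elements of the
    approximation phi are overwritten in place as they are computed,
    so only one storage vector is needed."""
    n = len(A)
    phi = [0.0] * n  # initial guess x^(0) = 0
    for _ in range(iterations):
        for i in range(n):
            # sigma accumulates a_ij * phi_j over all j != i, using
            # the most recently computed values of phi.
            sigma = sum(A[i][j] * phi[j] for j in range(n) if j != i)
            phi[i] = (b[i] - sigma) / A[i][i]
    return phi

# A strictly diagonally dominant system, so convergence is guaranteed.
x = gauss_seidel([[16.0, 3.0], [7.0, -11.0]], [11.0, 13.0])
```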
== Examples ==
=== An example for the matrix version ===
A linear system shown as {\displaystyle \mathbf {A} \mathbf {x} =\mathbf {b} } is given by:
{\displaystyle \mathbf {A} ={\begin{bmatrix}16&3\\7&-11\\\end{bmatrix}}\quad {\text{and}}\quad \mathbf {b} ={\begin{bmatrix}11\\13\end{bmatrix}}.}
Use the equation
{\displaystyle \mathbf {x} ^{(k+1)}=\mathbf {L} ^{-1}(\mathbf {b} -\mathbf {U} \mathbf {x} ^{(k)})}
in the form
{\displaystyle \mathbf {x} ^{(k+1)}=\mathbf {T} \mathbf {x} ^{(k)}+\mathbf {c} }
where:
{\displaystyle \mathbf {T} =-\mathbf {L} ^{-1}\mathbf {U} \quad {\text{and}}\quad \mathbf {c} =\mathbf {L} ^{-1}\mathbf {b} .}
Decompose {\displaystyle \mathbf {A} } into the sum of a lower triangular component {\displaystyle \mathbf {L} } and a strictly upper triangular component {\displaystyle \mathbf {U} }:
{\displaystyle \mathbf {L} ={\begin{bmatrix}16&0\\7&-11\\\end{bmatrix}}\quad {\text{and}}\quad \mathbf {U} ={\begin{bmatrix}0&3\\0&0\end{bmatrix}}.}
The inverse of {\displaystyle \mathbf {L} } is:
{\displaystyle \mathbf {L} ^{-1}={\begin{bmatrix}16&0\\7&-11\end{bmatrix}}^{-1}={\begin{bmatrix}0.0625&0.0000\\0.0398&-0.0909\\\end{bmatrix}}.}
Now find:
{\displaystyle {\begin{aligned}\mathbf {T} &=-{\begin{bmatrix}0.0625&0.0000\\0.0398&-0.0909\end{bmatrix}}{\begin{bmatrix}0&3\\0&0\end{bmatrix}}={\begin{bmatrix}0.000&-0.1875\\0.000&-0.1194\end{bmatrix}},\\[1ex]\mathbf {c} &={\begin{bmatrix}0.0625&0.0000\\0.0398&-0.0909\end{bmatrix}}{\begin{bmatrix}11\\13\end{bmatrix}}={\begin{bmatrix}0.6875\\-0.7439\end{bmatrix}}.\end{aligned}}}
With {\displaystyle \mathbf {T} } and {\displaystyle \mathbf {c} } the vectors {\displaystyle \mathbf {x} } can be obtained iteratively.
First of all, choose {\displaystyle \mathbf {x} ^{(0)}}, for example
{\displaystyle \mathbf {x} ^{(0)}={\begin{bmatrix}1.0\\1.0\end{bmatrix}}.}
The closer the guess to the final solution, the fewer iterations the algorithm will need.
Then calculate:
{\displaystyle {\begin{aligned}\mathbf {x} ^{(1)}&={\begin{bmatrix}0.000&-0.1875\\0.000&-0.1193\end{bmatrix}}{\begin{bmatrix}1.0\\1.0\end{bmatrix}}+{\begin{bmatrix}0.6875\\-0.7443\end{bmatrix}}={\begin{bmatrix}0.5000\\-0.8636\end{bmatrix}}.\\[1ex]\mathbf {x} ^{(2)}&={\begin{bmatrix}0.000&-0.1875\\0.000&-0.1193\end{bmatrix}}{\begin{bmatrix}0.5000\\-0.8636\end{bmatrix}}+{\begin{bmatrix}0.6875\\-0.7443\end{bmatrix}}={\begin{bmatrix}0.8494\\-0.6413\end{bmatrix}}.\\[1ex]\mathbf {x} ^{(3)}&={\begin{bmatrix}0.000&-0.1875\\0.000&-0.1193\end{bmatrix}}{\begin{bmatrix}0.8494\\-0.6413\\\end{bmatrix}}+{\begin{bmatrix}0.6875\\-0.7443\end{bmatrix}}={\begin{bmatrix}0.8077\\-0.6678\end{bmatrix}}.\\[1ex]\mathbf {x} ^{(4)}&={\begin{bmatrix}0.000&-0.1875\\0.000&-0.1193\end{bmatrix}}{\begin{bmatrix}0.8077\\-0.6678\end{bmatrix}}+{\begin{bmatrix}0.6875\\-0.7443\end{bmatrix}}={\begin{bmatrix}0.8127\\-0.6646\end{bmatrix}}.\\[1ex]\mathbf {x} ^{(5)}&={\begin{bmatrix}0.000&-0.1875\\0.000&-0.1193\end{bmatrix}}{\begin{bmatrix}0.8127\\-0.6646\end{bmatrix}}+{\begin{bmatrix}0.6875\\-0.7443\end{bmatrix}}={\begin{bmatrix}0.8121\\-0.6650\end{bmatrix}}.\\[1ex]\mathbf {x} ^{(6)}&={\begin{bmatrix}0.000&-0.1875\\0.000&-0.1193\end{bmatrix}}{\begin{bmatrix}0.8121\\-0.6650\end{bmatrix}}+{\begin{bmatrix}0.6875\\-0.7443\end{bmatrix}}={\begin{bmatrix}0.8122\\-0.6650\end{bmatrix}}.\\[1ex]\mathbf {x} ^{(7)}&={\begin{bmatrix}0.000&-0.1875\\0.000&-0.1193\end{bmatrix}}{\begin{bmatrix}0.8122\\-0.6650\end{bmatrix}}+{\begin{bmatrix}0.6875\\-0.7443\end{bmatrix}}={\begin{bmatrix}0.8122\\-0.6650\end{bmatrix}}.\end{aligned}}}
As expected, the algorithm converges to the solution:
{\displaystyle \mathbf {x} =\mathbf {A} ^{-1}\mathbf {b} \approx {\begin{bmatrix}0.8122\\-0.6650\end{bmatrix}}.}
In fact, the matrix A is strictly diagonally dominant, but not positive definite.
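The matrix iteration is easy to check numerically. The NumPy sketch below (illustrative, not part of the original article) builds T and c from the splitting without explicitly forming the inverse of L, and reproduces the iterates above:

```python
import numpy as np

A = np.array([[16.0, 3.0], [7.0, -11.0]])
b = np.array([11.0, 13.0])

L = np.tril(A)                 # lower triangular part, diagonal included
U = A - L                      # strictly upper triangular remainder
T = -np.linalg.solve(L, U)     # T = -L^{-1} U via a triangular solve
c = np.linalg.solve(L, b)      # c = L^{-1} b

x = np.array([1.0, 1.0])       # the initial guess used in the example
for _ in range(7):             # seven iterations, as in the worked example
    x = T @ x + c

print(x)                       # approaches [0.8122, -0.6650]
```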
=== Another example for the matrix version ===
Another linear system shown as {\displaystyle \mathbf {A} \mathbf {x} =\mathbf {b} } is given by:
{\displaystyle \mathbf {A} ={\begin{bmatrix}2&3\\5&7\\\end{bmatrix}}\quad {\text{and}}\quad \mathbf {b} ={\begin{bmatrix}11\\13\\\end{bmatrix}}.}
Use the equation {\displaystyle \mathbf {x} ^{(k+1)}=\mathbf {L} ^{-1}(\mathbf {b} -\mathbf {U} \mathbf {x} ^{(k)})} in the form {\displaystyle \mathbf {x} ^{(k+1)}=\mathbf {T} \mathbf {x} ^{(k)}+\mathbf {c} } where:
{\displaystyle \mathbf {T} =-\mathbf {L} ^{-1}\mathbf {U} \quad {\text{and}}\quad \mathbf {c} =\mathbf {L} ^{-1}\mathbf {b} .}
Decompose {\displaystyle \mathbf {A} } into the sum of a lower triangular component {\displaystyle \mathbf {L} } and a strictly upper triangular component {\displaystyle \mathbf {U} }:
{\displaystyle \mathbf {L} ={\begin{bmatrix}2&0\\5&7\\\end{bmatrix}}\quad {\text{and}}\quad \mathbf {U} ={\begin{bmatrix}0&3\\0&0\\\end{bmatrix}}.}
The inverse of {\displaystyle \mathbf {L} } is:
{\displaystyle \mathbf {L} ^{-1}={\begin{bmatrix}2&0\\5&7\\\end{bmatrix}}^{-1}={\begin{bmatrix}0.500&0.000\\-0.357&0.143\\\end{bmatrix}}.}
Now find:
{\displaystyle {\begin{aligned}\mathbf {T} &=-{\begin{bmatrix}0.500&0.000\\-0.357&0.143\\\end{bmatrix}}{\begin{bmatrix}0&3\\0&0\\\end{bmatrix}}={\begin{bmatrix}0.000&-1.500\\0.000&1.071\\\end{bmatrix}},\\[1ex]\mathbf {c} &={\begin{bmatrix}0.500&0.000\\-0.357&0.143\\\end{bmatrix}}{\begin{bmatrix}11\\13\\\end{bmatrix}}={\begin{bmatrix}5.500\\-2.071\\\end{bmatrix}}.\end{aligned}}}
With {\displaystyle \mathbf {T} } and {\displaystyle \mathbf {c} } the vectors {\displaystyle \mathbf {x} } can be obtained iteratively.
First of all, choose {\displaystyle \mathbf {x} ^{(0)}}, for example
{\displaystyle \mathbf {x} ^{(0)}={\begin{bmatrix}1.1\\2.3\end{bmatrix}}.}
Then calculate:
{\displaystyle {\begin{aligned}\mathbf {x} ^{(1)}&={\begin{bmatrix}0&-1.500\\0&1.071\\\end{bmatrix}}{\begin{bmatrix}1.1\\2.3\\\end{bmatrix}}+{\begin{bmatrix}5.500\\-2.071\\\end{bmatrix}}={\begin{bmatrix}2.050\\0.393\\\end{bmatrix}}.\\[1ex]\mathbf {x} ^{(2)}&={\begin{bmatrix}0&-1.500\\0&1.071\\\end{bmatrix}}{\begin{bmatrix}2.050\\0.393\\\end{bmatrix}}+{\begin{bmatrix}5.500\\-2.071\\\end{bmatrix}}={\begin{bmatrix}4.911\\-1.651\end{bmatrix}}.\\[1ex]\mathbf {x} ^{(3)}&=\cdots .\end{aligned}}}
In a test for convergence we find that the algorithm diverges. In fact, the matrix {\displaystyle \mathbf {A} } is neither diagonally dominant nor positive definite.
Then, convergence to the exact solution {\displaystyle \mathbf {x} =\mathbf {A} ^{-1}\mathbf {b} ={\begin{bmatrix}-38\\29\end{bmatrix}}} is not guaranteed and, in this case, will not occur.
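The divergence here can be predicted from the spectral radius of the iteration matrix T = −L⁻¹U: the fixed-point iteration converges for every starting vector exactly when ρ(T) < 1. A quick NumPy check (illustrative sketch, not part of the original article):

```python
import numpy as np

A = np.array([[2.0, 3.0], [5.0, 7.0]])
L = np.tril(A)                       # lower triangular part of A
U = A - L                            # strictly upper triangular part
T = -np.linalg.solve(L, U)           # iteration matrix T = -L^{-1} U

rho = max(abs(np.linalg.eigvals(T))) # spectral radius of T
print(rho)                           # about 1.071 > 1, so the iteration diverges
```

For the first example, by contrast, ρ(T) ≈ 0.119, which is why that iteration converges in only a few steps.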
=== An example for the equation version ===
Suppose {\displaystyle n} equations and a starting point {\displaystyle \mathbf {x} _{0}} are given.
At any step in a Gauss–Seidel iteration, solve the first equation for {\displaystyle x_{1}} in terms of {\displaystyle x_{2},\dots ,x_{n}}; then solve the second equation for {\displaystyle x_{2}} in terms of the {\displaystyle x_{1}} just found and the remaining {\displaystyle x_{3},\dots ,x_{n}}; and continue to {\displaystyle x_{n}}. Then, repeat the iterations until convergence is achieved, or break if the solutions begin to diverge beyond a predefined level.
Consider an example:
{\displaystyle {\begin{array}{rrrrl}10x_{1}&-x_{2}&+2x_{3}&&=6,\\-x_{1}&+11x_{2}&-x_{3}&+3x_{4}&=25,\\2x_{1}&-x_{2}&+10x_{3}&-x_{4}&=-11,\\&3x_{2}&-x_{3}&+8x_{4}&=15.\end{array}}}
Solving for {\displaystyle x_{1},x_{2},x_{3}} and {\displaystyle x_{4}} gives:
{\displaystyle {\begin{aligned}x_{1}&=x_{2}/10-x_{3}/5+3/5,\\x_{2}&=x_{1}/11+x_{3}/11-3x_{4}/11+25/11,\\x_{3}&=-x_{1}/5+x_{2}/10+x_{4}/10-11/10,\\x_{4}&=-3x_{2}/8+x_{3}/8+15/8.\end{aligned}}}
Suppose (0, 0, 0, 0) is the initial approximation; then the first approximate solution is given by:
{\displaystyle {\begin{aligned}x_{1}&=3/5=0.6,\\x_{2}&=(3/5)/11+25/11=3/55+25/11=2.3272,\\x_{3}&=-(3/5)/5+(2.3272)/10-11/10=-3/25+0.23272-1.1=-0.9873,\\x_{4}&=-3(2.3272)/8+(-0.9873)/8+15/8=0.8789.\end{aligned}}}
Using the approximations obtained, the iterative procedure is repeated until the desired accuracy has been reached. The following are the approximated solutions after four iterations.
The exact solution of the system is (1, 2, −1, 1).
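The hand computation can be continued mechanically. The short Python sketch below (an illustration, not part of the original example) repeats the sweeps and confirms convergence to the exact solution:

```python
A = [[10.0, -1.0,  2.0,  0.0],
     [-1.0, 11.0, -1.0,  3.0],
     [ 2.0, -1.0, 10.0, -1.0],
     [ 0.0,  3.0, -1.0,  8.0]]
b = [6.0, 25.0, -11.0, 15.0]
x = [0.0, 0.0, 0.0, 0.0]           # initial approximation (0, 0, 0, 0)

for _ in range(15):                # fifteen sweeps are more than enough here
    for i in range(4):
        sigma = sum(A[i][j] * x[j] for j in range(4) if j != i)
        x[i] = (b[i] - sigma) / A[i][i]

print(x)                           # close to [1, 2, -1, 1]
```

The first sweep reproduces the values computed by hand above (0.6, 2.3272, −0.9873, 0.8789).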
=== An example using Python and NumPy ===
The following iterative procedure produces the solution vector of a linear system of equations:
Produces the output:
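The listing and its output are not reproduced in this extract. A minimal NumPy procedure in the same spirit (the function name, tolerance, and stopping rule are assumptions, not the original code) could look like:

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=10_000):
    """Iterate x <- L^{-1} (b - U x) until the residual is small."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.zeros_like(b) if x0 is None else np.asarray(x0, dtype=float)
    L = np.tril(A)                          # lower triangle, diagonal included
    U = A - L                               # strictly upper triangle
    for _ in range(max_iter):
        x = np.linalg.solve(L, b - U @ x)   # one Gauss-Seidel sweep
        if np.linalg.norm(A @ x - b) < tol: # stop when the residual is tiny
            break
    return x

print(gauss_seidel([[16, 3], [7, -11]], [11, 13]))  # approximately (0.8122, -0.6650)
```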
=== Program to solve arbitrary number of equations using Matlab ===
The following code uses the formula
{\displaystyle x_{i}^{(k+1)}={\frac {1}{a_{ii}}}\left(b_{i}-\sum _{j<i}a_{ij}x_{j}^{(k+1)}-\sum _{j>i}a_{ij}x_{j}^{(k)}\right),\quad {\begin{array}{l}i=1,2,\ldots ,n\\k=0,1,2,\ldots \end{array}}}
== See also ==
Conjugate gradient method
Gaussian belief propagation
Iterative method: Linear systems
Kaczmarz method (a "row-oriented" method, whereas Gauss–Seidel is "column-oriented")
Matrix splitting
Richardson iteration
== Notes ==
== References ==
Gauss, Carl Friedrich (1903), Werke (in German), vol. 9, Göttingen: Königlichen Gesellschaft der Wissenschaften.
Golub, Gene H.; Van Loan, Charles F. (1996), Matrix Computations (3rd ed.), Baltimore: Johns Hopkins, ISBN 978-0-8018-5414-9.
Black, Noel & Moore, Shirley. "Gauss-Seidel Method". MathWorld.
This article incorporates text from the article Gauss-Seidel_method on CFD-Wiki that is under the GFDL license.
== External links ==
"Seidel method", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Gauss–Seidel from www.math-linux.com
Gauss–Seidel From Holistic Numerical Methods Institute
Gauss Siedel Iteration from www.geocities.com
The Gauss-Seidel Method
Bickson
Matlab code
C code example | Wikipedia/Gauss–Seidel_method |
In computer graphics, a digital differential analyzer (DDA) is hardware or software used for interpolation of variables over an interval between a start and end point. DDAs are used for rasterization of lines, triangles and polygons. They can be extended to non-linear functions, such as perspective-correct texture mapping, quadratic curves, and traversing voxels.
In its simplest implementation for linear cases such as lines, the DDA algorithm interpolates values over an interval by computing for each xi the equations xi = xi−1 + 1, yi = yi−1 + m, where m is the slope of the line. This slope can be expressed in DDA as follows:
{\displaystyle m={\frac {y_{\rm {end}}-y_{\rm {start}}}{x_{\rm {end}}-x_{\rm {start}}}}}
In fact any two consecutive points lying on this line segment should satisfy the equation.
== Performance ==
The DDA method can be implemented using floating-point or integer arithmetic. The native floating-point implementation requires one addition and one rounding operation per interpolated value (e.g. coordinate x, y, depth, color component, etc.) and output result. This process is only efficient when an FPU with fast addition and rounding operations is available.
The fixed-point integer operation requires two additions per output cycle, and in case of fractional part overflow, one additional increment and subtraction. The probability of fractional part overflows is proportional to the ratio m of the interpolated start/end values.
DDAs are well suited for hardware implementation and can be pipelined for maximized throughput.
== Algorithm ==
A linear DDA starts by calculating the smaller of dy or dx for a unit increment of the other. A line is then sampled at unit intervals in one coordinate and corresponding integer values nearest the line path are determined for the other coordinate.
Considering a line with positive slope, if the slope is less than or equal to 1, we sample at unit x intervals (dx=1) and compute successive y values as
{\displaystyle y_{k+1}=y_{k}+m}
{\displaystyle x_{k+1}=x_{k}+1}
Subscript k takes integer values starting from 0 for the first point, and increases by 1 until the endpoint is reached. The y value is rounded off to the nearest integer to correspond to a screen pixel.
For lines with slope greater than 1, we reverse the role of x and y i.e. we sample at dy=1 and calculate consecutive x values as
{\displaystyle x_{k+1}=x_{k}+{\frac {1}{m}}}
{\displaystyle y_{k+1}=y_{k}+1}
Similar calculations are carried out to determine pixel positions along a line with negative slope. Thus, if the absolute value of the slope is less than 1, we set dx=1 if {\displaystyle x_{\rm {start}}<x_{\rm {end}}}, i.e. the starting extreme point is at the left.
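The two cases above (stepping in x when |m| ≤ 1, stepping in y otherwise) can be folded into one routine by always stepping along the axis with the larger span. The Python sketch below is an illustration assuming integer endpoints; it is not the C++ program referenced in the next section:

```python
import math

def dda_line(x0, y0, x1, y1):
    """Rasterize a line from (x0, y0) to (x1, y1) with a simple DDA."""
    dx, dy = x1 - x0, y1 - y0
    steps = max(abs(dx), abs(dy))          # unit steps along the faster axis
    if steps == 0:
        return [(x0, y0)]                  # degenerate line: a single pixel
    x_inc, y_inc = dx / steps, dy / steps  # one increment is ±1, the other ±m or ±1/m
    pixels = []
    x, y = float(x0), float(y0)
    for _ in range(steps + 1):
        # snap to the nearest pixel; floor(v + 0.5) rounds halves upward
        pixels.append((math.floor(x + 0.5), math.floor(y + 0.5)))
        x += x_inc
        y += y_inc
    return pixels
```

For example, dda_line(0, 0, 4, 2) steps in x (slope 1/2) and yields the pixels (0,0), (1,1), (2,1), (3,2), (4,2).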
== Program ==
DDA algorithm program in C++:
== See also ==
Bresenham's line algorithm is an algorithm for line rendering.
Incremental error algorithm
Xiaolin Wu's line algorithm is an algorithm for line anti-aliasing
== References ==
http://www.museth.org/Ken/Publications_files/Museth_SIG14.pdf
Alan Watt: 3D Computer Graphics, 3rd edition 2000, p. 184 (Rasterizing edges). ISBN 0-201-39855-9 | Wikipedia/Digital_differential_analyzer_(graphics_algorithm) |
The Warnock algorithm is a hidden surface algorithm invented by John Warnock that is typically used in the field of computer graphics.
It solves the problem of rendering a complicated image by recursive subdivision of a scene until areas are obtained that are trivial to compute. In other words, if the scene is simple enough to compute efficiently then it is rendered; otherwise it is divided into smaller parts which are likewise tested for simplicity.
This is a divide and conquer algorithm with run-time of {\displaystyle O(np)}, where n is the number of polygons and p is the number of pixels in the viewport.
The inputs are a list of polygons and a viewport. In the best case, the list of polygons is simple and the polygons are drawn in the viewport. "Simple" means either a single polygon (the polygon or its visible part is drawn in the appropriate part of the viewport) or a viewport that is one pixel in size (that pixel gets the color of the polygon closest to the observer). The recursive step is to split the viewport into 4 equally sized quadrants and to call the algorithm for each quadrant, with a polygon list modified such that it only contains polygons that are visible in that quadrant.
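The subdivision logic can be sketched as follows. To keep the example self-contained, "polygons" are axis-aligned rectangles and "drawing" just records each trivially simple region; the names and the overlap test are illustrative, not from Warnock's thesis:

```python
def overlaps(rect, viewport):
    """Axis-aligned overlap test; a real implementation clips true polygons."""
    ax, ay, aw, ah = rect
    bx, by, bw, bh = viewport
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def warnock(rects, viewport, out):
    """Recursively subdivide until each region is trivial, appending
    (viewport, visible_rects) pairs to `out` instead of drawing pixels."""
    visible = [r for r in rects if overlaps(r, viewport)]
    x, y, w, h = viewport
    if len(visible) <= 1 or (w <= 1 and h <= 1):
        out.append((viewport, visible))    # simple enough: "draw" it
        return
    hw, hh = w / 2, h / 2                  # split into four equal quadrants
    for qx, qy in ((x, y), (x + hw, y), (x, y + hh), (x + hw, y + hh)):
        warnock(visible, (qx, qy, hw, hh), out)
```

With two small rectangles in opposite corners of a 4×4 viewport, one level of subdivision already makes every quadrant simple.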
Warnock expressed his algorithm in words and pictures, rather than software code, as the core of his PhD thesis, which also described protocols for shading oblique surfaces and other features that are now the core of 3-dimensional computer graphics. The entire thesis was only 26 pages from Introduction to Bibliography.
== References ==
== External links ==
A summary of the Warnock Algorithm | Wikipedia/Warnock_algorithm |
In physics, the cross section is a measure of the probability that a specific process will take place in a collision of two particles. For example, the Rutherford cross-section is a measure of probability that an alpha particle will be deflected by a given angle during an interaction with an atomic nucleus. Cross section is typically denoted σ (sigma) and is expressed in units of area, more specifically in barns. In a way, it can be thought of as the size of the object that the excitation must hit in order for the process to occur, but more exactly, it is a parameter of a stochastic process.
When two discrete particles interact in classical physics, their mutual cross section is the area transverse to their relative motion within which they must meet in order to scatter from each other. If the particles are hard inelastic spheres that interact only upon contact, their scattering cross section is related to their geometric size. If the particles interact through some action-at-a-distance force, such as electromagnetism or gravity, their scattering cross section is generally larger than their geometric size.
When a cross section is specified as the differential limit of a function of some final-state variable, such as particle angle or energy, it is called a differential cross section (see detailed discussion below). When a cross section is integrated over all scattering angles (and possibly other variables), it is called a total cross section or integrated total cross section. For example, in Rayleigh scattering, the intensity scattered at the forward and backward angles is greater than the intensity scattered sideways, so the forward differential scattering cross section is greater than the perpendicular differential cross section, and by adding all of the infinitesimal cross sections over the whole range of angles with integral calculus, we can find the total cross section.
Scattering cross sections may be defined in nuclear, atomic, and particle physics for collisions of accelerated beams of one type of particle with targets (either stationary or moving) of a second type of particle. The probability for any given reaction to occur is in proportion to its cross section. Thus, specifying the cross section for a given reaction is a proxy for stating the probability that a given scattering process will occur.
The measured reaction rate of a given process depends strongly on experimental variables such as the density of the target material, the intensity of the beam, the detection efficiency of the apparatus, or the angle setting of the detection apparatus. However, these quantities can be factored away, allowing measurement of the underlying two-particle collisional cross section.
Differential and total scattering cross sections are among the most important measurable quantities in nuclear, atomic, and particle physics.
With light scattering off of a particle, the cross section specifies the amount of optical power scattered from light of a given irradiance (power per area). Although the cross section has the same units as area, the cross section may not necessarily correspond to the actual physical size of the target given by other forms of measurement. It is not uncommon for the actual cross-sectional area of a scattering object to be much larger or smaller than the cross section relative to some physical process. For example, plasmonic nanoparticles can have light scattering cross sections for particular frequencies that are much larger than their actual cross-sectional areas.
== Collision among gas particles ==
In a gas of finite-sized particles there are collisions among particles that depend on their cross-sectional size. The average distance that a particle travels between collisions depends on the density of gas particles. These quantities are related by
{\displaystyle \sigma ={\frac {1}{n\lambda }},}
where
σ is the cross section of a two-particle collision (SI unit: m2),
λ is the mean free path between collisions (SI unit: m),
n is the number density of the target particles (SI unit: m−3).
If the particles in the gas can be treated as hard spheres of radius r that interact by direct contact, as illustrated in Figure 1, then the effective cross section for the collision of a pair is
{\displaystyle \sigma =\pi \left(2r\right)^{2}}
If the particles in the gas interact by a force with a larger range than their physical size, then the cross section is a larger effective area that may depend on a variety of variables such as the energy of the particles.
Cross sections can be computed for atomic collisions but also are used in the subatomic realm. For example, in nuclear physics a "gas" of low-energy neutrons collides with nuclei in a reactor or other nuclear device, with a cross section that is energy-dependent and hence also with well-defined mean free path between collisions.
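For a rough numerical feel for these relations, the sketch below estimates a hard-sphere cross section and the resulting mean free path. The molecular radius and number density are illustrative assumptions (roughly a diatomic gas at ambient conditions), not values from the text:

```python
import math

r = 1.8e-10                        # assumed effective molecular radius (m)
n = 2.5e25                         # assumed number density near 1 atm, 300 K (m^-3)

sigma = math.pi * (2 * r) ** 2     # hard-sphere pair cross section, sigma = pi (2r)^2
mean_free_path = 1 / (n * sigma)   # lambda = 1 / (n sigma)

print(sigma, mean_free_path)       # roughly 4e-19 m^2 and about 1e-7 m (~100 nm)
```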
== Attenuation of a beam of particles ==
If a beam of particles enters a thin layer of material of thickness dz, the flux Φ of the beam will decrease by dΦ according to
{\displaystyle {\frac {\mathrm {d} \Phi }{\mathrm {d} z}}=-n\sigma \Phi ,}
where σ is the total cross section of all events, including scattering, absorption, or transformation to another species. The volumetric number density of scattering centers is designated by n. Solving this equation exhibits the exponential attenuation of the beam intensity:
{\displaystyle \Phi =\Phi _{0}e^{-n\sigma z},}
where Φ0 is the initial flux, and z is the total thickness of the material. For light, this is called the Beer–Lambert law.
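The exponential law follows from integrating dΦ/dz = −nσΦ. The sketch below (all numerical values are arbitrary illustrations) compares the closed form with a direct forward-Euler integration of the differential equation:

```python
import math

n = 1e24        # scattering centres per m^3 (illustrative)
sigma = 1e-26   # total cross section in m^2 (illustrative)
z = 50.0        # material thickness in m
phi0 = 1.0      # incident flux, normalised

# Closed-form Beer-Lambert attenuation
phi_exact = phi0 * math.exp(-n * sigma * z)

# Forward-Euler integration of d(phi)/dz = -n * sigma * phi
steps = 100_000
dz = z / steps
phi = phi0
for _ in range(steps):
    phi -= n * sigma * phi * dz

print(phi_exact, phi)   # the two agree to a few parts in 10^4
```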
== Differential cross section ==
Consider a classical measurement where a single particle is scattered off a single stationary target particle. Conventionally, a spherical coordinate system is used, with the target placed at the origin and the z axis of this coordinate system aligned with the incident beam. The angle θ is the scattering angle, measured between the incident beam and the scattered beam, and the φ is the azimuthal angle.
The impact parameter b is the perpendicular offset of the trajectory of the incoming particle, and the outgoing particle emerges at an angle θ. For a given interaction (coulombic, magnetic, gravitational, contact, etc.), the impact parameter and the scattering angle have a definite one-to-one functional dependence on each other. Generally the impact parameter can neither be controlled nor measured from event to event and is assumed to take all possible values when averaging over many scattering events. The differential size of the cross section is the area element in the plane of the impact parameter, i.e. dσ = b dφ db. The differential angular range of the scattered particle at angle θ is the solid angle element dΩ = sin θ dθ dφ. The differential cross section is the quotient of these quantities, dσ/dΩ.
It is a function of the scattering angle (and therefore also the impact parameter), plus other observables such as the momentum of the incoming particle. The differential cross section is always taken to be positive, even though larger impact parameters generally produce less deflection. In cylindrically symmetric situations (about the beam axis), the azimuthal angle φ is not changed by the scattering process, and the differential cross section can be written as
{\displaystyle {\frac {\mathrm {d} \sigma }{\mathrm {d} (\cos \theta )}}=\int _{0}^{2\pi }{\frac {\mathrm {d} \sigma }{\mathrm {d} \Omega }}\,\mathrm {d} \varphi .}
In situations where the scattering process is not azimuthally symmetric, such as when the beam or target particles possess magnetic moments oriented perpendicular to the beam axis, the differential cross section must also be expressed as a function of the azimuthal angle.
For scattering of particles of incident flux Finc off a stationary target consisting of many particles, the differential cross section dσ/dΩ at an angle (θ,φ) is related to the flux of scattered particle detection Fout(θ,φ) in particles per unit time by
{\displaystyle {\frac {\mathrm {d} \sigma }{\mathrm {d} \Omega }}(\theta ,\varphi )={\frac {1}{nt\Delta \Omega }}{\frac {F_{\text{out}}(\theta ,\varphi )}{F_{\text{inc}}}}.}
Here ΔΩ is the finite angular size of the detector (SI unit: sr), n is the number density of the target particles (SI unit: m−3), and t is the thickness of the stationary target (SI unit: m). This formula assumes that the target is thin enough that each beam particle will interact with at most one target particle.
The total cross section σ may be recovered by integrating the differential cross section dσ/dΩ over the full solid angle (4π steradians):
{\displaystyle \sigma =\oint _{4\pi }{\frac {\mathrm {d} \sigma }{\mathrm {d} \Omega }}\,\mathrm {d} \Omega =\int _{0}^{2\pi }\int _{0}^{\pi }{\frac {\mathrm {d} \sigma }{\mathrm {d} \Omega }}\sin \theta \,\mathrm {d} \theta \,\mathrm {d} \varphi .}
It is common to omit the "differential" qualifier when the type of cross section can be inferred from context. In this case, σ may be referred to as the integral cross section or total cross section. The latter term may be confusing in contexts where multiple events are involved, since "total" can also refer to the sum of cross sections over all events.
The differential cross section is an extremely useful quantity in many fields of physics, as measuring it can reveal a great amount of information about the internal structure of the target particles. For example, the differential cross section of Rutherford scattering provided strong evidence for the existence of the atomic nucleus.
Instead of the solid angle, the momentum transfer may be used as the independent variable of differential cross sections.
Differential cross sections in inelastic scattering contain resonance peaks that indicate the creation of metastable states and contain information about their energy and lifetime.
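As a concrete instance of recovering a total cross section from a differential one: for classical hard-sphere scattering the differential cross section is isotropic, dσ/dΩ = R²/4, and integrating over the full solid angle gives back the geometric value σ = πR². The midpoint-rule quadrature below is an illustrative numerical check:

```python
import math

R = 2.0                       # sphere radius (arbitrary units)

def dcs(theta, phi):
    """Classical hard-sphere differential cross section: isotropic R^2/4."""
    return R ** 2 / 4.0

# Integrate dσ/dΩ sinθ dθ dφ over θ in [0, π], φ in [0, 2π]
n_theta, n_phi = 400, 400
d_theta, d_phi = math.pi / n_theta, 2 * math.pi / n_phi
sigma = 0.0
for i in range(n_theta):
    theta = (i + 0.5) * d_theta           # midpoint rule in θ
    for j in range(n_phi):
        phi = (j + 0.5) * d_phi
        sigma += dcs(theta, phi) * math.sin(theta) * d_theta * d_phi

print(sigma, math.pi * R ** 2)            # both near 12.566
```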
== Quantum scattering ==
In the time-independent formalism of quantum scattering, the initial wave function (before scattering) is taken to be a plane wave with definite momentum k:
{\displaystyle \phi _{-}(\mathbf {r} )\;{\stackrel {r\to \infty }{\longrightarrow }}\;e^{ikz},}
where z and r are the relative coordinates between the projectile and the target. The arrow indicates that this only describes the asymptotic behavior of the wave function when the projectile and target are too far apart for the interaction to have any effect.
After scattering takes place it is expected that the wave function takes on the following asymptotic form:
{\displaystyle \phi _{+}(\mathbf {r} )\;{\stackrel {r\to \infty }{\longrightarrow }}\;f(\theta ,\phi ){\frac {e^{ikr}}{r}},}
where f is some function of the angular coordinates known as the scattering amplitude. This general form is valid for any short-ranged, energy-conserving interaction. It is not true for long-ranged interactions, so there are additional complications when dealing with electromagnetic interactions.
The full wave function of the system behaves asymptotically as the sum
{\displaystyle \phi (\mathbf {r} )\;{\stackrel {r\to \infty }{\longrightarrow }}\;\phi _{-}(\mathbf {r} )+\phi _{+}(\mathbf {r} ).}
The differential cross section is related to the scattering amplitude:
{\displaystyle {\frac {\mathrm {d} \sigma }{\mathrm {d} \Omega }}(\theta ,\phi )={\bigl |}f(\theta ,\phi ){\bigr |}^{2}.}
This has the simple interpretation as the probability density for finding the scattered projectile at a given angle.
A cross section is therefore a measure of the effective surface area seen by the impinging particles, and as such is expressed in units of area. The cross section of two particles (i.e. observed when the two particles are colliding with each other) is a measure of the interaction event between the two particles. The cross section is proportional to the probability that an interaction will occur; for example in a simple scattering experiment the number of particles scattered per unit of time (current of scattered particles Ir) depends only on the number of incident particles per unit of time (current of incident particles Ii), the characteristics of target (for example the number of particles per unit of surface N), and the type of interaction. For Nσ ≪ 1 we have
{\displaystyle {\begin{aligned}I_{\text{r}}&=I_{\text{i}}N\sigma ,\\\sigma &={\frac {I_{\text{r}}}{I_{\text{i}}}}{\frac {1}{N}}\\&={\text{probability of interaction}}\times {\frac {1}{N}}.\end{aligned}}}
=== Relation to the S-matrix ===
If the reduced masses and momenta of the colliding system are mi, pi and mf, pf before and after the collision respectively, the differential cross section is given by
{\displaystyle {\frac {\mathrm {d} \sigma }{\mathrm {d} \Omega }}=\left(2\pi \right)^{4}m_{\text{i}}m_{\text{f}}{\frac {p_{\text{f}}}{p_{\text{i}}}}{\bigl |}T_{{\text{f}}{\text{i}}}{\bigr |}^{2},}
where the on-shell T matrix is defined by
{\displaystyle S_{{\text{f}}{\text{i}}}=\delta _{{\text{f}}{\text{i}}}-2\pi i\delta \left(E_{\text{f}}-E_{\text{i}}\right)\delta \left(\mathbf {p} _{\text{i}}-\mathbf {p} _{\text{f}}\right)T_{{\text{f}}{\text{i}}}}
in terms of the S-matrix. Here δ is the Dirac delta function. The computation of the S-matrix is the main goal of the scattering theory.
== Units ==
Although the SI unit of total cross sections is m2, a smaller unit is usually used in practice.
In nuclear and particle physics, the conventional unit is the barn b, where 1 b = 10−28 m2 = 100 fm2. Smaller prefixed units such as mb and μb are also widely used. Correspondingly, the differential cross section can be measured in units such as mb/sr.
When the scattered radiation is visible light, it is conventional to measure the path length in centimetres. To avoid the need for conversion factors, the scattering cross section is expressed in cm2, and the number concentration in cm−3. The measurement of the scattering of visible light is known as nephelometry, and is effective for particles of 2–50 μm in diameter: as such, it is widely used in meteorology and in the measurement of atmospheric pollution.
The scattering of X-rays can also be described in terms of scattering cross sections, in which case the square ångström is a convenient unit: 1 Å2 = 10−20 m2 = 10000 pm2 = 108 b. The sum of the scattering, photoelectric, and pair-production cross-sections (in barns) is charted as the "atomic attenuation coefficient" (narrow-beam), in barns.
== Scattering of light ==
For light, as in other settings, the scattering cross section for particles is generally different from the geometrical cross section of the particle, and it depends upon the wavelength of light and the permittivity, shape, and size of the particle. The total amount of scattering in a sparse medium is proportional to the product of the scattering cross section and the number of particles present.
In the interaction of light with particles, many processes occur, each with their own cross sections, including absorption, scattering, and photoluminescence. The sum of the absorption and scattering cross sections is sometimes referred to as the attenuation or extinction cross section.
{\displaystyle \sigma =\sigma _{\text{abs}}+\sigma _{\text{sc}}+\sigma _{\text{lum}}.}
The total extinction cross section is related to the attenuation of the light intensity through the Beer–Lambert law, which says that attenuation is proportional to particle concentration:
{\displaystyle A_{\lambda }=Cl\sigma ,}
where Aλ is the attenuation at a given wavelength λ, C is the particle concentration as a number density, and l is the path length. The absorbance of the radiation is the logarithm (decadic or, more usually, natural) of the reciprocal of the transmittance T:
{\displaystyle A_{\lambda }=-\log {\mathcal {T}}.}
Combining the scattering and absorption cross sections in this manner is often necessitated by the inability to distinguish them experimentally, and much research effort has been put into developing models that allow them to be distinguished; the Kubelka–Munk theory is one of the most important in this area.
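As a numerical illustration of the Beer–Lambert relation above, the sketch below computes the attenuation and the corresponding transmittance; all input values are hypothetical.

```python
import math

# Beer–Lambert: A_lambda = C * l * sigma (natural-log absorbance), T = exp(-A).
C = 1.0e12       # number density of particles, per m^3 (hypothetical)
l = 0.1          # path length, m (hypothetical)
sigma = 1.0e-14  # extinction cross section per particle, m^2 (hypothetical)

A = C * l * sigma        # dimensionless attenuation
T = math.exp(-A)         # transmittance, since A = -ln(T)
```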
=== Cross section and Mie theory ===
Cross sections commonly calculated using Mie theory include efficiency coefficients for the extinction {\textstyle Q_{\text{ext}}}, scattering {\textstyle Q_{\text{sc}}}, and absorption {\textstyle Q_{\text{abs}}} cross sections. These are normalized by the geometrical cross section of the particle {\textstyle \sigma _{\text{geom}}=\pi a^{2}} as
{\displaystyle Q_{\alpha }={\frac {\sigma _{\alpha }}{\sigma _{\text{geom}}}},\qquad \alpha ={\text{ext}},{\text{sc}},{\text{abs}}.}
The cross section is defined by
{\displaystyle \sigma _{\alpha }={\frac {W_{\alpha }}{I_{\text{inc}}}}}
where {\displaystyle \left[W_{\alpha }\right]=\left[{\text{W}}\right]} is the energy flow through the surrounding surface, and {\displaystyle \left[I_{\text{inc}}\right]=\left[{\frac {\text{W}}{{\text{m}}^{2}}}\right]} is the intensity of the incident wave. For a plane wave the intensity is {\displaystyle I_{\text{inc}}=|\mathbf {E} |^{2}/(2\eta )}, where {\displaystyle \eta ={\sqrt {\mu \mu _{0}/(\varepsilon \varepsilon _{0})}}} is the impedance of the host medium.
The main approach is based on the following. First, we construct an imaginary sphere of radius {\displaystyle r} (surface {\displaystyle A}) around the particle (the scatterer). The net rate at which electromagnetic energy crosses the surface {\displaystyle A} is
{\displaystyle W_{\text{a}}=-\oint _{A}\mathbf {\Pi } \cdot {\hat {\mathbf {r} }}dA}
where {\textstyle \mathbf {\Pi } ={\frac {1}{2}}\operatorname {Re} \left[\mathbf {E} ^{*}\times \mathbf {H} \right]} is the time-averaged Poynting vector. If {\displaystyle W_{\text{a}}>0}, energy is absorbed within the sphere; otherwise, energy is being created within the sphere, a case we will not consider here. If the host medium is non-absorbing, the energy must be absorbed by the particle. We decompose the total field into incident and scattered parts
{\displaystyle \mathbf {E} =\mathbf {E} _{\text{i}}+\mathbf {E} _{\text{s}}}, and likewise for the magnetic field {\displaystyle \mathbf {H} }. Thus, we can decompose {\displaystyle W_{\text{a}}} into the three terms
{\displaystyle W_{\text{a}}=W_{\text{i}}-W_{\text{s}}+W_{\text{ext}}}, where
{\displaystyle W_{\text{i}}=-\oint _{A}\mathbf {\Pi } _{\text{i}}\cdot {\hat {\mathbf {r} }}dA\equiv 0,\qquad W_{\text{s}}=\oint _{A}\mathbf {\Pi } _{\text{s}}\cdot {\hat {\mathbf {r} }}dA,\qquad W_{\text{ext}}=\oint _{A}\mathbf {\Pi } _{\text{ext}}\cdot {\hat {\mathbf {r} }}dA.}
where
{\displaystyle \mathbf {\Pi } _{\text{i}}={\frac {1}{2}}\operatorname {Re} \left[\mathbf {E} _{\text{i}}^{*}\times \mathbf {H} _{\text{i}}\right]},
{\displaystyle \mathbf {\Pi } _{\text{s}}={\frac {1}{2}}\operatorname {Re} \left[\mathbf {E} _{\text{s}}^{*}\times \mathbf {H} _{\text{s}}\right]}, and
{\displaystyle \mathbf {\Pi } _{\text{ext}}={\frac {1}{2}}\operatorname {Re} \left[\mathbf {E} _{s}^{*}\times \mathbf {H} _{i}+\mathbf {E} _{i}^{*}\times \mathbf {H} _{s}\right]}.
All the fields can be decomposed into series of vector spherical harmonics (VSH); after that, all the integrals can be evaluated.
In the case of a uniform sphere of radius {\displaystyle a}, permittivity {\displaystyle \varepsilon }, and permeability {\displaystyle \mu }, the problem has an exact solution. The scattering and extinction coefficients are
{\displaystyle Q_{\text{sc}}={\frac {2}{k^{2}a^{2}}}\sum _{n=1}^{\infty }(2n+1)(|a_{n}|^{2}+|b_{n}|^{2})}
{\displaystyle Q_{\text{ext}}={\frac {2}{k^{2}a^{2}}}\sum _{n=1}^{\infty }(2n+1)\Re (a_{n}+b_{n})}
where {\textstyle k=n_{\text{host}}k_{0}}. These are connected as
{\displaystyle \sigma _{\text{ext}}=\sigma _{\text{sc}}+\sigma _{\text{abs}}\qquad {\text{or}}\qquad Q_{\text{ext}}=Q_{\text{sc}}+Q_{\text{abs}}}
=== Dipole approximation for the scattering cross section ===
Let us assume that a particle supports only electric and magnetic dipole modes, with polarizabilities {\textstyle \mathbf {p} =\alpha ^{e}\mathbf {E} } and {\textstyle \mathbf {m} =(\mu \mu _{0})^{-1}\alpha ^{m}\mathbf {H} } (here we use the notation of magnetic polarizability in the manner of Bekshaev et al. rather than that of Nieto-Vesperinas et al.), expressed through the Mie coefficients as
{\displaystyle \alpha ^{e}=4\pi \varepsilon _{0}\cdot i{\frac {3\varepsilon }{2k^{3}}}a_{1},\qquad \alpha ^{m}=4\pi \mu _{0}\cdot i{\frac {3\mu }{2k^{3}}}b_{1}.}
Then the cross sections are given by
{\displaystyle \sigma _{\text{ext}}=\sigma _{\text{ext}}^{\text{(e)}}+\sigma _{\text{ext}}^{\text{(m)}}={\frac {1}{4\pi \varepsilon \varepsilon _{0}}}\cdot 4\pi k\Im (\alpha ^{e})+{\frac {1}{4\pi \mu \mu _{0}}}\cdot 4\pi k\Im (\alpha ^{m})}
{\displaystyle \sigma _{\text{sc}}=\sigma _{\text{sc}}^{\text{(e)}}+\sigma _{\text{sc}}^{\text{(m)}}={\frac {1}{(4\pi \varepsilon \varepsilon _{0})^{2}}}\cdot {\frac {8\pi }{3}}k^{4}|\alpha ^{e}|^{2}+{\frac {1}{(4\pi \mu \mu _{0})^{2}}}\cdot {\frac {8\pi }{3}}k^{4}|\alpha ^{m}|^{2}}
and, finally, the electric and magnetic absorption cross sections
{\textstyle \sigma _{\text{abs}}=\sigma _{\text{abs}}^{\text{(e)}}+\sigma _{\text{abs}}^{\text{(m)}}} are
{\displaystyle \sigma _{\text{abs}}^{\text{(e)}}={\frac {1}{4\pi \varepsilon \varepsilon _{0}}}\cdot 4\pi k\left[\Im (\alpha ^{e})-{\frac {k^{3}}{6\pi \varepsilon \varepsilon _{0}}}|\alpha ^{e}|^{2}\right]}
and
{\displaystyle \sigma _{\text{abs}}^{\text{(m)}}={\frac {1}{4\pi \mu \mu _{0}}}\cdot 4\pi k\left[\Im (\alpha ^{m})-{\frac {k^{3}}{6\pi \mu \mu _{0}}}|\alpha ^{m}|^{2}\right]}
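The three dipole cross sections satisfy σ_ext = σ_sc + σ_abs identically. A quick numerical check of the electric-dipole terms, using hypothetical values for the host permittivity, the wavenumber, and the polarizability:

```python
import math

# Verify sigma_ext^(e) = sigma_sc^(e) + sigma_abs^(e) for the dipole formulas.
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
eps = 2.25                # relative permittivity of the host (assumed)
k = 1.2e7                 # wavenumber in the host medium, 1/m (assumed)
alpha_e = (3.0 + 1.0j) * 1e-40 * (4 * math.pi * eps0)  # hypothetical value

pref = 1.0 / (4 * math.pi * eps * eps0)
sigma_ext = pref * 4 * math.pi * k * alpha_e.imag
sigma_sc = pref**2 * (8 * math.pi / 3) * k**4 * abs(alpha_e) ** 2
sigma_abs = pref * 4 * math.pi * k * (
    alpha_e.imag - k**3 / (6 * math.pi * eps * eps0) * abs(alpha_e) ** 2
)
# sigma_ext and sigma_sc + sigma_abs agree to rounding error
```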
For the case of a particle with no internal gain, i.e. no energy is emitted by the particle internally ({\textstyle \sigma _{\text{abs}}\geq 0}), we have a particular case of the optical theorem:
{\displaystyle {\frac {1}{4\pi \varepsilon \varepsilon _{0}}}\Im (\alpha ^{e})+{\frac {1}{4\pi \mu \mu _{0}}}\Im (\alpha ^{m})\geq {\frac {2k^{3}}{3}}\left[{\frac {|\alpha ^{e}|^{2}}{(4\pi \varepsilon \varepsilon _{0})^{2}}}+{\frac {|\alpha ^{m}|^{2}}{(4\pi \mu \mu _{0})^{2}}}\right]}
Equality occurs for non-absorbing particles, i.e. for {\textstyle \Im (\varepsilon )=\Im (\mu )=0}.
=== Scattering of light on extended bodies ===
In the context of scattering light on extended bodies, the scattering cross section, σsc, describes the likelihood of light being scattered by a macroscopic particle. In general, the scattering cross section is different from the geometrical cross section of a particle, as it depends upon the wavelength of light and the permittivity in addition to the shape and size of the particle. The total amount of scattering in a sparse medium is determined by the product of the scattering cross section and the number of particles present. In terms of area, the total cross section (σ) is the sum of the cross sections due to absorption, scattering, and luminescence:
{\displaystyle \sigma =\sigma _{\text{abs}}+\sigma _{\text{sc}}+\sigma _{\text{lum}}.}
The total cross section is related to the absorbance of the light intensity through the Beer–Lambert law, which says that absorbance is proportional to concentration: Aλ = Clσ, where Aλ is the absorbance at a given wavelength λ, C is the concentration as a number density, and l is the path length. The extinction or absorbance of the radiation is the logarithm (decadic or, more usually, natural) of the reciprocal of the transmittance T:
{\displaystyle A_{\lambda }=-\log {\mathcal {T}}.}
==== Relation to physical size ====
There is no simple relationship between the scattering cross section and the physical size of the particles, as the scattering cross section depends on the wavelength of the radiation used. This can be seen when looking at a halo surrounding the Moon on a decently foggy evening: red-light photons experience a larger cross-sectional area of water droplets than photons of higher energy. The halo around the Moon thus has a perimeter of red light, due to lower-energy photons being scattered further from the center of the Moon. Photons from the rest of the visible spectrum remain within the center of the halo and are perceived as white light.
=== Meteorological range ===
The scattering cross section is related to the meteorological range LV:
{\displaystyle L_{\text{V}}={\frac {3.9}{C\sigma _{\text{scat}}}}.}
The quantity Cσscat is sometimes denoted bscat, the scattering coefficient per unit length.
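For instance, with hypothetical values of the droplet concentration and scattering cross section, the meteorological-range formula gives:

```python
# L_V = 3.9 / (C * sigma_scat); all input values below are hypothetical.
C = 1.0e8             # droplets per m^3
sigma_scat = 2.0e-12  # scattering cross section per droplet, m^2

b_scat = C * sigma_scat    # scattering coefficient per unit length, 1/m
L_V = 3.9 / b_scat         # meteorological range in metres (here 19.5 km)
```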
== Examples ==
=== Elastic collision of two hard spheres ===
The following equations apply to two hard spheres that undergo a perfectly elastic collision. Let R and r denote the radii of the scattering center and scattered sphere, respectively. The differential cross section is
{\displaystyle {\frac {d\sigma }{d\Omega }}={\frac {(r+R)^{2}}{4}},}
and the total cross section is
{\displaystyle \sigma _{\text{tot}}=\pi \left(r+R\right)^{2}.}
In other words, the total scattering cross section is equal to the area of the circle (with radius r + R) within which the center of mass of the incoming sphere has to arrive for it to be deflected.
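Because hard-sphere scattering is isotropic, integrating the constant differential cross section over the full solid angle (4π steradians) must reproduce the geometric total cross section; a quick check with arbitrary test radii:

```python
import math

# Isotropic hard-sphere scattering: dsigma/dOmega = (r + R)^2 / 4 integrated
# over the full solid angle 4*pi gives sigma_tot = pi * (r + R)^2.
r, R = 0.5, 1.0                      # arbitrary test radii
dsigma_domega = (r + R) ** 2 / 4.0   # constant in angle
sigma_tot = 4 * math.pi * dsigma_domega
# sigma_tot equals math.pi * (r + R) ** 2
```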
=== Rutherford scattering ===
In Rutherford scattering, an incident particle with charge q and energy E scatters off a fixed particle with charge Q. The differential cross section is
{\displaystyle {\frac {d\sigma }{d\Omega }}=\left({\frac {q\,Q}{16\pi \varepsilon _{0}E\sin ^{2}(\theta /2)}}\right)^{2}}
where {\displaystyle \varepsilon _{0}} is the vacuum permittivity. The total cross section is infinite unless a cutoff for small scattering angles {\displaystyle \theta } is applied. This is due to the long range of the {\displaystyle 1/r} Coulomb potential.
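A direct transcription of the Rutherford formula, used here to check its characteristic 1/sin⁴(θ/2) angular dependence; the particular charges and energy are arbitrary test values:

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def rutherford(q, Q, E, theta):
    """Rutherford differential cross section dsigma/dOmega (SI; theta in rad)."""
    return (q * Q / (16 * math.pi * EPS0 * E * math.sin(theta / 2) ** 2)) ** 2

# Since dsigma/dOmega ~ 1/sin^4(theta/2), the ratio between theta = pi/2
# and theta = pi must be exactly 4, independent of the charges and energy.
e = 1.602176634e-19   # elementary charge, C
E = 1.0e-13           # kinetic energy, J (arbitrary test value)
ratio = rutherford(e, 79 * e, E, math.pi / 2) / rutherford(e, 79 * e, E, math.pi)
# ratio ≈ 4
```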
=== Scattering from a 2D circular mirror ===
The following example deals with a beam of light scattering off a circle with radius r and a perfectly reflecting boundary. The beam consists of a uniform density of parallel rays, and the beam–circle interaction is modeled within the framework of geometric optics. Because the problem is genuinely two-dimensional, the cross section has units of length (e.g., metres). Let α be the angle between the light ray and the radius joining the reflection point of the ray to the center of the mirror. Then the increase of the length element perpendicular to the beam is
{\displaystyle \mathrm {d} x=r\cos \alpha \,\mathrm {d} \alpha .}
The reflection angle of this ray with respect to the incoming ray is 2α, and the scattering angle is
{\displaystyle \theta =\pi -2\alpha .}
The differential relationship between incident and reflected intensity I is
{\displaystyle I\,\mathrm {d} \sigma =I\,\mathrm {d} x(x)=Ir\cos \alpha \,\mathrm {d} \alpha =I{\frac {r}{2}}\sin \left({\frac {\theta }{2}}\right)\,\mathrm {d} \theta =I{\frac {\mathrm {d} \sigma }{\mathrm {d} \theta }}\,\mathrm {d} \theta .}
The differential cross section is therefore (dΩ = dθ)
{\displaystyle {\frac {\mathrm {d} \sigma }{\mathrm {d} \theta }}={\frac {r}{2}}\sin \left({\frac {\theta }{2}}\right).}
Its maximum at θ = π corresponds to backward scattering, and its minimum at θ = 0 corresponds to scattering from the edge of the circle directly forward. This expression confirms the intuitive expectation that the mirror circle acts like a diverging lens. The total cross section is equal to the diameter of the circle:
{\displaystyle \sigma =\int _{0}^{2\pi }{\frac {\mathrm {d} \sigma }{\mathrm {d} \theta }}\,\mathrm {d} \theta =\int _{0}^{2\pi }{\frac {r}{2}}\sin \left({\frac {\theta }{2}}\right)\,\mathrm {d} \theta =2r.}
=== Scattering from a 3D spherical mirror ===
The result from the previous example can be used to solve the analogous problem in three dimensions, i.e., scattering from a perfectly reflecting sphere of radius a.
The plane perpendicular to the incoming light beam can be parameterized by cylindrical coordinates r and φ. In any plane of the incoming and the reflected ray we can write (from the previous example):
{\displaystyle {\begin{aligned}r&=a\sin \alpha ,\\\mathrm {d} r&=a\cos \alpha \,\mathrm {d} \alpha ,\end{aligned}}}
while the impact area element is
{\displaystyle \mathrm {d} \sigma =\mathrm {d} r(r)\times r\,\mathrm {d} \varphi ={\frac {a^{2}}{2}}\sin \left({\frac {\theta }{2}}\right)\cos \left({\frac {\theta }{2}}\right)\,\mathrm {d} \theta \,\mathrm {d} \varphi .}
In spherical coordinates,
{\displaystyle \mathrm {d} \Omega =\sin \theta \,\mathrm {d} \theta \,\mathrm {d} \varphi .}
Together with the trigonometric identity
{\displaystyle \sin \theta =2\sin \left({\frac {\theta }{2}}\right)\cos \left({\frac {\theta }{2}}\right),}
we obtain
{\displaystyle {\frac {\mathrm {d} \sigma }{\mathrm {d} \Omega }}={\frac {a^{2}}{4}}.}
The total cross section is
{\displaystyle \sigma =\oint _{4\pi }{\frac {\mathrm {d} \sigma }{\mathrm {d} \Omega }}\,\mathrm {d} \Omega =\pi a^{2}.}
== External links ==
Nuclear Cross Section
Scattering Cross Section
IAEA – Nuclear Data Services
BNL – National Nuclear Data Center
Particle Data Group – The Review of Particle Physics
IUPAC Goldbook – Definition: Reaction Cross Section
IUPAC Goldbook – Definition: Collision Cross Section
ShimPlotWell cross section plotter for nuclear data | Wikipedia/Cross_section_(physics) |
The painter's algorithm (also known as the depth-sort algorithm or priority fill) is an algorithm for visible-surface determination in 3D computer graphics that works on a polygon-by-polygon basis, rather than the pixel-by-pixel, row-by-row, or area-by-area basis of other hidden-surface removal algorithms. The painter's algorithm creates images by sorting the polygons within the image by their depth and painting each polygon in order from the farthest to the closest object.
The painter's algorithm was initially proposed as a basic method to address the hidden-surface determination problem by Martin Newell, Richard Newell, and Tom Sancha in 1972, while all three were working at CADCentre. The name "painter's algorithm" refers to the technique employed by many painters of painting distant parts of a scene before nearer parts, thereby covering over some areas of the distant parts. Similarly, the painter's algorithm sorts all the polygons in a scene by their depth and then paints them in this order, farthest to closest. It will paint over the parts that are normally not visible — thus solving the visibility problem — at the cost of having painted invisible areas of distant objects. The ordering used by the algorithm is called a 'depth order' and does not have to respect the numerical distances to the parts of the scene: the essential property of this ordering is, rather, that if one object obscures part of another, then the first object is painted after the object that it obscures. Thus, a valid ordering can be described as a topological ordering of a directed acyclic graph representing occlusions between objects.
== Algorithm ==
Conceptually, the painter's algorithm works as follows:
Sort each polygon by depth
Place each polygon from the farthest polygon to the closest polygon
=== Pseudocode ===
sort polygons by depth
for each polygon p:
    for each pixel that p covers:
        paint p.color on pixel
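The pseudocode above can be fleshed out into a minimal sketch. The polygon representation (a depth value, a list of covered pixels, and a colour) is an assumption made purely for illustration; a real renderer would rasterize each polygon instead of storing pixel lists.

```python
def painters_algorithm(polygons, framebuffer):
    """Paint polygons back-to-front; nearer polygons overwrite farther ones.

    Each polygon is assumed to be a dict with keys "depth" (larger = farther),
    "pixels" (iterable of (x, y) tuples) and "color".
    """
    for poly in sorted(polygons, key=lambda p: p["depth"], reverse=True):
        for xy in poly["pixels"]:
            framebuffer[xy] = poly["color"]
    return framebuffer

# Example: the nearer blue polygon overwrites the red one where they overlap.
fb = painters_algorithm(
    [
        {"depth": 1.0, "pixels": [(1, 0)], "color": "blue"},          # near
        {"depth": 5.0, "pixels": [(0, 0), (1, 0)], "color": "red"},   # far
    ],
    {},
)
# fb == {(0, 0): "red", (1, 0): "blue"}
```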
=== Time complexity ===
The painter's algorithm's time complexity depends on the sorting algorithm used to order the polygons. Assuming an optimal sorting algorithm, the painter's algorithm has a worst-case complexity of O(n log n + m*n), where n is the number of polygons and m is the number of pixels to be filled.
=== Space complexity ===
The painter's algorithm's worst-case space complexity is O(n + m), where n is the number of polygons and m is the number of pixels to be filled.
== Advantages ==
There are two primary technical requisites that favor the use of the painter's algorithm.
=== Basic graphical structure ===
The painter's algorithm is not as complex in structure as its other depth-sorting algorithm counterparts. Components such as the depth-based rendering order, as employed by the painter's algorithm, are one of the simplest ways to designate the order of graphical production. This simplicity makes it useful in basic computer-graphics output scenarios where an unsophisticated render needs to be produced with little effort.
=== Memory efficiency ===
In the early 1970s, when the painter's algorithm was developed, physical memory was relatively small. This required programs to manage memory as efficiently as possible to conduct large tasks without crashing. The painter's algorithm prioritizes efficient use of memory, but at the expense of higher processing power, since all parts of all images must be rendered.
== Limitations ==
The algorithm can fail in some cases, including cyclic overlap or piercing polygons.
=== Cyclical overlapping ===
In the case of cyclic overlap, as shown in the figure to the right, Polygons A, B, and C overlap each other in such a way that it is impossible to determine which polygon is above the others. In this case, the offending polygons must be cut to allow sorting.
=== Piercing polygons ===
The case of piercing polygons arises when one polygon intersects another. Similar to cyclic overlap, this problem may be resolved by cutting the offending polygons.
=== Efficiency ===
In basic implementations, the painter's algorithm can be inefficient. It forces the system to render each point on every polygon in the visible set, even if that polygon is occluded in the finished scene. This means that, for detailed scenes, the painter's algorithm can overly tax the computer hardware.
== Reducing visual errors ==
There are a few ways to reduce the visual errors that can happen with sorting:
=== Binary Space Partitioning ===
Binary space partitioning (BSP) involves building a BSP tree and splitting triangles where they intersect. It can be extremely hard to implement, but it fixes most visual errors.
=== Backface culling ===
Backface culling involves calculating whether a triangle's points will appear clockwise or counter-clockwise once projected to the screen, and not drawing triangles that shouldn't be visible anyway. It reduces some visual errors, as well as reducing the total number of triangles drawn.
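A sketch of the winding test: conventions vary between graphics APIs, and here counter-clockwise screen-space winding (with y pointing up) is assumed to mean front-facing.

```python
def is_front_facing(p0, p1, p2):
    """Return True if the projected triangle winds counter-clockwise.

    Points are 2D screen-space (x, y) tuples; the test computes twice the
    triangle's signed area. Which winding counts as "front" is a convention.
    """
    area2 = (p1[0] - p0[0]) * (p2[1] - p0[1]) - (p2[0] - p0[0]) * (p1[1] - p0[1])
    return area2 > 0

# is_front_facing((0, 0), (1, 0), (0, 1))  → True  (counter-clockwise)
# is_front_facing((0, 0), (0, 1), (1, 0))  → False (clockwise)
```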
== Variants ==
=== Extended painter's algorithm ===
Newell's algorithm, proposed as an extension of the painter's algorithm, provides a method for cutting cyclically overlapping and piercing polygons.
=== Reverse painter's algorithm ===
Another variant of the painter's algorithm is the reverse painter's algorithm, which paints objects nearest to the viewer first — with the rule that paint must never be applied to parts of the image that are already painted (unless they are partially transparent). In a computer graphics system, this can be very efficient, since it is not necessary to calculate the colors (using lighting, texturing, and such) for parts of a distant scene that are hidden by nearby objects. However, the reverse algorithm suffers from many of the same problems as the standard version.
== Other computer graphics algorithms ==
The flaws of painter's algorithm led to the development of Z-buffer techniques, which can be viewed as a development of the painter's algorithm by resolving depth conflicts on a pixel-by-pixel basis, reducing the need for a depth-based rendering order. Even in such systems, a variant of the painter's algorithm is sometimes employed. As Z-buffer implementations generally rely on fixed-precision depth-buffer registers implemented in hardware, there is scope for visibility problems due to rounding error. These are overlaps or gaps at joints between polygons. To avoid this, some graphics engines implement "over-rendering", drawing the affected edges of both polygons in the order given by the painter's algorithm. This means that some pixels are actually drawn twice (as in the full painter's algorithm), but this happens on only small parts of the image and has a negligible performance effect.
== References ==
Foley, James; Feiner, Steven K.; Hughes, John F. (1990). Computer Graphics: Principles and Practice. Reading, MA, USA: Addison-Wesley. p. 1174. ISBN 0-201-12110-7.
== External links ==
Painter's & Z-Buffer Algorithms and Polygon Rendering
https://www.clear.rice.edu/comp360/lectures/old/HiddenSurfText.pdf
https://www.cs.princeton.edu/courses/archive/spring01/cs598b/papers/greene93.pdf | Wikipedia/Painter's_algorithm |
Xiaolin Wu's line algorithm is an algorithm for line antialiasing.
== Antialiasing technique ==
Xiaolin Wu's line algorithm was presented in the article "An Efficient Antialiasing Technique" in the July 1991 issue of Computer Graphics, as well as in the article "Fast Antialiasing" in the June 1992 issue of Dr. Dobb's Journal.
Bresenham's algorithm draws lines extremely quickly, but it does not perform anti-aliasing. In addition, it cannot handle any cases where the line endpoints do not lie exactly on integer points of the pixel grid. A naive approach to anti-aliasing the line would take an extremely long time. Wu's algorithm is comparatively fast, but is still slower than Bresenham's algorithm. The algorithm consists of drawing pairs of pixels straddling the line, each coloured according to its distance from the line. Pixels at the line ends are handled separately. Lines less than one pixel long are handled as a special case.
An extension to the algorithm for circle drawing was presented by Xiaolin Wu in the book Graphics Gems II. Just as the line drawing algorithm is a replacement for Bresenham's line drawing algorithm, the circle drawing algorithm is a replacement for Bresenham's circle drawing algorithm.
== Algorithm ==
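The original article presents the method in pseudocode; the sketch below follows the commonly published formulation. A `plot(x, y, c)` callback with coverage `c` in [0, 1] is assumed; how the coverage is blended with the background is left to the caller.

```python
import math

def wu_line(x0, y0, x1, y1, plot):
    """Anti-aliased line from (x0, y0) to (x1, y1); endpoints may be fractional."""
    fpart = lambda v: v - math.floor(v)       # fractional part
    rfpart = lambda v: 1.0 - fpart(v)         # 1 - fractional part
    rnd = lambda v: math.floor(v + 0.5)       # round half up

    steep = abs(y1 - y0) > abs(x1 - x0)
    if steep:                                 # iterate over y instead of x
        x0, y0, x1, y1 = y0, x0, y1, x1
    if x0 > x1:                               # always draw left to right
        x0, x1, y0, y1 = x1, x0, y1, y0
    dx, dy = x1 - x0, y1 - y0
    gradient = dy / dx if dx != 0 else 1.0

    def pair(x, y, c):                        # the two pixels straddling the line
        if steep:
            plot(math.floor(y), x, rfpart(y) * c)
            plot(math.floor(y) + 1, x, fpart(y) * c)
        else:
            plot(x, math.floor(y), rfpart(y) * c)
            plot(x, math.floor(y) + 1, fpart(y) * c)

    # first endpoint
    xend = rnd(x0)
    yend = y0 + gradient * (xend - x0)
    pair(xend, yend, rfpart(x0 + 0.5))
    xpxl1 = xend
    intery = yend + gradient

    # second endpoint
    xend = rnd(x1)
    yend = y1 + gradient * (xend - x1)
    pair(xend, yend, fpart(x1 + 0.5))
    xpxl2 = xend

    # main loop: the coverage of each straddling pixel pair sums to 1
    for x in range(xpxl1 + 1, xpxl2):
        pair(x, intery, 1.0)
        intery += gradient
```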
== References ==
Abrash, Michael (June 1992). "Fast Antialiasing (Column)". Dr. Dobb's Journal. 17 (6): 139(7).
Wu, Xiaolin (July 1991). "An efficient antialiasing technique". ACM SIGGRAPH Computer Graphics. 25 (4): 143–152. doi:10.1145/127719.122734. ISBN 0-89791-436-8.
Wu, Xiaolin (1991). "Fast Anti-Aliased Circle Generation". In James Arvo (ed.). Graphics Gems II. San Francisco: Morgan Kaufmann. pp. 446–450. ISBN 0-12-064480-0.
== External links ==
Xiaolin Wu's homepage
Xiaolin Wu's homepage at McMaster University | Wikipedia/Xiaolin_Wu's_line_algorithm |
The Oren–Nayar reflectance model, developed by Michael Oren and Shree K. Nayar, is a reflectivity model for diffuse reflection from rough surfaces. It has been shown to accurately predict the appearance of a wide range of natural surfaces, such as concrete, plaster, sand, etc.
== Introduction ==
Reflectance is a physical property of a material that describes how it reflects incident light. The appearance of various materials is determined to a large extent by their reflectance properties. Most reflectance models can be broadly classified into two categories: diffuse and specular. In computer vision and computer graphics, the diffuse component is often assumed to be Lambertian. A surface that obeys Lambert's law appears equally bright from all viewing directions. This model for diffuse reflection was proposed by Johann Heinrich Lambert in 1760 and has been perhaps the most widely used reflectance model in computer vision and graphics. For a large number of real-world surfaces, such as concrete, plaster, and sand, however, the Lambertian model is an inadequate approximation of the diffuse component. This is primarily because the Lambertian model does not take the roughness of the surface into account.
Rough surfaces can be modelled as a set of facets with different slopes, where each facet is a small planar patch. Since photoreceptors of the retina and pixels in a camera are both finite-area detectors, substantial macroscopic (much larger than the wavelength of incident light) surface roughness is often projected onto a single detection element, which in turn produces an aggregate brightness value over many facets. Whereas Lambert's law may hold well when observing a single planar facet, a collection of such facets with different orientations is guaranteed to violate Lambert's law. The primary reason for this is that the foreshortened facet areas will change for different viewing directions, and thus the surface appearance will be view-dependent.
Analysis of this phenomenon has a long history and can be traced back almost a century. Past work has resulted in empirical models designed to fit experimental data as well as theoretical results derived from first principles. Much of this work was motivated by the non-Lambertian reflectance of the moon.
The Oren–Nayar reflectance model, developed by Michael Oren and Shree K. Nayar in 1993, predicts reflectance from rough diffuse surfaces for the entire hemisphere of source and sensor directions. The model takes into account complex physical phenomena such as masking, shadowing and interreflections between points on the surface facets. It can be viewed as a generalization of Lambert's law. Today, it is widely used in computer graphics and animation for rendering rough surfaces. It also has important implications for human vision and computer vision problems, such as shape from shading, photometric stereo, etc.
== Formulation ==
The surface-roughness model used in the derivation of the Oren–Nayar model is the microfacet model, proposed by Torrance and Sparrow, which assumes the surface to be composed of long symmetric V-cavities. Each cavity consists of two planar facets. The roughness of the surface is specified using a probability function for the distribution of facet slopes. In particular, the Gaussian distribution is often used, and thus the variance of the Gaussian distribution, {\displaystyle \sigma ^{2}}, is a measure of the roughness of the surface. The standard deviation of the facet slopes (gradient of the surface elevation), {\displaystyle \sigma }, ranges in {\displaystyle [0,\infty )}.
In the Oren–Nayar reflectance model, each facet is assumed to be Lambertian in reflectance. If {\displaystyle E_{0}} is the irradiance when the facet is illuminated head-on, the radiance {\displaystyle L_{r}} of the light reflected by the faceted surface, according to the Oren–Nayar model, is
{\displaystyle L_{r}=L_{1}+L_{2},}
where the direct illumination term {\displaystyle L_{1}} and the term {\displaystyle L_{2}} that describes bounces of light between the facets are defined as follows.
{\displaystyle L_{1}={\frac {\rho }{\pi }}E_{0}\cos \theta _{i}\left(C_{1}+C_{2}\cos(\phi _{i}-\phi _{r})\tan \beta +C_{3}(1-|\cos(\phi _{i}-\phi _{r})|)\tan {\frac {\alpha +\beta }{2}}\right),}
{\displaystyle L_{2}=0.17{\frac {\rho ^{2}}{\pi }}E_{0}\cos \theta _{i}{\frac {\sigma ^{2}}{\sigma ^{2}+0.13}}\left[1-\cos(\phi _{i}-\phi _{r})\left({\frac {2\beta }{\pi }}\right)^{2}\right],}
where
{\displaystyle C_{1}=1-0.5{\frac {\sigma ^{2}}{\sigma ^{2}+0.33}}},
{\displaystyle C_{2}={\begin{cases}0.45{\frac {\sigma ^{2}}{\sigma ^{2}+0.09}}\sin \alpha &{\text{if }}\cos(\phi _{i}-\phi _{r})\geq 0,\\0.45{\frac {\sigma ^{2}}{\sigma ^{2}+0.09}}\left(\sin \alpha -\left({\frac {2\beta }{\pi }}\right)^{3}\right)&{\text{otherwise,}}\end{cases}}}
{\displaystyle C_{3}=0.125{\frac {\sigma ^{2}}{\sigma ^{2}+0.09}}\left({\frac {4\alpha \beta }{\pi ^{2}}}\right)^{2},}
{\displaystyle \alpha =\max(\theta _{i},\theta _{r})}, and
{\displaystyle \beta =\min(\theta _{i},\theta _{r})}.
Here {\displaystyle \rho } is the albedo of the surface, and {\displaystyle \sigma } is the roughness of the surface. In the case of {\displaystyle \sigma =0} (i.e., all facets in the same plane), we have {\displaystyle C_{1}=1} and {\displaystyle C_{2}=C_{3}=L_{2}=0}, and thus the Oren–Nayar model simplifies to the Lambertian model:
{\displaystyle L_{r}={\frac {\rho }{\pi }}E_{0}\cos \theta _{i}.}
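The direct-illumination term L1 above can be transcribed directly into code (the interreflection term L2 is omitted here for brevity); angles are in radians, and the reduction to the Lambertian model at σ = 0 serves as a sanity check.

```python
import math

def oren_nayar_l1(rho, E0, theta_i, theta_r, phi_i, phi_r, sigma):
    """Direct-illumination radiance term L1 of the Oren–Nayar model.

    rho: albedo; E0: head-on irradiance; theta/phi: polar/azimuthal angles of
    the incident (i) and reflected (r) directions, in radians; sigma: roughness.
    """
    s2 = sigma * sigma
    alpha = max(theta_i, theta_r)
    beta = min(theta_i, theta_r)
    cos_dphi = math.cos(phi_i - phi_r)
    C1 = 1.0 - 0.5 * s2 / (s2 + 0.33)
    if cos_dphi >= 0:
        C2 = 0.45 * s2 / (s2 + 0.09) * math.sin(alpha)
    else:
        C2 = 0.45 * s2 / (s2 + 0.09) * (math.sin(alpha) - (2 * beta / math.pi) ** 3)
    C3 = 0.125 * s2 / (s2 + 0.09) * (4 * alpha * beta / math.pi**2) ** 2
    return (rho / math.pi) * E0 * math.cos(theta_i) * (
        C1
        + C2 * cos_dphi * math.tan(beta)
        + C3 * (1.0 - abs(cos_dphi)) * math.tan((alpha + beta) / 2)
    )

# For sigma = 0 the model reduces to Lambert: L1 = (rho / pi) * E0 * cos(theta_i).
```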
== Results ==
Here is a real image of a matte vase illuminated from the viewing direction, along with versions rendered using the Lambertian and Oren-Nayar models. It shows that the Oren-Nayar model predicts the diffuse reflectance for rough surfaces more accurately than the Lambertian model.
Here are rendered images of a sphere using the Oren–Nayar model, corresponding to different surface roughnesses (i.e., different {\displaystyle \sigma } values):
== See also ==
List of common shading algorithms
Phong reflection model
Gamma correction
== External links ==
The official project page for the Oren-Nayar model at Shree Nayar's CAVE research group webpage | Wikipedia/Oren–Nayar_reflectance_model |